
Slack admits to leaking hashed passwords for five years

Popular collaboration tool Slack (not to be confused with the nickname of the world’s longest-running Linux distro, Slackware) has just owned up to a long-running cybersecurity SNAFU.

According to a news bulletin entitled Notice about Slack password resets, the company admitted that it had inadvertently been oversharing personal data “when users created or revoked a shared invitation link for their workspace.”

From 2017-04-17 to 2022-07-17 (we assume both dates are inclusive), Slack said that the data sent to the recipients of such invitations included…

…wait for it…

…the sender’s hashed password.

What went wrong?

Slack’s security advisory doesn’t explain the breach very clearly, saying merely that “[t]his hashed password was not visible to any Slack clients; discovering it required actively monitoring encrypted network traffic coming from Slack’s servers.”

We’re guessing that this translates as follows:

“Most recipients wouldn’t have noticed that the data they received included any hashed password information, because that information, although included in the network packets sent, was never deliberately displayed to them. And because the data was sent over a TLS connection, eavesdroppers wouldn’t have been able to sniff it out along the way, because it wouldn’t get decrypted until it reached the other end of the connection.”

That’s the good news.

But network packets often include data that’s never normally used or seen by recipients.

HTTP headers are a good example of this, given that they’re meant to be instructions to your browser, not data for display in the web page you’re looking at.

And data that’s irrelevant or invisible to users often ends up in logs anyway, especially in firewall logs, where it could be preserved indefinitely.

That’s the bad news.

Salt, hash and stretch…

According to Slack, the leaked data was not merely hashed, but salted too, meaning that each user’s password was first mixed together with random data unique to that user before the hash function was applied.

Hashes are essentially “non-reversible” mathematical functions that are easy to calculate in one direction, but not in the other.

For example, it’s easy to calculate that:

 SHA256("DUCK") = 7FB376..DEAD4B3AF008

But the only way to work “backwards” from 7FB376..DEAD4B3AF008 to DUCK is to work forwards from every possible word in the dictionary and see if any of them come out with the value you’re trying to match:

 SHA256("AARDVARK") = 5A9394..467731D0526A [X]
 SHA256("AARON")    = C4DDDE..12E4CFE7B4FD [X]
 SHA256("ABACUS")   = BEDDD8..1FE4DE25AAD7 [X]
 . . . 3400 skipped
 SHA256("BABBLE")   = 70E837..CEAD4B1FA777 [X]
 SHA256("BADGER")   = 946D0D..7B3073C1C094 [X]
 SHA256("BAGPIPE")  = 359DBE..BE193FCCB111 [X]
 . . . 3200 skipped
 SHA256("CABAL")    = D78CF4..85BE02967565 [X]
 SHA256("CACHE")    = C118F9..22F3269E7B32 [X]
 SHA256("CAGOULE")  = 5EA530..5A26C5B56DCF [X]
 . . . 5400 skipped
 SHA256("DAB")      = BBCC8E..E8B98CAB5128 [X]
 SHA256("DAFFODIL") = 75121D..D6401AB24A98 [X]
 SHA256("DANGER")   = 0BD727..4C86037BB065 [X]
 . . . 3500 skipped
 SHA256("DUCK")     = 7FB376..DEAD4B3AF008 [FOUND!]
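
Here’s a minimal sketch of that kind of dictionary attack in Python (the tiny wordlist is invented for illustration, and we compute the target hash from the word DUCK ourselves so that the demo has something to find):

 import hashlib

 # Invented target: the hash we are trying to "reverse"
 target = hashlib.sha256(b"DUCK").hexdigest()

 # A tiny stand-in for a real dictionary file
 wordlist = ["AARDVARK", "AARON", "ABACUS", "BABBLE", "BADGER", "DUCK", "ZYMURGY"]

 for word in wordlist:
     # Work "forwards" from each guess and compare with the target
     if hashlib.sha256(word.encode()).hexdigest() == target:
         print("FOUND:", word)
         break
 else:
     print("No match in this wordlist")

A real attacker would use a wordlist of millions of entries, plus obvious word-and-number variations, working from the most likely candidates to the least.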

And by including a per-user salt, which doesn’t need to be secret, merely unique to each user, you ensure that even if two users choose the same password, they won’t end up with the same password hash.

You can see the effect of salting here, when we hash the word DUCK with three different prefixes:

 SHA256("RANDOM1-DUCK") = E355DB..349E669BB9A2
 SHA256("RANDOM2-DUCK") = 13D538..FEA0DC6DBB5C  <-- Changing just one input byte produces a wildly different hash
 SHA256("ARXXQ3H-DUCK") = 52AD92..544208A19449
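
Here’s the same effect in Python (the salts RANDOM1 and RANDOM2 are the made-up examples from above, not anything Slack actually uses):

 import hashlib

 def salted_hash(salt, password):
     # Mix the per-user salt into the input before applying the hash,
     # exactly as in the worked examples above
     return hashlib.sha256((salt + "-" + password).encode()).hexdigest()

 # Two users with the same password end up with completely different hashes
 print(salted_hash("RANDOM1", "DUCK"))
 print(salted_hash("RANDOM2", "DUCK"))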

This also means that attackers can’t create a precomputed list of likely hashes, or create a table of partial hash calculations, known as a rainbow table, that can accelerate hash checking. (They’d need a brand new hashlist, or a unique set of rainbow tables, for every possible salt.)

In other words, hashed-and-salted passwords can’t trivially be cracked to recover the original input, especially if the original password was complex and randomly chosen.

What Slack didn’t say is whether they’d stretched the password hashes, too, and if so, how.

Stretching is a jargon term that means repeating the password hashing process over and over again, for example, 100,000 times, in order to extend the time needed to try out a bunch of dictionary words against known password hashes.

If it would take one second to put 100,000 dictionary words through a plain salt-and-hash process, then attackers who know your password hash could try 6 million different dictionary words and derivatives every minute, or take more than one billion guesses every three hours.

On the other hand, if the salt-and-hash computations were stretched to take one second each, then the extra one-second delay when you tried to log in would cause little or no annoyance to you…

…but would reduce an attacker to just 3600 tries an hour, making it much less likely that they’d get enough time to guess anything but the most obvious passwords.

Several well-respected salt-hash-and-stretch algorithms are known, notably PBKDF2, bcrypt, scrypt and Argon2, all of which can be adjusted to increase the time needed to try individual password guesses in order to reduce the viability of so-called dictionary and brute force attacks.
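
As a rough illustration, Python’s standard library includes PBKDF2, where the iteration count is the adjustable “stretch” factor (the password, salt and count of 100,000 below are purely illustrative):

 import hashlib, os

 password = b"correct horse battery staple"   # illustrative password
 salt = os.urandom(16)                        # per-user random salt

 # One plain salt-and-hash operation...
 quick = hashlib.sha256(salt + password).hexdigest()

 # ...versus the same password "stretched" through 100,000 iterations,
 # which slows down every guess an attacker makes by roughly the same factor
 stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000).hex()

 print(quick)
 print(stretched)

A real system would store the salt and the iteration count alongside the result, so that every login attempt can repeat exactly the same calculation.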

A dictionary attack means you’re trying likely passwords only, such as every word you can think of from aardvark to zymurgy, and then giving up. A brute-force attack means trying every possible input, even weird and unpronounceable ones, from AAA..AAAA to ZZZ..ZZZZ (or from 0000..000000 to FFFF..FFFFFF if you think in hexadecimal byte-by-byte terms).
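
The difference is easy to see in code: instead of walking a fixed wordlist, a brute-force attack enumerates every possible combination, as in this minimal sketch (the three-letter search space and the target are invented for illustration):

 import hashlib
 from itertools import product
 from string import ascii_uppercase

 # Invented target hash for the demo
 target = hashlib.sha256(b"QZX").hexdigest()

 # Brute force: try every three-letter combination from AAA to ZZZ
 for combo in product(ascii_uppercase, repeat=3):
     guess = "".join(combo)
     if hashlib.sha256(guess.encode()).hexdigest() == target:
         print("FOUND:", guess)
         break

Three uppercase letters give just 26^3 = 17,576 possibilities; every extra character multiplies the search space, which is why long, random passwords put true brute-force attacks out of reach.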

What to do?

Slack says that about 1 in 200 of its users were affected (0.5%, presumably based on records of how many shared invitation links were generated in the danger period), and that it will be forcing those users to reset their passwords.

Some further advice:

  • If you’re a Slack user, you might as well reset your password even if you weren’t notified by the company to do so. When a company admits it has been careless with its password database by leaking hashes, especially over such a long period, you might as well assume that yours was affected, even if the company thinks it wasn’t. As soon as you change your password, you make the old hash useless to attackers.
  • If you’re not using a password manager, consider getting one. A password manager helps to pick proper passwords, thus ensuring that your password ends up very, very far down the list of passwords that might get cracked in an incident like this. Attackers typically can’t do a true brute force attack, because there are just too many possible passwords to try out. So, they try the most likely passwords first, such as words or obvious word-and-number combinations, getting longer and more complex as the attack proceeds. A password manager can remember a random, 20-character password as easily as you can remember your cat’s name.
  • Turn on 2FA if you can. 2FA, or two-factor authentication, means that you need not only your password to login, but also a one-time code that changes every time. These codes are typically sent to (or generated by) your mobile phone, and are valid only for a few minutes each. This means that even if cybercrooks do crack your password, it’s not enough on its own for them to take over your account.
  • Choose a reputable salt-hash-and-stretch algorithm when handling passwords yourself (see the sketch after this list). In the unfortunate event that your password database gets breached, you will be able to give your customers precise details of the algorithm and the security settings you used. This will help well-informed users to judge for themselves how likely it is that their stolen hashes might have been cracked in the time available to attackers so far.
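
As a minimal sketch of that last point, here’s one way to use scrypt from Python’s standard library, recording the exact settings alongside each hash (the parameter values are illustrative, not a recommendation):

 import hashlib, os

 def hash_password(password):
     salt = os.urandom(16)
     # scrypt with explicit, documentable settings (illustrative values only)
     digest = hashlib.scrypt(password.encode(), salt=salt,
                             n=2**14, r=8, p=1, dklen=32)
     # Keep the parameters alongside the hash, so you can state them
     # precisely if you ever need to notify users after a breach
     return {"alg": "scrypt", "n": 2**14, "r": 8, "p": 1,
             "salt": salt.hex(), "hash": digest.hex()}

 print(hash_password("correct horse battery staple"))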

Traffic Light Protocol for cybersecurity responders gets a revamp

The word “protocol” crops up all over the place in IT, usually describing the details of how to exchange data between requester and replier.

Thus we have HTTP, short for hypertext transfer protocol, which explains how to communicate with a webserver; SMTP, or simple mail transfer protocol, which governs sending and receiving email; and BGP, the border gateway protocol, by means of which ISPs tell each other which internet destinations they can help deliver data to, and how quickly.

But there is also an important protocol that helps humans in IT, including researchers, responders, sysadmins, managers and users, to be circumspect in how they handle information about cybersecurity threats.

That protocol is known as TLP, short for the Traffic Light Protocol, devised as a really simple way of labelling cybersecurity information so that the recipient can easily figure out how sensitive it is, and how widely it can be shared without making a bad thing worse.

Interestingly, not everyone subscribes to the idea that the dissemination of cybersecurity information should ever be restricted, even voluntarily.

Enthusiasts of so-called full disclosure insist that publishing as much information as possible, as widely as possible, as quickly as possible, is actually the best way to deal with vulnerabilities, exploits, cyberattacks, and the like.

Full-disclosure advocates will freely admit that this sometimes plays into the hands of cybercriminals, by clearly identifying the information they need (and giving away knowledge they might not previously have had) to initiate attacks right away, before anyone is ready.

Full disclosure can also disrupt cyberdefences by forcing sysadmins everywhere to stop whatever they are doing and divert their attention immediately to something that could otherwise safely have been scheduled for attention a bit later on, if only it hadn’t been shouted from the rooftops.

Simple, easy and fair

Nevertheless, supporters of full disclosure will tell you that nothing could be simpler, easier or fairer than just telling everybody at the same time.

After all, if you tell some people but not others, so that they can start preparing potential defences in comparative secrecy and therefore perhaps get ahead of the cybercriminals, you might actually make things worse for the world at large.

If even one of the people in the inner circle turns out to be a rogue, or inadvertently gives away the secret simply by the nature of how they respond, or by the plans they suddenly decide to put into action, then the crooks may very well reverse engineer the secret information for themselves anyway…

…and then everyone else who isn’t part of the inner circle will be thrown to the wolves.

Anyway, who decides which individuals or organisations get admitted into the inner circle (or the “Old Boys’ Club”, if you want to be pejorative about it)?

Additionally, the full disclosure doctrine ensures that companies can’t get away with sweeping issues under the carpet and doing nothing about them.

In the words of the infamous (and problematic, but that’s an argument for another day) 1992 hacker film Sneakers: “No more secrets, Marty.”

Responsible disclosure

Full disclosure, however, isn’t how cybersecurity response is usually done these days.

Indeed, some types of cyberthreat-related data simply can’t be shared ethically or legally, if doing so might harm someone’s privacy, or put the recipients themselves in violation of data protection or data possession regulations.

Instead, the cybersecurity industry has largely settled on a sort-of middle ground for reporting cybersecurity information, known informally as responsible disclosure.

This process is based around the idea that the safest and fairest way to get cybersecurity problems fixed without blurting them out to the whole world right away is to give the people who created the problems “first dibs” on fixing them.

For example, if you find a hole in a remote access product that could lead to a security bypass, or if you find a bug in a server that could lead to remote code execution, you report it privately to the vendor of the product (or the team who look after it, if it’s open source).

You then agree with them a period of secrecy, typically lasting anywhere from a few days to a few months, during which they can sort it out secretly, if they like, and disclose the gory details only after their fixes are ready.

But if the agreed period expires without a result, you switch to full disclosure mode and reveal the details to everyone anyway, thus ensuring that the problem can’t simply be swept under the carpet and ignored indefinitely.

Controlled sharing

Of course, responsible disclosure doesn’t mean that the organisation that received the initial report is compelled to keep the information to itself.

The initial recipients of a private report may decide that they want or need to share the news anyway, perhaps in a limited fashion.

For example, if you have a critical patch that will require several parts of your organisation to co-operate, you’ll have little choice but to share the information internally.

And if you have a patch coming out that you know will fix a recently-discovered security hole, but only if your customers make some configuration changes before they roll it out, you might want to give them an early warning so they can get ready.

At the same time, you might want to ask them nicely not to tell the rest of the world all about the issue just yet.

Or you might be investigating an ongoing cyberattack, and you might want to reveal different amounts of detail to different audiences as the investigation unfolds.

You might have general advice that can safely and usefully be shared right now with the whole world.

You may have specific data (such as IP blocklists or other indicators of compromise) that you want to share with just one company, because the information unavoidably reveals them as a victim.

And you may want to reveal everything you know, as soon as you know it, to individual law enforcement investigators whom you trust to go after the criminals involved.

How to label the information?

How to label these different levels of cybersecurity information unambiguously?

Law enforcement, security services, militaries and official international bodies typically have their own jargon, known as protective marking, for this sort of thing, with labels that we all know from spy movies, such as SECRET, TOP SECRET, FOR YOUR EYES ONLY, NO FOREIGN NATIONALS, and so on.

But different labels mean different things in different parts of the world, so this sort of protective marking doesn’t translate well for public use in many different languages, regions and cybersecurity cultures.

(Sometimes these labels can be linguistically challenging. Should a confidential document produced by the United Nations, for instance, be labelled UN - CLASSIFIED? Or would that be misinterpreted as UNCLASSIFIED and get shared widely?)

What about a labelling system that uses simple words and an obvious global metaphor?

That’s where the Traffic Light Protocol comes in.

The metaphor, as you will have guessed, is the humble traffic signal, which uses the same colours, with much the same meanings, in almost every country in the world.

RED means stop, and nothing but stop; AMBER means stop unless doing so would itself be dangerous; and GREEN means that you’re allowed to go, assuming it’s safe to do so.

Modern traffic signals, which use LEDs to produce specific light frequencies, instead of filters to remove unwanted colour bands from incandescent lamps, are so bright and precisely targeted that some jurisdictions no longer bother to test prospective drivers for so-called colour blindness, because the three frequency bands emitted are so narrow as to be almost impossible to mix up, and their meanings are so well-established.

Even if you live in a country where traffic lights have additional “in-between” signals, such as green+amber together, red+amber together, or one colour flashing continuously on its own, pretty much everyone in the world understands traffic light metaphors based on just those three main colours.

Indeed, even if you’re used to calling the middle light YELLOW instead of AMBER, as some countries do, it’s obvious what AMBER refers to, if only because it’s the one in the middle that isn’t RED or GREEN.

TLP Version 2.0

The Traffic Light Protocol was first introduced in 1999, and by following the principle of Keep It Simple and Straightforward (KISS), has become a useful labelling system for cybersecurity reports.

Ultimately, the TLP required four levels, not three, so the colour WHITE was added to mean “you can share this with anyone”, and the designators were defined very specifically as the text strings TLP:RED (all capitals, no spaces), TLP:AMBER, TLP:GREEN and TLP:WHITE.

By keeping spaces out of the labels and forcing them into upper case, they stand out clearly in email subject lines, are easy to use when sorting and searching, and won’t get split between lines by mistake.

Well, after more than 20 years of service, the TLP has undergone a minor update, so that from August 2022, we have Traffic Light Protocol 2.0.

Firstly, the colour WHITE has been replaced with CLEAR.

White not only has racial and ethnic overtones that common decency invites us to avoid, but also confusingly represents all the other colours mixed together, as though it might mean go-and-stop-at-the-same-time.

So CLEAR is not only a word that fits more comfortably in society today, but also one that suits its intended purpose more (ahem) clearly.

And a fifth marker has been added, namely TLP:AMBER+STRICT.

The levels are interpreted as follows:

TLP:RED “For the eyes and ears of individual recipients only.” This is pretty easy to interpret: if you receive a TLP:RED cybersecurity document, you can act on it, but you must not forward it to anyone else. Thus there is no need for you to try to figure out whether you should be letting any friends, colleagues or fellow researchers know. This level is reserved for information that could cause “significant risk for the privacy, reputation, or operations of the organisations involved.”
TLP:AMBER+STRICT You may share this information, but only with other people inside your organisation. So you can discuss it with programming teams, or with the IT department. But you must keep it “in house”. Notably, you must not forward it to your customers, business partners or suppliers. Unfortunately, the TLP documentation doesn’t try to define whether a contractor or a service provider is in-house or external. We suggest that you treat the phrase “restrict sharing to the organisation only” as strictly as you possibly can, as the name of this security level suggests, but we suspect that some companies will end up with a more liberal interpretation of this rule.
TLP:AMBER Like TLP:AMBER+STRICT, but you may share the information with customers (the TLP document actually uses the word clients) if necessary.
TLP:GREEN You may share this information within your community. The TLP leaves it up to you to be reasonable about which people constitute your community, noting only that “when ‘community’ is not defined, assume the cybersecurity/defence community.” In practice, you might as well assume that anything published as TLP:GREEN will end up as public knowledge, but the onus is on you to be thoughtful about how you yourself share it.
TLP:CLEAR Very simply, you are clear to share this information with anyone you like. As the TLP puts it: “Recipients can spread this to the world; there is no limit on disclosure.” This label is particularly useful when you are sharing two or more documents with a trusted party, and at least one of the documents is marked for restricted sharing. Putting TLP:CLEAR on the content that they can share, and perhaps that you want them to share in order to increase awareness, makes your intentions abundantly clear, if you will pardon the pun. (See the short sketch after this list.)
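
Because the labels are fixed, machine-friendly strings, they’re also easy to handle in code, for example when triaging incoming report emails. Here’s a minimal illustrative sketch (not an official TLP tool):

 # The five TLP 2.0 labels, exactly as specified: all caps, no spaces
 TLP_SCOPE = {
     "TLP:RED":          "named recipients only",
     "TLP:AMBER+STRICT": "recipient's organisation only",
     "TLP:AMBER":        "recipient's organisation, plus clients if needed",
     "TLP:GREEN":        "recipient's community",
     "TLP:CLEAR":        "anyone - no limit on disclosure",
 }

 def tlp_label(subject):
     # Check longer labels first: TLP:AMBER is a prefix of TLP:AMBER+STRICT
     for label in sorted(TLP_SCOPE, key=len, reverse=True):
         if label in subject.upper():
             return label
     return None

 subject = "TLP:AMBER+STRICT - indicators for ongoing incident"   # invented example
 label = tlp_label(subject)
 print(label, "->", TLP_SCOPE.get(label, "unlabelled"))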

Just to be clear (sorry!), we don’t put TLP:CLEAR on every Naked Security article we publish, given that this website is publicly accessible already, but we invite you to assume it.


S3 Ep94: This sort of crypto (graphy), and the other sort of crypto (currency!) [Audio + Text]

You can listen to this episode directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  A critical Samba bug, yet another crypto theft, and Happy SysAdmin Day.

All that and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin… Paul, how do you do today?


DUCK.  Excellent, thank you, Douglas.


DOUG.  We like to start the show with some tech history.

And this week, Paul, we’re going way back to 1858!

This week in 1858, the first transatlantic telegraph cable was completed.

It was spearheaded by American merchant Cyrus West Field, and the cable ran from Trinity Bay, Newfoundland, to Valentia, Ireland, some 2000 miles across, and more than 2 miles deep.

This would be the fifth attempt, and unfortunately, the cable only worked for about a month.

But it did function long enough for then President James Buchanan and Queen Victoria to exchange pleasantries.


DUCK.  Yes, I believe that it was, how can I put it… faint. [LAUGHTER]

1858!

What hath God wrought?, Doug! [WORDS SENT IN FIRST EVER TELEGRAPH MESSAGE]


DOUG.  [LAUGHS] Speaking of things that have been wrought, there is a critical Samba bug that has since been patched.

I’m not an expert by any means, but this bug would let anyone become a Domain Admin… that sounds bad.


DUCK.  Well, it sounds bad, Doug, mainly for the reason that it *is* rather bad!


DOUG.  There you go!


DUCK.  Samba… just to be clear, before we start, let’s go through the versions you want.

If you’re on the 4.16 flavour, you need 4.16.4 or later; if you’re on 4.15, you need 4.15.9 or later; and if you’re on 4.14, you need 4.14.14 or later.

Those bug fixes, in total, patched six different bugs that were considered serious enough to get CVE numbers – official designators.

The one that stood out is CVE-2022-32744.

And the title of the bug says it all: Samba Active Directory users can forge password change requests for any user.


DOUG.  Yes, that sounds bad.


DUCK.  So, as the full bug report in the security advisory (the change log) says, in rather orotund fashion:

“A user could change the password of the administrator account and gain total control over the domain. Full loss of confidentiality and integrity would be possible, as well as of availability by denying users access to their accounts.”

And as our listeners probably know, the so-called “holy trinity” (air quotes) of computer security is: availability, confidentiality and integrity.

You’re supposed to have them all, not just one of them.

So, integrity means nobody else can get in and mess with your stuff without you noticing.

Availability says you can always get at your stuff – they can’t prevent you getting at it when you want to.

And confidentiality means they can’t look at it unless they’re supposed to be permitted.

Any one of those, or any two of those, isn’t much use on its own.

So this really was a trifecta, Doug!

And annoyingly, it’s in the very part of Samba that you might use not just if you’re trying to connect a Unix computer to a Windows domain, but if you’re trying to set up an Active Directory domain for Windows computers to use on a bunch of Linux or Unix computers.


DOUG.  That’s ticking all the boxes in all the wrong ways!

But there is a patch out – and we always say, “Patch early, patch often.”

Is there some sort of workaround that people can use if they can’t patch right away for some reason, or is this a just-do-it type of thing?


DUCK.  Well, my understanding is that this bug is in the password authentication service called kpasswd.

Essentially what that service does is it looks for a password change request, and verifies that it’s signed or authorised by some kind of trusted party.

And unfortunately, following a certain series of error conditions, that trusted party could include yourself.

So it’s kind of like a Print Your Own Passport bug, if you like.

You have to produce a passport… it can be a real one that was issued by your own government, or it can be one that you knocked up at home on your inkjet printer, and both of them would pass muster. [LAUGHTER]

The trick is, if you don’t actually rely on this password authentication service in your use of Samba, you can prevent that kpasswd service from running.

Of course, if you’re actually relying on the whole Samba system to provide your Active Directory authentication and your password changes, the workaround would break your own system.

So the best defence, of course, is indeed the patch that *removes* the bug rather than simply *avoiding* it.


DOUG.  Very good.

You can read more about that on the site: nakedsecurity.sophos.com.

And we move right along to the most wonderful time of the year!

We just celebrated SysAdmin Day, Paul, and I won’t telegraph the punchline here… but you had quite a write up.


DUCK.  Well, once a year, it’s not too much to ask that we should go round to the IT department and smile at everybody who has put in all this hidden background work…

… to keep [GETTING FASTER AND FASTER] our computers, and our servers, and our cloud services, and our laptops, and our phones, and our network switches [DOUG LAUGHS], and our DSL connections, and our Wi-Fi kit in good working order.

Available! Confidential! Full of integrity, all year round!

If you didn’t do it on the last Friday of July, which is SysAdmin Appreciation Day, then why not go and do it today?

And even if you did do it, there’s nothing that says you can’t appreciate your SysAdmins every day of the year.

You don’t have to do it only in July, Doug.


DOUG.  Good point!


DUCK.  So here is what to do, Doug.

I’m going to call this a “poem” or “verse”… I think technically it’s doggerel [LAUGHTER], but I’m going to pretend that it has all the joy and warmth of a Shakespearean sonnet.

It *isn’t* a sonnet, but it’ll have to do.


DOUG.  Perfect.


DUCK.  Here you go, Doug.

If your mouse is out of batteries
Or your webcam light won't glow
If you can't recall your password
Or your email just won't show
If you've lost your USB drive
Or your meeting will not start
If you can't produce a histogram
Or draw a nice round chart
If you hit [Delete] by accident
Or formatted your disk
If you meant to make a backup
But instead just took a risk
If you know the culprit's obvious
And the blame points back to you
Don't give up hope and be downcast
There's one thing left to do!
Take chocolates, wine, some cheer, a smile
And mean it when you say:
"I've just popped in to wish you all
A great SysAdmin Day!"

DOUG.  [CLAPPING] Really good! One of your best!


DUCK.  So much of what SysAdmins do is invisible, and so much of it is surprisingly difficult to do well and reliably…

…and to do without fixing one thing and breaking another.

That smile is the least they deserve, Doug.


DOUG.  The very least!


DUCK.  So, to all SysAdmins all over the world, I hope you enjoyed last Friday.

And if you didn’t get enough smiles, then take one now.


DOUG.  Happy SysAdmin Day, everybody, and read that poem, which is great…it’s on the site.

All right, moving on to something not so great: a memory mismanagement bug in GnuTLS.


DUCK.  Yes, I thought this was worth writing up on Naked Security, because when people think of open-source cryptography, they tend to think of OpenSSL.

Because (A) that’s the one that everybody’s heard of, and (B) it’s the one that’s probably had the most publicity in recent years over bugs, because of Heartbleed.

Even if you weren’t there at the time (it was eight years ago), you’ve probably heard of Heartbleed, which was a sort of data leakage and memory leakage bug in OpenSSL.

It had been in the code for ages and nobody noticed.

And then somebody did notice, and they gave it the fancy name, and they gave the bug a logo, and they gave the bug a website, and they made this massive PR thing out of it.


DOUG.  [LAUGHS] That’s how you know it’s real…


DUCK.  OK, they were doing it because they wanted to draw attention to the fact that they discovered it, and they were very proud of that fact.

And the flipside was that people went out and fixed this bug that they might otherwise not have done… because, well, it’s just a bug.

It doesn’t seem terribly dramatic – it’s not remote code execution, so they can’t just steam in and instantly take over all of my websites, etc. etc.

But it did make OpenSSL into a household name, not necessarily for all the right reasons.

However, there are many open source cryptographic libraries out there, not just OpenSSL, and at least two of them are surprisingly widely used, even if you’ve never heard of them.

There’s NSS, short for Network Security Services, which is Mozilla’s own cryptographic library.

You can download and use that independently of any specific Mozilla projects, but you will find it, notably, in Firefox and Thunderbird, doing all the encryption in there – they don’t use OpenSSL.

And there’s GnuTLS, which is an open-source library under the GNU project, which essentially, if you like, is a competitor or an alternative to OpenSSL, and that is used (even if you don’t realise it) by a surprising number of open-source projects and products…

…including by code, whatever platform you’re on, that you’ve probably got on your system.

So that includes anything to do with, say: FFmpeg; Mencoder; GnuPG (the GNU key management tool); QEMU; Rdesktop; Samba, which we just spoke about in the previous bug; Wget, which a lot of people use for web downloading; Wireshark’s network sniffing tools; Zlib.

There are loads and loads of tools out there that need a cryptographic library, and have decided either to use GnuTLS *instead* of OpenSSL, or perhaps even *as well as*, depending on supply-chain issues of which subpackages they’ve pulled in.

You may have a project where some parts of it use GnuTLS for their cryptography, and some parts of it use OpenSSL, and it’s hard to choose one over the other.

So you end up, for better or for worse, with both of them.

And unfortunately, GnuTLS (the version you want is 3.7.7 or later) had a type of bug which is known as a double-free… believe it or not in the very part of the code that does TLS certificate validation.

So, in the sort of irony we’ve seen in cryptographic libraries before, code that uses TLS for encrypted transmissions but doesn’t bother verifying the other end… code that goes, “Certificate validation, who needs it?”

That’s generally regarded as an extremely bad idea, rather shabby from a security point of view… but any code that does that won’t be vulnerable to this bug, because it doesn’t call the buggy code.

So, sadly, code that’s trying to do the *right* thing could be tricked by a rogue certificate.

And just to explain simply, a double-free is the kind of bug where you ask the operating system or the system, “Hey, give me some memory. I need some memory temporarily. In this case, I’ve got all this certificate data, I want to store it temporarily, validate it, and then when I’m done, I’ll hand the memory back so it can be used by another part of the program.”

If you’re a C programmer, you’ll be familiar with the functions malloc(), short for “memory allocate”, and free(), which is “hand it back”.

And we know that there’s a type of bug called use-after-free, which is where you hand the data back, but then carry on using that memory block anyway, forgetting that you gave it up.

But a double-free is a little different – it’s where you hand the memory back, and you dutifully avoid using it again, but then at a later stage, you go, “Hang on, I’m sure I didn’t hand that memory back yet. I’d better hand it back just in case.”

And so you tell the operating system, “OK, free this memory up again.”

So it looks as though it’s a legitimate request to free up the data *that some other part of the program might actually be relying upon*.

And as you can imagine, bad things can happen, because that means you may get two parts of the program that are unknowingly relying on the same chunk of memory at the same time.

The good news is that I don’t believe that a working exploit was found for this bug, and therefore, if you patch, you’ll get ahead of the crooks rather than simply be catching up with them.

But, of course, the bad news is, when bug fixes like this do come out, there’s usually a slew of people who go looking at them, trying to analyse what went wrong, in the hope of rapidly understanding what they can do to exploit the bug against all those people who have been slow to patch.

In other words: Don’t delay. Do it today.


DOUG.  All right, the latest version of GnuTLS is 3.7.7… please update.

You can read more about that on the site.


DUCK.  Oh, and Doug, apparently the bug was introduced in GnuTLS 3.6.0.


DOUG.  OK.


DUCK.  So, in theory, if you’ve got an earlier version than that, you’re not vulnerable to this bug…

…but please don’t use that as an excuse to go, “I don’t need to update yet.”

You might as well jump forward over all the other updates that have come out, for all the other security issues, between 3.6.0 and 3.7.6.

So the fact that you don’t fall into the category of this bug – don’t use that as an excuse for doing nothing.

Use it as the impetus to get yourself to the present day… that’s my advice.


DOUG.  OK!

And our final story of the week: we’re talking about another crypto heist.

This time, only $200 million, though, Paul.

This is chump change compared to some of the other ones we’ve talked about.


DUCK.  I almost don’t want to say this, Doug, but one of the reasons I wrote this up is that I looked at it and I found myself thinking, “Oh, only 200 million? That’s quite a small ti… WHAT AM I THINKING!?” [LAUGHTER]

$200 million, basically… well, not “down the toilet”, rather “out of the bank vault”.

This service Nomad is from a company that goes by the name of Illusory Systems Incorporated.

And I think you’ll agree that, certainly from a security point of view, the word “illusory” is perhaps the right kind of metaphor.

It’s a service that essentially allows you to do what’s in the jargon known as bridging.

You’re basically actively trading one cryptocurrency for another.

So you put some cryptocurrency of your own into some giant bucket along with loads of other people… and then we can do all these fancy, “decentralised finance” automated smart contracts.

We can trade Bitcoin for Ether or Ether for Monero, or whatever.

Unfortunately, during a recent code update, it seems that they fell into the same sort of hole that perhaps the Samba guys did with the bug we talked about in Samba.

There’s basically a Print Your Own Passport, or an Authorise Your Own Transaction bug that they introduced.

There’s a point in the code where a cryptographic hash, a 256-bit cryptographic hash, is supposed to be validated… something that nobody but an authorised approver could possibly come up with.

Except that if you just happened to use the value zero, then you would pass muster.

You could basically take anybody else’s existing transaction, rewrite the recipient’s name with yours (“Hey, pay *my* cryptocurrency wallet”), and just replay the transaction.

And the system will go, “OK.”

You just have to get the data in the right format, that’s my understanding.

And the easiest way of creating a transaction that would pass muster is simply to take someone else’s pre-completed, existing transaction, replay it, but cross out their name, or their account number, and put in your own.

So, as cryptocurrency analyst @samczsun said on Twitter, “Attackers abused this to copy and paste transactions and quickly drained the bridge in a frenzied free-for-all.”

In other words, people just went crazy withdrawing money from the ATM that would accept anybody’s bank card, provided you put in a PIN of zero.

And not just until the ATM was drained… the ATM was basically directly connected to the side of the bank vault, and the money was simply pouring out.


DOUG.  Arrrrgh!


DUCK.  As you say, apparently they lost somewhere up to $200 million in just a short time.

Oh, dear.


DOUG.  Well, we have some advice, and it’s pretty straightforward…


DUCK.  The only advice you can really give is, “Don’t be in too much of a hurry to join in this decentralised finance revolution.”

As we may have said before, make sure that if you *do* get into this “trade online; lend us cryptocurrency and we’ll pay you interest; put your stuff in a hot wallet so you can act within seconds; get into the whole smart contract scene; buy my nonfungible tokens [NFTs]” – all of that stuff…

…if you decide that marketplace *is* for you, please make sure you go in with your eyes wide open, not with your eyes wide shut!

And the simple reason is that in cases like this, it’s not just like the crooks might be able to drain *some* of the bank’s ATMs.

In this case, firstly, it sounds like they’ve drained almost everything, and secondly, unlike with conventional banks, there just aren’t the regulatory protections that you would enjoy if a real life bank went bust.

In the case of decentralised finance, the whole idea of it being decentralised, and being new, and cool, and something that you want to rush into…

…is that it *doesn’t* have these annoying regulatory protections.

You could, and possibly might – because we’ve spoken about this more often than I’m comfortable doing, really – you might lose *everything*.

And the flip side of that is, if you have lost stuff in some decentralised finance or “Web 3.0 brand new super-trading website” implosion like this, then be very careful of people coming along saying, “Hey, don’t worry. Despite the lack of regulation, there are expert companies that can get your money back. All you need to do is contact company X, individual Y, or social media account Z”.

Because, whenever there’s a disaster of this sort, the secondary scammers come running pretty jolly quickly, offering to “find a way” to get your money back.

There are plenty of scammers hovering around, so be very wary.

If you have lost money, don’t go out of your way to throw good money after bad (or bad money after good, whichever way around it is).


DOUG.  OK, you can read more about that: Cryptocoin “token swapper” Nomad loses $200 million in coding blunder.

And if we hear from one of our readers on this story, an anonymous commenter writes, and I agree… I don’t understand how this works:

“What’s amazing is that an online startup had that much to lose in the first place. $200,000, you can imagine. But $200 million seems unbelievable.”

And I think we kind of answered that question, but where is all this money coming from, for someone to just grab $200 million?


DUCK.  I can’t answer that, Doug.


DOUG.  No.


DUCK.  Is it that the world is more credulous than it used to be?

Is it that there’s an awful lot of ill-gotten gains sloshing around in the cryptocurrency community?

So there are people who didn’t actually put their own money into this, but they ended up with a whole load of cryptocurrency by foul means rather than fair. (We know that ransomware payments generally come as cryptocurrencies, don’t they?)

So that it’s like funny-money… the person who’s losing the “money” maybe didn’t put in cash up front?

Is it just an almost religious zeal on the part of people going, “No, no, *this* is the way to do it. We need to break the stranglehold way that the old-school, fuddy-duddy, highly regulated financial organisations do things. We’ve got to break free of The Man”?

I don’t know, maybe $200 million just isn’t a lot of money anymore, Doug?


DOUG.  [LAUGHS] Well, of course!


DUCK.  I suspect that there are just people going in with their eyes wide shut.

They’re going, “I *am* prepared to take this risk because it’s just so cool.”

And the problem is that if you’re going to lose $200, or $2000, and you can afford to lose it, that’s one thing.

But if you’ve gone in for $2000 and you think, “You know what. Maybe I should go in for $20,000?” And then you think, “You know what. Maybe I should go in for $200,000? Maybe I should go all in?”

Then, I think you need to be very careful indeed!

Precisely for the reasons that the regulatory protections you might feel that you have, like you do have when something bad happens on your credit card and you just phone up and dispute it and they go, “OK”, and they cross that $52.23 off the bill…

…that’s not going to happen in this case.

And it’s unlikely to be $52, it’s probably going to be a lot more than that.

So take care out there, folks!


DOUG.  Take care, indeed.

All right, thank you for the comment.

And if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


GitHub blighted by “researcher” who created thousands of malicious projects

Just over a year ago, we wrote about a “cybersecurity researcher” who posted almost 4000 pointlessly poisoned Python packages to the popular repository PyPI.

This person went by the curious nickname of Remind Supply Chain Risks, and the packages had project names that were generally similar to well-known projects, presumably in the hope that some of them would get installed by mistake, thanks to users using slightly incorrect search terms or making minor typing mistakes when typing in PyPI URLs.

These pointless packages weren’t overtly malicious, but they did call home to a server hosted in Japan, presumably so that the perpetrator could collect statistics on this “experiment” and write it up while pretending it counted as science.

A month after that, we wrote about a PhD student (who should have known better) and their supervisor (who is apparently an Assistant Professor of Computer Science at a US university, and very definitely should have known better) who went out of their way to introduce numerous apparently legitimate but not-strictly-needed patches into the Linux kernel.

They called these patches hypocrite commits, and the idea was to show that two peculiar patches submitted at different times could, in theory, be combined later on to introduce a security hole, effectively each contributing a sort of “half-vulnerability” that wouldn’t be spotted as a bug on its own.

As you can imagine, the Linux kernel team did not take kindly to being experimented on in this way without permission, not least because they were faced with cleaning up the mess:

Please stop submitting known-invalid patches. Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way. This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university…

GitHub splattered with hostile code

Today, open source enthusiast Steve Lacy reported something similar, but worse (and much more extensive) than either of the aforementioned examples of bogoscience / pseudoresearch.

A GitHub source code search that Lacy carried out in good faith led him to a legitimate-looking project…

…that turned out to be not at all what it seemed, being a cloned copy of an unexceptionable package that was identical except for a few sneakily added lines that converted the code into outright malware.

As Lacy explained, “thousands of fake infected projects [were] on GitHub, impersonating real projects. All of these were created in the last [three weeks or so]”.

Lacy also noted that the organisations allegedly behind these fake projects were “clones designed to have legitimate sounding names”, such that “legitimate user accounts [were] (probably) not compromised”, but where “the attacker amended the last commit on [the cloned repositories] with infected code”.

Malware infection included

According to Lacy and source code testing company Checkmarx, who grabbed some of the infected projects and wrote them up before they were purged from GitHub by Microsoft, the malware implants included code to carry out tasks such as:

  • Performing an HTTP POST to exfiltrate the current server’s process environment. On both Unix and Windows, the environment is a memory-based key-value database of useful information such as hostname, username and system directory. The environment often includes run-time secrets such as temporary authentication tokens that are only ever kept in memory so that they never get written to disk by mistake. (The infamous Log4Shell bug was widely abused to steal data such as access tokens for Amazon Web Services by exfiltrating environment variables.)
  • Running arbitrary shell commands in the HTTP reply sent to the above POST request. This essentially gives the attacker complete remote control of any server on which the infected project is installed and used. The attacker’s commands run with the same access privileges as the now-infected program incorporating the poisoned project.

Fortunately, as we mentioned above, Microsoft acted quickly to search and delete as many of these bogus projects as possible, a reaction about which Lacy tweeted:

The mystery deepens

Following the outing (and the ousting) of these malware projects, the owner of a brand new Twitter account under the bizarre name pl0x_plox_chiken_p0x popped up to claim:

this is a mere bugbounty effort. no harm done. report will be released.

Pull the other one, Chiken P0x!

Just calling home to track your victims like Remind Supply Chain Risks did last year is bad enough.

Enumerating your victims without consent doesn’t constitute research – the best you could call it is probably a misguidedly creepy privacy violation.

But knowingly calling home to steal private data, perhaps including live access tokens, is unauthorised access, which is a surprisingly serious cybercrime in many jurisdictions.

And knowingly installing a backdoor Trojan allowing you to implant and execute code without permission is at least unauthorised modification, which sits alongside the crime of unauthorised access in many legal systems, and typically tacks on a few extra years to the maximum prison sentence that could be imposed if you get busted.

What to do?

This sort of thing isn’t “research” by any stretch of the imagination, and it’s hard to imagine any genuine cybersecurity researcher, any cybercrime investigator, any jury, or any criminal court magistrate buying that suggestion.

So, if you’ve ever been tempted to do anything like this under the misapprehension that you are helping the community…

…please DON’T.

In particular:

  • Don’t pollute the open-source software ecosystem with your own self-serving cybersewage, just to “prove” a point. Even if all you do is include code that prints some sort of smug warning or anonymously keeps track of the people you caught out, you’re still making wasteful work for those in the community who have to tidy up after you.
  • Don’t casually distribute malware and then try to justify it as cybersecurity “research”. If you openly leech other people’s trustworthy code and reupload it as if it were a legitimate project after deliberately infecting it with data stealing malware and remote code execution backdoors, don’t expect anyone to buy your excuses.
  • Don’t expect sympathy if you do either of the above. The point you pretend you’re trying to make has been made many times before. The open-source community didn’t thank the perpetrators last time, and it won’t thank you now.

Not that we feel strongly about it.


Post-quantum cryptography – new algorithm “gone in 60 minutes”

We’ve written about PQC, short for post-quantum cryptography, several times before.

In case you’ve missed all the media excitement of the past few years about so-called quantum computing…

…it is (if you will pardon what some experts will probably consider a reckless oversimplification) a way of building computing devices that can keep track of multiple possible outcomes of a calculation at the same time.

With a lot of care, and perhaps a bit of luck, this means that you can rewrite some types of algorithm to home in on the right answer, or at least correctly discard a whole slew of wrong answers, without trying and testing every possible outcome one-by-one.

Two interesting cryptanalytical speedups are possible using a quantum computing device, assuming a suitably powerful and reliable one can actually be constructed:

  • Grover’s quantum search algorithm. Usually, if you want to search a randomly-ordered set of answers to see if yours is on the list, you would expect to plough through the entire list, at worst, before getting a definitive answer. For example, if you wanted to find the 128-bit AES decryption key to unscramble a document, you’d need to search the list of all possible keys, starting at 000..001, ..2, ..3, and so on, all the way up to FFF..FFF (16 bytes’ worth of FF), to be certain of completing the problem. In other words, you’d have to budget to try all 2^128 possible keys before either finding the right key, or determining that there wasn’t one. Grover’s algorithm, however, given a big and powerful enough quantum computer, claims to be able to complete the same feat with the square root of the usual effort, thus cracking the code, in theory, in just 2^64 tries instead.
  • Shor’s quantum factorisation algorithm. Several contemporary encryption algorithms rely on the fact that multiplying two large prime numbers together can be done quickly, whereas dividing their product back into the two numbers that you started with is as good as impossible. To get a feel for this, try multiplying 59×87 using pen-and-paper. It might take a minute or so to get it out (5133 is the answer), but it’s not that hard. Now try the other way. Divide, say, 4171 back into its two factors. Much harder! (It’s 43×97.) Now imagine doing this with a number that’s 600 digits long. Loosely speaking, you’re stuck with trying to divide the 600-digit number by every possible 300-digit prime number until you hit the jackpot, or find there isn’t an answer. (See the sketch after this list.) Shor’s algorithm, however, promises to solve this problem with the logarithm of the usual effort. Thus factoring a number of 2048 binary digits should take just twice as long as factoring a 1024-bit number, not twice as long as factoring a 2047-bit number, representing a huge speedup.
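
To get a feel for why the classical approach scales so badly, here’s a minimal trial-division sketch in Python (purely illustrative; real factoring attacks use far cleverer algorithms, and real-world moduli are hundreds of digits long):

 def trial_factor(n):
     # Classical approach: try every candidate divisor up to sqrt(n).
     # Fine for a small number like 4171; hopeless for a 600-digit
     # product of two 300-digit primes.
     d = 2
     while d * d <= n:
         if n % d == 0:
             return d, n // d
         d += 1
     return None

 print(59 * 87)             # multiplying is easy: 5133
 print(trial_factor(4171))  # dividing back is harder: (43, 97)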

Countering the threat

The threat from Grover’s algorithm can be countered simply by boosting the size of the numbers you’re using by squaring them, which means doubling the number of bits in your cryptographic hash or your symmetric encryption key. (In other words, if you think SHA-256 is fine right now, using SHA-512 instead would provide a PQC-resistant alternative.)

But Shor’s algorithm can’t be countered quite so easily.

A public key of 2048 bits would need its size increased exponentially, not simply by squaring, so that instead of a key of 2×2048=4096 bits, either you’d need a new key with the impossible size of 2^2048 bits…

…or you’d have to adopt a completely new sort of post-quantum encryption system to which Shor’s algorithm didn’t apply.

Well, US standards body NIST has been running a PQC “competition” since late 2017.

The process has been open to everyone, with all participants welcome, all algorithms openly published, and public scrutiny not merely possible but actively encouraged:

Call for Proposals. [Closed 2017-11-30]. […] It is intended that the new public-key cryptography standards will specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are available worldwide, and are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.

After three rounds of submissions and discussions, NIST announced, on 2022-07-05, that it had chosen four algorithms that it considered “standards” with immediate effect, all with delightful-sounding names: CRYSTALS-KYBER, CRYSTALS-Dilithium, FALCON, and SPHINCS+.

The first one (CRYSTALS-KYBER) is used as what’s called a Key Encapsulation Mechanism (KEM), where two ends of a public communication channel securely concoct a one-time private encryption key for exchanging a session’s worth of data confidentially. (Simply put: snoopers just get shredded cabbage, so they can’t eavesdrop on the conversation.)

The other three algorithms are used for Digital Signatures, whereby you can ensure that the data you got out at your end matches exactly what the sender put in at the other, thus preventing tampering and assuring integrity. (Simply put: if anyone tries to corrupt or mess with the data, you’ll know.)

More algorithms needed

At the same time as announcing the new standards, NIST also announced a fourth round of its competition, putting a further four algorithms forward as possible alternative KEMs. (Remember that, at the time of writing, we already have three approved digital signature algorithms to choose from, but only one official KEM.)

These were: BIKE, Classic McEliece, HQC and SIKE.

Intriguingly, the McEliece algorithm was invented way back in the 1970s by American cryptographer Robert McEliece, who died in 2019, well after NIST’s contest was already underway.

It never caught on, however, because it required huge amounts of key material compared to the popular alternative of the day, the Diffie-Hellman-Merkle algorithm (DHM, or sometimes just DH).

Unfortunately, one of the four Round Four algorithms, namely SIKE, appears to have been cracked.

In a brain-twisting paper entitled AN EFFICIENT KEY RECOVERY ATTACK ON SIDH (PRELIMINARY VERSION), Belgian cryptographers Wouter Castryck and Thomas Decru seem to have dealt something of a deadly blow to the SIKE algorithm.

In case you’re wondering, SIKE is short for Supersingular Isogeny Key Encapsulation, and SIDH stands for Supersingular Isogeny Diffie-Hellman, a specific use of the SIKE algorithm whereby two ends of a communication channel perform a DHM-like “cryptodance” to exchange a bunch of public data that allows each end to derive a private value to use as a one-time secret encryption key.

We’re not going to try to explain the attack here; we’ll simply summarise what the paper claims.

Very loosely put, the inputs here include the public data provided by one of the participants in the key establishment cryptodance, along with the pre-determined (and therefore publicly-known) parameters used in the process.

But the output that’s extracted (the information the paper refers to as the isogeny φ) is supposed to be the never-revealed part of the process – the so-called private key.

In other words, from public information alone, such as the data exchanged openly during key setup, the cryptographers claim to be able to recover the private key of one of the participants.

And once you know my private key, you can easily and undetectably pretend to be me, so the encryption process is broken.

Apparently, the key-cracking algorithm takes about an hour to do its work, using just a single CPU core with the kind of processing power you’d find in an everyday laptop.

That’s against the SIKE algorithm when configured to meet Level 1, NIST’s basic grade of encryption security.

What to do?

Nothing!

(That’s the good news.)

As the authors of the paper suggest, after noting that their result is still preliminary, “with the current state of affairs, SIDH appears to be fully broken for any publicly generated base curve.”

(That’s the bad news.)

However, given that the SIKE algorithm isn’t officially approved yet, it can now either be adapted to thwart this particular attack (something that the authors admit may be possible), or simply dropped altogether.

Whatever finally happens to SIKE, this is an excellent reminder of why trying to invent your own encryption algorithms is fraught with danger.

It’s also a pointed example of why proprietary encryption systems that rely on the secrecy of the algorithm itself to maintain their security are simply unacceptable in 2022.

If a PQC algorithm such as SIKE could be broken after surviving perusal and probing by experts from around the globe for more than five years, despite being disclosed specifically so that it could be subjected to public scrutiny…

…then there’s no need to ask yourself how well your home-made, hidden-from-view encryption algorithms are likely to fare when released into the wild!

