US offers reward “up to $10 million” for information about the Conti gang

You’ve almost certainly seen and heard the word Conti in the context of cybercrime.

Conti is the name of a well-known ransomware gang – more precisely, what’s known as a ransomware-as-a-service (RaaS) gang, where the ransomware code, and the blackmail demands, and the receipt of extortion payments from desperate victims are handled by a core group…

…while the attacks themselves are orchestrated by a loosely-knit “team” of affiliates who are typically recruited not for their malware coding abilities, but for their phishing, social engineering and network intrusion skills.

Indeed, we know exactly the sort of “skills”, if that’s an acceptable word to use here, that RaaS operators look for in their affiliates.

About two years ago, the REvil ransomware gang put up a cool $1,000,000 as front money in an underground hacker-recruiting forum, trying to entice new affiliates to join their cybercriminal capers.

Affiliates typically seem to earn about 70% of any blackmail money that’s ultimately extorted by the gang from any victims they attack, which is a significant incentive not only to go in hard, but to go in broad and deep as well, attacking and infecting entire networks in one go.

The attackers often also choose a deliberately difficult time for the company they’re attacking, such as in the early hours of a weekend morning.

The more completely a victim’s network gets derailed and disrupted, the more likely it is that they’ll end up stuck with paying to unlock their precious data and get the business operating again.

As REvil made clear when they spent that $1 million “marketing budget” online, the core RaaS crew was looking for:

 Teams that already have experience and skills in penetration testing, working with msf / cs / koadic, nas / tape, hyper-v and analogues of the listed software and devices.

As you can imagine, the REvil gang had a special interest in technologies such as NAS (network-attached storage), backup tape and Hyper-V (Microsoft’s virtualisation platform) because disrupting any existing backups during an attack, and “unlocking” virtual servers so they can be encrypted along with everything else, makes it harder than ever for victims to recover on their own.

If you suffer a file-scrambling attack only to discover that the criminals trashed or encrypted all your backups first, then your primary route to self-recovery might well already be destroyed.

Strained affiliations

Of course, the symbiotic relationships between the core members of a RaaS gang and the affiliates they rely upon can easily become strained.

The Conti crew, notably, suffered ructions within the ranks just over a year ago, with something of a mutiny amongst the affiliates:

Yes, of course they recruit suckers and divide the money among themselves, and the boys are fed with what they will let them know when the victim pays.

As we pointed out at the time, the implication was that at least some affiliates in the Conti ransomware scene were not being paid 70% of the actual ransom amount collected, but 70% of an imaginary but lower number reported to them by the core Conti gang members.

One of the disgruntled affiliates leaked a substantial Conti-crew-related archive file entitled Мануали для работяг и софт.rar (Operating manuals and software).

Turn on your chums

Well, the United States has just upped the ante once more, officially and publicly offering a reward of “up to $10 million” under the single-word headline Conti:

First detected in 2019, Conti ransomware has been used to conduct more than 1,000 ransomware operations targeting U.S. and international critical infrastructure, such as law enforcement agencies, emergency medical services, 9-1-1 dispatch centers, and municipalities. These healthcare and first responder networks are among the more than 400 organizations worldwide victimized by Conti, over 290 of which are located in the United States.

Conti operators typically steal victims’ files and encrypt the servers and workstations in an effort to force a ransom payment from the victim. The ransom letter instructs victims to contact the actors through an online portal to complete the transaction. If the ransom is not paid, the stolen data is sold or published to a public site controlled by the Conti actors. Ransom amounts vary widely, with some ransom demands being as high as $25 million.

The payment is available under a global US anti-crime and anti-terrorism initiative known as Rewards for Justice (RfJ), administered by the US Diplomatic Service on behalf of the US Department of State (the government body that many English-speaking countries refer to as “Foreign Affairs” or “the Foreign Ministry”).

The RfJ program dates back nearly 40 years, during which time it claims to have paid out about $250 million to more than 125 different people worldwide, which works out to a mean payout of about $2,000,000 to roughly three recipients a year.

Although this suggests that any individual whistleblower in the Conti saga is unlikely to net the whole $10 million on their own, there’s still plenty of reward money there for the taking.

In fact, RfJ has promoted its $10 million anti-cybercrime reward before, under a general description:

[The RfJ program] is offering a reward of up to $10 million for information leading to the identification or location of any person who, while acting at the direction or under the control of a foreign government, participates in malicious cyber activities against U.S. critical infrastructure in violation of the Computer Fraud and Abuse Act (CFAA).

This time, though, the US Department of State has expressed an explicit interest in five individuals, though they’re only known by their underground names at the moment: Dandis, Professor, Reshaev, Target, and Tramp.

Their mugshots are similarly uncertain, with the RfJ page showing the following image:

Only one snapshot shows an alleged perpetrator, though it’s not clear whether the allegation is that he might be one of the five threat actors listed above, or simply a player in the broader gang with an unknown nickname and role:

There’s a curious hat (a party piece, perhaps?) featuring a red star; a shirt with a largely-obscured logo (can you extrapolate the word?); a beer mug in the background; an empty-looking drink in a clear glass bottle (beer, by its size and shape?); an unseen instrumentalist (playing a balalaika, by its tuning pegs?) in the foreground; and a patterned curtain tied back in front of a venetian-style blind at the rear.

Any commenters care to guess what’s going on in that picture?


Zoom for Mac patches get-root bug – update now!

At the well-known DEF CON security shindig in Las Vegas, Nevada, last week, Mac cybersecurity researcher Patrick Wardle revealed a “get-root” elevation of privilege (EoP) bug in Zoom for Mac:

In the tweet, which followed his talk [2022-08-12], Wardle noted:


Zoom immediately worked on a patch for the flaw, which was announced the next day in Zoom security bulletin ZSB-22018, earning a congratulatory reply from Wardle in the process:


Zero-day disclosure

Given the apparent speed and ease with which Zoom was able to emit a patch for the bug, dubbed CVE-2022-28756, you’re probably wondering why Wardle didn’t tell Zoom about the bug in advance, setting the day of his speech as the deadline for revealing the details.

That would have given Zoom time to push out the update to its many Mac users (or at least to make it available to those who believe in patch early/patch often), thus eliminating the gap between Wardle explaining to the world how to abuse the bug, and the patching of the bug.

In fact, it seems that Wardle did do his best to warn Zoom about this bug, plus a bunch of interconnected flaws in Zoom’s autoupdate process, some months ago.

Wardle explains the bug disclosure timeline in the slides from his DEF CON talk, and lists a stream of Zoom updates related to the flaws he discovered.

A double-edged sword

The bugs that Wardle discussed related generally to Zoom’s auto-update mechanism, a part of any software ecosystem that is a bit of a double-edged sword – a more powerful weapon than a regular sword, but correspondingly harder to handle safely.

Auto-updating is a must-have component in any modern client application, given that it makes critical patches easier and quicker to distribute, thus helping users to close off cybersecurity holes reliably.

But auto-updating brings a sea of risks with it, not least because the update tool itself typically needs root-level system access.

That’s because the updater’s job is to overwrite the application software (something that a regular user isn’t supposed to do), and perhaps to launch privileged operating system commands to make configuration or other system-level changes.

In other words, if developers aren’t careful, the very tool that helps them keep their underlying app up-to-date and more secure could become a beachhead from which attackers could subvert security by tricking the updater into running unauthorised commands with system privileges.

Notably, auto-update programs need to take care to verify the authenticity of the update packages they download, to stop attackers simply feeding them a fake update bundle, complete with added malware.

They also need to maintain the integrity of the update files that they ultimately consume, so that a local attacker can’t sneakily modify the “verified safe” update bundle that’s just been downloaded in the brief period between it being fetched and activated.
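As a minimal sketch of the first requirement (our own illustrative code, not Zoom’s actual mechanism): an updater can compare the digest of a freshly downloaded bundle against a value obtained from a trusted, authenticated source, such as a signed manifest, before touching the file at all:

```python
import hashlib
import hmac

# Hypothetical "trusted" digest, assumed to come from a signed manifest
# fetched over an authenticated channel (names and data are made up).
TRUSTED_SHA256 = hashlib.sha256(b"update-bundle-contents").hexdigest()

def verify_download(data: bytes, expected_hex: str) -> bool:
    """Return True only if the download matches the trusted digest."""
    actual = hashlib.sha256(data).hexdigest()
    # hmac.compare_digest does a constant-time comparison.
    return hmac.compare_digest(actual, expected_hex)

print(verify_download(b"update-bundle-contents", TRUSTED_SHA256))  # True
print(verify_download(b"tampered bundle", TRUSTED_SHA256))         # False
```

A real updater would verify a full digital signature chain rather than a bare hash, but the principle is the same: authenticate first, use later.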

Sidestepping the authenticity check

As Wardle explains in his paper, one of the bugs he discovered and disclosed was a flaw in the first step listed above, when Zoom’s auto-updater tried to verify the authenticity of the update package it had just downloaded.

Instead of using the official macOS APIs to validate the digital signature of the download directly, Zoom developers decided to do the authentication indirectly, by running the macOS utility pkgutil --check-signature in the background and examining the output.

Here’s an example of pkgutil output, using an old version of the Zoom.pkg software bundle:

$ pkgutil --check-signature Zoom.pkg
Package "Zoom.pkg":
   Status: signed by a developer certificate issued by Apple for distribution
   Signed with a trusted timestamp on: 2022-06-27 01:26:22 +0000
   Certificate Chain:
    1. Developer ID Installer: Zoom Video Communications, Inc. (BJ4HAAB9B3)
       Expires: 2027-02-01 22:12:15 +0000
       SHA256 Fingerprint: 6D 70 1A 84 F0 5A D4 C1 C1 B3 AE 01 C2 EF 1F 2E AE FB 9F 5C A6 80 48 A4 76 60 FF B5 F0 57 BB 8C
       ------------------------------------------------------------------------
    2. Developer ID Certification Authority
       Expires: 2027-02-01 22:12:15 +0000
       SHA256 Fingerprint: 7A FC 9D 01 A6 2F 03 A2 DE 96 37 93 6D 4A FE 68 09 0D 2D E1 8D 03 F2 9C 88 CF B0 B1 BA 63 58 7F
       ------------------------------------------------------------------------
    3. Apple Root CA
       Expires: 2035-02-09 21:40:36 +0000
       SHA256 Fingerprint: B0 B1 73 0E CB C7 FF 45 05 14 2C 49 F1 29 5E 6E DA 6B CA ED 7E 2C 68 C5 BE 91 B5 A1 10 01 F0 24

Unfortunately, as Wardle discovered when he decompiled Zoom’s signature verification code, the Zoom updater didn’t process the pkgutil data in the same way that human observers would.

We’d check the output by following its natural visual sequence.

First, we’d look for the desired status, e.g. signed by a developer certificate issued by Apple for distribution.

Then we’d find the sub-heading Certificate Chain:.

Finally, we’d cross-check that the chain consisted of these three signers, in the right order:

 1. Zoom Video Communications, Inc.
 2. Developer ID Certification Authority
 3. Apple Root CA

Amazingly, Zoom’s code simply verified that each of the above three strings (not even checking for Zoom’s own unique ID BJ4HAAB9B3) showed up somewhere in the output from pkgutil.

So, creating a package with an absurd-but-valid name such as Zoom Video Communications, Inc. Developer ID Certification Authority Apple Root CA.pkg would trick the package verifier into finding the “identity strings” it was looking for.

The full package name is echoed into the pkgutil output header on the first line, where Zoom’s hapless “verifier” would match all three text strings in the wrong part of the output.

Thus the “security” check could trivially be bypassed.
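In code terms, the flawed check boils down to something like this minimal reconstruction (our own sketch, not Zoom’s decompiled source), where the crafted filename echoed on pkgutil’s first line satisfies all three substring matches:

```python
# Reconstruction of the flaw: a naive substring match as a "signature check".
REQUIRED = [
    "Zoom Video Communications, Inc.",
    "Developer ID Certification Authority",
    "Apple Root CA",
]

def naive_verify(pkgutil_output: str) -> bool:
    # Flawed: only checks that each string appears SOMEWHERE in the output.
    return all(s in pkgutil_output for s in REQUIRED)

# pkgutil echoes the package's own filename on its first output line,
# so an unsigned package with a crafted name passes the "check":
evil = ('Package "Zoom Video Communications, Inc. '
        'Developer ID Certification Authority Apple Root CA.pkg": '
        'Status: no signature')

print(naive_verify(evil))  # True: the "verifier" is fooled
```

The robust alternative is to parse the certificate chain positionally (or, better, to call the platform’s signature-validation APIs directly), so that attacker-controlled text such as a filename can never masquerade as a signer identity.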

A partial fix

Wardle says that Zoom eventually fixed this bug, more than seven months after he reported it, in time for DEF CON…

…but after applying the patch, he noticed that there was still a gaping hole in the update process.

The updater tried to do the right thing:

 1. Move the downloaded package to a directory owned by root, and thus theoretically off-limits to any regular user.
 2. Verify the cryptographic signature of the downloaded package, using official APIs, not via a text-matching bodge against pkgutil output.
 3. Unarchive the downloaded package file, in order to verify its version number, to prevent downgrade attacks.
 4. Install the downloaded package file, using the root privileges of the auto-update process.

Unfortunately, even though the directory used to store the update package was owned by root, in an attempt to keep it safe from prying users trying to subvert the update file while it was being used…

…the newly downloaded package file was left “world-writable” in its new location (a side-effect of having been downloaded by a regular account, not by root).

This gave local attackers a loophole to modify the update package after its digital signature had been validated (step 2), without affecting the version check details (step 3), but just before the installer took control of the package file in order to process it with root privileges (step 4).

This sort of bug is known as a race condition, because the attackers need to time their run so they get home just before the installer starts, and are therefore able to sneak their malicious changes in just ahead of it.

You’ll also hear this type of vulnerability referred to by the exotic-sounding acronym TOCTOU, short for time-of-check-to-time-of-use, a name that’s a clear reminder that if you check your facts too far in advance, then they might be out of date by the time you rely on them.

The TOCTOU problem is why car rental companies in the UK no longer simply ask to see your driving licence, which could have been issued up to 10 years ago, and could have been suspended or cancelled for a variety of reasons since then, most likely because of unsafe or illegal driving on your part. Along with your physical licence, you also need to present a one-time alphanumeric “proof of recent validity” code, issued within the last 21 days, to reduce the potential TOCTOU gap from 10 years to just three weeks.
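The race in steps 2 to 4 above can be sketched in a few lines (a contrived, self-contained illustration, not Zoom’s code): a file is checked, then silently modified by a “local attacker” during the gap, then used:

```python
import os
import tempfile

# A world-writable update file, as in the Zoom bug (paths are made up).
path = os.path.join(tempfile.mkdtemp(), "update.pkg")
with open(path, "wb") as f:
    f.write(b"verified-contents")

# Step 2 (time of check): the contents validate successfully.
assert open(path, "rb").read() == b"verified-contents"

# ...window of opportunity: an unprivileged attacker with write
# access swaps in malicious contents here...
with open(path, "wb") as f:
    f.write(b"malicious-contents")

# Step 4 (time of use): the installer now consumes attacker-controlled
# data, even though the earlier check passed.
print(open(path, "rb").read())
```

The fix described below is to close the window: once the file lands in a root-owned location with root-only permissions, no unprivileged process can rewrite it between check and use.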

The fix is now in

According to Wardle, Zoom has now prevented this bug by changing the access rights on the update package file that’s copied in step 1 above.

The file that’s used for signature checking, version validation, and the final root-level install is now limited to access by the root account only, at all times.

This removes the race condition, because an unprivileged attacker can’t modify the file between the end of step 2 (verification successful) and the start of step 4 (installation begins).

To modify the package file in order to trick the system into giving you root access, you’d need to have root access already, so you wouldn’t need an EoP bug of this sort in the first place.

The TOCTOU problem doesn’t apply because the check in step 2 remains valid until the use of the file begins, leaving no window of opportunity for the check to become invalid.

What to do?

If you’re using Zoom on a Mac, open the app and then, in the menu bar, go to zoom.us > Check for Updates...

If an update is available, the new version will be shown, and you can click [Install] to apply the patches:

The version you want is 5.11.5 (9788) or later.

S3 Ep95: Slack leak, Github onslaught, and post-quantum crypto [Audio + Text]

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

Schroedinger’s cat outlines in featured image via Dhatfield under CC BY-SA 3.0.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


DOUG.  Slack leaks, naughty GitHub code, and post-quantum cryptography.

All that, and much more, on the Naked Security podcast.


Welcome to the podcast, everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how do you do today?

DUCK.  Super-duper, as usual, Doug!

DOUG.  I am super-duper excited to get to this week’s Tech History segment, because…

…you were there, man!

This week, on August 11…

DUCK.  Oh, no!

I think the penny’s just dropped…

DOUG.  I don’t even have to say the year!

August 11, 2003 – the world took notice of the Blaster worm, affecting Windows 2000 and Windows XP systems.

Blaster, also known as Lovesan and MsBlast, exploited a buffer overflow and is perhaps best known for the message, “Billy Gates, why do you make this possible? Stop making money and fix your software.”

What happened, Paul?

DUCK.  Well, it was the era before, perhaps, we took security quite so seriously.

And, fortunately, that kind of bug would be much, much more difficult to exploit these days: it was a stack-based buffer overflow.

And if I remember correctly, the server versions of Windows were already being built with what’s called stack protection.

In other words, if you overflow the stack inside a function, then, before the function returns and does the damage with the corrupted stack, it will detect that something bad has happened.

So, it has to shut down the offending program, but the malware doesn’t get to run.

But that protection was not in the client versions of Windows at that time.

And as I remember, it was one of those early malwares that had to guess which version of the operating system you had.

Are you on 2000? Are you on NT? Are you on XP?

And if it got it wrong, then an important part of the system would crash, and you’d get the “Your system is about to shut down” warning.

DOUG.  Ha, I remember those!

DUCK.  So, there was that collateral damage that was, for many people, the sign that you were getting hammered by infections…

…which could be from outside, like if you were just a home user and you didn’t have a router or firewall at home.

But if you were inside a company, the most likely attack was going to come from someone else inside the company, spewing packets on your network.

So, very much like the CodeRed attack, which came a couple of years before that and which we spoke about in a recent podcast, it was really the sheer scale, volume and speed of this thing that was the problem.

DOUG.  All right, well, that was about 20 years ago.

And if we turn back the clock to five years ago, that’s when Slack started leaking hashed passwords. [LAUGHTER]

DUCK.  Yes, Slack, the popular collaboration tool…

…it has a feature where you can send an invitation link to other people to join your workspace.

And, you imagine: you click a button that says “Generate a link”, and it’ll create some kind of network packet that probably has some JSON inside it.

If you’ve ever had a Zoom meeting invitation, you’ll know that it has a date, and a time, and the person who is inviting you, and a URL you can use for the meeting, and a passcode, and all that stuff – it has quite a lot of data in there.

Normally, you don’t dig into the raw data to see what’s in there – the client just says, “Hey, here’s a meeting, here are the details. Do you want to Accept / Maybe / Decline?”

It turned out that when you did this with Slack, as you say, for more than five years, packaged up in that invitation was extraneous data not strictly relevant to the invitation itself.

So, not a URL, not a name, not a date, not a time…

…but the *inviting user’s password hash* [LAUGHTER]

DOUG.  Hmmmmm.

DUCK.  I kid you not!

DOUG.  That sounds bad…

DUCK.  Yes, it really does, doesn’t it?

The bad news is, how on earth did that get in there?

And, once it was in there, how on earth did it evade notice for five years and three months?

In fact, if you visit the article on Naked Security and look at the full URL of the article, you’ll notice it says at the end, blahblahblah-for-three-months.

Because, when I first read the report, my mind didn’t want to see it as 2017! [LAUGHTER]

It was 17 April to 17 July, and so there were lots of “17”s in there.

And my mind blanked out the 2017 as the starting year – I misread it as “April to July *of this year*” [2022].

I thought, “Wow, *three months* and they didn’t notice.”

And then the first comment on the article was, “Ahem [COUGH]. It was actually 17 April *2017*.”


But somebody figured it out on 17 July [2022], and Slack, to their credit, fixed it the same day.

Like, “Oh, golly, what were we thinking?!”

So that’s the bad news.

The good news is, at least it was *hashed* passwords.

And they weren’t just hashed, they were *salted*, which is where you mix in uniquely chosen, per-user random data with the password.

The idea of this is twofold.

One, if two people choose the same password, they don’t get the same hash, so you can’t make any inferences by looking through the hash database.

And two, you can’t precompute a dictionary of known hashes for known inputs, because you have to create a separate dictionary for each password *for each salt*.

So it’s not a trivial exercise to crack hashed passwords.
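The two salting benefits above can be seen in a couple of lines (an illustrative sketch only; a bare SHA-256 of salt-plus-password is shown for brevity, not as a recommended password-storage scheme):

```python
import hashlib
import os

def salted_hash(password: bytes, salt: bytes) -> str:
    # Illustration only: real systems should use a stretch function too.
    return hashlib.sha256(salt + password).hexdigest()

# Two users pick the SAME password, but each gets a random per-user salt...
salt_a, salt_b = os.urandom(16), os.urandom(16)
h1 = salted_hash(b"correct horse", salt_a)
h2 = salted_hash(b"correct horse", salt_b)

# ...so their stored hashes differ, and any precomputed dictionary
# of hashes would have to be rebuilt from scratch for every salt.
print(h1 != h2)  # True
```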

Having said that, the whole idea is that they are not supposed to be a matter of public record.

They’re hashed and salted *in case* they leak, not *in order that they can* leak.

So, egg on Slack’s face!

Slack says that about one in 200 users, or 0.5%, were affected.

But if you’re a Slack user, I would assume that if they didn’t realise they were leaking hashed passwords for five years, maybe they didn’t quite enumerate the list of people affected completely either.

So, go and change your password anyway… you might as well.

DOUG.  OK, we also say: if you’re not using a password manager, consider getting one; and turn on 2FA if you can.

DUCK.  I thought you’d like that, Doug.

DOUG.  Yes, I do!

And then, if you are Slack or a company like it, choose a reputable salt-hash-and-stretch algorithm when handling passwords yourself.

DUCK.  Yes.

The big deal in Slack’s response, and the thing that I thought was lacking, is that they just said, “Don’t worry, not only did we hash the passwords, we salted them as well.”

My advice is that if you are caught in a breach like this, then you should be willing to declare the algorithm or process you used for salting and hashing, and also ideally what’s called stretching, which is where you don’t just hash the salted password once, but perhaps you hash it 100,000 times to slow down any kind of dictionary or brute force attack.

And you should state what algorithm you are using and with what parameters… for example, PBKDF2, bcrypt, scrypt, Argon2 – those are the best-known password “salt-hash-stretch” algorithms out there.
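As a hedged illustration of salt-hash-and-stretch in practice (our own parameter choices, not Slack’s): the Python standard library’s pbkdf2_hmac does all three steps in one call, with the iteration count supplying the “hash it 100,000 times” stretching mentioned above:

```python
import hashlib
import os

salt = os.urandom(16)  # unique per user, stored alongside the hash

# PBKDF2-HMAC-SHA256 with 100,000 iterations (illustrative parameters).
stretched = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

# To verify a login attempt, recompute with the stored salt and
# iteration count and compare the results.
again = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(stretched == again)  # True
```

Publishing the algorithm name, salt length and iteration count costs a defender nothing (the security lives in the password and the work factor, not in secrecy), which is exactly why disclosing them after a breach is reasonable.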

If you actually state what algorithm you’re using, then: [A] you’re being more open, and [B] you’re giving potential victims of the problem a chance to assess for themselves how dangerous they think this might have been.

And that sort of openness can actually help a lot.

Slack didn’t do that.

They just said, “Oh, they were salted and hashed.”

But what we don’t know is, did they put in two bytes of salt and then hash them once with SHA-1…

…or did they have something a little more resistant to being cracked?

DOUG.  Sticking to the subject of bad things, we’re noticing a trend developing wherein people are injecting bad stuff into GitHub, just to see what happens, exposing risk…

…we’ve got another one of those stories.

DUCK.  Yes, somebody has now come out on Twitter and allegedly said, “Don’t worry, guys, no harm done. It was just for research. I’m going to write a report. Stand down from blue alert.”

They created literally thousands of bogus GitHub projects, based on copying existing legit code, deliberately inserting some malware commands in there, such as “call home for further instructions”, and “interpret the body of the reply as backdoor code to execute”, and so on.

So, stuff that really could do harm if you installed one of these packages.

Giving them legit-looking names…

…borrowing, apparently, the commit history of a genuine project so that the thing looked much more legit than you might otherwise expect if it just showed up with, “Hey, download this file. You know you want to!”

Really?! Research?? We didn’t know this already?!!?

Now, you can argue, “Well, Microsoft, who own GitHub, what are they doing making it so easy for people to upload this kind of stuff?”

And there’s some truth to that.

Maybe they could do a better job of keeping malware out in the first place.

But it’s going a little bit over the top to say, “Oh, it’s all Microsoft’s fault.”

It’s even worse in my opinion, to say, “Yes, this is genuine research; this is really important; we’ve got to remind people that this could happen.”

Well, [A] we already know that, thank you very much, because loads of people have done this before; we got the message loud and clear.

And [B] this *isn’t* research.

This is deliberately trying to trick people into downloading code that gives a potential attacker remote control, in return for the ability to write a report.

That sounds more like a “big fat excuse” to me than a legitimate motivator for research.

And so my recommendation is, if you think this *is* research, and if you’re determined to do something like this all over again, *don’t expect a whole lot of sympathy* if you get caught.

DOUG.  Alright – we will return to this and the reader comments at the end of the show, so stick around.

But first, let us talk about traffic lights, and what they have to do with cybersecurity.

DUCK.  Ahhh, yes! [LAUGH]

Well, there’s a thing called TLP, the Traffic Light Protocol.

And the TLP is what you might call a “human cybersecurity research protocol” that helps you label documents that you send to other people, to give them a hint of what you hope they will (and, more importantly, what you hope they will *not*) do with the data.

In particular, how widely are they supposed to redistribute it?

Is this something so important that you could declare it to the world?

Or is this potentially dangerous, or does it potentially include some stuff that we don’t want to be public just yet… so keep it to yourself?

And it started off with: TLP:RED, which meant, “Keep it to yourself”; TLP:AMBER, which meant “You can circulate it inside your own company or to customers of yours that you think might urgently need to know this”; TLP:GREEN, which meant, “OK, you can let this circulate widely within the cybersecurity community.”

And TLP:WHITE, which meant, “You can tell anybody.”

Very useful, very simple: RED, AMBER, GREEN… a metaphor that works globally, without worrying about what’s the difference between “secret” and “confidential” and what’s the difference between “confidential” and “classified”, all that complicated stuff that needs a whole lot of laws around it.

Well, the TLP just got some modifications.

So, if you are into cybersecurity research, make sure you are aware of those.

TLP:WHITE has been changed to what I consider a much better term actually, because white has all these unnecessary cultural overtones that we can do without in the modern era.

So, TLP:WHITE has just become TLP:CLEAR, which to my mind is a much better word because it says, “You’re clear to use this data,” and that intention is stated, ahem, very clearly. (Sorry, I couldn’t resist the pun.)

And there’s an additional layer (so it has spoiled the metaphor a bit – it’s now a *five*-colour traffic light!).

There’s a special level called TLP:AMBER+STRICT, and what that means is, “You can share this inside your company.”

So you might be invited to a meeting, maybe you work for a cybersecurity company, and it’s quite clear that you will need to show this to programmers, maybe to your IT team, maybe to your quality assurance people, so you can do research into the problem or deal with fixing it.

But TLP:AMBER+STRICT means that although you can circulate it inside your organisation, *please don’t tell your clients or your customers*, or even people outside the company that you think might have a need to know.

Keep it within the tighter community to start with.

TLP:AMBER, like before, means, “OK, if you feel you need to tell your customers, you can.”

And that can be important, because sometimes you might want to inform your customers, “Hey, we’ve got the fix coming. You’ll need to take some precautionary steps before the fix arrives. But because it’s kind of sensitive, may we ask that you don’t tell the world just yet?”

Sometimes, telling the world too early actually plays into the hands of the crooks more than it plays into the hands of the defenders.
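The label meanings above can be condensed into a simple lookup (our own rough encoding of the intent, not an official TLP schema):

```python
# Redistribution scope implied by each TLP 2.0 label, per the
# descriptions above (informal summary, not normative text).
TLP_SCOPE = {
    "TLP:RED":          "named recipients only",
    "TLP:AMBER+STRICT": "recipients' own organisation only",
    "TLP:AMBER":        "organisation plus clients who need to know",
    "TLP:GREEN":        "the wider cybersecurity community",
    "TLP:CLEAR":        "anyone, no restriction",
}

def may_tell_clients(label: str) -> bool:
    """Can a recipient pass the material on to their customers?"""
    return label in ("TLP:AMBER", "TLP:GREEN", "TLP:CLEAR")

print(may_tell_clients("TLP:AMBER"))         # True
print(may_tell_clients("TLP:AMBER+STRICT"))  # False
```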

So, if you’re a cybersecurity responder, I suggest you go and read up on the new TLP definitions.

DOUG.  And you can read more about that on our site.

And if you are looking for some other light reading, forget quantum cryptography… we’re moving on to post-quantum cryptography, Paul!

DUCK.  Yes, we’ve spoken about this a few times before on the podcast, haven’t we?

The idea of a quantum computer, assuming a powerful and reliable enough one could be built, is that certain types of algorithms could be sped up over the state of the art today, either to the tune of the square root… or even worse, the *logarithm* of the scale of the problem today.

In other words, instead of taking 2²⁵⁶ tries to find a file with a particular hash, you might be able to do it in just (“just”!) 2¹²⁸ tries, which is the square root.

Clearly a lot faster.

But there’s a whole class of problems involving factorising products of prime numbers that the theory says could be cracked in the *logarithm* of the time that they take today, loosely speaking.

So, instead of taking, say, 2¹²⁸ days to crack [FAR LONGER THAN THE CURRENT AGE OF THE UNIVERSE], it might take just 128 days to crack.

Or you can replace “days” with “minutes”, or whatever.

And unfortunately, that logarithmic time algorithm (called Shor’s Quantum Factorisation Algorithm)… that could be, in theory, applied to some of today’s cryptographic techniques, notably those used for public key cryptography.
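The difference between the two speedups is easy to feel with a back-of-envelope calculation (big integers only, nothing quantum about the code itself):

```python
import math

# Brute-force search space for a 256-bit value.
classical = 2**256

# Square-root speedup (the Grover-style case): 2**256 -> 2**128.
grover = math.isqrt(classical)

# Logarithmic scaling (the Shor-style case): the exponent itself,
# log2(2**256) = 256, loosely matching the "days -> 128 days" example.
shor_ish = int(math.log2(classical))

print(grover == 2**128)  # True
print(shor_ish)          # 256
```

Still astronomically large in the square-root case, but laughably small in the logarithmic case, which is why it’s Shor’s algorithm that worries public-key cryptographers.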

And, just in case these quantum computing devices do become feasible in the next few years, maybe we should start preparing now for encryption algorithms that are not vulnerable to these two particular classes of attack?

Particularly the logarithm one, because it speeds up potential attacks so greatly that cryptographic keys that we currently think, “Well, no one will ever figure that out,” might become revealable at some later stage.

Anyway, NIST, the National Institute of Standards and Technology in the USA, has for several years been running a competition to try and standardise some public, unpatented, well-scrutinised algorithms that will be resistant to these magical quantum computers, if ever they show up.

And recently they chose four algorithms that they’re prepared to standardise upon now.

They have cool names, Doug, so I have to read them out: CRYSTALS-KYBER, CRYSTALS-DILITHIUM, FALCON, and SPHINCS+. [LAUGHTER]

So they have cool names, if nothing else.

But, at the same time, NIST figured, “Well, that’s only four algorithms. What we’ll do is we’ll pick four more as potential secondary candidates, and we’ll see if any of those should go through as well.”

So there are four standardised algorithms now, and four algorithms which might get standardised in the future.

Or there *were* four on 5 July 2022, and one of them was SIKE, short for supersingular isogeny key encapsulation.

(We’ll need several podcasts to explain supersingular isogenies, so we won’t bother. [LAUGHTER])

But, unfortunately, this one, which was hanging in there with a fighting chance of being standardised, it looks as though it has been irremediably broken, despite at least five years of having been open to public scrutiny already.

So, fortunately, just before it could get standardised, two Belgian cryptographers figured out, “You know what? We think we’ve got a way around this using calculations that take about an hour, on a fairly average CPU, using just one core.”

DOUG.  I guess it’s better to find that out now than after standardising it and getting it out in the wild?

DUCK.  Indeed!

I guess if it had been one of the algorithms that already got standardised, they’d have to repeal the standard and come up with a new one?

It seems weird that this didn’t get noticed for five years.

But I guess that’s the whole idea of public scrutiny: you never know when somebody might just hit on the crack that’s needed, or the little wedge that they can use to break in and prove that the algorithm is not as strong as was originally thought.

A good reminder that if you *ever* thought of knitting your own cryptography…

DOUG.  [LAUGHS] I haven’t!

DUCK.  …despite us having told you on the Naked Security podcast N times, “Don’t do that!”

This should be the ultimate reminder that, even when true experts put out an algorithm that is subject to public scrutiny in a global competition for five years, this still doesn’t necessarily provide enough time to expose flaws that turn out to be quite bad.

So, it’s certainly not looking good for this SIKE algorithm.

And who knows, maybe it will be withdrawn?

DOUG.  We will keep an eye on that.

And as the sun slowly sets on our show for this week, it’s time to hear from one of our readers on the GitHub story we discussed earlier.

Rob writes:

“There’s some chalk and cheese in the comments, and I hate to say it, but I genuinely can see both sides of the argument. Is it dangerous, troublesome, time wasting and resource consuming? Yes, of course it is. Is it what criminally minded types would do? Yes, yes, it is. Is it a reminder to anyone using GitHub, or any other code repository system for that matter, that safely travelling the internet requires a healthy degree of cynicism and paranoia? Yes. As a sysadmin, part of me wants to applaud the exposure of the risk at hand. As a sysadmin to a bunch of developers, I now need to make sure everyone has recently scoured any pulls for questionable entries.”

DUCK.  Yes, thank you, RobB, for that comment, because I guess it’s important to see both sides of the argument.

There were commenters who were just saying, “What the heck is the problem with this? This is great!”

One person said, “No, actually, this pen testing is good and useful. Be glad these are being exposed now instead of rearing their ugly head from an actual attacker.”

And my response to that is that, “Well, this *is* an attack, actually.”

It’s just that somebody has now come out afterwards, saying “Oh, no, no. No harm done! Honestly, I wasn’t being naughty.”

I don’t think you are obliged to buy that excuse!

But anyway, this is not penetration testing.

My response was to say, very simply: “Responsible penetration testers only ever act [A] after receiving explicit permission, and [B] within behavioural limits agreed explicitly in advance.”

You don’t just make up your own rules, and we have discussed this before.

So, as another commenter said, which is, I think, my favourite comment… Ecurb said, “I think somebody should walk house to house and smash windows to show how ineffective door locks really are. This is past due. Someone jump on this, please.”

And then, just in case you didn’t realise that was satire, folks, he says, “Not!”

I get the idea that it’s a good reminder, and I get the idea that if you’re a GitHub user, both as a producer and a consumer, there are things you can do.

We list them in the comments and in the article.

For example, put a digital signature on all your commits so it’s obvious that the changes came from you, and there’s some kind of traceability.

And don’t just blindly consume stuff because you did a search and it “looked like” it might be the right project.

Yes, we can all learn from this, but does this actually count as teaching us, or is that just something we should learn anyway?

I think this is *not* teaching.

It’s just *not of a high enough standard* to count as research.

DOUG.  Great discussion around this article, and thanks for sending that in, Rob.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email; you can comment on any one of our articles; or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth reminding you, until next time, to…

BOTH.  Stay secure!


APIC/EPIC! Intel chips leak secrets even the kernel shouldn’t see…

Here’s this week’s BWAIN, our jocular term for a Bug With An Impressive Name.

BWAIN is an accolade that we hand out when a new cybersecurity flaw not only turns out to be interesting and important, but also turns up with its own logo, domain name and website.

This one is dubbed ÆPIC Leak, a pun on the words APIC and EPIC.

The former is short for Advanced Programmable Interrupt Controller, and the latter is simply the word “epic”, as in giant, massive, extreme, mega, humongous.

The letter Æ hasn’t been used in written English since Saxon times. Its name is æsc, pronounced ash (as in the tree), and it pretty much represents the sound of the A in the modern word ASH. But we assume you’re supposed to pronounce the word ÆPIC here either as “APIC-slash-EPIC”, or as “ah!-eh?-PIC”.

What’s it all about?

All of this raises five fascinating questions:

  • What is an APIC, and why do I need it?
  • How can you have data that even the kernel can’t peek at?
  • What causes this epic failure in APIC?
  • Does the ÆPIC Leak affect me?
  • What to do about it?

What’s an APIC?

Let’s rewind to 1981, when the IBM PC first appeared.

The PC included a chip called the Intel 8259A Programmable Interrupt Controller, or PIC. (Later models, from the PC AT onwards, had two PICs, chained together, to support more interrupt events.)

The purpose of the PIC was quite literally to interrupt the program running on the PC’s central processor (CPU) whenever something time-critical took place that needed attention right away.

These hardware interrupts included events such as: the keyboard getting a keystroke; the serial port receiving a character; and a repeating hardware timer ticking over.

Without a hardware interrupt system of this sort, the operating system would need to be littered with function calls to check for incoming keystrokes on a regular basis, which would waste CPU power when no one was typing, but wouldn’t be responsive enough when someone did.

As you can imagine, the PIC was soon followed by an upgraded chip called the APIC, an advanced sort of PIC built into the CPU itself.

These days, APICs provide much more than just feedback from the keyboard, serial port and system timer.

APIC events are triggered by (and provide real-time data about) events such as overheating, and allow hardware interaction between the different cores in contemporary multicore processors.

And today’s Intel chips, if we may simplify greatly, can generally be configured to work in two different ways, known as xAPIC mode and x2APIC mode.

Here, xAPIC is the “legacy” way of extracting data from the interrupt controller, and x2APIC is the more modern way.

Simplifying yet further, xAPIC relies on what’s called MMIO, short for memory-mapped input/output, for reading data out of the APIC when it registers an event of interest.

In MMIO mode, you can find out what triggered an APIC event by reading from a specific region of memory (RAM), which mirrors the input/output registers of the APIC chip itself.

This xAPIC data is mapped into a 4096-byte memory block somewhere in the physical RAM of the computer.

This simplifies accessing the data, but it requires an annoying, complex (and, as we shall see, potentially dangerous) interaction between the APIC chip and system memory.
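If you’ve never met memory-mapped I/O, a loose userland analogy is Python’s mmap module: a region that looks like ordinary memory but is really backed by something else. Here a temporary file stands in for the device’s register block, and the offsets are invented purely for the demo:

```python
import mmap, os, tempfile

PAGE_SIZE = 4096   # same size as the xAPIC MMIO page

# A temporary file stands in for the device's register block
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * PAGE_SIZE)

with mmap.mmap(fd, PAGE_SIZE) as page:
    # The "device" updates a register at a fixed offset...
    page[0x20:0x24] = (0x1234ABCD).to_bytes(4, "little")
    # ...and the "driver" reads it back with an ordinary memory access
    reg = int.from_bytes(page[0x20:0x24], "little")

os.close(fd)
os.remove(path)
```

The convenience is exactly this: reading a “register” is just reading memory, with no special instructions needed.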

In contrast, x2APIC requires you to read out the APIC data directly from the chip itself, using what are known as Model Specific Registers (MSRs).

According to Intel, avoiding the MMIO part of the process “provides significantly increased processor addressability and some enhancements on interrupt delivery.”

Notably, extracting the APIC data directly from on-chip registers means that the total amount of data supported, and the maximum number of CPU cores that can be managed at the same time, is not limited to the 4096 bytes available in MMIO mode.

How can you have data that even the kernel can’t peek at?

You’ve probably guessed already that the data that ends up in the MMIO memory area when you’re using xAPIC mode isn’t always as carefully managed as it should be…

…and thus that some kind of “data leak” into that MMIO area is the heart of this problem.

But given that you already need sysadmin-level powers to read the MMIO data in the first place, and therefore you could almost certainly get at any and all data in memory anyway…

…why would having other people’s data show up by mistake in the APIC MMIO data area represent an epic leak?

It might make some types of data-stealing or RAM-scraping attack slightly easier in practice, but surely it wouldn’t give you any more memory-snooping ability than you already had in theory?

Unfortunately, that assumption isn’t true if any software on the system is using Intel’s SGX, short for Software Guard Extensions.


SGX is supported by many recent Intel CPUs, and it provides a way for the operating system kernel to “seal” a chunk of code and data into a physical block of RAM so as to form what’s known as an enclave.

This makes it behave, temporarily at least, much like the special security chips in mobile phones that are used to store secrets such as decryption keys.

Once the enclave’s SGX “lock” is set, only program code running inside the sealed-off memory area can read and write the contents of that RAM.

As a result, the internal details of any calculations that happen after the enclave is activated are invisible to any other code, thread, process or user on the system.

Including the kernel itself.

There’s a way to call the code that’s been sealed into the enclave, and a way for it to return the output of the calculations it might perform, but there’s no way to recover, or to spy on, or to debug, the code and its associated data while it runs.

The enclave effectively turns into a black box to which you can feed inputs, such as data to be signed with a private key, and extract outputs, such as the digital signature generated, but from which you can’t winkle out the cryptographic keys used in the signing process.

As you can imagine, if data that’s supposed to be sealed up inside an SGX enclave should ever accidentally get duplicated into the MMIO RAM that’s used to “mirror” the APIC data when you’re using xAPIC “memory-mapped” mode…

…that would violate the security of SGX, which says that no data should ever emerge from an SGX enclave after it’s been created, unless it’s deliberately exported by code already running inside the enclave itself.

What causes this epic failure in APIC?

The researchers behind the ÆPIC Leak paper discovered that by arranging to read out APIC data via a cunning and unusual sequence of memory accesses…

…they could trick the processor into filling up the APIC MMIO space not only with data freshly received from the APIC itself, but also with data that just happened to have been used by the CPU recently for some other purpose.

This behaviour is a side-effect of the fact that although the APIC MMIO memory page is 4096 bytes in size, the APIC chip in xAPIC mode doesn’t actually produce 4096 bytes’ worth of data, and the CPU doesn’t always correctly neutralise the unused parts of the MMIO region by filling it with zeros first.

Instead, old data left over in the CPU cache was written out along with the new data received from the APIC chip itself.

As the researchers put it, the bug boils down to what’s known as an uninitialised memory read, where you accidentally re-use someone else’s leftover data in RAM because neither they nor you flushed it of its previous secrets first.
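Here’s a toy simulation of an uninitialised-memory read of that sort in Python (the names and the “secret” are invented, and real exploitation involves CPU cache behaviour, not Python buffers):

```python
PAGE_SIZE = 4096
stale_secret = b"SGX-ENCLAVE-KEY-MATERIAL"   # pretend leftovers from earlier work

# The "MMIO page" still holds stale data from its previous use
page = bytearray((stale_secret * (PAGE_SIZE // len(stale_secret) + 1))[:PAGE_SIZE])

def deliver_apic_event(page, event_bytes):
    # BUG: only the first len(event_bytes) bytes are overwritten;
    # the rest of the page is never zeroed first
    page[:len(event_bytes)] = event_bytes

deliver_apic_event(page, b"IRQ#0x21")

# Anyone scanning the whole page sees the new event *and* the leftovers
leaked = bytes(page[len(b"IRQ#0x21"):])
```

The fix, in this sketch as in real life, is to scrub the whole buffer before handing it over, not just the bytes you intend to fill.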

Does the ÆPIC Leak affect me?

For a full list of chips affected, see Intel’s own advisory.

As far as we can tell, if you have a 10th or 11th generation Intel processor, you’re probably affected.

But if you have a brand-new 12th generation CPU (the very latest at the time of writing), then it seems that only server-class chips are affected.

Ironically, in 12th-generation laptop chips, Intel has given up on SGX, so this bug doesn’t apply because it’s impossible to have any “sealed” SGX enclaves that could leak.

Of course, even on a potentially vulnerable chip, if you’re not relying on any software that uses SGX, then the bug doesn’t apply either.

And the bug, dubbed CVE-2022-21233, can only be exploited by an attacker who already has local, admin-level (root) access to your computer.

Regular users can’t access the APIC MMIO data block, and therefore have no way of peeking at anything at all in there, let alone secret data that might have leaked out from an SGX enclave.

Also, guest virtual machines (VMs) running under the control of a host operating system in a hypervisor such as Hyper-V, VMware or VirtualBox almost certainly can’t use this trick to plunder secrets from other guests or the host itself.

That’s because guest VMs generally don’t get access to the real APIC circuitry in the host processor; instead, each guest gets its own simulated APIC that’s unique to that VM.

What to do?

Don’t panic.

On a laptop or desktop computer, you may not be at risk at all, either because you have an older (or, lucky you, a brand new!) computer, or because you aren’t relying on SGX anyway.

And even if you are at risk, anyone who gets into your laptop as admin/root probably has enough power to cause you a world of trouble already.

If you have vulnerable servers and you’re relying on SGX as part of your operational security, check Intel security advisory INTEL-SA-00657 for protection and mitigation information.

According to the researchers who wrote this up, “Intel [has] released microcode and SGX Software Development Kit updates to fix the issue.”

The Linux kernel team also seems to be working right now on a patch that will allow you to configure your system so that it will always use x2APIC (which, as you will remember from earlier, doesn’t transmit APIC data via shared memory), and will gracefully prevent the system being forced back into xAPIC mode after bootup.

Slack admits to leaking hashed passwords for five years

Popular collaboration tool Slack (not to be confused with the nickname of the world’s longest-running Linux distro, Slackware) has just owned up to a long-running cybersecurity SNAFU.

According to a news bulletin entitled Notice about Slack password resets, the company admitted that it had inadvertently been oversharing personal data “when users created or revoked a shared invitation link for their workspace.”

From 2017-04-17 to 2022-07-17 (we assume both dates are inclusive), Slack said that the data sent to the recipients of such invitations included…

…wait for it…

…the sender’s hashed password.

What went wrong?

Slack’s security advisory doesn’t explain the breach very clearly, saying merely that “[t]his hashed password was not visible to any Slack clients; discovering it required actively monitoring encrypted network traffic coming from Slack’s servers.”

We’re guessing that this translates as follows:

“Most recipients wouldn’t have noticed that the data they received included any hashed password information, because that information, although included in the network packets sent, was never deliberately displayed to them. And because the data was sent over a TLS connection, eavesdroppers wouldn’t have been able to sniff it out along the way, because it wouldn’t get decrypted until it reached the other end of the connection.”

That’s the good news.

But network packets often include data that’s never normally used or seen by recipients.

HTTP headers are a good example of this, given that they’re meant to be instructions to your browser, not data for display in the web page you’re looking at.

And data that’s irrelevant or invisible to users often ends up in logs anyway, especially in firewall logs, where it could be preserved indefinitely.

That’s the bad news.

Salt, hash and stretch…

According to Slack, the leaked data was not merely hashed, but salted too, meaning that each user’s password was first mixed together with random data unique to that user before the hash function was applied.

Hashes are essentially “non-reversible” mathematical functions that are easy to calculate in one direction, but not in the other.

For example, it’s easy to calculate that:

 SHA256("DUCK") = 7FB376..DEAD4B3AF008

But the only way to work “backwards” from 7FB376..DEAD4B3AF008 to DUCK is to work forwards from every possible word in the dictionary and see if any of them come out with the value you’re trying to match:

 SHA256("AARDVARK") = 5A9394..467731D0526A [X]
 SHA256("AARON")    = C4DDDE..12E4CFE7B4FD [X]
 SHA256("ABACUS")   = BEDDD8..1FE4DE25AAD7 [X]
 . . . 3400 skipped
 SHA256("BABBLE")   = 70E837..CEAD4B1FA777 [X]
 SHA256("BADGER")   = 946D0D..7B3073C1C094 [X]
 SHA256("BAGPIPE")  = 359DBE..BE193FCCB111 [X]
 . . . 3200 skipped
 SHA256("CABAL")    = D78CF4..85BE02967565 [X]
 SHA256("CACHE")    = C118F9..22F3269E7B32 [X]
 SHA256("CAGOULE")  = 5EA530..5A26C5B56DCF [X]
 . . . 5400 skipped
 SHA256("DAB")      = BBCC8E..E8B98CAB5128 [X]
 SHA256("DAFFODIL") = 75121D..D6401AB24A98 [X]
 SHA256("DANGER")   = 0BD727..4C86037BB065 [X]
 . . . 3500 skipped
 SHA256("DUCK")     = 7FB376..DEAD4B3AF008 [FOUND!]
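That forward-only search is easy to sketch in Python; the mini-wordlist below is invented for the demo, but real attack wordlists run to millions of entries:

```python
import hashlib

# Invented mini-dictionary for the demo
WORDLIST = ["AARDVARK", "AARON", "ABACUS", "BABBLE", "BADGER", "DUCK"]

target = hashlib.sha256(b"DUCK").hexdigest()   # the hash we're trying to "reverse"

found = None
for word in WORDLIST:
    # Work forwards from each candidate and compare against the target
    if hashlib.sha256(word.encode()).hexdigest() == target:
        found = word
        break
```

There is no shortcut in the other direction: every candidate has to be hashed from scratch and compared.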

And by including a per-user salt, which doesn’t need to be secret, merely unique to each user, you ensure that even if two users choose the same password, they won’t end up with the same password hash.

You can see the effect of salting here, when we hash the word DUCK with three different prefixes:

 SHA256("RANDOM1-DUCK") = E355DB..349E669BB9A2
 SHA256("RANDOM2-DUCK") = 13D538..FEA0DC6DBB5C  <-- Changing just one input byte produces a wildly different hash
 SHA256("ARXXQ3H-DUCK") = 52AD92..544208A19449
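You can reproduce the effect yourself; the salts below are invented for the demo, and the salt-dash-password construction simply mirrors the example above:

```python
import hashlib

password = b"DUCK"
salts = [b"RANDOM1", b"RANDOM2", b"ARXXQ3H"]   # per-user salts, invented for the demo

# Same password, three different salts -> three unrelated-looking hashes
digests = [hashlib.sha256(salt + b"-" + password).hexdigest() for salt in salts]
```

All three digests come out different, even though the password behind them is identical.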

This also means that attackers can’t create a precomputed list of likely hashes, or create a table of partial hash calculations, known as a rainbow table, that can accelerate hash checking. (They’d need a brand new hashlist, or a unique set of rainbow tables, for every possible salt.)

In other words, hashed-and-salted passwords can’t trivially be cracked to recover the original input, especially if the original password was complex and randomly chosen.

What Slack didn’t say is whether they’d stretched the password hashes, too, and if so, how.

Stretching is a jargon term that means repeating the password hashing process over and over again, for example, 100,000 times, in order to extend the time needed to try out a bunch of dictionary words against known password hashes.

If it would take one second to put 100,000 dictionary words through a plain salt-and-hash process, then attackers who know your password hash could try 6 million different dictionary words and derivatives every minute, or take more than one billion guesses every three hours.

On the other hand, if the salt-and-hash computations were stretched to take one second each, then the extra one-second delay when you tried to log in would cause little or no annoyance to you…

…but would reduce an attacker to just 3600 tries an hour, making it much less likely that they’d get enough time to guess anything but the most obvious passwords.

Several well-respected salt-hash-and-stretch algorithms are known, notably PBKDF2, bcrypt, scrypt and Argon2, all of which can be adjusted to increase the time needed to try individual password guesses in order to reduce the viability of so-called dictionary and brute force attacks.
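Python’s standard library includes one of those algorithms, PBKDF2, so here’s a minimal salt-hash-and-stretch sketch (the password and iteration count are chosen purely for illustration):

```python
import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)     # unique per user; stored alongside the hash, not secret
iterations = 100_000      # tune upwards until one guess takes about a second

# Stretching: the underlying HMAC-SHA-256 is iterated 100,000 times per guess
stored = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Verifying a login repeats the same computation with the stored salt
attempt = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
```

An attacker who steals `stored` and `salt` must pay the full 100,000-iteration cost for every single guess, which is exactly the point.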

A dictionary attack means you’re trying likely passwords only, such as every word you can think of from aardvark to zymurgy, and then giving up. A brute-force attack means trying every possible input, even weird and unpronounceable ones, from AAA..AAAA to ZZZ..ZZZZ (or from 0000..000000 to FFFF..FFFFFF if you think in hexadecimal byte-by-byte terms).
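The brute-force end of the spectrum can be sketched with itertools: try every string over a given alphabet and length. The target here is deliberately tiny (a two-character “password”, invented for the demo) so the loop finishes in a fraction of a second:

```python
import hashlib, itertools, string

target = hashlib.sha256(b"ZQ").hexdigest()   # pretend this is a stolen hash

found = None
# Brute force: every 2-character uppercase string, AA to ZZ (26**2 = 676 tries at most)
for combo in itertools.product(string.ascii_uppercase, repeat=2):
    guess = "".join(combo)
    if hashlib.sha256(guess.encode()).hexdigest() == target:
        found = guess
        break
```

Each extra character multiplies the search space by the alphabet size, which is why length matters so much more than cleverness when choosing a password.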

What to do?

Slack says that about 1 in 200 of its users were affected (0.5%, presumably based on records of how many shared invitation links were generated in the danger period), and that it will be forcing those users to reset their passwords.

Some further advice:

  • If you’re a Slack user, you might as well reset your password even if you weren’t notified by the company to do so. When a company admits it has been careless with its password database by leaking hashes, especially over such a long period, you might as well assume that yours was affected, even if the company thinks it wasn’t. As soon as you change your password, you make the old hash useless to attackers.
  • If you’re not using a password manager, consider getting one. A password manager helps to pick proper passwords, thus ensuring that your password ends up very, very far down the list of passwords that might get cracked in an incident like this. Attackers typically can’t do a true brute force attack, because there are just too many possible passwords to try out. So, they try the most likely passwords first, such as words or obvious word-and-number combinations, getting longer and more complex as the attack proceeds. A password manager can remember a random, 20-character password as easily as you can remember your cat’s name.
  • Turn on 2FA if you can. 2FA, or two-factor authentication, means that you need not only your password to log in, but also a one-time code that changes every time. These codes are typically sent to (or generated by) your mobile phone, and are valid only for a few minutes each. This means that even if cybercrooks do crack your password, it’s not enough on its own for them to take over your account.
  • Choose a reputable salt-hash-and-stretch algorithm when handling passwords yourself. In the unfortunate event that your password database gets breached, you will be able to give your customers precise details of the algorithm and the security settings you used. This will help well-informed users to judge for themselves how likely it is that their stolen hashes might have been cracked in the time available to attackers so far.
