
Chrome browser gets 11 security fixes with 1 zero-day – update now!

The latest update to Google’s Chrome browser is out, bumping the four-part version number to 104.0.5112.101 (Mac and Linux), or to 104.0.5112.102 (Windows).

According to Google, the new version includes 11 security fixes, one of which is annotated with the remark that “an exploit [for this vulnerability] exists in the wild”, making it a zero-day hole.

The name zero-day is a reminder that there were zero days on which even the most well-informed and proactive user or sysadmin could have patched ahead of the Bad Guys.

Update details

Details about the updates are scant, given that Google, in common with many other vendors these days, restricts access to bug details “until a majority of users are updated with a fix”.

But Google’s release bulletin explicitly enumerates 10 of the 11 bugs, as follows:

  • CVE-2022-2852: Use after free in FedCM.
  • CVE-2022-2854: Use after free in SwiftShader.
  • CVE-2022-2855: Use after free in ANGLE.
  • CVE-2022-2857: Use after free in Blink.
  • CVE-2022-2858: Use after free in Sign-In Flow.
  • CVE-2022-2853: Heap buffer overflow in Downloads.
  • CVE-2022-2856: Insufficient validation of untrusted input in Intents. (Zero-day.)
  • CVE-2022-2859: Use after free in Chrome OS Shell.
  • CVE-2022-2860: Insufficient policy enforcement in Cookies.
  • CVE-2022-2861: Inappropriate implementation in Extensions API.

As you can see, seven of these bugs were caused by memory mismanagement.

A use-after-free vulnerability means that one part of Chrome handed back a memory block that it wasn’t planning to use any more, so that it could be reallocated for use elsewhere in the software…

…only to carry on using that memory anyway, thus potentially causing one part of Chrome to rely on data it thought it could trust, without realising that another part of the software might still be tampering with that data.

Often, bugs of this sort will cause the software to crash completely, by messing up calculations or memory access in an unrecoverable way.

Sometimes, however, use-after-free bugs can be triggered deliberately in order to misdirect the software so that it misbehaves (for example by skipping a security check, or trusting the wrong block of input data) and provokes unauthorised behaviour.
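
As a minimal C sketch (our illustration of the general pattern, not Chrome’s actual code), a use-after-free looks something like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *session = malloc(16);          /* one part of the program... */
    strcpy(session, "trusted-user");

    free(session);                       /* ...hands the block back for reuse... */

    char *other = malloc(16);            /* ...and it may be reallocated elsewhere */
    strcpy(other, "untrusted-data");

    /* Undefined behaviour: 'session' is dangling, and on many allocators it
       now aliases 'other', so the "trusted" data has silently changed. */
    printf("session = %s\n", session);

    free(other);
    return 0;
}

Whether the stale pointer crashes the program or quietly serves up attacker-controlled data depends on the allocator and the timing, which is exactly why such bugs can be groomed into exploits.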

A heap buffer overflow means asking for a block of memory, but writing out more data than will fit safely into it.

This overflows the officially-allocated buffer and overwrites data in the next block of memory along, even though that memory might already be in use by some other part of the program.

Buffer overflows therefore typically produce similar side-effects to use-after-free bugs: mostly, the vulnerable program will crash; sometimes, however, the program can be tricked into running untrusted code without warning.
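
Again as a minimal C sketch (not Chrome’s code), a heap buffer overflow boils down to this:

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);                           /* room for 8 bytes only */
    const char *input = "AAAAAAAAAAAAAAAAAAAAAAAA";  /* 24 bytes of attacker-supplied data */

    /* Undefined behaviour: this copies 25 bytes (including the terminating NUL)
       into an 8-byte block, trampling whatever the heap placed next to it. */
    memcpy(buf, input, strlen(input) + 1);

    free(buf);
    return 0;
}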

The zero-day hole

The zero-day bug CVE-2022-2856 is presented with no more detail than you see above: “Insufficient validation of untrusted input in Intents.”

A Chrome Intent is a mechanism for triggering apps directly from a web page, in which data on the web page is fed into an external app that’s launched to process that data.

Google hasn’t provided any details of which apps, or what sort of data, could be maliciously manipulated by this bug…

…but the danger seems rather obvious if the known exploit involves silently feeding a local app with the sort of risky data that would normally be blocked on security grounds.

What to do?

Chrome will probably update itself, but we always recommend checking anyway.

On Windows and Mac, use More > Help > About Google Chrome > Update Google Chrome.

There’s a separate release bulletin for Chrome for iOS, which goes to version 104.0.5112.99, but no bulletin yet [2022-08-17T12:00Z] that mentions Chrome for Android.

On iOS, check that your App Store apps are up-to-date. (Use the App Store app itself to do this.)

You can watch for any forthcoming update announcement about Android on Google’s Chrome Releases blog.

The open-source Chromium variant of the proprietary Chrome browser is also currently at version 104.0.5112.101.

Microsoft Edge security notes, however, currently [2022-08-17T12:00Z] say:

August 16, 2022

Microsoft is aware of the recent exploit existing in the wild. We are actively working on releasing a security patch as reported by the Chromium team.

You can keep your eye out for an Edge update on Microsoft’s official Edge Security Updates page.


US offers reward “up to $10 million” for information about the Conti gang

You’ve almost certainly seen and heard the word Conti in the context of cybercrime.

Conti is the name of a well-known ransomware gang – more precisely, what’s known as a ransomware-as-a-service (RaaS) gang, where the ransomware code, and the blackmail demands, and the receipt of extortion payments from desperate victims are handled by a core group…

…while the attacks themselves are orchestrated by a loosely-knit “team” of affiliates who are typically recruited not for their malware coding abilities, but for their phishing, social engineering and network intrusion skills.

Indeed, we know exactly the sort of “skills”, if that’s an acceptable word to use here, that RaaS operators look for in their affiliates.

About two years ago, the REvil ransomware gang put up a cool $1,000,000 as front money in an underground hacker-recruiting forum, trying to entice new affiliates to join their cybercriminal capers.

Affiliates typically seem to earn about 70% of any blackmail money that’s ultimately extorted by the gang from any victims they attack, which is a significant incentive not only to go in hard, but to go in broad and deep as well, attacking and infecting entire networks in one go.

The attackers often also choose a deliberately difficult time for the company they’re attacking, such as in the early hours of a weekend morning.

The more completely a victim’s network gets derailed and disrupted, the more likely it is that they’ll end up stuck with paying to unlock their precious data and get the business operating again.

As REvil made clear when they spent that $1 million “marketing budget” online, the core RaaS crew was looking for:

 Teams that already have experience and skills in penetration testing, working with msf / cs / koadic, nas / tape, hyper-v and analogues of the listed software and devices.

As you can imagine, the REvil gang had a special interest in technologies such as NAS (network-attached storage), backup tape and Hyper-V (Microsoft’s virtualisation platform), because disrupting any existing backups during an attack, and “unlocking” virtual servers so they can be encrypted along with everything else, makes it harder than ever for victims to recover on their own.

If you suffer a file-scrambling attack only to discover that the criminals trashed or encrypted all your backups first, then your primary route to self-recovery might well already be destroyed.

Strained affiliations

Of course, the symbiotic relationships between the core members of a RaaS gang and the affiliates they rely upon can easily become strained.

The Conti crew, notably, suffered ructions within the ranks just over a year ago, with something of a mutiny amongst the affiliates:

Yes, of course they recruit suckers and divide the money among themselves, and the boys are fed with what they will let them know when the victim pays.

As we pointed out at the time, the implication was that at least some affiliates in the Conti ransomware scene were not being paid 70% of the actual ransom amount collected, but 70% of an imaginary but lower number reported to them by the core Conti gang members.

One of the disgruntled affiliates leaked a substantial Conti-crew-related archive file entitled Мануали для работяг и софт.rar (Operating manuals and software).

Turn on your chums

Well, the United States has just upped the ante once more, officially and publicly offering a reward of “up to $10 million” under the single-word headline Conti:

First detected in 2019, Conti ransomware has been used to conduct more than 1,000 ransomware operations targeting U.S. and international critical infrastructure, such as law enforcement agencies, emergency medical services, 9-1-1 dispatch centers, and municipalities. These healthcare and first responder networks are among the more than 400 organizations worldwide victimized by Conti, over 290 of which are located in the United States.

Conti operators typically steal victims’ files and encrypt the servers and workstations in an effort to force a ransom payment from the victim. The ransom letter instructs victims to contact the actors through an online portal to complete the transaction. If the ransom is not paid, the stolen data is sold or published to a public site controlled by the Conti actors. Ransom amounts vary widely, with some ransom demands being as high as $25 million.

The payment is available under a global US anti-crime and anti-terrorism initiative known as Rewards for Justice (RfJ), administered by the US Diplomatic Security Service on behalf of the US Department of State (the government body that many English-speaking countries refer to as “Foreign Affairs” or “the Foreign Ministry”).

The RfJ program dates back nearly 40 years, during which time it claims to have paid out about $250 million to more than 125 different people worldwide. That works out to a mean payout of about $2,000,000 per recipient, at a rate of roughly three payouts a year.

Although this suggests that any individual whistleblower in the Conti saga is unlikely to net the whole $10 million on their own, there’s still plenty of reward money there for the taking.

In fact, RfJ has promoted its $10 million anti-cybercrime reward before, under a general description:

[The RfJ program] is offering a reward of up to $10 million for information leading to the identification or location of any person who, while acting at the direction or under the control of a foreign government, participates in malicious cyber activities against U.S. critical infrastructure in violation of the Computer Fraud and Abuse Act (CFAA).

This time, though, the US Department of State has expressed an explicit interest in five individuals, though they’re only known by their underground names at the moment: Dandis, Professor, Reshaev, Target, and Tramp.

Their mugshots are similarly uncertain, with the RfJ page showing the following image:

Only one snapshot shows an alleged perpetrator, though it’s not clear whether the allegation is that he might be one of the five threat actors listed above, or simply a player in the broader gang with an unknown nickname and role:

There’s a curious hat (a party piece, perhaps?) featuring a red star; a shirt with a largely-obscured logo (can you extrapolate the word?); a beer mug in the background; an empty-looking drink in a clear glass bottle (beer, by its size and shape?); an unseen instrumentalist (playing a balalaika, by its tuning pegs?) in the foreground; and a patterned curtain tied back in front of a venetian-style blind at the rear.

Any commenters care to guess what’s going on in that picture?




Zoom for Mac patches get-root bug – update now!

At the well-known DEF CON security shindig in Las Vegas, Nevada, last week, Mac cybersecurity researcher Patrick Wardle revealed a “get-root” elevation of privilege (EoP) bug in Zoom for Mac:

In the tweet, which followed his talk [2022-08-12], Wardle noted:

Currently there is no patch [:FRIED-EGG EYES DEPICTING ALARM EMOJI:] [:EDVARD MUNCH SCREAM EMOJI:]

Zoom immediately worked on a patch for the flaw, which was announced the next day in Zoom security bulletin ZSB-22018, earning a congratulatory reply from Wardle in the process:

Mahalos to @Zoom for the (incredibly) quick fix! [:BOTH HANDS RAISED IN CELEBRATION AND WIGGLED ABOUT EMOJI:] [:PALMS PRESSED TOGETHER IN SIGN OF SPIRITUAL GOODWILL EMOJI:]

Zero-day disclosure

Given the apparent speed and ease with which Zoom was able to emit a patch for the bug, dubbed CVE-2022-28756, you’re probably wondering why Wardle didn’t tell Zoom about the bug in advance, setting the day of his speech as the deadline for revealing the details.

That would have given Zoom time to push out the update to its many Mac users (or at least to make it available to those who believe in patch early/patch often), thus eliminating the gap between Wardle explaining to the world how to abuse the bug, and the patching of the bug.

In fact, it seems that Wardle did do his best to warn Zoom about this bug, plus a bunch of interconnected flaws in Zoom’s autoupdate process, some months ago.

Wardle explains the bug disclosure timeline in the slides from his DEF CON talk, and lists a stream of Zoom updates related to the flaws he discovered.

A double-edged sword

The bugs that Wardle discussed related generally to Zoom’s auto-update mechanism, a part of any software ecosystem that is a bit of a double-edged sword – a more powerful weapon than a regular sword, but correspondingly harder to handle safely.

Auto-updating is a must-have component in any modern client application, given that it makes critical patches easier and quicker to distribute, thus helping users to close off cybersecurity holes reliably.

But auto-updating brings a sea of risks with it, not least because the update tool itself typically needs root-level system access.

That’s because the updater’s job is to overwrite the application software (something that a regular user isn’t supposed to do), and perhaps to launch privileged operating system commands to make configuration or other system-level changes.

In other words, if developers aren’t careful, the very tool that helps them keep their underlying app up-to-date and more secure could become a beachhead from which attackers could subvert security by tricking the updater into running unauthorised commands with system privileges.

Notably, auto-update programs need to take care to verify the authenticity of the update packages they download, to stop attackers simply feeding them a fake update bundle, complete with added malware.

They also need to maintain the integrity of the update files that they ultimately consume, so that a local attacker can’t sneakily modify the “verified safe” update bundle that’s just been downloaded in the brief period between it being fetched and activated.

Sidestepping the authenticity check

As Wardle explains in his paper, one of the bugs he discovered and disclosed was a flaw in the first step listed above, when Zoom’s auto-updater tried to verify the authenticity of the update package it had just downloaded.

Instead of using the official macOS APIs to validate the digital signature of the download directly, Zoom developers decided to do the authentication indirectly, by running the macOS utility pkgutil --check-signature in the background and examining the output.

Here’s an example of pkgutil output, using an old version of the Zoom.pkg software bundle:

$ pkgutil --check-signature Zoom.pkg
Package "Zoom.pkg": Status: signed by a developer certificate issued by Apple for distribution Signed with a trusted timestamp on: 2022-06-27 01:26:22 +0000 Certificate Chain: 1. Developer ID Installer: Zoom Video Communications, Inc. (BJ4HAAB9B3) Expires: 2027-02-01 22:12:15 +0000 SHA256 Fingerprint: 6D 70 1A 84 F0 5A D4 C1 C1 B3 AE 01 C2 EF 1F 2E AE FB 9F 5C A6 80 48 A4 76 60 FF B5 F0 57 BB 8C ------------------------------------------------------------------------ 2. Developer ID Certification Authority Expires: 2027-02-01 22:12:15 +0000 SHA256 Fingerprint: 7A FC 9D 01 A6 2F 03 A2 DE 96 37 93 6D 4A FE 68 09 0D 2D E1 8D 03 F2 9C 88 CF B0 B1 BA 63 58 7F ------------------------------------------------------------------------ 3. Apple Root CA Expires: 2035-02-09 21:40:36 +0000 SHA256 Fingerprint: B0 B1 73 0E CB C7 FF 45 05 14 2C 49 F1 29 5E 6E DA 6B CA ED 7E 2C 68 C5 BE 91 B5 A1 10 01 F0 24

Unfortunately, as Wardle discovered when he decompiled Zoom’s signature verification code, the Zoom updater didn’t process the pkgutil data in the same way that human observers would.

We’d check the output by following its natural visual sequence.

First, we’d look for the desired status, e.g. signed by a developer certificate issued by Apple for distribution.

Then we’d find the sub-heading Certificate Chain:.

Finally, we’d cross-check that the chain consisted of these three signers, in the right order:

 1. Zoom Video Communications, Inc.
 2. Developer ID Certification Authority
 3. Apple Root CA

Amazingly, Zoom’s code simply verified that each of the above three strings (not even checking for Zoom’s own unique ID BJ4HAAB9B3) showed up somewhere in the output from pkgutil.

So, creating a package with an absurd-but-valid name such as Zoom Video Communications, Inc. Developer ID Certification Authority Apple Root CA.pkg would trick the package verifier into finding the “identity strings” it was looking for.

The full package name is echoed into the pkgutil output header on the first line, where Zoom’s hapless “verifier” would match all three text strings in the wrong part of the output.

Thus the “security” check could trivially be bypassed.
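
To see just how weak that is, here’s a minimal C sketch of the same text-matching approach (our own reconstruction for illustration, not Zoom’s decompiled code), which simply greps the pkgutil output for the three expected strings:

/* naive_check.c -- illustration only; build on macOS with: cc naive_check.c -o naive_check */
#include <stdio.h>
#include <string.h>

static int naive_check(const char *pkg_path) {
    char cmd[1024];
    char output[65536] = {0};
    size_t used = 0, n;

    snprintf(cmd, sizeof(cmd), "pkgutil --check-signature '%s' 2>&1", pkg_path);
    FILE *p = popen(cmd, "r");
    if (!p) return 0;
    while ((n = fread(output + used, 1, sizeof(output) - 1 - used, p)) > 0)
        used += n;
    pclose(p);

    /* The flaw: any appearance of the three strings anywhere in the output,
       including in the echoed package *filename* on the first line, counts. */
    return strstr(output, "Zoom Video Communications, Inc.") != NULL
        && strstr(output, "Developer ID Certification Authority") != NULL
        && strstr(output, "Apple Root CA") != NULL;
}

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <package.pkg>\n", argv[0]); return 2; }
    printf("verified: %s\n", naive_check(argv[1]) ? "yes" : "no");
    return 0;
}

Feed this checker any real package, even an unsigned one, renamed so that its filename contains all three signer strings, and it happily reports success, because pkgutil echoes the filename in its output header. A robust check would use the macOS code-signing APIs directly, or at the very least parse the Certificate Chain: section positionally and match the full signer identity, team ID included.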

A partial fix

Wardle says that Zoom eventually fixed this bug, more than seven months after he reported it, in time for DEF CON…

…but after applying the patch, he noticed that there was still a gaping hole in the update process.

The updater tried to do the right thing:

  • 1. Move the downloaded package to a directory owned by root, and thus theoretically off-limits to any regular user.
  • 2. Verify the cryptographic signature of the downloaded package, using official APIs, not via a text-matching bodge against pkgutil output.
  • 3. Unarchive the downloaded package file, in order to verify its version number, to prevent downgrade attacks.
  • 4. Install the downloaded package file, using the root privileges of the auto-update process.

Unfortunately, even though the directory used to store the update package was owned by root, in an attempt to keep it safe from prying users trying to subvert the update file while it was being used…

…the newly downloaded package file was left “world-writable” in its new location (a side-effect of having been downloaded by a regular account, not by root).

This gave local attackers a loophole to modify the update package after its digital signature had been validated (step 2), without affecting the version check details (step 3), but just before the installer took control of the package file in order to process it with root privileges (step 4).

This sort of bug is known as a race condition, because the attackers need to time their finish so they get home just before the installer starts, and are therefore able to sneak their malicious changes in just ahead of it.

You’ll also hear this type of vulnerability referred to by the exotic-sounding acronym TOCTOU, short for time-of-check-to-time-of-use, a name that’s a clear reminder that if you check your facts too far in advance, then they might be out of date by the time you rely on them.

The TOCTOU problem is why car rental companies in the UK no longer simply ask to see your driving licence, which could have been issued up to 10 years ago, and could have been suspended or cancelled for a variety of reasons since then, most likely because of unsafe or illegal driving on your part. Along with your physical licence, you also need to present a one-time alphanumeric “proof of recent validity” code, issued within the last 21 days, to reduce the potential TOCTOU gap from 10 years to just three weeks.
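
Back in software terms, a minimal C sketch of the check-then-use gap (our illustration, not Zoom’s actual updater code) looks like this:

#include <stdio.h>
#include <unistd.h>

/* Stand-in for the real cryptographic verification performed in step 2. */
static int verify_package(const char *path) {
    return access(path, R_OK) == 0;
}

int main(void) {
    const char *pkg = "/tmp/update.pkg";   /* hypothetical world-writable file */

    if (!verify_package(pkg)) {            /* time of CHECK */
        fprintf(stderr, "verification failed\n");
        return 1;
    }

    sleep(2);  /* the race window: anyone who can write the file can now swap its contents */

    /* time of USE: whatever the file contains *now* is what gets installed,
       which may no longer be what was verified above. */
    printf("installing %s with root privileges...\n", pkg);
    return 0;
}

Zoom’s eventual fix, described below, closes that window by making the file inaccessible to unprivileged users for the whole check-and-use sequence; another common defence is to open the file once and perform both the verification and the installation through that same file descriptor.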

The fix is now in

According to Wardle, Zoom has now prevented this bug by changing the access rights on the update package file that’s copied in step 1 above.

The file that’s used for signature checking, version validation, and the final root-level install is now limited to access by the root account only, at all times.

This removes the race condition, because an unprivileged attacker can’t modify the file between the end of step 2 (verification successful) and the start of step 4 (installation begins).

To modify the package file in order to trick the system into giving you root access, you’d need to have root access already, so you wouldn’t need an EoP bug of this sort in the first place.

The TOCTOU problem doesn’t apply because the check in step 2 remains valid until the use of the file begins, leaving no window of opportunity for the check to become invalid.

What to do?

If you’re using Zoom on a Mac, open the app and then, in the menu bar, go to zoom.us > Check for Updates...

If an update is available, the new version will be shown, and you can click [Install] to apply the patches:

The version you want is 5.11.5 (9788) or later.


S3 Ep95: Slack leak, Github onslaught, and post-quantum crypto [Audio + Text]

You can listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

Schroedinger’s cat outlines in featured image via Dhatfield under CC BY-SA 3.0.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Slack leaks, naughty GitHub code, and post-quantum cryptography.

All that, and much more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how do you do today?


DUCK.  Super-duper, as usual, Doug!


DOUG.  I am super-duper excited to get to this week’s Tech History segment, because…

…you were there, man!

This week, on August 11…


DUCK.  Oh, no!

I think the penny’s just dropped…


DOUG.  I don’t even have to say the year!

August 11, 2003 – the world took notice of the Blaster worm, affecting Windows 2000 and Windows XP systems.

Blaster, also known as Lovesan and MsBlast, exploited a buffer overflow and is perhaps best known for the message, “Billy Gates, why do you make this possible? Stop making money and fix your software.”

What happened, Paul?


DUCK.  Well, it was the era before, perhaps, we took security quite so seriously.

And, fortunately, that kind of bug would be much, much more difficult to exploit these days: it was a stack-based buffer overflow.

And if I remember correctly, the server versions of Windows were already being built with what’s called stack protection.

In other words, if you overflow the stack inside a function, then, before the function returns and does the damage with the corrupted stack, it will detect that something bad has happened.

So, it has to shut down the offending program, but the malware doesn’t get to run.

But that protection was not in the client versions of Windows at that time.

And as I remember, it was one of those early malwares that had to guess which version of the operating system you had.

Are you on 2000? Are you on NT? Are you on XP?

And if it got it wrong, then an important part of the system would crash, and you’d get the “Your system is about to shut down” warning.


DOUG.  Ha, I remember those!


DUCK.  So, there was that collateral damage that was, for many people, the sign that you were getting hammered by infections…

…which could be from outside, like if you were just a home user and you didn’t have a router or firewall at home.

But if you were inside a company, the most likely attack was going to come from someone else inside the company, spewing packets on your network.

So, very much like the CodeRed attack – which came a couple of years before that, and which we spoke about in a recent podcast – it was really the sheer scale, volume and speed of this thing that was the problem.


DOUG.  All right, well, that was about 20 years ago.

And if we turn back the clock to five years ago, that’s when Slack started leaking hashed passwords. [LAUGHTER]


DUCK.  Yes, Slack, the popular collaboration tool…

…it has a feature where you can send an invitation link to other people to join your workspace.

And, you imagine: you click a button that says “Generate a link”, and it’ll create some kind of network packet that probably has some JSON inside it.

If you’ve ever had a Zoom meeting invitation, you’ll know that it has a date, and a time, and the person who is inviting you, and a URL you can use for the meeting, and a passcode, and all that stuff – it has quite a lot of data in there.

Normally, you don’t dig into the raw data to see what’s in there – the client just says, “Hey, here’s a meeting, here are the details. Do you want to Accept / Maybe / Decline?”

It turned out that when you did this with Slack, as you say, for more than five years, packaged up in that invitation was extraneous data not strictly relevant to the invitation itself.

So, not a URL, not a name, not a date, not a time…

…but the *inviting user’s password hash* [LAUGHTER]


DOUG.  Hmmmmm.


DUCK.  I kid you not!


DOUG.  That sounds bad…


DUCK.  Yes, it really does, doesn’t it?

The bad news is, how on earth did that get in there?

And, once it was in there, how on earth did it evade notice for five years and three months?

In fact, if you visit the article on Naked Security and look at the full URL of the article, you’ll notice it says at the end, blahblahblah-for-three-months.

Because, when I first read the report, my mind didn’t want to see it as 2017! [LAUGHTER]

It was 17 April to 17 July, and so there were lots of “17”s in there.

And my mind blanked out the 2017 as the starting year – I misread it as “April to July *of this year*” [2022].

I thought, “Wow, *three months* and they didn’t notice.”

And then the first comment on the article was, “Ahem [COUGH]. It was actually 17 April *2017*.”

Wow!

But somebody figured it out on 17 July [2022], and Slack, to their credit, fixed it the same day.

Like, “Oh, golly, what were we thinking?!”

So that’s the bad news.

The good news is, at least it was *hashed* passwords.

And they weren’t just hashed, they were *salted*, which is where you mix in uniquely chosen, per-user random data with the password.

The idea of this is twofold.

One, if two people choose the same password, they don’t get the same hash, so you can’t make any inferences by looking through the hash database.

And two, you can’t precompute a dictionary of known hashes for known inputs, because you have to create a separate dictionary for each password *for each salt*.

So it’s not a trivial exercise to crack hashed passwords.

Having said that, the whole idea is that they are not supposed to be a matter of public record.

They’re hashed and salted *in case* they leak, not *in order that they can* leak.

So, egg on Slack’s face!

Slack says that about one in 200 users, or 0.5%, were affected.

But if you’re a Slack user, I would assume that if they didn’t realise they were leaking hashed passwords for five years, maybe they didn’t quite enumerate the list of people affected completely either.

So, go and change your password anyway… you might as well.


DOUG.  OK, we also say: if you’re not using a password manager, consider getting one; and turn on 2FA if you can.


DUCK.  I thought you’d like that, Doug.


DOUG.  Yes, I do!

And then, if you are Slack or a company like it, choose a reputable salt-hash-and-stretch algorithm when handling passwords yourself.


DUCK.  Yes.

The big deal in Slack’s response, and the thing that I thought was lacking, is that they just said, “Don’t worry, not only did we hash the passwords, we salted them as well.”

My advice is that if you are caught in a breach like this, then you should be willing to declare the algorithm or process you used for salting and hashing, and also ideally what’s called stretching, which is where you don’t just hash the salted password once, but perhaps you hash it 100,000 times to slow down any kind of dictionary or brute force attack.

And if you state what algorithm you are using and with what parameters… for example, PBKDF2, bcrypt, scrypt, Argon2 – those are the best-known password “salt-hash-stretch” algorithms out there.

If you actually state what algorithm you’re using, then: [A] you’re being more open, and [B] you’re giving potential victims of the problem a chance to assess for themselves how dangerous they think this might have been.

And that sort of openness can actually help a lot.

Slack didn’t do that.

They just said, “Oh, they were salted and hashed.”

But what we don’t know is, did they put in two bytes of salt and then hash them once with SHA-1…

…or did they have something a little more resistant to being cracked?
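
[NOTE. For readers who want to see what “salt-hash-and-stretch” looks like in practice, here’s a minimal C sketch using OpenSSL’s PBKDF2 implementation. The parameters below are illustrative only – they aren’t Slack’s, which the company hasn’t disclosed.]

/* pbkdf2_demo.c -- build with: cc pbkdf2_demo.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

int main(void) {
    const char *password = "correct horse battery staple";
    unsigned char salt[16], hash[32];

    /* Per-user random salt: the same password gives a different hash for every user. */
    if (RAND_bytes(salt, sizeof(salt)) != 1) return 1;

    /* 100,000 iterations is the "stretch": every guess costs an attacker
       100,000 HMAC-SHA256 computations instead of just one. */
    if (PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                          salt, (int)sizeof(salt),
                          100000, EVP_sha256(),
                          (int)sizeof(hash), hash) != 1) return 1;

    printf("salt: ");
    for (size_t i = 0; i < sizeof(salt); i++) printf("%02X", salt[i]);
    printf("\nhash: ");
    for (size_t i = 0; i < sizeof(hash); i++) printf("%02X", hash[i]);
    printf("\n");
    return 0;
}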


DOUG.  Sticking to the subject of bad things, we’re noticing a trend developing wherein people are injecting bad stuff into GitHub, just to see what happens, exposing risk…

…we’ve got another one of those stories.


DUCK.  Yes, somebody has now allegedly come out on Twitter and said, “Don’t worry guys, no harm done. It was just for research. I’m going to write a report; stand down from Blue Alert.”

They created literally thousands of bogus GitHub projects, based on copying existing legit code, deliberately inserting some malware commands in there, such as “call home for further instructions”, and “interpret the body of the reply as backdoor code to execute”, and so on.

So, stuff that really could do harm if you installed one of these packages.

Giving them legit looking names…

…borrowing, apparently, the commit history of a genuine project so that the thing looked much more legit than you might otherwise expect if it just showed up with, “Hey, download this file. You know you want to!”

Really?! Research?? We didn’t know this already?!!?

Now, you can argue, “Well, Microsoft, who own GitHub, what are they doing making it so easy for people to upload this kind of stuff?”

And there’s some truth to that.

Maybe they could do a better job of keeping malware out in the first place.

But it’s going a little bit over the top to say, “Oh, it’s all Microsoft’s fault.”

It’s even worse in my opinion, to say, “Yes, this is genuine research; this is really important; we’ve got to remind people that this could happen.”

Well, [A] we already know that, thank you very much, because loads of people have done this before; we got the message loud and clear.

And [B] this *isn’t* research.

This is deliberately trying to trick people into downloading code that gives a potential attacker remote control, in return for the ability to write a report.

That sounds more like a “big fat excuse” to me than a legitimate motivator for research.

And so my recommendation is, if you think this *is* research, and if you’re determined to do something like this all over again, *don’t expect a whole lot of sympathy* if you get caught.


DOUG.  Alright – we will return to this and the reader comments at the end of the show, so stick around.

But first, let us talk about traffic lights, and what they have to do with cybersecurity.


DUCK.  Ahhh, yes! [LAUGH]

Well, there’s a thing called TLP, the Traffic Light Protocol.

And the TLP is what you might call a “human cybersecurity research protocol” that helps you label documents that you send to other people, to give them a hint of what you hope they will (and, more importantly, what you hope they will *not*) do with the data.

In particular, how widely are they supposed to redistribute it?

Is this something so important that you could declare it to the world?

Or is this potentially dangerous, or does it potentially include some stuff that we don’t want to be public just yet… so keep it to yourself?

And it started off with: TLP:RED, which meant, “Keep it to yourself”; TLP:AMBER, which meant “You can circulate it inside your own company or to customers of yours that you think might urgently need to know this”; TLP:GREEN, which meant, “OK, you can let this circulate widely within the cybersecurity community.”

And TLP:WHITE, which meant, “You can tell anybody.”

Very useful, very simple: RED, AMBER, GREEN… a metaphor that works globally, without worrying about what’s the difference between “secret” and “confidential” and what’s the difference between “confidential” and “classified”, all that complicated stuff that needs a whole lot of laws around it.

Well, the TLP just got some modifications.

So, if you are into cybersecurity research, make sure you are aware of those.

TLP:WHITE has been changed to what I consider a much better term actually, because white has all these unnecessary cultural overtones that we can do without in the modern era.

So, TLP:WHITE has just become TLP:CLEAR, which to my mind is a much better word because it says, “You’re clear to use this data,” and that intention is stated, ahem, very clearly. (Sorry, I couldn’t resist the pun.)

And there’s an additional layer (so it has spoiled the metaphor a bit – it’s now a *five*-colour traffic light!).

There’s a special level called TLP:AMBER+STRICT, and what that means is, “You can share this inside your company.”

So you might be invited to a meeting, maybe you work for a cybersecurity company, and it’s quite clear that you will need to show this to programmers, maybe to your IT team, maybe to your quality assurance people, so you can do research into the problem or deal with fixing it.

But TLP:AMBER+STRICT means that although you can circulate it inside your organisation, *please don’t tell your clients or your customers*, or even people outside the company that you think might have a need to know.

Keep it within the tighter community to start with.

TLP:AMBER, like before, means, “OK, if you feel you need to tell your customers, you can.”

And that can be important, because sometimes you might want to inform your customers, “Hey, we’ve got the fix coming. You’ll need to take some precautionary steps before the fix arrives. But because it’s kind of sensitive, may we ask that you don’t tell the world just yet?”

Sometimes, telling the world too early actually plays into the hands of the crooks more than it plays into the hands of the defenders.

So, if you’re a cybersecurity responder, I suggest you go to: https://www.first.org/tlp


DOUG.  And you can read more about that on our site, nakedsecurity.sophos.com.

And if you are looking for some other light reading, forget quantum cryptography… we’re moving on to post-quantum cryptography, Paul!


DUCK.  Yes, we’ve spoken about this a few times before on the podcast, haven’t we?

The idea of a quantum computer, assuming a powerful and reliable enough one could be built, is that certain types of algorithms could be sped up over the state of the art today, either to the tune of the square root… or even worse, the *logarithm* of the scale of the problem today.

In other words, instead of taking 2^256 tries to find a file with a particular hash, you might be able to do it in just (“just”!) 2^128 tries, which is the square root.

Clearly a lot faster.

But there’s a whole class of problems involving factorising products of prime numbers that the theory says could be cracked in the *logarithm* of the time that they take today, loosely speaking.

So, instead of taking, say, 2^128 days to crack [FAR LONGER THAN THE CURRENT AGE OF THE UNIVERSE], it might take just 128 days to crack.

Or you can replace “days” with “minutes”, or whatever.

And unfortunately, that logarithmic time algorithm (called Shor’s Quantum Factorisation Algorithm)… that could be, in theory, applied to some of today’s cryptographic techniques, notably those used for public key cryptography.

And, just in case these quantum computing devices do become feasible in the next few years, maybe we should start preparing now for encryption algorithms that are not vulnerable to these two particular classes of attack?

Particularly the logarithm one, because it speeds up potential attacks so greatly that cryptographic keys that we currently think, “Well, no one will ever figure that out,” might become revealable at some later stage.

Anyway, NIST, the National Institute of Standards and Technology in the USA, has for several years been running a competition to try and standardise some public, unpatented, well-scrutinised algorithms that will be resistant to these magical quantum computers, if ever they show up.

And recently they chose four algorithms that they’re prepared to standardise upon now.

They have cool names, Doug, so I have to read them out: CRYSTALS-KYBER, CRYSTALS-DILITHIUM, FALCON, and SPHINCS+. [LAUGHTER]

So they have cool names, if nothing else.

But, at the same time, NIST figured, “Well, that’s only four algorithms. What we’ll do is we’ll pick four more as potential secondary candidates, and we’ll see if any of those should go through as well.”

So there are four standardised algorithms now, and four algorithms which might get standardised in the future.

Or there *were* four on the 5 July 2022, and one of them was SIKE, short for supersingular isogeny key encapsulation.

(We’ll need several podcasts to explain supersingular isogenies, so we won’t bother. [LAUGHTER])

But, unfortunately, this one, which was hanging in there with a fighting chance of being standardised, looks as though it has been irremediably broken, despite at least five years of having been open to public scrutiny already.

So, fortunately, just before it could get standardised, two Belgian cryptographers figured out, “You know what? We think we’ve got a way around this using calculations that take about an hour, on a fairly average CPU, using just one core.”


DOUG.  I guess it’s better to find that out now than after standardising it and getting it out in the wild?


DUCK.  Indeed!

I guess if it had been one of the algorithms that already got standardised, they’d have to repeal the standard and come up with a new one?

It seems weird that this didn’t get noticed for five years.

But I guess that’s the whole idea of public scrutiny: you never know when somebody might just hit on the crack that’s needed, or the little wedge that they can use to break in and prove that the algorithm is not as strong as was originally thought.

A good reminder that if you *ever* thought of knitting your own cryptography…


DOUG.  [LAUGHS] I haven’t!


DUCK.  …despite us having told you on the Naked Security podcast N times, “Don’t do that!”

This should be the ultimate reminder that, even when true experts put out an algorithm that is subject to public scrutiny in a global competition for five years, this still doesn’t necessarily provide enough time to expose flaws that turn out to be quite bad.

So, it’s certainly not looking good for this SIKE algorithm.

And who knows, maybe it will be withdrawn?


DOUG.  We will keep an eye on that.

And as the sun slowly sets on our show for this week, it’s time to hear from one of our readers on the GitHub story we discussed earlier.

Rob writes:

“There’s some chalk and cheese in the comments, and I hate to say it, but I genuinely can see both sides of the argument. Is it dangerous, troublesome, time wasting and resource consuming? Yes, of course it is. Is it what criminally minded types would do? Yes, yes, it is. Is it a reminder to anyone using GitHub, or any other code repository system for that matter, that safely travelling the internet requires a healthy degree of cynicism and paranoia? Yes. As a sysadmin, part of me wants to applaud the exposure of the risk at hand. As a sysadmin to a bunch of developers, I now need to make sure everyone has recently scoured any pulls for questionable entries.”


DUCK.  Yes, thank you, RobB, for that comment, because I guess it’s important to see both sides of the argument.

There were commenters who were just saying, “What the heck is the problem with this? This is great!”

One person said, “No, actually, this pen testing is good and useful. Be glad these are being exposed now instead of rearing their ugly head from an actual attacker.”

And my response to that is that, “Well, this *is* an attack, actually.”

It’s just that somebody has now come out afterwards, saying “Oh, no, no. No harm done! Honestly, I wasn’t being naughty.”

I don’t think you are obliged to buy that excuse!

But anyway, this is not penetration testing.

My response was to say, very simply: “Responsible penetration testers only ever act [A] after receiving explicit permission, and [B] within behavioural limits agreed explicitly in advance.”

You don’t just make up your own rules, and we have discussed this before.

So, as another commenter said, which is, I think, my favorite comment… Ecurb said, “I think somebody should walk house to house and smash windows to show how ineffective door locks really are. This is past due. Someone jump on this, please.”

And then, just in case you didn’t realize that was satire, folks, he says, “Not!”


DUCK.  I get the idea that it’s a good reminder, and I get the idea that if you’re a GitHub user, both as a producer and a consumer, there are things you can do.

We list them in the comments and in the article.

For example, put a digital signature on all your commits so it’s obvious that the changes came from you, and there’s some kind of traceability.

And don’t just blindly consume stuff because you did a search and it “looked like” it might be the right project.

Yes, we can all learn from this, but does this actually count as teaching us, or is that just something we should learn anyway?

I think this is *not* teaching.

It’s just *not of a high enough standard* to count as research.


DOUG.  Great discussion around this article, and thanks for sending that in, Rob.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


APIC/EPIC! Intel chips leak secrets even the kernel shouldn’t see…

Here’s this week’s BWAIN, our jocular term for a Bug With An Impressive Name.

BWAIN is an accolade that we hand out when a new cybersecurity flaw not only turns out to be interesting and important, but also turns up with its own logo, domain name and website.

This one is dubbed ÆPIC Leak, a pun on the words APIC and EPIC.

The former is short for Advanced Programmable Interrupt Controller, and the latter is simply the word “epic”, as in giant, massive, extreme, mega, humongous.

The letter Æ hasn’t been used in written English since Saxon times. Its name is æsc, pronounced ash (as in the tree), and it pretty much represents the sound of the A in the modern word ASH. But we assume you’re supposed to pronounce the word ÆPIC here either as “APIC-slash-EPIC”, or as “ah!-eh?-PIC”.

What’s it all about?

All of this raises five fascinating questions:

  • What is an APIC, and why do I need it?
  • How can you have data that even the kernel can’t peek at?
  • What causes this epic failure in APIC?
  • Does the ÆPIC Leak affect me?
  • What to do about it?

What’s an APIC?

Let’s rewind to 1981, when the IBM PC first appeared.

The PC included a chip called the Intel 8259A Programmable Interrupt Controller, or PIC. (Later models, from the PC AT onwards, had two PICs, chained together, to support more interrupt events.)

The purpose of the PIC was quite literally to interrupt the program running on the PC’s central processor (CPU) whenever something time-critical took place that needed attention right away.

These hardware interrupts included events such as: the keyboard getting a keystroke; the serial port receiving a character; and a repeating hardware timer ticking over.

Without a hardware interrupt system of this sort, the operating system would need to be littered with function calls to check for incoming keystrokes on a regular basis, which would be a waste of CPU power when no one was typing, but wouldn’t be responsive enough when they did.

As you can imagine, the PIC was soon followed by an upgraded chip called the APIC, an advanced sort of PIC built into the CPU itself.

These days, APICs provide much more than just feedback from the keyboard, serial port and system timer.

APIC events are triggered by (and provide real-time data about) events such as overheating, and allow hardware interaction between the different cores in contemporary multicore processors.

And today’s Intel chips, if we may simplify greatly, can generally be configured to work in two different ways, known as xAPIC mode and x2APIC mode.

Here, xAPIC is the “legacy” way of extracting data from the interrupt controller, and x2APIC is the more modern way.

Simplifying yet further, xAPIC relies on what’s called MMIO, short for memory-mapped input/output, for reading data out of the APIC when it registers an event of interest.

In MMIO mode, you can find out what triggered an APIC event by reading from a specific region of memory (RAM), which mirrors the input/output registers of the APIC chip itself.

This xAPIC data is mapped into a 4096-byte memory block somewhere in the physical RAM of the computer.

This simplifies accessing the data, but it requires an annoying, complex (and, as we shall see, potentially dangerous) interaction between the APIC chip and system memory.

In contrast, x2APIC requires you to read out the APIC data directly from the chip itself, using what are known as Model Specific Registers (MSRs).

According to Intel, avoiding the MMIO part of the process “provides significantly increased processor addressability and some enhancements on interrupt delivery.”

Notably, extracting the APIC data directly from on-chip registers means that the total amount of data supported, and the maximum number of CPU cores that can be managed at the same time, is not limited to the 4096 bytes available in MMIO mode.
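
As a rough illustration of the difference: in xAPIC mode the interrupt controller’s registers appear inside that 4096-byte memory-mapped page (conventionally based at physical address 0xFEE00000), whereas in x2APIC mode they’re read with the RDMSR instruction. The minimal Linux-only C sketch below (our example, not Intel sample code, and assuming the msr kernel module is loaded) reads the IA32_APIC_BASE register via /dev/cpu/0/msr to see which mode the first CPU is in:

/* read_apic_base.c -- build with: cc read_apic_base.c -o read_apic_base && sudo ./read_apic_base */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    uint64_t val = 0;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);   /* needs root and the msr module */
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    /* The file offset selects the MSR number: 0x1B is IA32_APIC_BASE.
       (The x2APIC registers themselves live in the MSR range starting at 0x800.) */
    if (pread(fd, &val, sizeof(val), 0x1B) != (ssize_t)sizeof(val)) {
        perror("pread");
        close(fd);
        return 1;
    }

    printf("IA32_APIC_BASE = 0x%016llx\n", (unsigned long long)val);
    printf("APIC MMIO base = 0x%llx\n", (unsigned long long)(val & ~0xFFFULL));
    printf("x2APIC enabled = %s\n", (val & (1ULL << 10)) ? "yes" : "no");
    close(fd);
    return 0;
}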

How can you have data that even the kernel can’t peek at?

You’ve probably guessed already that the data that ends up in the MMIO memory area when you’re using xAPIC mode isn’t always as carefully managed as it should be…

…and thus that some kind of “data leak” into that MMIO area is the heart of this problem.

But given that you already need sysadmin-level powers to read the MMIO data in the first place, and therefore you could almost certainly get at any and all data in memory anyway…

…why would having other people’s data show up by mistake in the APIC MMIO data area represent an epic leak?

It might make some types of data-stealing or RAM-scraping attack slightly easier in practice, but surely it wouldn’t give you any more memory-snooping ability than you already had in theory?

Unfortunately, that assumption isn’t true if any software on the system is using Intel’s SGX, short for Software Guard Extensions.




SGX is supported by many recent Intel CPUs, and it provides a way for the operating system kernel to “seal” a chunk of code and data into a physical block of RAM so as to form what’s known as an enclave.

This makes it behave, temporarily at least, much like the special security chips in mobile phones that are used to store secrets such as decryption keys.

Once the enclave’s SGX “lock” is set, only program code running inside the sealed-off memory area can read and write the contents of that RAM.

As a result, the internal details of any calculations that happen after the enclave is activated are invisible to any other code, thread, process or user on the system.

Including the kernel itself.

There’s a way to call the code that’s been sealed into the enclave, and a way for it to return the output of the calculations it might perform, but there’s no way to recover, or to spy on, or to debug, the code and its associated data while it runs.

The enclave effectively turns into a black box to which you can feed inputs, such as data to be signed with a private key, and extract outputs, such as the digital signature generated, but from which you can’t winkle out the cryptographic keys used in the signing process.

As you can imagine, if data that’s supposed to be sealed up inside an SGX enclave should ever accidentally get duplicated into the MMIO RAM that’s used to “mirror” the APIC data when you’re using xAPIC “memory-mapped” mode…

…that would violate the security of SGX, which says that no data should ever emerge from an SGX enclave after it’s been created, unless it’s deliberately exported by code already running inside the enclave itself.

What causes this epic failure in APIC?

The researchers behind the ÆPIC Leak paper discovered that by arranging to read out APIC data via a cunning and unusual sequence of memory accesses…

…they could trick the processor into filling up the APIC MMIO space not only with data freshly received from the APIC itself, but also with data that just happened to have been used by the CPU recently for some other purpose.

This behaviour is a side-effect of the fact that although the APIC MMIO memory page is 4096 bytes in size, the APIC chip in xAPIC mode doesn’t actually produce 4096 bytes’ worth of data, and the CPU doesn’t always correctly neutralise the unused parts of the MMIO region by filling them with zeros first.

Instead, old data left over in the CPU cache was written out along with the new data received from the APIC chip itself.

As the researchers put it, the bug boils down to what’s known as an uninitialised memory read, where you accidentally re-use someone else’s leftover data in RAM because neither they nor you flushed it of its previous secrets first.
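
The same class of mistake shows up in everyday userland code, too. Here’s a minimal, deliberately buggy C sketch of an uninitialised memory read (an analogy only, nothing to do with the APIC itself):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *secret = malloc(32);
    strcpy(secret, "TOP-SECRET-SIGNING-KEY-0123456");
    /* ...the secret gets used here... */
    free(secret);                 /* handed back, but never wiped */

    char *reused = malloc(32);    /* often the very same block comes straight back */

    /* Bug: nobody zeroed the block before re-use, so the old contents may still be there. */
    printf("leftover: %.32s\n", reused);

    free(reused);
    return 0;
}

In the ÆPIC case, the “leftover” bytes came from the CPU’s own caching machinery rather than from a heap allocator, but the effect is the same: stale secrets turn up where freshly-initialised data was expected.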

Does the ÆPIC Leak affect me?

For a full list of chips affected, see Intel’s own advisory.

As far as we can tell, if you have a 10th or 11th generation Intel processor, you’re probably affected.

But if you have a brand-new 12th generation CPU (the very latest at the time of writing), then it seems that only server-class chips are affected.

Ironically, in 12th-generation laptop chips, Intel has given up on SGX, so this bug doesn’t apply because it’s impossible to have any “sealed” SGX enclaves that could leak.

Of course, even on a potentially vulnerable chip, if you’re not relying on any software that uses SGX, then the bug doesn’t apply either.

And the bug, dubbed CVE-2022-21233, can only be exploited by an attacker who already has local, admin-level (root) access to your computer.

Regular users can’t access the APIC MMIO data block, and therefore have no way of peeking at anything at all in there, let alone secret data that might have leaked out from an SGX enclave.

Also, guest virtual machines (VMs) running under the control of a host operating system in a hypervisor such as Hyper-V, VMware or VirtualBox almost certainly can’t use this trick to plunder secrets from other guests or the host itself.

That’s because guest VMs generally don’t get access to the real APIC circuitry in the host processor; instead, each guest gets its own simulated APIC that’s unique to that VM.

What to do?

Don’t panic.

On a laptop or desktop computer, you may not be at risk at all, either because you have an older (or, lucky you, a brand new!) computer, or because you aren’t relying on SGX anyway.

And even if you are at risk, anyone who gets into your laptop as admin/root probably has enough power to cause you a world of trouble already.

If you have vulnerable servers and you’re relying on SGX as part of your operational security, check Intel security advisory INTEL-SA-00657 for protection and mitigation information.

According to the researchers who wrote this up, “Intel [has] released microcode and SGX Software Development Kit updates to fix the issue.”

The Linux kernel team also seems to be working right now on a patch that will allow you to configure your system so that it will always use x2APIC (which, as you will remember from earlier, doesn’t transmit APIC data via shared memory), and will gracefully prevent the system being forced back into xAPIC mode after bootup.

