Apple ships that recent “Rapid Response” spyware patch to everyone, fixes a second zero-day

Two weeks ago, we urged Apple users with recent hardware to grab the company’s second-ever Rapid Response patch.

As we pointed out at the time, this was an emergency bug fix to block off a web-browsing security hole that had apparently been used in real-world spyware attacks:

Component: WebKit Impact: Processing web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Description: The issue was addressed with improved checks. CVE-2023-37450: an anonymous researcher

The next-best thing to zero-click attacks

Technically, code execution bugs that can be triggered by getting you to look at a web page that contains booby-trapped content don’t count as so-called zero-click attacks.

A true zero-click attack is where cybercriminals can take over your device simply because it’s turned on and connected to a network.

Well-known examples include the infamous Code Red and Slammer worms of the early 2000s that spread globally in just a few hours by finding new victim computers by themselves, or the legendary Morris Worm of 1988 that distributed itself worldwide almost as soon as its creator unleashed it.

Morris, author of the eponymous worm, apparently intended to limit the side-effects of his “experiment” by infecting each potential victim only once. But he added code that randomly and occasionally reinfected existing victims, as an insurance policy against crashed or fake versions of the worm that might otherwise trick the worm into avoiding computers that seemed to be infectious but weren’t. Morris decided on purposely reinfecting computers 1/7th of the time, but that turned out to be far too aggressive. The worm therefore quickly overwhelmed the internet by infecting victims over and over again until they were doing little other than attacking everyone else.

But a look-and-get-pwned attack, also known as a drive-by install, where merely looking at a web page can invisibly implant malware, even though you don’t click any additional buttons or approve any pop-ups, is the next-best thing for an attacker.

After all, your browser isn’t supposed to download and run any unauthorised programs unless and until you explicitly give it permission.

As you can imagine, crooks love to combine a look-and-get-pwned exploit with a second, kernel-level code execution bug to take over your computer or your phone entirely.

Browser-based exploits often give attackers limited results, such as malware that can only spy on your browsing (as bad as that is on its own), or that won’t keep running after your browser exits or your device reboots.

But if the malware the attackers execute via an initial browser hole is specifically coded to exploit the second bug in the chain, then they immediately escape from any limitations or sandboxing implemented in the browser app by taking over your entire device at the operating system level instead.

Typically, that means they can spy on every app you run, and even on the operating system itself, as well as installing their malware as an official part of your device’s startup procedure, thus invisibly and automatically surviving any precautionary reboots you might perform.



More in-the-wild iPhone malware holes

Apple has now pushed out full-sized system upgrades, complete with brand new version numbers, for every operating system version that the company still supports.

After this latest update, you should see the following version numbers, as documented in the Apple security bulletins listed below:

As well as including a permanent fix for the abovementioned CVE-2023-37450 exploit (thus patching those who skipped the Rapid Response or who had older devices that weren’t eligible), these updates also deal with this listed bug:

Component: Kernel Impact: An app may be able to modify sensitive kernel state. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.7.1. Description: This issue was addressed with improved state management. CVE-2023-38606: Valentin Pashkov, Mikhail Vinogradov, Georgy Kucherin (@kucher1n), Leonid Bezvershenko (@bzvr_), and Boris Larin (@oct0xor) of Kaspersky

As in our write-up of Apple’s previous system-level updates at the end of June 2023, the two in-the-wild holes that made the list this time dealt with a WebKit bug and a kernel flaw, with the WebKit-level bug once again attributed to “an anonymous researcher” and the kernel-level bug once again attributed to Russian anti-virus outfit Kaspersky.

We’re therefore assuming that these patches relate to the so-called Triangulation Trojan malware, first reported by Kaspersky at the start of June 2023, after the company found that iPhones belonging to some of its own staff had been actively infected with spyware:

What to do?

Once again, we urge you to ensure that your Apple devices have downloaded (and then actually installed!) these updates as soon as you can.

Even though we always urge you to Patch early/Patch often, the fixes in these upgrades aren’t just there to close off theoretical holes.

Here, you’re shutting off cybersecurity flaws that attackers already know how to exploit.

Even if the crooks have only used them so far in a limited number of successful intrusions against older iPhones…

…why remain behind when you can jump ahead?

And if guarding against the Triangulation Trojan malware isn’t enough to convince you on its own, don’t forget that these updates also patch against numerous theoretical attacks that Apple and other Good Guys found proactively, including kernel-level code execution holes, elevation-of-privilege bugs, and data leakage flaws.

As always, head to Settings > General > Software Update to check whether you’ve correctly received and installed this emergency patch, or to jump to the front of the queue and fetch it right away if you haven’t.

(Note. On older Macs, check for updates using About This Mac > Software Update… instead.)


Hacking police radios: 30-year-old crypto flaws in the spotlight

If you’d been quietly chasing down cryptographic bugs in a proprietary police radio system since 2021, but you’d had to wait until the second half of 2023 to go public with your research, how would you deal with the reveal?

You’d probably do what researchers at boutique Dutch cybersecurity consultancy Midnight Blue did: line up a world tour of conference appearances in the US, Germany and Denmark (Black Hat, Usenix, DEF CON, CCC and ISC), and turn your findings into a BWAIN.

The word BWAIN, if you haven’t seen it before, is our very own jocular acronym that’s short for Bug With An Impressive Name, typically with its own logo, PR-friendly website and custom domain name.

(One notorious BWAIN, named after a legendary musical instrument, Orpheus’s Lyre, even had a theme tune, albeit played on a ukulele.)

Introducing TETRA:BURST

This research is dubbed TETRA:BURST, with the letter “A” stylised to look like a shattered radio transmission mast.

TETRA, if you’ve never heard of it before, is short for Terrestrial Trunked Radio, originally Trans-European Trunked Radio, and is widely used (outside North America, at least) by law enforcement, emergency services and some commercial organisations.

TETRA has featured on Naked Security before, when a Slovenian student received a criminal conviction for hacking the TETRA network in his own country after deciding that his vulnerability reports hadn’t been taken seriously enough:

Trunked radio needs fewer base stations and has a longer range than mobile phone networks, which helps in remote areas, and it supports both point-to-point and broadcast communications, desirable when co-ordinating law enforcement or rescue efforts.

The TETRA system, indeed, was standardised back in 1995, when the cryptographic world was very different.

Back then, cryptographic tools including the DES and RC4 ciphers, and the MD5 message digest algorithm, were still in widespread use, though all of them are now considered dangerously unsafe.

DES was superseded at the start of the 2000s because it uses encryption keys just 56 bits long.

Modern computers are sufficiently fast and cheap that determined cryptocrackers can fairly easily try out all 2^56 possible keys (what’s known as a brute-force attack, for obvious reasons) against intercepted messages.
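To put that in perspective, here’s a back-of-the-envelope sketch in Python. The guess rate and cluster size are illustrative assumptions, not benchmarks:

```python
# Rough feasibility estimate for a DES brute-force search.
# GUESSES_PER_SECOND and DEVICES are assumed figures for illustration only.
DES_KEYSPACE = 2 ** 56           # about 7.2e16 possible 56-bit keys
GUESSES_PER_SECOND = 10 ** 9     # assume one billion key trials per second per device
DEVICES = 1000                   # assume a modest cracking cluster

seconds = DES_KEYSPACE / (GUESSES_PER_SECOND * DEVICES)
print(f"Trying every 56-bit key would take about {seconds / 86400:.1f} days")
```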

RC4, which is supposed to turn input data with recognisable patterns (even a text string of the same character repeated over and over) into random digital shredded cabbage, was found to have significant imperfections.

These could be used to winkle out plaintext input by performing statistical analysis of ciphertext output.
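For the curious, here’s a minimal Python sketch of the RC4 algorithm itself (for illustration only; RC4’s keystream biases mean you should never use it to protect real data):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling (KSA), then keystream generation (PRGA).
    Illustration only -- statistical biases in the keystream make RC4 unsafe."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA: shuffle S using the key
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                            # PRGA: XOR each data byte with a keystream byte
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Even a string of repeated characters comes out looking random...
print(rc4(b"secret key", b"AAAAAAAAAAAAAAAA").hex())
```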

MD5, which is supposed to produce a pseudorandom 16-byte message digest from any input file, thus generating unforgeable fingerprints for files of any size, turned out to be flawed, too.

Attackers can easily trick the algorithm into churning out the same fingerprint for two different files, annihilating its value as a tamper-detection tool.
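Computing an MD5 fingerprint is still a one-liner with Python’s standard library, which is partly why the algorithm hung around for so long. The sketch below uses a hypothetical filename; the point is that two deliberately crafted, different files can nevertheless produce the very same 32-hex-character output:

```python
import hashlib

def md5_fingerprint(path: str) -> str:
    """Compute the 16-byte MD5 digest of a file, hex-encoded (32 characters)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Any change to a file *should* change its fingerprint, but crafted collisions
# mean two deliberately different files can share one MD5 digest.
print(md5_fingerprint("example.bin"))   # "example.bin" is a hypothetical filename
```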

End-to-end encryption for individual online transactions, which we now take for granted on the web thanks to secure HTTP (HTTPS, based on TLS, short for transport layer security), was both new and unusual back in 1995.

Transaction-based protection relied on the brand-new-at-the-time network-level protocol known as SSL (secure sockets layer), now considered sufficiently insecure that you’ll struggle to find it in use anywhere online.

Party like it’s 1995

Unlike DES, RC4, MD5, SSL and friends, TETRA’s 1995-era encryption remains in widespread use to this day, but hasn’t received much research attention, apparently for two main reasons.

Firstly, even though it’s used around the world, it’s not an everyday service that pops up in all our lives in the way that mobile telephones and web commerce do.

Secondly, the underlying encryption algorithms are proprietary, guarded as trade secrets under strict non-disclosure agreements (NDAs), so it simply hasn’t had the same level of public mathematical scrutiny as unpatented, open-source encryption algorithms.

In contrast, cryptosystems such as AES (which replaced DES), SHA-256 (which replaced MD5), ChaCha20 (which replaced RC4), and various iterations of TLS (which replaced SSL) have all been analysed, dissected, discussed, hacked, attacked and critiqued in public for years, following what’s known in the trade as Kerckhoffs’ Principle.

Auguste Kerckhoffs was a Dutch-born linguist who ended up as a professor of the German language in Paris.

He published a pair of seminal papers in the 1880s under the title Military Cryptography, in which he proposed that no cryptographic system should ever rely on what we now refer to as security through obscurity.

Simply put, if you need to keep the algorithm secret, as well as the decryption key for each message, you’re in deep trouble.

Your enemies will ultimately, and inevitably, get hold of that algorithm…

…and, unlike decryption keys, which can be changed at will, you’re stuck with the algorithm that uses those keys.

Use NDAs for commerce, not for crypto

Commercial NDAs are peculiarly purposeless for keeping cryptographic secrets, especially for successful products that end up with ever more partners signed up under NDA.

There are four obvious problems here, namely:

  • More and more people officially get the opportunity to figure out exploitable bugs, which they will never disclose if they stick to the spirit of their NDA.
  • More and more vendors get the chance to leak the algorithms anyway, if any one of them violates their NDA, whether by accident or design. As Benjamin Franklin, one of America’s best-known and well-remembered scientists, is supposed to have said, “Three may keep a secret, if two of them are dead.”
  • Sooner or later, someone will see the algorithm legally without a binding NDA. That person is then free to disclose it without breaking the letter of the NDA, and without trampling on its spirit if they happen to agree with Kerckhoffs’ Principle.
  • Someone not under NDA will eventually figure out the algorithm by observation. Amusingly, if that is the right word, cryptographic reverse engineers can be pretty sure their analysis is correct by comparing the behaviour of their alleged implementation against the real thing. Even small inconsistencies are likely to result in wildly different cryptographic outputs, if the algorithm mixes, minces, shreds, diffuses and scrambles its input in a sufficiently pseudorandom way.

The Dutch researchers in this story took the last approach, legally acquiring a bunch of compliant TETRA devices and figuring out how they worked without using any information covered by NDA.

Apparently, they discovered five vulnerabilities that ended up with CVE numbers, dating back to 2022 because of the time involved in liaising with TETRA vendors on how to fix the issues: CVE-2022-24400 to CVE-2022-24404 inclusive.

Obviously, they’re now holding back full details for maximum PR effect, with their first public paper scheduled for 2023-08-09 at the Black Hat 2023 conference in Las Vegas, USA.

What to do?

Advance information provided by the researchers is enough to remind us of three cryptographic must-follow rules right away:

  • Don’t violate Kerckhoffs’ Principle. Use NDAs or other legal instruments if you want to protect your intellectual property or to try to maximise your licensing fees. But never use “trade secrecy” in the hope of improving cryptographic security. Stick to trusted algorithms that have already survived serious public scrutiny.
  • Don’t rely on data you can’t verify. CVE-2022-24401 relates to how TETRA base stations and handsets agree on how to encrypt each transmission so that each burst of data gets encrypted uniquely. This means you can’t work out the keys to unscramble old data, even if you’ve already intercepted it, or predict the keys for future data to snoop on it later in real time. TETRA apparently does its key setup based on timestamps transmitted by the base station, so a properly programmed base station should never repeat previous encryption keys. But there’s no data authentication process to prevent a rogue base station from sending out bogus timestamps and thereby tricking a targeted handset into either reusing keystream data from yesterday, or leaking in advance the keystream it will use tomorrow. (See the keystream-reuse sketch just after this list.)
  • Don’t build in backdoors or other deliberate weaknesses. CVE-2022-24402 covers a deliberate security downgrade trick that can be triggered in TETRA devices using the commercial-level encryption code (this apparently doesn’t apply to devices bought officially for law enforcement or first responder use). This exploit allegedly turns 80-bit encryption, where snoopers need to try 2^80 different decryption keys in a brute-force attack, into 32-bit encryption. Given that DES was banished more than 20 years ago for using 56-bit encryption, you can be sure that 32 bits of key is far too small for 2023.
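Here’s the keystream-reuse sketch promised above. It isn’t TETRA’s actual cipher (the TEA algorithms aren’t public); it’s the generic stream-cipher property that CVE-2022-24401 exploits. If a rogue base station can trick a handset into reusing a keystream, an eavesdropper can XOR two intercepted ciphertexts together and cancel the keystream out entirely, leaving the XOR of the two plaintexts:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two different plaintexts encrypted with the SAME keystream -- the failure mode
# a rogue base station can force by replaying yesterday's timestamps.
keystream = os.urandom(32)
c1 = xor(b"UNIT 7 PROCEED TO CHECKPOINT A ", keystream)
c2 = xor(b"SUSPECT VEHICLE HEADING NORTH  ", keystream)

# The eavesdropper never sees the keystream, but XORing the two ciphertexts
# cancels it out, leaving plaintext1 XOR plaintext2 -- often enough to recover
# both messages with frequency analysis or crib dragging.
leak = xor(c1, c2)
print(leak == xor(b"UNIT 7 PROCEED TO CHECKPOINT A ", b"SUSPECT VEHICLE HEADING NORTH  "))  # True
```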

Fortunately, it looks as though CVE-2022-24401 has already been quashed with firmware updates (assuming users have applied them).

As for the rest of the vulnerabilities…

…we’ll have to wait until the TETRA:BURST tour kicks off for full details and mitigations.


S3 Ep144: When threat hunting goes down a rabbit hole

SING US A CYBERSECURITY SONG

Why your Mac’s calendar app says it’s JUL 17. One patch, one line, one file. Careful with that {axe,file}, Eugene. Storm season for Microsoft. When typos make you sing for joy.

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT


DOUG.  Patching by hand, two kinda/sorta Microsoft zero-days, and “Careful with that file, Eugene.”

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today?


DUCK.  Were you making an allusion to The Pink Floyd?


DOUG.  *THE* Pink Floyd, yes!


DUCK.  That’s the name by which they were originally known, I believe.


DOUG.  Oh, really?


DUCK.  They dropped the “The” because I think it got in the way.

The Pink Floyd.


DOUG.  That’s a fun fact!

And as luck would have it, I have more Fun Facts for you…

You know we start the show with This Week in Tech History, and we’ve got a two-fer today.

This week, on 17 July 2002, Apple rolled out “iCal”: calendar software that featured internet-based calendar sharing and the ability to manage multiple calendars.

“JUL 17” was prominently featured on the app’s icon, which even led July 17 to become World Emoji Day, established in 2014.

It’s quite a cascading effect, Paul!


DUCK.  Although, on your iPhone, you’ll notice that the icon changes to today’s date, because that’s very handy.

And you’ll notice that other service providers may or may not have chosen different dates, because “why copy your competition”, indeed.


DOUG.  Alright, let’s get into it.

We’ll talk about our first story.

This is about Zimbra and adventures in cross-site scripting.

Good old XSS, Paul:

Zimbra Collaboration Suite warning: Patch this 0-day right now (by hand)!


DUCK.  Yes.

That’s where you are essentially able to hack a website to include rogue JavaScript without breaking into the server itself.

You perform some action, or create some link to that site, that tricks the site into including content in its reply that doesn’t just mention, for example, the search term you typed in, like My Search Term, but includes additional text that shouldn’t be there, like My search <script> rogue JavaScript </script>.

In other words, you trick a site into displaying content, with its own URL in the address bar, that contains untrusted JavaScript in it.

And that means that the JavaScript you have sneakily injected actually has access to all the cookies set by that site.

So it can steal them; it can steal personal data; and, even more importantly, it can probably steal authentication tokens and stuff like that to let the crooks get back in next time.


DOUG.  OK, so what did Zimbra do in this case?


DUCK.  Well, the good news is that they reacted quickly because, of course, it was a zero-day.

Crooks were already using it.

So they actually took the slightly unusual approach of saying, “We’ve got the patch coming. You will get it fairly soon.”

But they said, quite thoughtfully, “We understand that you may want to take action sooner rather than later.”

Now, unfortunately, that does mean writing a script of your own to go and patch one line of code in one file in the product distribution on all your mailbox nodes.

But it’s a very small and simple fix.

And, of course, because it’s one line, you can easily change the file back to what it was if it should cause problems.

If you were dead keen to get ahead of the crooks, you could do that without waiting for the full release to drop…


DOUG.  And what a sense of accomplishment, too!

It’s been a while since we’ve been able to roll up our sleeves and just hand-patch something like this.

It’s like fixing the sink on a Saturday morning… you just feel good afterwards.

So if I was a Zimbra user, I’d be jumping all over this just because I like to get my hands on… [LAUGHTER]


DUCK.  And, unlike patching the sink, there was no crawling around in tight cupboards, and there was no risk of flooding your entire property.

The fix was clear and well-defined.

One line of code changed in one file.


DOUG.  Alright, so if I’m a programmer, what are some steps I can take to avoid cross-site scripting such as this?


DUCK.  Well, the nice thing about this bug, Doug, is it almost acts as documentation for the kind of things you need to look out for in cross-site scripting.

The patch shows that there’s a server side component which was simply taking a string and using that string inside a web form that would appear at the other end, in the user’s browser.

And you can see that what the program *now* does (this particular software is written in Java)… it calls a function escapeXML(), which is, if you like, the One True Way of taking a text string that you want to display and making sure that there are no magic XML or HTML characters in there that could trick the browser.

In particular: less than (<); greater than (>); ampersand (&); double quote ("); or single quote, also known as apostrophe (').

Those get converted into their long-form, safe HTML codes.

If I may use our standard Naked Security cliche, Doug: Sanitise thine inputs is the bottom line here.
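[For readers following along at home, here’s the same escaping idea sketched in Python rather than the Java used by Zimbra, using the standard library’s html.escape() function, which covers exactly those five characters:]

```python
import html

user_input = 'My search<script>alert("rogue JavaScript")</script>'

# html.escape() converts &, <, >, " and ' into their safe HTML entity forms,
# so the browser displays the text instead of treating it as markup or script.
safe = html.escape(user_input, quote=True)
print(safe)
# My search&lt;script&gt;alert(&quot;rogue JavaScript&quot;)&lt;/script&gt;
```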


DOUG.  Oooh, I love that one!

Great, let’s move on to Pink Floyd, obviously… we’ve been waiting for this all show.

If Pink Floyd were cybersecurity researchers, it’s fun to imagine that they may have written a hit song called “Careful with that file, Eugene” instead, Paul. [Pink Floyd famously produced a song called Careful with that axe, Eugene.]

Google Virus Total leaks list of spooky email addresses


DUCK.  Indeed.

“Careful with that file” is a reminder that sometimes, when you upload a file to an online service, if you pick the wrong one, you might end up redistributing the file rather than, for example, uploading it for secure storage.

Fortunately, not too much harm was done in this case, but this was something that happened at Google’s Virus Total service.

Listeners will probably know that Virus Total is a very popular service where, if you’ve got a file that either you know it’s malware and you want to know what lots of different products call it (so you know what to go hunting for in your threat logs), or if you think, “Maybe I want to get the sample securely to as many vendors as possible, as quickly as possible”…

…then you upload to Virus Total.

The file is meant to be made available to dozens of cybersecurity companies almost immediately.

That’s not quite the same as broadcasting it to the world, or uploading it to a leaky online cloud storage bucket, but the service *is* meant to share that file with other people.

And unfortunately, it seems that an employee inside Virus Total accidentally uploaded an internal file that was a list of customer email addresses to the Virus Total portal, and not to whatever portal they were supposed to use.

Now, the real reason for writing this story up, Doug, is this.

Before you laugh; before you point fingers; before you say, “What were they thinking?”…

..stop and ask yourself this one question.

“Have I ever sent an email to the wrong person by mistake?” [LAUGHTER]

That’s a rhetorical question. [MORE LAUGHTER]

We’ve all done it…


DOUG.  It is rhetorical!


DUCK.  …some of us more than once. [LAUGHTER]

And if you have ever done that, then what is it that guarantees you won’t upload a file to the wrong *server* by mistake, making a similar kind of error?

It is a reminder that there is many a slip, Douglas, between the cup and the lip.


DOUG.  Alright, we do have some tips for the good people here, starting with, I’d say, arguably one of our most unpopular pieces of advice: Log out from online accounts whenever you aren’t actually using them.


DUCK.  Yes.

Now, ironically, that might not have helped in this case because, as you can imagine, Virus Total is specifically engineered so that anybody can *upload* files (because they’re meant to be shared for the greater good of all, quickly, to people who need to see them), but only trusted customers can *download* stuff (because the assumption is that the uploads often do contain malware, so they’re not meant to be available to just anybody).

But when you think about the number of sites that you probably remain logged into all the time, that just makes it more likely that you will take the right file and upload it to the wrong place.

If you’re not logged into a site and you do try and upload a file there by mistake, then you will get a login prompt…

…and that will protect you from yourself!

It’s a fantastically simple solution, but as you say, it’s also outrageously unpopular because it is modestly inconvenient. [LAUGHTER]


DOUG.  Yes!


DUCK.  Sometimes, however, you’ve got to take one for the team.


DOUG.  Not to shift all the onus to the end users: If you’re in the IT team, consider putting controls on which users can send what sorts of files to whom.


DUCK.  Unfortunately, this kind of blocking is unpopular, if you like for the other-side-of-the-coin reason to why people don’t like logging out of accounts when they’re not using them.

When IT comes along and says, “You know what, we’re going to turn on the Data Loss Prevention [DLP] parts of our cybersecurity endpoint product”…

…people go, “Well, that’s inconvenient. What if it gets in the way? What if it interferes with my workflow? What if it causes a hassle for me? I don’t like it!”

So, a lot of IT departments may end up staying a little bit shy of potentially interfering with workflow like that.

But, Doug, as I said in the article, you will always get a second chance to send a file that wouldn’t go out the first time, by negotiating with IT, but you never get the chance to unsend a file that was not supposed to go out at all.


DOUG.  [LAUGHS] Exactly!

Alright, good tips there.

Our last story, but certainly not least.

Paul, I don’t have to remind you, but we should remind others…

…applied cryptography is hard, security segmentation is hard, and threat hunting is hard.

So what does that all have to do with Microsoft?

Microsoft hit by Storm season – a tale of two semi-zero days


DUCK.  Well, there’s been a lot of news in the media recently about Microsoft and its customers getting turned over, hit up, probed and hacked by a cybercrime group known as Storm.

And one part of this story revolves around 25 organisations that had these rogues inside their Exchange business.

They’re sort-of zero-days.

Now, Microsoft published a pretty full and fairly frank report about what happened, because obviously there were at least two blunders by Microsoft.

The way they tell the story can teach you an awful lot about threat hunting, and about threat response when things go wrong.


DOUG.  OK, so it looks like Storm got in via Outlook Web Access [OWA] using a bunch of usurped authentication tokens, which is basically like a temporary cookie that you present that says, “This person’s already logged in, they’re legit, let them in.”

Right?


DUCK.  Exactly, Doug.

When that kind of thing happens, which obviously is worrying because it allows the crooks to bypass the strong authentication phase (the bit where you have to type in your username, type in your password, then do a 2FA code; or where you have to present your Yubikey; or you have to swipe your smart card)…

…the obvious assumption, when something like that happens, is that the person at the other end has malware on one or more of their users’ computers.

Malware does get a chance to take a peek at things like browser content before it gets encrypted, which means that it can leech out authentication tokens and send them off to the crooks where they can be abused later.

Microsoft admit in their report that this was their first assumption.

And if it’s true, it’s problematic because it means that Microsoft and those 25 people have to go running around trying to do the threat hunting.

But if that *isn’t* the explanation, then it’s important to figure that out early on, so you don’t waste your own and everyone else’s time.

Then Microsoft realised, “Actually it looks as though the crooks are basically minting their own authentication tokens, which suggests that they must have stolen one of our supposedly secure Azure Active Directory token-signing keys.”

Well, that’s worrying!

*Then* Microsoft realised, “These tokens are actually apparently digitally signed by a signing key that’s only really supposed to be used for consumer accounts, what are called MSAs, or Microsoft accounts.”

In other words, the kind of signing key that would be used to create an authentication token, say if you or I were logging into our personal Outlook.com service.

Oh, no!

There’s another bug that means that it is possible to take a signed authentication token that is not supposed to work for the attack they have in mind, and then go in and mess around with people’s corporate email.

So, that all sounds very bad, which of course it is.

But there is an upside…

…and that is the irony that because this wasn’t supposed to work, because MSA tokens aren’t supposed to work on the corporate Azure Active Directory side of the house, and vice versa, no one at Microsoft had ever bothered writing code to use one token on the other playing field.

Which meant that all of these rogue tokens stood out.

So there was at least a giant, visible red flag for Microsoft’s threat hunting.

Fixing the problem, fortunately, because it’s a cloud side problem, means that you and I don’t need to rush out and patch our systems.

Basically, the solution is: disown the signing key that’s been compromised, so it doesn’t work anymore, and while we’re about it, let’s fix that bug that allows a consumer signing key to be valid on the corporate side of the Exchange world.

It sort-of is a bit of an “All’s well that ends well.”

But as I said, it’s a big reminder that threat hunting often involves a lot more work than you might at first think.

And if you read through Microsoft’s report, you can imagine just how much work went into this.


DOUG.  Well, in the spirit of catching everything, let’s hear from one of our readers in the Comment of the Week.

I can tell you first-hand after doing this for the better part of ten years, and I’m sure Paul can tell you first-hand after doing this in thousands and thousands of articles…

…typos are a way of life for a tech blogger, and if you’re lucky, sometimes you end up with a typo so good that you’re loath to fix it.

Such is the case with this Microsoft article.

Reader Dave quotes Paul as writing “which seemed to suggest that someone had indeed pinched a company singing [sic] key.”

Dave then follows up the quote by saying, “Singing keys rock.”

Exactly! [LAUGHTER]


DUCK.  Yes, it took me a while to realise that’s a pun… but yes, “singing key.” [LAUGHS]

What do you get if you drop a crate of saxophones into an army camp?


DOUG.  [LAUGHS]


DUCK.  [AS DRY AS POSSIBLE] A-flat major.


DOUG.  [COMBINED LAUGH-AND-GROAN] Alright, very good.

Dave, thank you for pointing that out.

And we do agree that singing keys rock; signing keys less so.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Google Virus Total leaks list of spooky email addresses

Early disclaimer: this isn’t quite the mother of all data breaches, nor even perhaps a younger cousin, so you can stand down from Blue Alert right away.

As far as we can tell, only names, email addresses and employers were leaked in the wrongly shared document.

But what names they were!

The leaked list apparently amounted to a handy Who’s Who of global cybersecurity experts, complete with email addresses, drawn from intelligence agencies, law enforcement groups, and serving military staff.

Threat intelligence company Recorded Future and German news site Der Spiegel have listed a wide range of victims, including the NSA, FBI and US Cyber Command in America, the German BSI (Federal Office for Information Security), the UK’s National Cyber Security Centre…

…and we could go on.

Other countries with affected government ministries apparently include, in no particular order: Taiwan, Lithuania, Israel, the Netherlands, Poland, Saudi Arabia, Qatar, France, the United Arab Emirates, Japan, Estonia, Turkey, Czechia, Egypt, Colombia, Ukraine, and Slovakia.

Der Spiegel suggests that numerous big German companies were affected, too, including BMW, Allianz, Mercedes-Benz, and Deutsche Telekom.

A total of about 5600 names, emails and organisational affiliations were leaked in all.

How did the leak happen?

It helps to remember that Virus Total is all about sample sharing, where anyone in the world (whether they’re paying Virus Total customers or not) can upload suspicious files in order to achieve two prompt outcomes:

  • Scan the files for malware using dozens of participating products. (Sophos is one.) Note that this is not a way to compare detection rates or to “test” products, because only one small component in each product is used, namely its pre-execution, file-based, anti-malware scanner. But it’s a very quick and convenient way of disambiguating the many different detection names for common malware families that different products inevitably end up with.
  • Share uploaded files swiftly and securely with participating vendors. Any company whose product is in the detection mix can download new samples, whether they already detected them or not, for further analysis and research. Sample sharing schemes in the early days of anti-malware research typically relied on PGP encryption scripts and closed mailing lists, but Virus Total’s account-based secure download system is much simpler, speedier and more scalable than that.

In fact, in those early days of malware detection and prevention, most samples were so-called executable files, or programs, which rarely if ever contained personally identifiable information.

Even though helpfully sharing a malware-infected sample of a proprietary program might ultimately attract a complaint from the vendor on copyright grounds, that sort of objection was easily resolved simply by deleting the file later on, given that the file wasn’t supposed to be kept secret, merely to be licensed properly.

(In real life, few vendors minded, given that the files were never shared widely, rarely formed a complete application installation, and anyway were being shared specifically for malware analysis purposes, not for piracy.)

Non-executable files containing malware were rarely shared, and could easily and automatically be identified if you tried to share one by mistake because they lacked the tell-tale starting bytes of a typical program file.

In case you’re wondering, DOS and Windows .EXE files have, from the earliest days of MS-DOS onwards, started with the text characters MZ, which come out as 77 90 in decimal and as 0x4D 0x5A in hexadecimal. This makes EXEs easy to recognise, and all non-EXEs similarly quick to spot. And in case you’re wondering why MZ was chosen, the answer is that those are the initials of Microsoft programmer Mark Zbikowski, who came up with the file format in the first place. For what it’s worth, and as an additional fun fact, memory blocks allocated by DOS all started with the byte M, except for the very last one in the list, which was flagged with Z.
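As a minimal sketch of the kind of check described above (the filename is hypothetical, and real services obviously do far more than this):

```python
def looks_like_dos_exe(path: str) -> bool:
    """Crude pre-upload check: DOS/Windows EXE files start with the bytes 'MZ'
    (0x4D 0x5A), so anything else is probably a data file, not a program."""
    with open(path, "rb") as f:
        return f.read(2) == b"MZ"

# Hypothetical usage: flag anything that isn't a program file before auto-sharing it.
if not looks_like_dos_exe("sample_to_upload.bin"):   # hypothetical filename
    print("Not an EXE -- check for personal data before sharing!")
```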

Data files with added code

In 1995, the first Microsoft Word virus appeared, dubbed Concept because that’s exactly what it was, albeit an unhelpful one.

From then on, a significant proportion of active malware samples have been files that consist primarily of private data, but with unauthorised malware code added later in the form of scripts or programming macros.

Technically, there are ways to purge such files of most of their personal information first, such as overwriting every numeric cell in a spreadsheet with the value 42, or replacing every printable non-space character in a document with X or x, but even that sort of pre-processing is prone to trouble.
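Here’s a minimal Python sketch of that second, character-replacement idea; the next two paragraphs explain why even this sort of scrubbing is prone to trouble:

```python
import string

def crude_redact(text: str) -> str:
    """Replace every printable, non-whitespace character with 'X', keeping
    layout intact -- the naive document-scrubbing idea described above."""
    keep = set(string.whitespace)
    return "".join(ch if ch in keep else "X" for ch in text)

print(crude_redact("Invoice 2023-07 for Jane Doe\nTotal due: $1,234.56"))
# XXXXXXX XXXXXXX XXX XXXX XXX
# XXXXX XXXX XXXXXXXXX
```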

Firstly, numerous malware families sneakily store at least some of their own needed data as added information in the personal part of such files, so that trying to bowdlerise, redact or rewrite the sensitive, “unsharable” parts of the file causes the malware to stop working, or to behave differently.

This rather ruins the purpose of collecting a real-life sample in the first place.

Secondly, reliably redacting all personal information inside complex, multipart files is effectively an unsolvable problem in its own right.

Even apparently sanitised files may nevertheless leak personal data if you aren’t careful, especially if you’re trying to redact files stored in proprietary formats for which you have little or no official documentation.

In short, any upload system that accepts files of arbitrary type, including programs, scripts, configuration data, documents, spreadsheets, images, videos, audio and many more…

…introduces the risk that every now and then, without meaning to, someone with the best will in the world will inadvertently share a file that should never have been released, not even on the basis of working for the greater good of all.

Right file, wrong place

And that’s exactly what happened here.

A file containing a structured list of some 5600 names, email addresses and cybersecurity affiliations of Virus Total customers was uploaded to Virus Total’s scanning-and-sharing service by mistake…

…by an employee inside Virus Total.

This really does appear to have been an innocent mistake that inadvertently shared the file with exactly the wrong people.

And before you say to yourself, “What were they thinking?”

…ask yourself how many different file upload services your own company uses for various purposes, and whether you would back yourself never to put the right file in the wrong place yourself.

After all, many companies use numerous different outsourced services for different parts of their business workflow these days, so you might have completely different web upload portals for your vacation requests, expense claims, timesheets, travel requests, pension contributions, training courses, source code checkins, sales reports and more.

If you’ve ever sent the right email to the wrong person (and you have!), you should assume that uploading the right file to the wrong place is the sort of mistake that you, too, could make, leaving you asking yourself, “What was I thinking?”

What to do?

Here are three tips, all of which are digital lifestyle changes rather than settings or checkboxes you can simply turn on.

It’s unpopular advice, but logging out from online accounts whenever you aren’t actually using them is a great way to start.

That won’t necessarily stop you uploading to sites that are open to anonymous users, like Virus Total (downloads require a logged-in account, but uploads don’t).

But it greatly reduces your risk of unintentionally interacting with other sites, even if all you do is inadvertently like a social media post by mistake, when you didn’t want to.

If you’re in the IT team, consider putting controls on which users can send what sorts of file to whom.

You could consider using firewall upload rules to limit which file types can be sent to what sites, or activating various data loss prevention policies in your endpoint security software to warn users when they look like sending something somewhere they shouldn’t.

And if you’re not in IT, don’t take it personally if you one day find your upload freedoms restricted by order of the security team.

After all, you’ll always get a second chance to send a file that wouldn’t go out the first time, but you never get the chance to unsend a file that wasn’t supposed to go out at all.

We’re willing to bet that the Google employee who uploaded the wrong file in this incident would much rather be sitting down right now to negotiate with the IT department about having overly strict upload restrictions relaxed…

…than sitting down to explain to the security team why they uploaded the right file to the wrong place.

As Pink Floyd might have sung, in their early days, “Careful with that file, Eugene!”


Microsoft hit by Storm season – a tale of two semi-zero days

At the tail-end of last week, Microsoft published a report entitled Analysis of Storm-0558 techniques for unauthorized email access.

In this rather dramatic document, the company’s security team revealed the background to a previously unexplained hack in which data including email text, attachments and more were accessed:

from approximately 25 organizations, including government agencies and related consumer accounts in the public cloud.

The bad news, even though only 25 organisations were apparently attacked, is that this cybercrime may nevertheless have affected a large number of individuals, given that some US government bodies employ anywhere from tens to hundreds of thousands of people.

The good news, at least for the vast majority of us who weren’t exposed, is that the tricks and bypasses used in the attack were specific enough that Microsoft threat hunters were able to track them down reliably, so the final total of 25 organisations does indeed seem to be a complete hit-list.

Simply put, if you haven’t yet heard directly from Microsoft about being a part of this hack (the company has obviously not published a list of victims), then you may as well assume you’re in the clear.

Better yet, if better is the right word here, the attack relied on two security failings in Microsoft’s back-end operations, meaning that both vulnerabilities could be fixed “in house”, without pushing out any client-side software or configuration updates.

That means there aren’t any critical patches that you need to rush out and install yourself.

The zero-days that weren’t

Zero-days, as you know, are security holes that the Bad Guys found first and figured out how to exploit, thus leaving no days available during which even the keenest and best-informed security teams could have patched in advance of the attacks.

Technically, therefore, these two Storm-0558 holes can be considered zero-days, because the crooks busily exploited the bugs before Microsoft was able to deal with the vulnerabilities involved.

However, given that Microsoft carefully avoided the word “zero-day” in its own coverage, and given that fixing the holes didn’t require all of us to download patches, you’ll see that we referred to them in the headline above as semi-zero days, and we’ll leave the description at that.

Nevertheless, the nature of the two interconnected security problems in this case is a vital reminder of three things, namely that:

  • Applied cryptography is hard.
  • Security segmentation is hard.
  • Threat hunting is hard.

The first signs of evildoing showed crooks sneaking into victims’ Exchange data via Outlook Web Access (OWA), using illicitly acquired authentication tokens.

Typically, an authentication token is a temporary web cookie, specific to each online service you use, that the service sends to your browser once you’ve proved your identity to a satisfactory standard.

To establish your identity strongly at the start of a session, you might need to enter a password and a one-time 2FA code, to present a cryptographic “passkey” device such as a Yubikey, or to unlock and insert a smart card into a reader.

Thereafter, the authentication cookie issued to your browser acts as a short-term pass so that you don’t need to enter your password, or to present your security device, over and over again for every single interaction you have with the site.

You can think of the initial login process like presenting your passport at an airline check-in desk, and the authentication token as the boarding card that lets you into the airport and onto the plane for one specific flight.

Sometimes you might be required to reaffirm your identity by showing your passport again, such as just before you get on the plane, but often showing the boarding card alone will be enough for you to affirm your “right to be there” as you make your way around the airside parts of the airport.

Likely explanations aren’t always right

When crooks start showing up with someone else’s authentication token in the HTTP headers of their web requests, one of the most likely explanations is that the criminals have already implanted malware on the victim’s computer.

If that malware is designed to spy on the victim’s network traffic, it typically gets to see the underlying data after it’s been prepared for use, but before it’s been encrypted and sent out.

That means the crooks can snoop on and steal vital private browsing data, including authentication tokens.

Generally speaking, attackers can’t sniff out authentication tokens as they travel across the internet any more, as they commonly could until about 2010. That’s because every reputable online service these days requires that traffic to and from logged-on users must travel via HTTPS, and only via HTTPS, short for secure HTTP.
HTTPS uses TLS, short for transport layer security, which does what its name suggests. All data is strongly encrypted as it leaves your browser but before it gets onto the network, and isn’t decrypted until it reaches the intended server at the other end. The same end-to-end data scrambling process happens in reverse for the data that the server sends back in its replies, even if you try to retrieve data that doesn’t exist and all the server needs to tell you is a perfunctory 404 Page not found.

Fortunately, Microsoft threat hunters soon realised that the fraudulent email interactions weren’t down to a problem triggered at the client side of the network connection, an assumption that would have sent the victim organisations off on 25 separate wild goose chases looking for malware that wasn’t there.

The next-most-likely explanation is one that in theory is easier to fix (because it can be fixed for everyone in one go), but in practice is more alarming for customers, namely that the crooks have somehow compromised the process of creating authentication tokens in the first place.

One way to do this would be to hack into the servers that generate them and to implant a backdoor to produce a valid token without checking the user’s identity first.

Another way, which is apparently what Microsoft originally investigated, is that the attackers were able to steal enough data from the authentication servers to generate fraudulent but valid-looking authentication tokens for themselves.

This implied that the attackers had managed to steal one of the cryptographic signing keys that the authentication server uses to stamp a “seal of validity” into the tokens it issues, to make it as good-as-impossible for anyone to create a fake token that would pass muster.

By using a secure private key to add a digital signature to every access token issued, an authentication server makes it easy for any other server in the ecosystem to check the validity of the tokens that they receive. That way, the authentication server can even work reliably across different networks and services without ever needing to share (and regularly to update) a leakable list of actual, known-good tokens.
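Microsoft hasn’t published the token format involved, but here’s a toy illustration of that general idea in Python. It uses a symmetric HMAC from the standard library purely for brevity; a real token service such as Azure AD signs tokens asymmetrically, so verifiers only ever hold the public half of the key:

```python
import base64, hashlib, hmac, json

SIGNING_KEY = b"server-side secret"   # stands in for the service's signing key

def issue_token(claims: dict) -> str:
    """Serialise the claims and append a keyed signature, so any server that
    holds the verification key can validate the token on its own, without
    consulting a shared database of known-good tokens."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str):
    body, _, sig = token.partition(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                   # signature doesn't check out: reject
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"user": "doug@example.com", "account_type": "consumer"})
print(verify_token(token))            # {'user': 'doug@example.com', 'account_type': 'consumer'}
```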

A hack that wasn’t supposed to work

Microsoft ultimately determined that the rogue access tokens in the Storm-0558 attack were legitimately signed, which seemed to suggest that someone had indeed pinched a company singing key…

…but they weren’t actually the right sort of tokens at all.

Corporate accounts are supposed to be authenticated in the cloud using Azure Active Directory (AD) tokens, but these fake attack tokens were signed with what’s known as an MSA key, short for Microsoft consumer account.

Loosely speaking, the crooks were minting fake authentication tokens that passed Microsoft’s security checks, yet those tokens were signed as if for a user logging into a personal Outlook.com account instead of for a corporate user logging into a corporate account.

In one word, “What?!!?!”

Apparently, the crooks weren’t able to steal a corporate-level signing key, only a consumer-level one (that’s not a disparagement of consumer-level users, merely a wise cryptographic precaution to divide-and-separate the two parts of the ecosystem).

But having pulled off this first semi-zero day, namely acquiring a Microsoft cryptographic secret without being noticed, the crooks apparently found a second semi-zero day by means of which they could pass off an access token signed with a consumer-account key that should have signalled “this key does not belong here” as if it were an Azure AD-signed token instead.

In other words, even though the crooks were stuck with the wrong sort of signing key for the attack they had planned, they nevertheless found a way to bypass the divide-and-separate security measures that were supposed to stop their stolen key from working.
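In toy form (continuing the hedged HMAC illustration above, and emphatically not Microsoft’s actual code), the divide-and-separate fix amounts to verifying a token against the key for the scope it’s being used in, rather than against any key that happens to produce a valid-looking signature:

```python
import hashlib, hmac

# One signing key per side of the house -- the divide-and-separate idea.
KEYS = {
    "consumer":  b"toy MSA signing key",
    "corporate": b"toy Azure AD signing key",
}

def verify_for_scope(body: bytes, sig: str, required_scope: str) -> bool:
    """Only accept the token if it verifies under the key for the scope the
    caller is trying to use: a consumer-signed token must not open corporate
    mailboxes, however valid its signature looks in isolation."""
    expected = hmac.new(KEYS[required_scope], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

body = b'{"user": "attacker", "mailbox": "corporate-target"}'
consumer_sig = hmac.new(KEYS["consumer"], body, hashlib.sha256).hexdigest()

print(verify_for_scope(body, consumer_sig, "consumer"))    # True: fine for a personal account
print(verify_for_scope(body, consumer_sig, "corporate"))   # False: the loophole, once closed
```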

More bad-and-good news

The bad news for Microsoft is that this isn’t the only time the company has been found wanting in respect of signing key security in the past year.

The latest Patch Tuesday, indeed, saw Microsoft belatedly offering up blocklist protection against a bunch of rogue, malware-infected Windows kernel drivers that Redmond itself has signed under the aegis of its Windows Hardware Developer Program.

The good news is that, because the crooks were using corporate-style access tokens signed with a consumer-style cryptographic key, their rogue authentication credentials could reliably be threat-hunted once Microsoft’s security team knew what to look for.

In jargon-rich language, Microsoft notes that:

The use of an incorrect key to sign the requests allowed our investigation teams to see all actor access requests which followed this pattern across both our enterprise and consumer systems.

Use of the incorrect key to sign this scope of assertions was an obvious indicator of the actor activity as no Microsoft system signs tokens in this way.

In plainer English, the downside of the fact that no one at Microsoft knew about this in advance (thus preventing it from being patched proactively) led, ironically, to the upside that no one at Microsoft had ever tried to write code to work that way.

And that, in turn, meant that the rogue behaviour in this attack could be used as a reliable, unique IoC, or indicator of compromise.

That, we assume, is why Microsoft now feels confident to state that it has tracked down every instance where these double-semi-zero day holes were exploited, and thus that its 25-strong list of affected customers is an exhaustive one.

What to do?

If you haven’t been contacted by Microsoft about this, then we think you can be confident you weren’t affected.

And because the security remedies have been applied inside Microsoft’s own cloud service (namely, disowning any stolen MSA signing keys and closing the loophole allowing “the wrong sort of key” to be used for corporate authentication), you don’t need to scramble to install any patches yourself.

However, if you are a programmer, a quality assurance practitioner, a red teamer/blue teamer, or otherwise involved in IT, please remind yourself of the three points we made at the top of this article:

  • Applied cryptography is hard. You don’t just need to choose the right algorithms, and to implement them securely. You also need to use them correctly, and to manage any cryptographic keys that the system relies upon with suitable long-term care.
  • Security segmentation is hard. Even when you think you’ve split a complex part of your ecosystem into two or more parts, as Microsoft did here, you need to make sure that the separation really does work as you expect. Probe and test the security of the separation yourself, because if you don’t test it, the crooks certainly will.
  • Threat hunting is hard. The first and most obvious explanation isn’t always the right one, or might not be the only one. Don’t stop hunting when you have your first plausible explanation. Keep going until you have not only identified the actual exploits used in the current attack, but also discovered as many other potentially related causes as you can, so you can patch them proactively.

To quote a well-known phrase (and the fact that it’s true means we aren’t worried about it being a cliché): Cybersecurity is a journey, not a destination.


Short of time or expertise to take care of cybersecurity threat hunting? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶

