Category Archives: News

Breaching airgap security: using your phone’s gyroscope as a microphone!

Cybersecurity stories are like buses: the one you’re waiting for doesn’t come along for ages, then two arrive at once.

The specialist subject that suddenly popped up twice this week is: resonance.

On Monday, we wrote about Janet Jackson’s 1989 song Rhythm Nation, and how it inadvertently turned into a proof-of-concept for a Windows-crashing exploit that was reported way back in 2005.

That story was publicised only recently, as a bit of weird historical fun, and with an equal sense of fun, MITRE assigned it an official CVE bug number (confusingly, however, with a 2022 datestamp, because that’s when the number was assigned).

In that “exploit”, something about the beat and mix of frequencies in the song is alleged to have troubled the disk drives in a certain vendor’s Windows laptops, matching the natural vibrational frequencies of the old-school hard disks…

…to the point that the resonance effects produced enough vibration to crash the disk, which crashed the driver, which crashed Windows.

Apparently, even nearby laptops with the same model of disk could be R&Bed to the point of failure, bringing down the operating system remotely.

The solution, apparently, involved adding some sort of band-pass filter (band as in “range of frequencies”, not as in “group of musicians”) that chopped out the resonance and the overload, but left the sound well-defined enough to sound normal.

Two buses at once

Well, guess what?

At around the same time that the Rhythm Nation story broke, a researcher at Ben-Gurion University of the Negev in Israel published a research paper about resonance problems in mobile phone gyroscopes.

Modern phone gyroscopes don’t have spinning flywheels housed in gimbals, like the balancing gyroscope toys you may have seen or even owned as a youngster, but are based on etched silicon nanostructures that detect rotation via the Coriolis effect acting on tiny vibrating elements.

Mordechai Guri’s paper is entitled GAIROSCOPE: Injecting Data from Air-Gapped Computers to Nearby Gyroscopes, and the title pretty much summarises the story.

By the way, if you’re wondering why the keywords Ben-Gurion University and airgap ring a bell, it’s because academics there routinely have absurd amounts of fun as regular contributors to the field of managing data leakage into and out of secure areas.

Maintaining an airgap

So-called airgapped networks are commonly used for tasks such as developing anti-malware software, researching cybersecurity exploits, handling secret or confidential documents safely, and keeping nuclear research facilities free from outside interference.

The name means literally what it says: there’s no physical connection between the two parts of the network.

So, if you optimistically assume that alternative networking hardware such as Wi-Fi and Bluetooth is properly controlled, data can only move between “inside” and “outside” in a way that requires active human intervention, and can therefore be robustly regulated, monitored, supervised, signed off, logged, and so on.

But what about a corrupt insider who wants to break the rules and steal protected data in a way that their own managers and security team are unlikely to spot?

Ben-Gurion University researchers have come up with many weird but workable data exfiltration tricks over the years, along with techniques for detecting and preventing them, often giving them really funky names…

…such as LANTENNA, where innocent-looking network packets on the wires connecting up the trusted side of the network actually produce faint radio waves that can be detected by a collaborator outside the secure lab with an antenna-equipped USB dongle and a software-defined radio receiver.

Or fan speeds used to send covert sound signals, in a trick dubbed the FANSMITTER.

Or using capacitors on a motherboard to act as tiny stand-in speakers in a computer with its own loudspeaker deliberately removed.

Or adding meaning to the amount of red tint on the screen from second to second, and many other abstruse airbridging tricks.

The trouble with sound

Exfiltrating data via a loudspeaker is easy enough (computer modems and acoustic couplers were doing it more than 50 years ago), but there are two problems here: [1] the sounds themselves squawking out of speakers on the trusted side of an airgapped network are a bit of a giveaway, and [2] you need an undetected, unregulated microphone on the untrusted side of the network to pick up the noises and record them surreptitiously.

Problem [1] was overcome by the discovery that many if not most computer speakers can actually produce so-called ultrasonic sounds, with frequencies high enough (typically 17,000 hertz or above) that few, if any, humans can hear them.

At the same time, a typical mobile phone microphone can pick up ultrasonic sounds at the other side of the airgap, thus providing a covert audio channel.

But overcoming problem [2] was harder, at least in part because most modern mobile phones and tablets have easily-verified configuration settings to control microphone use.

So, phones that are pre-rigged to violate “no recording devices allowed” policies can fairly easily be spotted by a supervisory check before they’re allowed into a secure area.

(In other words, there’s a real chance of being caught with a “live mic” if your phone is configured in an obviously non-compliant condition, which could result in getting arrested or worse.)

As you’ll have figured from the title of Guri’s paper, however, it turns out that the gyroscope chip in most modern mobile phones – the chip that detects when you’ve turned the screen sideways or picked the device up – can be used as a very rudimentary microphone.

Greatly simplified, the GAIROSCOPE data exfiltration system involves exposing a known mobile phone to a range of ultrasonic frequencies (in Guri’s example, these were just above 19,000 hertz, too high for almost anyone on earth to hear) and finding out a precise frequency that provokes detectably abnormal resonance in the gyroscope chip.

Once you’ve found one or more resonant frequencies safely out of human hearing range, you’ve effectively got yourself both ends of a covert data signalling channel, based on frequencies that can inaudibly be generated at one end and reliably detected, without using a regular microphone, at the other.
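
If you’re wondering what the signalling side might look like, here’s a minimal sketch in Python of the sort of binary frequency-shift keying (B-FSK) involved. Be warned that the frequencies, bit period and FFT bins below are illustrative values of our own, not Guri’s actual parameters:

    import numpy as np

    AUDIO_RATE = 44100       # transmitter's audio sample rate (Hz)
    F0, F1 = 19100, 19400    # hypothetical resonant frequencies for 0 and 1
    BIT_SECS = 1.0           # matches the roughly 1 bit/sec throughput

    def tone(freq, secs, rate=AUDIO_RATE):
        t = np.arange(int(rate * secs)) / rate
        return 0.8 * np.sin(2 * np.pi * freq * t)

    def encode(bits):
        # One inaudible tone per bit: F1 means 1, F0 means 0.
        return np.concatenate([tone(F1 if b else F0, BIT_SECS) for b in bits])

    def decode_bit(gyro_window, bin0, bin1):
        # The receiver never touches the microphone: it watches the gyroscope
        # sample stream, where each tone shows up as extra energy at a known,
        # pre-measured (aliased) FFT bin found during calibration.
        spectrum = np.abs(np.fft.rfft(np.asarray(gyro_window)))
        return int(spectrum[bin1] > spectrum[bin0])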

The reason for targeting the gyroscope is that most mobile phones treat the gyroscope signal as uncontroversial from a privacy and security point of view, and allow apps (on Android, this even includes the Chrome browser) to read the gyroscope’s X, Y and Z readings by default, without any special permissions.

This means a mobile device that has apparently been configured into “no eavesdropping possible” mode could nevertheless be receiving secret, inaudible data via a covert audio channel.

Don’t get too excited about throughput, though.

Data rates generally seem to be about 1 bit per second, which makes 50-year-old computer modems seem fast…

…but data such as secret keys or passwords are often only a few hundred or a few thousand bits long, and even 1 bit/sec could be enough to leak them across an otherwise secure and healthy airgap in a few minutes or hours.
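
The back-of-envelope arithmetic is easy enough (the 1 bit/sec figure is approximate, and a real transfer would need framing and error-correction on top):

    # How long to leak a secret across the airgap at about 1 bit/sec?
    for name, bits in [("AES-128 key", 128),
                       ("AES-256 key", 256),
                       ("RSA-4096 private key", 4096)]:
        print(f"{name}: {bits} bits is roughly {bits / 60:.0f} minutes")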

What to do?

The obvious “cure” for this sort of trick is to ban mobile phones entirely from your secure areas, a precaution that you should expect in the vicinity of any serious airgapped network.

In less-secure areas where airgaps are used, but mobile phones are nevertheless allowed (subject to specific verified settings) as an operational convenience, the invention of GAIROSCOPE changes the rules.

From now on, you will want to verify that users have turned off their “motion detection” system settings, in addition to blocking access to the microphone, Wi-Fi, Bluetooth and other features already well-known for the data leakage risks they bring.

Lastly, if you’re really worried, you could disconnect internal speakers in any computers on the secure side of the network…

…or use an active frequency filter, just like that unnamed laptop vendor did to block the rogue Rhythm Nation signals in 2005.

(Guri’s paper shows a simple analog electrical circuit to cut off audio frequencies above a chosen value.)
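
Guri’s countermeasure circuit is analog, but you can sketch the same idea digitally in a few lines. Here’s a hedged example using SciPy’s Butterworth filter; the 17,000 hertz cutoff is our illustrative choice, not a value from the paper:

    import numpy as np
    from scipy.signal import butter, lfilter

    RATE = 44100     # sample rate (Hz)
    CUTOFF = 17000   # discard everything above this frequency (Hz)

    def ultrasound_blocker(samples, rate=RATE, cutoff=CUTOFF):
        # 6th-order Butterworth low-pass: audible content passes through
        # almost untouched, covert ultrasonic tones are strongly attenuated.
        b, a = butter(6, cutoff / (rate / 2), btype="low")
        return lfilter(b, a, samples)

An analog circuit does the same job before the signal ever reaches the speaker, with the advantage that malware can’t simply reconfigure it in software.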


Bitcoin ATMs leeched by attackers who created fake admin accounts

You wouldn’t know it from visiting the company’s main website, but General Bytes, a Czech company that sells Bitcoin ATMs, is urging its users to patch a critical money-draining bug in its server software.

The company claims worldwide sales of more than 13,000 ATMs, which retail for $5000 and up, depending on features and looks.

Not all countries have taken kindly to cryptocurrency ATMs – the UK regulator, for example, warned in March 2022 that none of the ATMs operating in the country at the time were officially registered, and said that it would be “contacting the operators instructing that the machines be shut down”.

We went to check on our local crypto ATM at the time, and found it displaying a “Terminal offline” message. (The device has since been removed from the shopping centre where it was installed.)

Nevertheless, General Bytes says it serves customers in more than 140 countries, and its global map of ATM locations shows a presence on every continent except Antarctica.

Security incident reported

According to the General Bytes product knowledgebase, a “security incident” at a severity level of Highest was discovered last week.

In the company’s own words:

The attacker was able to create an admin user remotely via CAS administrative interface via a URL call on the page that is used for the default installation on the server and creating the first administration user.

As far as we can tell, CAS is short for Coin ATM Server, and every operator of General Bytes cryptocurrency ATMs needs one of these.

You can host your CAS anywhere you like, it seems, including on your own hardware in your own server room, but General Bytes has a special deal with hosting company Digital Ocean for a low-cost cloud solution. (You can also let General Bytes run the server for you in the cloud in return for a 0.5% cut of all cash transactions.)

According to the incident report, the attackers performed a port scan of Digital Ocean’s cloud services, looking for listening web services (ports 7777 or 443) that identified themselves as General Bytes CAS servers, in order to find a list of potential victims.

Note that the vulnerability exploited here was not down to Digital Ocean, or limited to cloud-based CAS instances; we’re guessing that the attackers simply decided that Digital Ocean was a good place to start looking.

Remember that with a very high-speed internet connection (e.g. 10Gbit/sec), and using freely available software, determined attackers can now scan the entire IPv4 internet address space in hours, or even minutes.

That’s how public vulnerability search engines such as Shodan and Censys work, continually trawling the internet to discover which servers, and what versions, are currently active at which online locations.
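
To give you an idea of how little effort that sort of fingerprinting takes, here’s a deliberately crude sketch in Python. Note that the “batm” marker string is a hypothetical stand-in of ours; we don’t know what fingerprint the attackers actually matched on:

    import socket

    def looks_like_cas(host, port=7777, timeout=2):
        # Crude service fingerprint: connect, send a minimal HTTP probe,
        # and look for a tell-tale string in the reply. "batm" is a purely
        # hypothetical marker here, not the real CAS fingerprint.
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(b"GET / HTTP/1.0\r\nHost: %b\r\n\r\n" % host.encode())
                reply = s.recv(4096)
            return b"batm" in reply.lower()
        except OSError:
            return False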

Apparently, a vulnerability in the CAS itself allowed the attackers to manipulate the settings of the victim’s cryptocurrency services, including:

  • Adding a new user with administrative privileges.
  • Using this new admin account to reconfigure existing ATMs.
  • Diverting all invalid payments to a wallet of their own.

As far as we can see, this means the attacks conducted were limited to transfers or withdrawals where the customer made a mistake.

In such cases, it seems, instead of the ATM operator collecting the misdirected funds so they could subsequently be reimbursed or correctly redirected…

…the funds would go directly and irreversibly to the attackers.

General Bytes didn’t say how this flaw came to its attention, though we imagine that any ATM operator faced with a support call about a failed transaction would quickly notice that their service settings had been tampered with, and raise the alarm.

Indicators of Compromise

The attackers, it seemed, left behind various telltale signs of their activity, so that General Bytes was able to identify numerous so-called Indicators of Compromise (IoCs) to help their users identify hacked CAS configurations.

(Remember, of course, that the absence of IoCs doesn’t guarantee the absence of any attackers, but known IoCs are a handy place to start when it comes to threat detection and response.)

Fortunately, perhaps because this exploit relied on invalid payments, rather than allowing the attackers to drain ATMs directly, overall financial losses in this incident don’t run into the multimillion-dollar amounts often associated with cryptocurrency blunders.

General Bytes claimed yesterday [2022-08-22] that the “[i]ncident was reported to Czech Police. Total damage caused to ATM operators based on their feedback is US$16,000.”

The company also automatically deactivated any ATMs that it was managing on behalf of its customers, thus requiring those customers to log in and review their own settings before reactivating their ATM devices.

What to do?

General Bytes has listed an 11-step process that its customers need to follow in order to remediate this issue, including:

  • Patching the CAS server.
  • Reviewing firewall settings to restrict access to as few network users as possible.
  • Deactivating ATM terminals so that the server can be brought up again for review.
  • Reviewing all settings, including any bogus terminals that may have been added.
  • Reactivating terminals only after completing all threat-hunting steps.

This attack, by the way, is a strong reminder of why contemporary threat response isn’t simply about patching holes and removing malware.

In this case, the criminals didn’t implant any malware: the attack was orchestrated simply through malevolent configuration changes, with the underlying operating system and server software left untouched.



Featured image of imagined Bitcoins via Unsplash licence.

Laptop denial-of-service via music: the 1980s R&B song with a CVE!

You’ve probably heard the old joke: “Humour in the public service? It’s no laughing matter!”

But the thing with downbeat, blanket judgements of this sort is that it only takes a single counter-example to disprove them.

Something cannot universally be true if it is ever false, even for a single moment.

So, wouldn’t it be nice if the public service could be upbeat once in a while…

…as upbeat, in fact, as the catchy Janet Jackson dance number Rhythm Nation, released in 1989 (yes, it really was that long ago)?

This was the era of shoulder pads, MTV, big-budget dance videos, and the sort of in-your-ears-and-in-your-face lyrical musicality that even YouTube’s contemporary auto-transcription system renders at times simply as:

 Bass, bass, bass, bass ♪ (Upbeat R&B Music) ♪ Dance beat, dance beat

Well, as Microsoft superblogger Raymond Chen pointed out last week, this very song was apparently implicated in an astonishing system crash vulnerability in the early 2000s.

According to Chen, a major laptop maker of the day (he didn’t say which one) complained that Windows was prone to crashing when certain music was played through the laptop speaker.

The crashes, it seems, were not limited to the laptop playing the song, but could also be provoked on nearby laptops that were exposed to the “vulnerability-triggering” music, and even on laptops from other vendors.

Resonance considered harmful

Apparently, the ultimate conclusion was that Rhythm Nation just happened to include beats of the right pitch, repeated at the right rate, that provoked a phenomenon known as resonance in the laptop disk drives of the day.

Loosely speaking, this resonance caused the natural vibrations in the hard disk devices (which really did contain hard disks back then, made of metal or glass and spinning at 5400rpm) to be amplified and exaggerated to the point that they would crash, bringing down Windows XP along with them.

Resonance, as you may know, is the name given to the phenomenon by which singers can shatter wine glasses by producing the right note for long enough to vibrate the glass to pieces.

Once they’ve locked the frequency of the note they’re singing onto the natural frequency at which the glass likes to vibrate, their singing continually boosts the amplitude of the vibration until it’s too much for the glass to take.

It’s also what lets you quickly build up height and momentum on a swing.

If you time your kicks or thrusts randomly, sometimes they boost your motion by acting in harmony with the swing, but at other times they work against the swing and slow you down instead, leaving you joggling around unsatisfactorily.

But if you time your energy input so it always exactly matches the frequency of the swing, you consistently increase the amount of energy in the system, and thus your swings increase in amplitude, and you gain height rapidly.

A skilled swingineer (on a properly designed, well-mounted, “solid-arm” swing, where the seat isn’t connected to the pivot by flexible ropes or chains – don’t try this at the park!) can send a swing right over the top in a 360-degree arc with just a few pumps…

…and by deliberately timing their pumps out-of-sequence so as to counteract the swing’s motion, can bring it to a complete stop again just as quickly.
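
If you’d like to see resonance in action without risking a swing (or a hard disk), a few lines of simulation will do. Below is a minimal sketch of a lightly damped driven oscillator, with all the constants our own: drive it at its natural frequency and the peak amplitude grows dramatically; drive it slightly off-frequency and it barely moves.

    import numpy as np

    OMEGA = 2 * np.pi * 1.0   # natural frequency: 1 Hz, like a playground swing
    DAMPING = 0.02            # light damping
    DT = 0.001                # simulation timestep (seconds)

    def peak_amplitude(drive_hz, secs=60):
        # Integrate x'' = -2*DAMPING*x' - OMEGA^2 * x + cos(2*pi*f*t)
        # and report the biggest displacement reached.
        x, v, peak = 0.0, 0.0, 0.0
        for i in range(int(secs / DT)):
            t = i * DT
            a = -2 * DAMPING * v - OMEGA**2 * x + np.cos(2 * np.pi * drive_hz * t)
            v += a * DT
            x += v * DT
            peak = max(peak, abs(x))
        return peak

    print(peak_amplitude(1.0))   # on resonance: peak of roughly 4
    print(peak_amplitude(1.3))   # off resonance: peak of roughly 0.04

With the constants above, the on-resonance run peaks at about 100 times the amplitude of the off-resonance one, from exactly the same amount of “pushing”.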

Proof-of-concept

We’re guessing that there were probably many other popular songs that could have provoked this hard-disk resonance to the point of failure, but Rhythm Nation was the proof-of-concept that showed this vulnerability could actively be exploited.

Chen reports that the laptop vendor added a frequency filter to the laptop’s own audio system in order to remove the frequency bands that tended to produce the problem, thus leaving the sound audibly unchanged but acoustically harmless.

By filtering the frequencies all the time, instead of trying to recognise Janet Jackson’s song specifically, this electronic countermeasure became a generic and proactive cybersecurity fix, not just a patch specific to one tune.

Well, to return to the issue of humour in the public service…

…it turns out that someone at MITRE in the US, where CVE bug numbers are co-ordinated, has assigned this issue an official bug number, as follows:

CVE-2022-38392:  Denial of service (device malfunction and system crash):

A certain 5400 RPM OEM hard drive, as shipped with laptop PCs in approximately 2005, allows physically proximate attackers to cause a denial of service (device malfunction and system crash) via a resonant-frequency attack with the audio signal from the Rhythm Nation music video.

Even in a world where solid-state drives (SSDs, often still referred to as disks, even though they don’t have circular parts, let alone rotating ones) are widespread, you can still buy old-school hard disks with moving parts, typically running at 5400rpm, 7200rpm and even 10,000rpm.

Old-school hard drives generally offer much higher capacity for a much lower price than SSDs, but they’re rarely found in business-class laptops these days, because they’re slower, generally require more power, and aren’t as shock-proof as their transistorised cousins.

What to do?

Whether SSDs are, in turn, vulnerable to music that focuses on other frequency ranges or amplitudes, we can’t say.

Whereas R&B might have been the Achilles heel of rotating-media storage devices in the early 2000s, perhaps louder but lower-tuned, sludgy, old-school “coding music” might ultimately prove to be too much for fully digital solid-state laptop storage?

We don’t expect fans of bands such as Melvins, Sleep, Monolord and the like to take needless experimental risks with their own laptops.

But if anyone knows of any heavy-duty riffs that can be turned into exploits…


…they may be eligible for CVE numbers, though we have no idea where vulnerabilities of this sort would fit into the MITRE ATT&CK Tactics, Techniques and Procedures framework.

Suggestions in the comments, please!


S3 Ep96: Zoom 0-day, AEPIC leak, Conti reward, healthcare security [Audio + Text]

You can listen directly on Soundcloud.

With Paul Ducklin and Chester Wisniewski.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MUSICAL MODEM]


DUCK.  Welcome to the podcast, everybody.

I am not Douglas… I am Paul Ducklin.

Doug’s on vacation, so I’m joined by my good friend and colleague, Chester Wisniewski, from our Vancouver office.

Hello, Chet!


CHET.  Hi, Duck.

How are you doing?


DUCK.  I am very well, thank you.

We had our first rain in Oxfordshire today for… must be at least a couple of months.

At least we got some water into the ground, because it’s been very, very dry here – atypically dry.

How about you?


CHET.  Well, I’m recovering from DEF CON despite not having attended DEF CON, which I didn’t even know was a thing.


DUCK.  [LAUGHING] Oh, yes!


CHET.  I spent the whole weekend with my eyes glued to Twitter and Twitch and Discord and all these platforms that you could kind of remotely pseudo-participate in all the festivities.

And, I have to say, it’s a lot more fun when you’re actually in Las Vegas.

But considering the tally of people I know that have come back with COVID already is approaching more fingers and thumbs than I have, I think I made the right choice, and I’m happy to be exhausted from over-internetting all weekend.


DUCK.  Do you think they really got a coronavirus infection, or did they just come back feeling, how can I put it… “unwell” due to having Black Hat followed by DEF CON?


CHET.  You know, as bad as the CON FLU can be…


DUCK.  CON FLU?! [LAUGHS] Oh, dear!


CHET.  …I’m quite confident that in this case it’s COVID, because not only are people testing, but for most of the people I’m familiar with, COVID is significantly more painful than even CON FLU.

So the two combined were probably extra awful, I have to think. [LAUGHTER]


DUCK.  Yes!

But let us not tarry at DEF CON coronavirus/CON FLU problems…

…let us turn our attention actually to a *talk* that was given at DEF CON.

This is about a Zoom zero-day that was written up by Patrick Wardle, and presented at DEF CON.

Rather an unfortunate series of bugs, including one that did not get properly patched, Chester?


CHET.  Well, Patrick is not the only macOS security researcher in the world, but he is quite prodigious in finding issues.

And the last time I saw Patrick Wardle present was at the Virus Bulletin conference, several times, and each time he kind of took Apple to school over some questionable decisions on signature verification, certificate verification, this type of stuff.

And I’m starting to get the impression that Apple has largely shaped up their security posture around some of these things.

And so now he’s out hunting for additional vendors who may be making similar cryptographic errors that could allow malware onto the platform.


DUCK.  I guess in the old days, everyone thought, “Well, as long as you’ve got a TLS connection,” or, “As long as you’ve got something that’s digitally signed by *somebody*.”

So, code would often not bother to go and check.

But in this case, they decided to check downloaded update packages to make sure they were from Zoom.

But they didn’t do it very well, did they?

Instead of calling the official system API, which goes away, does the checking, and basically comes back with a true or false…

…they kind of “knitted their own”, didn’t they?


CHET.  Yes.

I mean, knitting your own things related to crypto always ends painfully.

And I recall, in the last podcast, you were talking about the new quantum-safe crypto algorithm that was cracked in an hour on a laptop.


DUCK.  SIKE!


CHET.  Everybody was so focused on the quantum side of it that they kind of missed the conventional side, even amongst some of the world’s smartest mathematicians and cryptographers, right?

So it’s really easy to make mistakes that can be devastating.

And knitting your own is something that you and I have been talking about, I want to say, for approaching 20 years, in different communications formats, on behalf of Sophos.

And I don’t think we’ve ever changed our position that it’s a terrible idea!


DUCK.  The problem here is not that they decided to use their own digital signature algorithms, or invent their own elliptic curve.

It’s just that instead of saying, “Here’s a file. Dear Operating System, use your standardized API-based tools for verifying it and come back True/False,” they chose to essentially shell out…

…they ran the pkgutil command line utility in the background, which is what you can do from the command line if you want to get a human-readable, visual display of who signed what.

And then they wrote a program that would parse the text-based output of this to decide whether they wanted to get the answer “true” or “false”.

They got out a list of the certificate chain, and they were looking for “Zoom”, followed by “Developer Certification Authority”, followed by “Apple Root CA”.

So, they look for those strings *anywhere in the output*, Chester!

So [LAUGHS] it turns out that if you created a package that had a name along the lines of Zoom Video Communications Inc Developer ID Certification Authority Apple Root CA.pkg, then when pkgutil wrote the file name into its output, all three magic strings would appear!

And Zoom’s rather inept parser would decide that that could only happen if it had been signed, in the right order, by those three organisations.

Whereas, in fact, it was just simply the name that you provided.

Oh, dear!
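
[Note. Zoom’s actual code isn’t public, so the Python below is merely our reconstruction of the flaw as Wardle described it, with the certificate strings written out in full.]

    REQUIRED = ["Zoom Video Communications, Inc.",
                "Developer ID Certification Authority",
                "Apple Root CA"]

    def naive_check(pkgutil_output):
        # The flaw: searching for each string ANYWHERE in the textual
        # output, instead of asking the signing API who signed what.
        return all(name in pkgutil_output for name in REQUIRED)

    # A package whose FILENAME contains all three strings passes,
    # because pkgutil echoes the file name back in its output:
    evil = ("Package 'Zoom Video Communications, Inc. Developer ID "
            "Certification Authority Apple Root CA.pkg':\n"
            "   Status: no signature\n")
    print(naive_check(evil))   # prints True... oops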


CHET.  The issue here is that what’s leading to the problem is this kind of rudimentary signature check that they’re doing.

But the real problem, of course, is it means any package that can be given that name will get installed *as root* on the system, even if the user running the update process is unprivileged.


DUCK.  That was the whole problem.

Because it seemed that what happened, in time for DEF CON, Zoom *did* patch this problem.

They use the API correctly, and they reliably verify the integrity and the authenticity of the file they’re about to run.

But in moving it to the temporary directory from which Zoom orchestrates the installation, they left it world-writable!

So, the directory was protected, and everything in the directory was protected… *except the most important file*.

So, guess what you could do?

If you timed it just right (a so-called race condition), the original user could change the file *after* it had passed its digital identity check, but *before* it was used in earnest.

The installer is using a file that it thinks has been validated, and indeed was validated…

…but got invalidated in the gap between the validation and the use.
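
[Note. In simplified Python, with a hypothetical signature_is_valid() helper standing in for the real signing-check API, the race window looks something like this; it is an illustration, not Zoom’s actual installer code.]

    import shutil, subprocess

    def signature_is_valid(path):     # hypothetical helper: imagine this
        return True                   # calls the real signing-check API

    def vulnerable_install(pkg):
        # Stage the package in a directory that stayed world-writable:
        staged = shutil.copy(pkg, "/tmp/zoom-staging/")
        if signature_is_valid(staged):            # Time Of CHECK
            # ...the race window: any local user may swap the file here...
            subprocess.run(["installer", "-pkg", staged, "-target", "/"])
                                                  # Time Of USE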


CHET.  Yes, and as you point out in the article, Duck, this type of vulnerability, rather than just being a simple race condition, is often referred to as a TOCTOU, which to me sounds like some sort of Caribbean bird.

But it’s referring to a more complicated, scientific name for the flaw, called a Time-of-check to Time-of-use.

So, T-O-C-T-O-U… “Toctou”!


DUCK.  Like you, I always imagined it was some kind of very pretty Polynesian parrot.

But it’s actually, like you say, an ugly form of bug where you check your facts, but you check them too early and by the time you come to rely on those facts, they’ve changed.

So Zoom’s fixed it – and Patrick Wardle did say he gave them congratulations… they fixed it within one day after he’d done the paper at DEF CON.

They correctly locked down the privileges on the file before they started the process of validating it in the first place.

So, the validation, once completed, remained valid until the end of the installation.

Problem solved.

Should never really have been there in the first place, though, should it?


CHET.  If you’re a Mac user, you can check your version number to be sure you’re on the fixed one.

The version that is fixed is 5.11.5 or higher – I don’t know if there have been releases subsequently.

[Note. A further update to 5.11.6 came out between recording and publishing this episode.]


DUCK.  Now, it doesn’t mean that an outsider can break into your computer if you don’t have this patch, but it is a nasty problem to have…

…where a crook who’s broken into your network but only has, say, guest privileges, can suddenly elevate themselves and get root or sysadmin superpowers.

That’s exactly what ransomware crooks love to do.

They come in with low power, and then they work their way up until they’re on equal footing with the regular sysadmins.

And then, unfortunately, there’s very little limit to what they can do for bad afterwards.

Chester, let’s move on to the next bug.

This is a bug known as… well, it’s A and E written together, which is an old English letter – it’s not used in English anymore, and it’s the letter called ash, but in this case, it’s meant to be APIC/EPIC.

APIC, because it affects APICs, the Advanced Programmable Interrupt Controller, and they consider it to be an EPIC leak.


CHET.  I found it interesting, but let’s start with the fact that I don’t think it’s quite as epic, perhaps, as its name is implying.

The APIC is certainly involved, but I’m not so sure about the EPIC!

The truth of the matter, when you unravel all of this, is it affects part of Intel’s CPUs known as the SGX, which is the… I’m going to forget now… Software Guard Extensions, I want to say?


DUCK.  You’re correct!


CHET.  Well, this is not the first bug to affect SGX.

I didn’t count all of them, but I found at least seven previous instances, so it’s not had a great track record at doing the very thing it’s designed to do.

And the only practical use of it I could find anywhere was that you need this functionality to store the secret keys to play back UltraHD Blu-ray disks on Windows.

And with chips that don’t support SGX, you’re just not allowed to watch movies, apparently.


DUCK.  Which is ironic, because Intel have now, in the 12th generation of their CPUs… they’ve discontinued SGX for so-called “client” chips.

So the chips that you now get if you’ve got a brand new laptop – this doesn’t apply, because there’s no SGX in it.

It seems they see it as something that might be useful on servers.


CHET.  Well, I think it’s fair to say SGX’s fate has been sealed by Intel already pulling it out of the 12th-gen CPUs.

If not for the fact that this is like the eighth different clever way that somebody’s found to extract secrets… from the thing that’s designed to only hold secrets.


DUCK.  Yes, it’s a reminder that performance gets in the way.

Because my understanding is that the way this works is that the old-fashioned way of getting the data out of the Programmable Interrupt Controller, the APIC, was basically to read it out of a block of memory that was allocated specifically to that device.

The block of memory used for the interrupt data that was extracted was 4KB… one memory page in size.

But there wasn’t that much data to extract, and what was there before – for example, in the system cache – got written back.

In other words, the interrupt processor didn’t flush out the memory it was going to use before it wrote in the bytes that it intended to deliver.

So, sometimes it would accidentally deliver data values from arbitrary other parts of memory that the CPU had accessed recently.

And by controlling what happened, and in what order, the researchers found that they could persuade RAM contents that were supposed to be sealed in these SGX “enclaves” to emerge as kind-of uninitialised memory in the middle of interrupt handling.

So, always a reminder that when you try and speed things up by taking security shortcuts, you can end up with all sorts of trouble.
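
[Note. The hardware details are subtle, but the underlying blunder (reusing a buffer without clearing it first) is an old classic. Here it is in miniature, in Python rather than in silicon.]

    PAGE = 4096

    # A reused 4KB page that still holds somebody else's old data:
    buffer = bytearray(b"TOP SECRET ENCLAVE DATA ".ljust(PAGE, b"."))

    def deliver_interrupt_data(buf, payload):
        # The bug pattern: write only the bytes we mean to deliver,
        # WITHOUT flushing the rest of the page first...
        buf[:len(payload)] = payload

    deliver_interrupt_data(buffer, b"IRQ#42")
    print(buffer[:32])   # b'IRQ#42CRET ENCLAVE DATA ........' -- leftovers leak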


CHET.  If you’re going to trust this thing to keep secrets, it needs a lot of vetting.

And it feels like this SGX technology was kind of half-baked when it launched.


DUCK.  Complexity always comes with cost/risk, doesn’t it?

If you think, Chester, back to the 6502 processor that was famously in the Apple II, the VIC-20, the Commodore 64… if you’re from the UK, it was in the BBC Micro.

I believe that chip had around about 4000 transistors.

So it was truly a Reduced Instruction Set Chip, or RISC.

Whereas I understand that the latest Apple M2 processor has 20 billion (as in 20,000,000,000) transistors, just in one CPU.

So, you can see that when you start adding things like the Interrupt Controller (that can go in the chip), the secure enclave (well, that can go in the chip), hyperthreading (that can go in the chip), [SPEEDING UP MANICALLY] vector instructions (those could go in the chip), speculative execution, instruction reordering, multicores…

…all of that stuff, it’s not surprising that sometimes things do not work as you might expect, and that it takes quite a long time for anybody to notice.


CHET.  Well, good work to the researchers who did find it, because it’s certainly interesting research.

And if you want to understand a little more about it, your Naked Security article explains it incredibly well for people that are not normally acquainted with things like APIC controllers.

So I do recommend that folks check it out, because it is a perfect example of unintended consequences from simple decisions made about very complex things.


DUCK.  I think that is an excellent way to put it, Chester.

It also leaves us free to move on to another controversial issue, and that is the fact that the US Government is offering a reward that it says is “up to $10 million” for information about the Conti ransomware crew.

Now, it seems they don’t know anybody’s real name.

These people are known only as Dandis, Professor, Reshaev, Target, and Tramp.

And their pictures are just silhouettes…


CHET.  Yes, when I first saw the article, I thought the description of the criminals was like the people on Gilligan’s Island.

We have the Professor, and the Tramp… and I wasn’t quite sure where this was going with the nicknames.

I hope this attempt is more successful than the last one… I mean, there was another group that they offered $10 million for, which was the Evil Corp group.

And to my knowledge, no arrests or any kind of legal action has been taken yet. So presumably the $10 million to get Evil Corp was not enough of an incentive for people to flip on the perpetrators of that group.

So, hopefully, this one is a little more successful.

But there was a fantastic photo that caused a lot of speculation and conversation on the Twitters and even on Naked Security, in the post that you wrote up, of one of the alleged perpetrators.

We don’t know if he’s a member of the control group that ran or operated the Ransomware-as-a-Service, or whether he was simply perhaps an affiliate that used the malware, and contributed to paying commissions of ill-gotten gains from victims.

But you couldn’t get more stereotypically Russian… I mean, we’re looking at this: the guy’s got a red star on his cap, and I speculate a small bottle of vodka in his hand, and there’s a balalaika.

This is almost too good to be true.


DUCK.  And, in good hacker dress, he’s wearing a sort of puffy jacket with a hoodie on…

…although he’s got the hoodie down, so maybe it doesn’t count?

Do you think, Chester, that they’ve targeted the Conti gang because they had a little bit of dishonour among thieves, as it were?

About a year ago, some of the affiliates got very steamed up, claimed they were getting ripped off, and there was a data breach, wasn’t there, where one of them dumped a whole load of operating manuals and software files?


CHET.  You know, there’s a lot of pieces there.

As you point out – I believe it was in August 2021 – somebody leaked their operating manuals, or their “playbook”, as it’s been referred to.

After the invasion of Ukraine, Conti as an entity seemed to come out very pro-Russian, which caused a bunch of Ukrainians that were part of their scheme to turn on them and leak a bunch of information about their operations and things as well.

So, there’s certainly been stuff there.

I think another reason, Duck, is simply the massive amount of damage they’ve caused.

I mean, when we did our writeups from our Rapid Response Team, without question the most prolific group in 2021 causing harm was Conti.

Nobody’s really buying that they’re out of the criminal underground.

It’s not like they took their money and went away… they’ve simply evolved into new schemes, and broken themselves up into different ransomware groups, and are playing different roles in the community than they were.

And most recently, some people may have heard that there were some attacks against the Costa Rican government that were attributed to Conti, and it wasn’t even very long ago.

So I think there are layers here, and one of those layers might be that Dandis, Professor, Reshaev…

…these people have somewhat been doxxed publicly [had personal data leaked deliberately] by people that claim to know who they are, but without providing evidence that would be worthy of indictments and convictions.

And so maybe this is a hope that maybe they will step forward if the price is high enough, and turn on their former comrades.


DUCK.  However, even if they all get busted tomorrow, and they all get charged, and they all get convicted, that would make a dent in ransomware proceedings, wouldn’t it?

But unfortunately, it would be a *dent*, not *the end of*.


CHET.  Absolutely.

Unfortunately, that’s the world we live in these days.

I think we’ll continue to see these crimes evolve in different ways, and that hopefully will provide some relief as we get better and better at defending ourselves.

But with $25 million potential ransoms out there, there are plenty of people willing to take a chance and continue to perpetrate these crimes, whether these particular crime lords are at the helm or not.


DUCK.  Yes.

You think, “Oh, well, they’d never get $25 million. They’d probably settle for less in the end.”

But even if that number comes down to, say, $250,000…

…as the US Rewards for Justice team points out: since 2019, they claim, the Conti gang’s ransomware alone (quoting from the RfJ site) has been used to conduct more than 1000 ransomware attacks targeting US and international critical infrastructure.

Medical services, 9-1-1 dispatch centers, towns, municipalities.

And they suggest that of healthcare and first responder networks alone – things like ambulance drivers, fire brigades, hospitals – more than 400 worldwide have been hit, including 290 in the US.

So, if you multiply 290 by the (I’m using giant air quotes here) “discount fee” of $250,000 that should have gone into providing healthcare…

…you get an enormously large number anyway.


CHET.  Remember four years ago when we published a report on SamSam and we were astounded that they made $6 million over three years?


DUCK.  That’s still a lot of money, Chester!

Well, it is to me… maybe you’re a high flyer. [LAUGHTER]

I know you have a topic – we haven’t written this up on Naked Security, but it’s something that you’re very interested in…

…and that is the fact that there can’t be “one ring to rule them all” when it comes to cybersecurity.

Particularly when it comes to things like healthcare and first responders, where anything that might get in the way in order to make security better could actually make the service dangerously worse.

And you have a story from the National Institutes of Health to tell…


CHET.  Yes, I think it’s an important reminder that we, first and foremost, are responsible for managing risk, not for achieving perfect security.

And I think a lot of practitioners forget that too often.

I see a lot of these arguments going on, especially in social media: “the perfect is the enemy of the good”, which we’ve talked about previously in podcasts as well…

…where, “You should do it this way, and this is the only right way to do it.”

And I think this is interesting – this study of the relationship between hospitals that had a data breach and patient outcomes in the wake of those data breaches.

That might not make sense on the surface, but let me read to you the principal findings, which I think makes it quite clear what we’re talking about.

The principal findings are:

The hospital’s time to electrocardiogram increased as much as 2.7 minutes, and 30-day acute myocardial infarction mortality increased as much as 0.36 percentage points, during the three year window following a data breach.

In essence, what we’re saying is a third of a percent more people died of heart attacks in hospitals that had data breaches afterwards than before, as a percentage of patients that had fatal outcomes.


DUCK.  Presumably the implication there is that if they had been able to get that electrocardiogram machine onto them and get the results out and make a clinical decision more quickly, they might have been able to save a non-trivial number of those people who died?


CHET.  Yes, and I think when you think about a busy hospital, where people are regularly coming in with heart attacks and strokes, 1 in 300 patients dying because of new security protocols is kind of a concern.

And the Health and Human Services Administration in the United States goes on to recommend that breached hospitals “carefully evaluate remedial security initiatives to achieve better data security without negatively affecting patient outcomes.”

And I think this is really where we have to be super cautious, right?

We all want better information security, and I want my patient records kept safe when I’m visiting the hospital.

And we certainly want to be sure that people aren’t accessing computers and records they shouldn’t, and people aren’t dispensing medicines that they shouldn’t that can be harmful.

On the other hand, this is life and death.

And while this may not apply to your law firm, or marketing company, or factory that you’re responsible for the security of… I think it’s an important reminder that there is no one size fits all to how we should do security.

We have to evaluate each situation, and make sure that we’re tailoring it with the amount of risk that we’re willing to accept.

And personally, I’m willing to accept a lot more risk of my medical records being compromised than I am the risk of dying because somebody had to go get a two-factor code in order to unlock the electrocardiogram machine!


DUCK.  Well, Chester, you’re a Type 1 diabetic, aren’t you?

And you have one of those magical insulin pumps.

Now, I bet you don’t rush to install the latest Linux kernel on that the moment that it comes out!


CHET.  Absolutely!

I mean, these devices go through rigorous testing… that’s not to say they’re bug free, but the known is better than the unknown when you’re talking about your health and being able to manage it.

And certainly there are software bugs in these devices, and they’re getting modernised and including technologies like Bluetooth… or the big leap for my device was that it got a colour screen, which tells you how old some of the technology that goes into these things is!

The medical authorities that approve these devices have a very, very long process.

And “tried and true” (as in the earlier conversation about transistors and processors), simple things that we can understand, are much preferred to new, complicated things that are much more difficult to figure out and find those security flaws.

I can’t imagine, if there was such a thing as a Patch Tuesday for this insulin pump, that I would be lining up to be the first guy on the block on Tuesday to install the update!

For all its warts, I know exactly how it works, and how it doesn’t.

And to your point, I coexist with it well…

…the device knows its responsibility to stay consistent, and I’ve learned how to exploit it for my benefit to improve my health.

Any change in that can be scary and disruptive.

So, the answer isn’t always better, faster and smarter.

Sometimes it’s the “known knowns” in the reliability and the trust.


DUCK.  Having said that, not having data breaches also helps!

And there are some surprisingly simple things you can do to protect your organisation from data getting out where it shouldn’t.


CHET.  And one of the things, Duck, is we don’t have the time we used to have.

Criminals are perpetually scanning the internet looking for any of these mistakes you may have made, whether it’s an outdated policy to allow too many things, or whether it’s exposed services that maybe were perfectly fine to expose ten years ago, but are now dangerous to have exposed to the Internet.


DUCK.  “The RDP that time forgot.”


CHET.  Yes, well, I’m sad to think that RDP keeps coming up, but in fact, at Black Hat last week, we just released a paper and wrote a blog about a situation where an organisation had three different ransomware attacks within a few weeks, all inside the same organisation, happening somewhat concurrently.

And it’s not the first time we’ve seen more than one attacker inside a network.

I think it may be the first time we’ve seen *three* inside the same network.


DUCK.  Oh, golly, did they overlap?

Were they literally still dealing with attack A when attack B came along?


CHET.  Yes, I believe there was a gap between attacker B and attacker C, but A and B were in at the same time, presumably coming in through the exact same remote access tool flaw that they both had found and exploited.

And then, I believe, group B installed their own remote access tool, sort of as a secondary back door just in case the first one got closed…

…and group C found their remote access tool and came in.


DUCK.  Golly… we shouldn’t laugh, but it’s sort-of a comedy of errors.

It’s easy to say, “Well, in any half-well-managed network, you should know what your official remote access tool is, so that anything that isn’t that one should stand out obviously.”

But let me ask our listeners this: If you’re in charge of a network, can you put your hand on your heart and tell me exactly how many teleconferencing tools you have in use in your company right now?


CHET.  Yes, absolutely.

We had one victim we wrote up earlier this year that I believe had *eight* different remote access tools that we found during our investigation, some of which were legitimately used ten years ago, and they just stopped using them but never removed them.

And other ones that had been introduced by multiple threat actors.

So this is certainly something to keep an eye out for!


DUCK.  Well, Chester, let’s hope that is an upbeat enough suggestion on which to end, because we are out of time for this week.

Thank you so much, as always, for stepping up to the mic at very short notice.

And, as always, it remains simply for me to say: Until next time…


BOTH.  Stay secure!

[MUSICAL MODEM]


Apple patches double zero-day in browser and kernel – update now!

Apple just pushed out an emergency update for two zero-day bugs that are apparently actively being exploited.

There’s a remote code execution hole (RCE) dubbed CVE-2022-32893 in Apple’s HTML rendering software (WebKit), by means of which a booby-trapped web page can trick iPhones, iPads and Macs into running unauthorised and untrusted software code.

Simply put, a cybercriminal could implant malware on your device even if all you did was to view an otherwise innocent web page.

Remember that WebKit is the browser engine that sits underneath absolutely all web rendering software on Apple’s mobile devices.

Macs can run versions of Chrome, Chromium, Edge, Firefox and other “non-Safari” browsers with alternative HTML and JavaScript engines (Chromium, for example, uses Blink and V8; Firefox is based on Gecko and Rhino).

But on iOS and iPadOS, Apple’s App Store rules insist that any software that offers any sort of web browsing functionality must be based on WebKit, including browsers such as Chrome, Firefox and Edge that don’t rely on Apple’s browsing code on any other platforms where you might use them.

Additionally, any Mac and iDevice apps with popup windows such as Help or About screens use HTML as their “display language” – a programmatic convenience that is understandably popular with developers.

Apps that do this almost certainly use Apple’s WebView system functions, and WebView is based directly on top of WebKit, so it is therefore affected by any vulnerabilities in WebKit.

The CVE-2022-32893 vulnerability therefore potentially affects many more apps and system components than just Apple’s own Safari browser, so simply steering clear of Safari can’t be considered a workaround, even on Macs where non-WebKit browsers are allowed.

Then there’s a second zero-day

There’s also a kernel code execution hole dubbed CVE-2022-32894, by which an attacker who has already gained a basic foothold on your Apple device by exploiting the abovementioned WebKit bug…

…could jump from controlling just a single app on your device to taking over the operating system kernel itself, thus acquiring the sort of “administrative superpowers” normally reserved for Apple itself.

This almost certainly means that the attacker could:

  • Spy on any and all apps currently running
  • Download and start additional apps without going through the App Store
  • Access almost all data on the device
  • Change system security settings
  • Retrieve your location
  • Take screenshots
  • Use the cameras in the device
  • Activate the microphone
  • Copy text messages
  • Track your browsing…

…and much more.

Apple hasn’t said how these bugs were found (other than to credit “an anonymous researcher”), hasn’t said where in the world they’ve been exploited, and hasn’t said who’s using them or for what purpose.

Loosely speaking, however, a working WebKit RCE followed by a working kernel exploit, as seen here, typically provides all the functionality needed to mount a device jailbreak (therefore deliberately bypassing almost all Apple-imposed security restrictions), or to install background spyware and keep you under comprehensive surveillance.

What to do?

Patch at once!

At the time of writing, Apple has published advisories for iPadOS 15 and iOS 15, which both get updated version numbers of 15.6.1, and for macOS Monterey 12, which gets an updated version number of 12.5.2.

  • On your iPhone or iPad: Settings > General > Software Update
  • On your Mac: Apple menu > About this Mac > Software Update…

There’s also an update that takes watchOS to version 8.7.1, but that update doesn’t list any CVE numbers, and doesn’t have a security advisory of its own.

There’s no word on whether the older supported versions of macOS (Big Sur and Catalina) are affected but don’t yet have updates available, or whether tvOS is vulnerable but not yet patched.

For further information, watch this space, and keep your eyes on Apple’s official Security Bulletin portal page, HT201222.
