
S3 Ep147: What if you type in your password during a meeting?

SNOOPING ON MEMORY, KEYSTROKES AND CRYPTOCOINS


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Crocodilian cryptocrime, the BWAIN streak continues, and a reason to learn to touch-type.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, a very happy day to you, my friend.


DUCK.  And a very happy day to you, Doug.

I know what’s coming at the end of the podcast, and all I’m saying is…

…hang in there, because it is exciting, if mildly alarming!


DOUG.  But first, let’s start with Tech History.

This week, on 07 August 1944, IBM presented the Automatic Sequence Controlled Calculator to Harvard University.

You may better know this machine as the Mark I, which was a Frankenputer of sorts that mixed punch cards with electromechanical components and measured 51 feet long by 8 feet high, or roughly 15.5 metres by 2.5 metres.

And, Paul, the computer itself was almost obsolete before they got all the shrink-wrap off of it.


DUCK.  Yes, it was done towards the tail end of the Second World War…

…of course, American computer designers at that time didn’t know that the British had already successfully built high performance digital electronic computers using thermionic valves, or vacuum tubes.

And they were sworn to secrecy after the war (for reasons we didn’t understand last time we spoke about it!), so there was still this feeling in the States that valve or tube computers could be more trouble than they were worth.

Because thermionic valves run really hot; they’re quite large; they require large amounts of power.

Would they be reliable enough, even though they’re loads and loads faster than relays (thousands of times faster in switching)?

So there was still that feeling that maybe there was time and space for electromagnetic relays.

The guy who designed the Colossus computers for Bletchley Park in the UK was sworn to silence, and he wasn’t allowed to tell anybody after the war, “Yes, you *can* make a computer out of valves. It will work, and the reason I know that is I did it.”

He wasn’t allowed to tell anybody!


DOUG.  [LAUGHS] That’s fascinating…


DUCK.  So we did get the Mark I, and I guess it was the last mainstream digital computer that had a driveshaft, Doug, operated by an electrical motor. [LAUGHTER]

It is a thing of absolute beauty, isn’t it?

It’s Art Deco… if you go to Wikipedia, there are some really high-quality pics of it.

Like the ENIAC computer (which came out in, what, 1946, and did use valves)… both those computers were in a little bit of an evolutionary dead-end, in that they worked in decimal, not in binary.


DOUG.  I should have also mentioned that, although it was obsolete the moment it hit the floor, it was an important moment in computing history, so let’s not discount it.


DUCK.  Indeed.

It could do arithmetic with 18 significant decimal digits of precision.

Contemporary 64-bit IEEE floating-point numbers only have 53 binary digits of precision, which is just under 16 decimal digits.
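[That "just under 16 digits" figure is easy to check with a quick Python sketch:]

```python
import math

# IEEE-754 double precision carries 53 significant binary digits
# (52 stored bits plus 1 implicit leading bit).
decimal_digits = 53 * math.log10(2)
print(round(decimal_digits, 2))  # 15.95, i.e. just under 16 decimal digits
```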


DOUG.  All right, well, let’s talk about our new BWAIN.

This is another Bug With An Impressive Name, or BWAIN as we like to call them.

This is three weeks in a row now, so we’ve got a good streak going!

This one is called Downfall, and is caused by memory optimisation features in Intel processors.

Tell me if that sounds familiar, that some sort of optimisation feature in a processor is causing cybersecurity problems.


DUCK.  Well, if you’re a regular Naked Security podcast listener, you’ll know that we touched on Zenbleed just a couple of short weeks ago, didn’t we?

Which was a similar sort of bug in AMD Zen 2 processors.

Google, which was involved in both the Downfall and the Zenbleed research, has just published an article in which they talk about Downfall alongside Zenbleed.

It’s a similar sort of bug such that optimisation inside the CPU can inadvertently leak information about its internal state that is never supposed to escape.

Unlike Zenbleed, which can leak the top 128 bits of 256-bit vector registers, Downfall can leak the entire register by mistake.

It doesn’t work in quite the same way, but it’s the same sort of idea… if you remember Zenbleed, that worked because of a special accelerated vector instruction called VZEROUPPER.

Zenbleed: How the quest for CPU performance could put your passwords at risk

That’s where one instruction goes and writes zero-bits to all of the vector registers simultaneously, all in one go, which obviously means you don’t have to have a loop that goes around the registers one by one.

So it increases performance, but reduces security.

Downfall is a similar sort of problem that relates to an instruction that, rather than clearing data, goes out to collect it.

And that instruction is called GATHER.

GATHER can actually take a list of memory addresses and collect all this stuff together and stick it in the vector registers so you can do processing.
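[For readers who haven't met gather instructions before, here's a rough Python sketch of the semantics only — not the real AVX hardware behaviour, just the idea of one operation standing in for an indexed loop:]

```python
def gather(memory, indices):
    # One conceptual "instruction": pull values from scattered
    # addresses straight into a vector register, with no explicit
    # loop in the caller's own code.
    return [memory[i] for i in indices]

ram = {0x10: 7, 0x2C: 9, 0x80: 42}            # toy "memory"
vector_register = gather(ram, [0x80, 0x10, 0x2C])
print(vector_register)  # [42, 7, 9]
```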

And, much like Zenbleed, there is a slip twixt the cup and the lip that can allow state information about other people’s data, from other processes, to leak out and be collected by somebody running alongside you on the same processor.

Clearly, that is not supposed to happen.


DOUG.  Unlike Zenbleed, where you could just turn that feature off…


DUCK.  …the mitigation will countermand the performance improvements that the GATHER instruction was supposed to bring, namely collecting data from all over memory without requiring you to do it in some kind of indexed loop of your own.

Obviously, if you notice that the mitigation has slowed down your workload, you kind of have to suck it up, because if you don’t, you could be at risk from someone else on the same computer as you.


DOUG.  Exactly.


DUCK.  Sometimes life is like that, Doug.


DOUG.  It is!

We will keep an eye on this… this is, I take it, for the Black Hat conference that we’ll get more info about, including any fixes coming out.

Let’s move on to, “When it comes to cybersecurity, we know that every little bit helps, right?”

So if we could all just take up touch-typing, the world would actually be a safer place, Paul.

Serious Security: Why learning to touch-type could protect you from audio snooping


DUCK.  This probably could have been a BWAIN if the authors wanted (I can’t think of a catchy name off the top of my head)…

…but they didn’t give it a BWAIN; they just wrote a paper about it and published it the week before Black Hat.

So I guess it just came out when it was ready.

It’s not a new topic of research, but there were some interesting insights in the paper, which is what minded me to write it up.

And it basically goes around the question of when you are recording a meeting with lots of people in it, then obviously there is a cybersecurity risk, in that people may say things that they do not want recorded for later, but that you get to record anyway.

But what about the people who don’t say anything that is controversial or that matters if it were to be released, but nevertheless just happen to sit there on their laptop typing away?

Can you figure out what they’re typing on their keyboard?

When they press the S key, does it sound different from when they press the M key, and is that different from P?

What if they decide, in the middle of a meeting (because their computer’s locked or because their screen saver kicked in)… what if they decide suddenly to type in their password?

Could you make it out, say, on the other side of a Zoom call?

This research seems to suggest that you may well be able to do that.


DOUG.  It was interesting that they used a 2021 MacBook Pro, the 16 inch version, and they found out that basically, for the most part, all MacBook keyboards sound the same.

If you and I have the same type of MacBook, your keyboard is going to sound just like mine.


DUCK.  If they take really carefully sampled “sound signatures” from their own MacBook Pro, under ideal circumstances, that sound signature data is probably good enough for most, if not all, other MacBooks… at least from that same model range.

You can see why they would tend to be much more similar than different.


DOUG.  Luckily for you, there are some things you can do to avoid such malfeasance.

According to the researchers, you can learn to touch-type.


DUCK.  I think they intended that as a slightly humorous note, but they did note that previous research, not their own, has discovered that touch-typers tend to be much more regular about the way that they type.

And that means that individual keystrokes are much harder to differentiate.

I’d imagine that’s because when someone is touch-typing, they’re generally using a lot less energy, so they’re likely to be quieter, and they’re probably pressing all the keys in a very similar way.

So, apparently touch-typing makes you much more of a moving target, if you like, as well as helping you type much faster, Doug.

It seems it is a cybersecurity skill as well as a performance benefit!


DOUG.  Great.

And they noted that the Shift key causes trouble.


DUCK.  Yes, I guess that’s because when you’re doing Shift (unless you’re using Caps Lock and you have a long sequence of capital letters), you’re basically going, “Press Shift, press key; release key, release Shift.”

And it seems that that overlap of two keystrokes actually messes up the data in a way that makes it much harder to tell keystrokes apart.

My thinking on that is, Doug, that maybe those really annoying, pesky password complexity rules have some purpose after all, albeit not the one that we first thought. [LAUGHTER]


DOUG.  OK, then there’s some other things you can do.

You can use 2FA. (We talk about that a lot: “Use 2FA wherever you can.”)

Don’t type in passwords or other confidential information during a meeting.

And mute your microphone as much as you can.


DUCK.  Obviously, for a sound-sniffing password phisher, knowing your 2FA code this time isn’t going to help them next time.

Of course, the other thing about muting your microphone…

…remember that doesn’t help if you’re in a meeting room with other people, because one of them could be surreptitiously recording what you’re doing just by having their phone sitting upwards on the desk.

Unlike a camera, it doesn’t need to be pointing directly at you.

But if you’re on something like a Zoom or a Teams call where it’s just you on your side, it is common sense to mute your microphone whenever you don’t need to speak.

It’s polite to everybody else, and it also stops you leaking stuff that you might otherwise have thought entirely irrelevant or unimportant.


DOUG.  OK, last but not least…

…you may know her as Razzlekhan or the Crocodile of Wall Street, or not at all.

But she and her husband have been ensnared in the jaws of justice, Paul.

“Crocodile of Wall Street” and her husband plead guilty to giant-sized cryptocrimes


DUCK.  Yes, we’ve written about this couple before a couple of times on Naked Security, and spoken about them on the podcast.

Razzlekhan, a.k.a. the Crocodile of Wall Street, in real life is Heather Morgan.

She’s married to a chap called Ilya Lichtenstein.

They live, or they lived, in New York City, and they were implicated or connected to the infamous Bitfinex cryptocurrency heist of 2016, where about 120,000 Bitcoins were stolen.

And at the time, everyone said, “Wow, $72 million gone just like that!”

Amazingly, after a few years of very clever and detailed investigative work by US law enforcement, they were tracked down and arrested.

But by the time of their arrest, the value of Bitcoins had gone up so much that their heist was worth close to $4 billion ($4000 million), up from $72 million.

It seems that one of the things that they hadn’t banked on is just how difficult it can be to cash out those ill-gotten gains.

Technically, they were worth $72 million in stolen money…

…but there was no retiring to Florida or a Mediterranean island in the lap of luxury for the rest of their lives.

They couldn’t get the money out.

And their efforts to do so created a sufficient trail of evidence that they were caught, and they’ve now decided to plead guilty.

They haven’t been sentenced yet, but it seems that she faces up to 10 years, and he faces up to 20 years.

I believe he is likely to get a higher sentence because he is much more directly implicated in the original hacking into the Bitfinex cryptocurrency exchange – in other words, getting hold of the money in the first place.

And then he and his wife went out of their way to do the money laundering.

In one fascinating part of the story (well, I thought it was fascinating!), one of the ways that she tried to launder some of the money was that she traded it out for gold.

And taking a leaf out of pirates (Arrrrr!) from hundreds of years ago, she buried it.


DOUG.  That begs the question, what happens if I had 10 Bitcoins stolen from me in 2016?

They have now surfaced, so do I get 10 Bitcoins back or do I get the value of 10 Bitcoins in 2016?

Or when the bitcoins are seized, are they automatically converted to cash and given back to me no matter what?


DUCK.  I don’t know the answer to that, Doug.

I think, at the moment, they’re just sitting in a secure cupboard somewhere…

…presumably the gold that they dug up [LAUGHTER], and any money that they seized and other property, and the Bitcoins that they did recover.

Because they were able to get back about 80% of them (or something) by cracking the password on a cryptocurrency wallet that Ilya Lichtenstein had in his possession.

Stuff that he hadn’t been able to launder yet.

What would be intriguing, Doug, is if the “know your customer” data showed that it was actually your Bitcoin that got cashed out for gold and buried…

…do you get the gold back?


DOUG.  Gold has gone up too.


DUCK.  Yes, but it hasn’t gone up anywhere near as much!


DOUG.  Yes…


DUCK.  So I wonder if some people will get gold back, and feel quite good, because I think they’ll have made a 2x or 3x improvement on what they lost at the time…

…but yet wish they got the Bitcoins, because they’re more like 50x the value.

So very much a question of “watch this space”, isn’t it?


DOUG.  [LAUGHS] It is with great pleasure that I say, “We will keep an eye on this.”

And now it’s time to hear from one of our readers.

Strap in for this one!

On this article, Hey Helpdesk Guy writes:


“Razzlekhan” was the answer to a question during a cybersecurity class I took.

Because I knew that I won a $100 hacker gift card.

No one knew who she was.

So, after the question, the instructor played her rap song and the entire class was horrified, haha.


Which prompted me to go look up some of her rap songs on YouTube.

And “horrified” is the perfect word.

Really bad!


DUCK.  You know how there are some things in social history that are so bad they’re good…

…like the Police Academy movies?

So I always assumed that there was an element of that in anything, including music.

That it was possible to be so bad that you came in at the other end of the spectrum.

But these rap videos prove that is false.

There are things that are so bad…

[DEADPAN] …that they are bad.


DOUG.  [LAUGHING] And this is it!

All right, thanks for sending that in, Hey Helpdesk Guy.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thank you very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Microsoft Patch Tuesday: 74 CVEs plus 2 “Exploit Detected” advisories

The August 2023 Microsoft security updates are out (the first day of the month was a Tuesday, making this month’s Patch Tuesday as early as ever it can be), with 74 CVE-numbered bugs fixed.

Intriguingly, if not confusingly, Microsoft’s official bug listing page is topped by two special items dubbed Exploitation Detected.

That terminology is Microsoft’s usual euphemistic reworking of the word zero-day, typically denoting bugs that were first found and exploited by cyberattackers, and only then reported to and patched by the Good Guys.

But neither of those items lines up directly with any of this month’s CVE numbers, appearing simply as:

  • Microsoft Office: ADV230003. Exploitation detected. Workarounds: No. Mitigations: No.
  • Memory Integrity System Readiness Scan Tool: ADV230004. Exploitation detected. Workarounds: No. Mitigations: No.

Mark of the Web problems

Apparently, the above Office advisory relates to follow-up security improvements in Office to deal with CVE-2023-36884, which was a zero-day until last month, when it was patched in the July 2023 security updates.

That bug related to Microsoft’s so-called Mark of the Web (MotW), also known as the Internet Zone system, whereby files that arrive via the internet, for example as saved email attachments or downloaded files, are tagged by the operating system for later.

The idea is that even if you don’t open them immediately, but only look at them days or weeks later, Windows will nevertheless warn you that they came from an untrusted source and thereby help to protect you from yourself.

As a result, crooks love to find ways to sidestep the MotW labelling system, because it lets them deliver untrusted content in such a way that you might not remember where it came from later on.
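For the curious, the MotW tag itself is stored as an NTFS alternate data stream named Zone.Identifier attached to the downloaded file; its content is a tiny INI-style document. Here's a sketch of what parsing one looks like (the sample content below is illustrative):

```python
import configparser

# Illustrative Zone.Identifier content; ZoneId=3 denotes the
# "Internet" zone. On a real Windows system you could view the
# stream with, for example:
#   Get-Content somefile.docx -Stream Zone.Identifier
motw = """[ZoneTransfer]
ZoneId=3
"""

cfg = configparser.ConfigParser()
cfg.read_string(motw)
print(cfg["ZoneTransfer"]["ZoneId"])  # 3
```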

Technically, then, this doesn’t seem to be a zero-day this month, given that there was a patch for it in July 2023, even though it counts as an Exploitation Detected bug because crooks were historically known to be abusing the vulnerability before any patch was available.

The special Advisory page doesn’t shed much more light on the issue, saying simply, “Microsoft has released an update for Microsoft Office that provides enhanced security as a defense in depth measure.”

We’re therefore assuming that explicitly listing the new security features added to Office this month (and you would usually expect an “advisory” to give you actionable advice along those lines) would give away new tips and tricks for cybercriminals to abuse, over and above the already-known bypass techniques that were fixed last month.

A mystery exploit

The second advisory, ADV230004, doesn’t mention any CVE numbers at all, so we can’t tell you what it’s supposed to fix, or why the original problem was an exploitable bug in the first place.

However, the advisory states:

The Memory Integrity System Readiness Scan Tool (hvciscan_amd64.exe and hvciscan_arm64.exe) is used to check for compatibility issues with memory integrity, also known as hypervisor-protected code integrity (HVCI).

The original version was published without a RSRC section, which contains resource information for a module.

What we can’t tell you is:

  • How the original version was able to run at all without its RSRC section. Resources typically specify must-have run-time program data such as messages in multiple languages, icons, menus and other user interface components.
  • How it passed its quality assurance tests with a key component of the executable file itself missing.
  • How it got digitally signed in an obviously incomplete state.
  • Why the missing resource section made the file vulnerable, and what sort of exploits were made possible by this manufacturing flaw.

Confusingly, Microsoft’s main Patch Tuesday bug-listing page says Exploitation Detected against this item, without saying what sort of attacks were carried out.

But the Advisory page says merely Exploitation More Likely, as though it isn’t currently, and never has been, a zero-day hole for which working attack methods are already known.

Unsurprisingly, therefore, we are sticking to our usual recommendation, namely: Do not delay; Patch it today.

Other noteworthy fixes

Other notable but non-zero-day updates this month include three with high cybersecurity danger scores on the CVSS scale, where 10/10 means the greatest risk if someone does figure out how to abuse the bug:

The Exchange bug is only rated Important by Microsoft, perhaps because the vulnerability doesn’t directly give attackers a way to run untrusted code, but does give them a way to attack and recover passwords for other users, after which the attackers could log in illegally as a legitimate user.

Obviously, the ability to access an existing user account would almost certainly give attackers code execution powers, albeit only as unprivileged users, as well as the ability to snoop around your network, even if not quite enough access to make off with your trophy data.

Importantly, patching against this hole isn’t just a matter of downloading and installing the Patch Tuesday updates, because Microsoft warns sysadmins as follows:

In addition to installing the updates a script must be run.

Alternatively, you can accomplish the same thing by running commands from the command line in a PowerShell window or some other terminal.

Beware rogue meeting invitations

The two Teams vulnerabilities are rated Critical, because the side-effects could lead directly to remote code execution (RCE).

You’d need to be lured into joining a booby-trapped Teams meeting first, so this vulnerability can’t be remotely exploited directly over the internet.

Nevertheless, joining Teams meetings on someone else’s say-so is something that many of us do regularly.

Remember that even if you trust the other person, you also need to trust their computer to be free from malware, and their Teams account to be unhacked, before you can trust any meeting invitations you receive in their name.

In other words, to defend against these bugs, don’t just remember our encouragement to Patch early, patch often, but also our more general advice about online invitations, which says: If in doubt, leave it out.

Important. If you are worried that someone you trust has had their Teams account hijacked, or any other account taken over, never ask them via that same service if the request is genuine. If it really is genuine, they’ll reassure you that their account has not been hacked. But if the request is fake, the attackers will tell you exactly the same thing, namely that the account has not been hacked and you can continue to believe any messages you receive from it.

What to do?

For official information on what you need to patch, and how to get the necessary updates…

…please consult Microsoft’s official August 2023 Security Updates overview page.


Serious Security: Why learning to touch-type could protect you from audio snooping

Audio recordings are dangerously easy to make these days, whether by accident or by design.

You could end up with your own permanent copy of something you thought you were discussing privately, preserved indefinitely in an uninterestingly-named file on your phone or laptop, thanks to hitting “Record” by mistake.

Someone else could end up with a permanent transcript of something you didn’t want preserved at all, thanks to them hitting “Record” on their phone or laptop in a way that wasn’t obvious.

Or you could knowingly record a meeting for later, just in case, with the apparent consent of everyone (or at least without any active objections from anyone), but never get round to deleting it from cloud storage until it’s too late.

Sneaky sound systems

Compared to video recordings, which are worrying enough given how easily they can be captured covertly, audio recordings are much easier to acquire surreptitiously, given that sound “goes round corners” while light, generally speaking, doesn’t.

A mobile phone laid flat on a desk and pointing directly upwards, for example, can reliably pick up most of the sounds in a room, even those coming from people and their computers that would be entirely invisible to the phone’s camera.

Likewise, your laptop microphone will record an entire room, even if everyone else is on the other side of the table, looking at the back of your screen.

Worse still, someone who isn’t in the room at all but is participating via a service such as Zoom or Teams can hear everything relayed from your side whenever your own microphone isn’t muted.

Remote meeting participants can permanently record whatever they receive from your end, and can do so without your knowledge or consent if they capture the audio stream without using the built-in features of the meeting software itself.

And that raises the long-running question, “What can audio snoops figure out, over and above what gets said in the room?”

What about any typing that you might do while the meeting is underway, perhaps because you’re taking notes, or because you just happen to type in your password during the meeting, for example to unlock your laptop because your screen saver decided you were AFK?

Attacks only ever get better

Recovering keystrokes from surreptitious recordings is not a new idea, and results in recent years have been surprisingly good, not least because:

  • Microphone quality has improved. Recording devices now typically capture more detail over a wider range of frequencies and volumes.
  • Portable storage sizes have increased. Higher data rates can be used, and sound samples stored uncompressed, without running out of disk space.
  • Processing speeds have gone up. Data can now be winnowed quickly even from huge data sets, and processed with ever-more-complex machine learning models to extract usable information from it.
  • Cybersecurity is becoming ever more important. Collectively, more of us now care about protecting ourselves from unwanted surveillance, making research into sound-snooping ever more mainstream.

A trio of British computer scientists (it seems they originally met up at Durham University in the North East of England, but are now spread out across the country) has just released a review-and-research paper on this very issue, entitled A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards.

In the paper, the researchers claim to have:

…achieved a top-1 classification accuracy of 95% on phone-recorded laptop keystrokes, representing improved results for classifiers not utilising language models and the second best accuracy seen across all surveyed literature.

In other words, their work isn’t entirely new, and they’re not yet in the number-one spot overall, but the fact that their keystroke recognition techniques don’t use “language models” has an important side-effect.

Language models, loosely speaking, help to reconstruct poor-quality data that follows known patterns, such as being written in English, by making likely corrections automatically, such as figuring out that text recognised as dada brech notidifivatipn is very likely to be data breach notification.
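As a toy illustration of that sort of snap-to-vocabulary correction (a hypothetical word list, with stdlib fuzzy matching standing in for a real language model):

```python
import difflib

# Hypothetical vocabulary a correction step might draw on.
vocabulary = ["data", "breach", "notification"]

def correct(token):
    # Snap a noisy recognised token onto the closest known word,
    # if any is similar enough; otherwise leave it alone.
    matches = difflib.get_close_matches(token, vocabulary, n=1, cutoff=0.5)
    return matches[0] if matches else token

noisy = "dada brech notidifivatipn"
print(" ".join(correct(t) for t in noisy.split()))  # data breach notification
```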

But this sort of automated correction isn’t much use on passwords, given that even passphrases often contain only word fragments or initialisms, and that the sort of variety we often throw into passwords, such as mixing the case of letters or inserting arbitrary punctuation marks, can’t reliably be “corrected” precisely because of its variety.

So a top-tier “hey, you just hit the P key” recogniser that doesn’t rely on knowing or guessing what letters you typed just beforehand or just afterwards…

…is likely to do a better job of figuring out or guessing any unstructured, pseudorandom stuff that you type in, such as when you are entering a password.
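To make the classification idea concrete, here's a deliberately tiny nearest-neighbour sketch: compare an unknown sample's magnitude spectrum against pre-recorded per-key "sound signatures". (Entirely illustrative, with synthetic waveforms; the paper trains a deep learning classifier on spectrogram images, not anything this simple.)

```python
import cmath
import math

def spectrum(samples, bins=8):
    # Magnitudes of the first few DFT coefficients: a crude
    # "sound signature" for a short keystroke recording.
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples))) for k in range(bins)]

def classify(unknown, signatures):
    u = spectrum(unknown)
    def dist(key):
        return sum((a - b) ** 2 for a, b in zip(u, spectrum(signatures[key])))
    return min(signatures, key=dist)

# Toy signatures: pretend key "S" rings at one frequency, "M" at another.
sig = {"S": [math.sin(2 * math.pi * 3 * t / 64) for t in range(64)],
       "M": [math.sin(2 * math.pi * 6 * t / 64) for t in range(64)]}
mystery = [0.9 * x for x in sig["S"]]   # a quieter press of the same key
print(classify(mystery, sig))  # S
```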

One size fits all

Intriguingly, and importantly, the researchers noted that the representative audio samples they captured carefully from their chosen device, a 2021-model Apple MacBook Pro 16″, turned out not to be specific to the laptop they used.

In other words, because laptop models tend to use as-good-as-identical components, attackers don’t need to get physical access to your laptop first in order to capture the starting data needed to train their keystroke recognition tools.

Assuming you and I have similar sorts of laptop, with the same model of keyboard installed, then any “sound signatures” that I capture under carefully controlled conditions from my own computer…

…can probably be applied more or less directly to live recordings later acquired from your keyboard, given the physical and acoustic similarities of the hardware.

What to do?

Here are some fascinating suggestions based on the findings in the paper:

  • Learn to touch-type. The researchers suggest that touch-typing is harder to reconstruct reliably via sound recordings. Touch-typists are generally much faster, quieter, smoother and more consistent in their style, as well as using less energy when activating the keys. We assume this makes it harder to isolate individual keystrokes for analysis in the first place, as well as making the sound signatures of different keys harder to tell apart.
  • Mix character case in passwords. The researchers noted that when the shift key was held down before a keystroke was entered, and then released afterwards, the individual sound signatures were much harder to isolate and match. (Those annoying password construction rules may be useful after all!)
  • Use 2FA wherever you can. Even if you have a 2FA system that requires you to type in a 6-digit code off your phone (which many people do by holding their phone in one hand and hunting-and-pecking the numbers with the other), each code only works once, so recovering it doesn’t help a password-thieving attacker much, if at all.
  • Don’t type in passwords or other confidential information during a meeting. If you get locked out of your laptop by your screensaver or by a security timeout, consider popping out of the room briefly while you log back in. A little delay could go a long way.
  • Mute your own microphone as much as you can. Speak, or type, but don’t do both at once. The researchers suggest that Zoom recordings are good enough for keystroke recovery (though we think they tested only with high-quality local Zoom recordings, not with lower-quality cloud-based recordings initiated by remote participants), so if you are the only person at your end, muting your microphone controls how many of your keystrokes other people get to hear.

“Crocodile of Wall Street” and her husband plead guilty to giant-sized cryptocrimes

Back in August 2016, Heather Morgan, a.k.a. Razzlekhan, a.k.a. the Crocodile of Wall Street (actually, there’s a double-barrelled expletive in front of the word ‘crocodile’, but this is a family-friendly website so we’ll leave you to extrapolate for yourself), and her husband Ilya Lichtenstein got their hands on 120,000 of your finest bitcoins.

At the time, BTC was trading at about $600, so their stash was worth a cool $72,000,000.

For a couple in their mid-to-late 20s at the time, you’d imagine that sort of capital would fund a long life of idle luxury, especially if you stop to think that Bitcoin hasn’t traded below $10,000 for the past three years.

Even if they’d burned through half of their original fortune by now, they’d still have close to $2 billion left at today’s rate, about 25 times as much as they started out with.
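The arithmetic behind those figures is easy to check (the 2023 price below is our rough assumption, not market data):

```python
btc_stolen = 120_000
price_2016 = 600       # approximate USD/BTC at the time of the heist
price_now = 30_000     # assumed rough USD/BTC in mid-2023

haul_2016 = btc_stolen * price_2016
print(haul_2016)                     # 72000000 -> the "$72 million" figure

remaining = btc_stolen // 2          # suppose half had been spent by now
value_now = remaining * price_now
print(value_now)                     # 1800000000, "close to $2 billion"
print(round(value_now / haul_2016))  # 25 times what they started out with
```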

But things didn’t work out that way.

The problem was that Morgan and Lichtenstein hadn’t acquired those bitcoins legally, weren’t able to cash them out as quickly as they probably thought they could, and ultimately discovered that bitcoin anonymity only goes so far, especially if you’re stuck with the problem of trying to launder a large quantity of the world’s best-known cryptocurrency.

In early 2022, US law enforcement experts had pieced together enough of the couple’s BTC story to arrest them for trying to spend the proceeds of a crime:

Simply put, the couple weren’t directly charged with stealing the bitcoinage in the first place, but for trying to cash it out despite knowing it was stolen.

In the court document submitted to apply for their arrest warrants, the victim is referred to simply as VCE, short for virtual currency exchange, but that VCE is now publicly known to be Bitfinex, so we have used that real name here:

In or around August 2016, a hacker breached [Bitfinex’s] security systems and infiltrated its infrastructure. While inside [Bitfinex’s] network, the hacker was able to initiate over 2,000 unauthorized BTC transactions, in which approximately BTC 119,754 was transferred […] to an [outside wallet].

[…] US authorities traced the stolen funds on the BTC blockchain. As detailed [in the affidavit], beginning in or around January 2017, a portion of the stolen BTC moved out of [that wallet] in a series of small, complex transactions across multiple accounts and platforms. This shuffling, which created a voluminous number of transactions, appeared to be designed to conceal the path of the stolen BTC, making it difficult for law enforcement to trace the funds. Despite these efforts, […] US authorities traced the stolen BTC to multiple accounts controlled by ILYA “DUTCH” LICHTENSTEIN, a Russian-US national residing in New York, and his wife HEATHER MORGAN.

Fast forward just over a year-and-a-half, and both of the suspects have now pleaded guilty to money laundering charges.

This time, the US Department of Justice (DOJ) unambiguously states that Lichtenstein was the hacker referred to above, and offers some intriguing new details about how the couple tried to turn the stolen cryptocoins into ready money, including using some of the tainted bitcoins to buy gold, which they hid in the traditional way of robbers and pirates throughout the ages:

According to court documents, Lichtenstein used a number of advanced hacking tools and techniques to gain access to Bitfinex’s network. Once inside their systems, Lichtenstein fraudulently authorized more than 2,000 transactions in which BTC 119,754 was transferred from Bitfinex to a cryptocurrency wallet in Lichtenstein’s control. Lichtenstein then took steps to cover his tracks by going back into Bitfinex’s network and deleting access credentials and other log files that may have given him away to law enforcement. Following the hack, Lichtenstein enlisted the help of his wife, Morgan, in laundering the stolen funds.

Lichtenstein, at times with Morgan’s assistance, employed numerous sophisticated laundering techniques, including using fictitious identities to set up online accounts; utilizing computer programs to automate transactions; depositing the stolen funds into accounts at a variety of darknet markets and cryptocurrency exchanges and then withdrawing the funds, which obfuscates the trail of the transaction history by breaking up the fund flow; converting bitcoin to other forms of cryptocurrency, including anonymity-enhanced cryptocurrency (AEC), in a practice known as “chain hopping”; depositing a portion of the criminal proceeds into cryptocurrency mixing services, such as Bitcoin Fog, Helix, and ChipMixer; using US-based business accounts to legitimize their banking activity; and exchanging a portion of the stolen funds into gold coins, which Morgan then concealed by burying them.

Lichtenstein now faces up to 20 years in prison when he’s sentenced, while the Crocodile Lady faces up to 10 years behind bars.

As the law requires, and as the DOJ reminds everyone, “there will be a formal process at the conclusion of the case […] for third-party claimants to submit claims for any seized and forfeited property”.

Fascinatingly, that restitution process could produce some peculiar results for different claimants, depending on which stolen bitcoins got traded out and recovered in the form of gold, which ones were still in BTC form when seized, and how the assets are divided up amongst the claimants.

For example, if your bitcoins were stolen in 2016, cashed out for gold by the Crocodile Lady in early 2017, and were returned to you right now in the form of gold bullion, you’d end up with a reasonably healthy return of somewhere between 250% and 300%.

That’s because BTC went from about $600 in mid-2016 to roughly double that by early 2017 (x2), and gold has gone up from $1500 an ounce to $2000 an ounce since then (x1.3), for an overall multiple of approximately 2 × 1.3 = 2.6, or about 260%.

If your specific bunch of bitcoins ended up untouched by the guilty pair, however, and you were to get them back directly, they’d now be worth about 50 times what they were at the time of the heist, for a 5000% return.

But if your coins were swapped out for gold in late 2021, just before the Crocodile Lady was taken into custody, they’d have been worth more than 100 times their 2016 value at the time of the trade, and although the value of BTC is now less than half what it was then, gold has declined only very slightly, so you’d still be looking at a return of better than 10,000%.
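Those back-of-the-envelope sums are easy to check for yourself. Here’s a minimal Python sketch using the rounded prices quoted above (illustrative figures from this article, not live market data):

```python
# Back-of-the-envelope sketch of two of the restitution scenarios above.
# All prices are the rough, rounded figures from the article, not market data.

BTC_2016 = 600      # approximate BTC price (USD) at the time of the 2016 heist
BTC_2017 = 1200     # roughly double by early 2017
BTC_NOW = 30_000    # roughly 50x the 2016 price
GOLD_2017 = 1500    # USD per ounce, then
GOLD_NOW = 2000     # USD per ounce, now

def pct_return(start, end):
    """Express an end value as a percentage of the start value."""
    return 100 * end / start

# Scenario 1: coins cashed out for gold in early 2017, returned as gold now.
gold_ounces_per_btc = BTC_2017 / GOLD_2017
scenario_1 = pct_return(BTC_2016, gold_ounces_per_btc * GOLD_NOW)

# Scenario 2: coins untouched by the couple, returned as BTC now.
scenario_2 = pct_return(BTC_2016, BTC_NOW)

print(f"cashed out for gold in 2017: {scenario_1:.0f}%")  # roughly 267%
print(f"returned as untouched BTC:  {scenario_2:.0f}%")   # 5000%
```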

In practice, we’re assuming that the total amount recovered will be divided proportionally between all claimants, including those whose specific cryptocoins were cashed out along the way and spent on high living…

…but it’s an intriguing reminder of how complex and confusing the cryptocoin ecosystem can be.


HOW CRYPTOCOINS CAN BE TRACKED

If you’re wondering how stolen and laundered transactions can be traced in a pseudo-anonymous trading system such as Bitcoin…

…you’ll enjoy this special episode of the Naked Security podcast in which we talk to best-selling US author Andy Greenberg about his awesome book on this very subject, Tracers in the Dark – The Global Hunt for the Crime Lords of Cryptocurrency:

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

You can also find our podcasts on Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


S3 Ep146: Tell us about that breach! (If you want to.)

WEIRD BUT TRUE

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT


DOUG.  Firefox updates, another Bug With An Impressive Name, and the SEC demands disclosure.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, I hope you will be proud of me… I know you are a cycling enthusiast.

I rode a bicycle yesterday for 10 American miles, which I believe is roughly 16km, all while pulling a small but not unheavy child behind the bike in a two-wheeled carriage.

And I’m still alive to tell the tale.

Is that a long way to ride a bike, Paul?


DUCK.  [LAUGHS] It depends how far you really needed to go.

Like, if it was actually 1200 metres that you had to go and you got lost… [LAUGHTER]

My enthusiasm for cycling is very high, but it doesn’t mean that I deliberately ride further than I need to, because it’s my primary way of getting around.

But 10 miles is OK.

Did you know that American miles and British miles are, in fact, identical?


DOUG.  That is good to know!


DUCK.  And have been since 1959, when a bunch of countries including, I think, Canada, South Africa, Australia, the United States and the UK got together and agreed to standardise on an “international inch”.

I think the Imperial inch got very, very slightly smaller and the American inch got very, very slightly longer, with the result that the inch (and therefore the yard, and the foot, and the mile)…

…they’re all defined in terms of the metre.

One inch is exactly 25.4mm.

Three significant figures is all you need.


DOUG.  Fascinating!

Well, speaking of fascinating, it’s time for our This Week in Tech History segment.

This week, on 01 August 1981, Music Television, also known as MTV, went live as part of American cable and satellite television packages, and introduced the public to music videos.

The first one played [SINGS, RATHER WELL IN FACT] “Video Killed the Radio Star” by The Buggles.

Fitting at the time, although ironic nowadays as MTV rarely plays music videos any more, and plays no new music videos whatsoever, Paul.


DUCK.  Yes, it is ironic, isn’t it, that cable TV (in other words, where you had wires running under the ground into your house) killed the radio (or the wireless) star, and now it looks as though cable TV, MTV… that sort of died out because everyone’s got mobile networks that work wirelessly.

What goes around comes around, Douglas.


DOUG.  Alright, well, let’s talk about these Firefox updates.

We get a double dose of Firefox updates this month, because they’re on a 28 day cycle:

Firefox fixes a flurry of flaws in the first of two releases this month

No zero-days in this first round out of the gate, but some teachable moments.

We have listed maybe half of these in your article, and one that really stood out to me was: Potential permissions request bypass via clickjacking.


DUCK.  Yes, good old clickjacking again.

I like that term because it pretty much describes what it is.

You click somewhere, thinking you’re clicking on a button or an innocent link, but you’re inadvertently authorising something to happen that isn’t obvious from what the screen’s showing under your mouse cursor.

The problem here seems to be that under some circumstances, when a permissions dialog was about to pop up from Firefox, for example, say, “Are you really sure you want to let this website use your camera? have access to your location? use your microphone?”…

…all of those things that, yes, you do want to get asked.

Apparently, if you could get the browser to a performance point (again, performance versus security) where it was struggling to keep up, you could delay the appearance of the permissions pop-up.

But by having a button at the place where the pop-up would appear, and luring the user into clicking it, you could attract the click, but the click would then get sent to the permissions dialog that you hadn’t quite seen yet.

A sort of visual race condition, if you like.
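If it helps to picture that race, here’s a deliberately simplified Python toy (hypothetical widget names, and no real browser internals): the user aims at a decoy button, but the late-arriving permissions dialog claims the same screen position just in time, so the click lands on it instead.

```python
# Toy simulation of the "visual race condition" described above.
# Purely illustrative -- this models nothing about how Firefox actually
# dispatches clicks, only the attacker's timing trick in the abstract.

class Screen:
    def __init__(self):
        self.widget_at = {}  # (x, y) -> widget name

    def place(self, name, pos):
        # The most recently placed widget sits on top at that position.
        self.widget_at[pos] = name

    def click(self, pos):
        # A click goes to whatever is topmost at that position right now.
        return self.widget_at.get(pos, "nothing")

screen = Screen()
POS = (100, 200)

screen.place("decoy 'play game' button", POS)   # the attacker's lure
# ... the user moves the mouse toward the decoy button ...
screen.place("permissions dialog: Allow", POS)  # pops up at the last moment

print(screen.click(POS))  # the click hits the dialog, not the decoy
```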


DOUG.  OK, and the other one was: Off-screen canvas could have bypassed cross-origin restrictions.

You go on to say that one web page could peek at images displayed in another page from a different site.


DUCK.  That’s not supposed to happen, is it?


DOUG.  No!


DUCK.  The jargon term for that is the “same-origin policy”.

If you’re running website X and you send me a whole bunch of JavaScript that sets a whole load of cookies, then all that’s stored in the browser.

But only further JavaScript from site X can read that data back.

The fact that you’re browsing to site X in one tab and site Y in the other tab doesn’t let them peek at what the other is doing, and the browser is supposed to keep all of that stuff apart.

That’s obviously pretty important.
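As a rough mental model (a toy sketch, not how any real browser implements it), you can think of cookie storage as a dictionary keyed by the (scheme, host, port) origin triple, so code running for site X simply has no handle on site Y’s jar:

```python
# Toy sketch of same-origin cookie partitioning: storage is keyed by the
# (scheme, host, port) triple, so a page only ever sees its own origin's
# entries. Real browsers are far more involved than this.

from urllib.parse import urlsplit

class CookieJar:
    def __init__(self):
        self._jars = {}  # origin triple -> {cookie name: value}

    @staticmethod
    def _origin(url):
        parts = urlsplit(url)
        default_port = {"http": 80, "https": 443}[parts.scheme]
        return (parts.scheme, parts.hostname, parts.port or default_port)

    def set_cookie(self, page_url, name, value):
        self._jars.setdefault(self._origin(page_url), {})[name] = value

    def get_cookies(self, page_url):
        # A page only ever gets the jar for its own origin.
        return dict(self._jars.get(self._origin(page_url), {}))

jar = CookieJar()
jar.set_cookie("https://site-x.example/page", "session", "abc123")
print(jar.get_cookies("https://site-x.example/other"))  # {'session': 'abc123'}
print(jar.get_cookies("https://site-y.example/page"))   # {}
```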

And it seems here that, as far as I understand it, if you were rendering a page that wasn’t being displayed yet…

…an off-screen canvas, which is where you create, if you like, a virtual web page and then at some future point you say, “Right now I’m ready to display it,” and bingo, the page appears all at once.

The problem comes with trying to make sure that the stuff that you’re rendering invisibly doesn’t inadvertently leak data, even though it never ultimately gets displayed to the user.

They spotted that, or it was responsibly disclosed, and it was patched.

And those two, I think, were included in the so-called “High”-level vulnerabilities.

Most of the others were “Moderate”, with the exception of Mozilla’s traditional, “We found a whole lot of bugs through fuzzing and through automated techniques; we didn’t probe them to find out if they could be exploited at all, but we are willing to assume that somebody who tried hard enough could do so.”

That’s an admission that we both like so much, Doug… because potential bugs are worth quashing, even if you feel certain in your heart that nobody will ever figure out how to exploit them.

Because in cybersecurity, it pays never to say never!


DOUG.  Alright, you’re looking for Firefox 116, or if you’re on an extended release, 115.1.

Same with Thunderbird.

And let’s move on to… oh, man!

Paul, this is exciting!

We have a new BWAIN after a double-BWAIN last week: a Bug With An Impressive Name.

This one is called Collide+Power:

Performance and security clash yet again in “Collide+Power” attack


DUCK.  [LAUGHS] Yes, it’s intriguing, isn’t it, that they chose a name that has a plus sign in it?


DOUG.  Yes, that makes it hard to say.


DUCK.  You can’t have a plus sign in your domain name, so the domain name is collidepower.com.


DOUG.  Alright, let me read from the researchers themselves, and I quote:

The root of the problem is that shared CPU components, like the internal memory system, combine attacker data and data from any other application, resulting in a combined leakage signal in the power consumption.

Thus, knowing its own data, the attacker can determine the exact data values used in other applications.


DUCK.  [LAUGHS] Yes, that makes a lot of sense if you already know what they’re talking about!

To try and explain this in plain English (I hope I’ve got this correctly)…

This goes down to the performance-versus-security problems that we’ve talked about before, including last week’s podcast with that Zenbleed bug (which is far more serious, by the way):

Zenbleed: How the quest for CPU performance could put your passwords at risk

There’s a whole load of data that gets kept inside the CPU (“cached” is the technical term for it) so that the CPU doesn’t need to go and fetch it later.

So there’s a whole lot of internal stuff that you don’t really get to manage; the CPU looks after it for you.

And the heart of this attack seems to go something like this…

What the attacker does is to access various memory locations in such a way that the internal cache storage remembers those memory locations, so it doesn’t have to go and read them out of RAM again if they get reused quickly.

So the attacker somehow gets these cache values filled with known patterns of bits, known data values.

And then, if the victim has memory that *they* are using frequently (for example, the bytes in a decryption key), if their value is suddenly judged by the CPU to be more likely to be reused than one of the attacker’s values, it kicks the attacker’s value out of that internal superfast cache location, and puts the new value, the victim’s value, in there.

And what these researchers discovered (and as far-fetched as the attack sounds in theory and is in practice, this is quite an amazing thing to discover)…

The number of bits that are different between the old value in the cache and the new value *changes the amount of power required to perform the cache update operation*.

Therefore if you can measure the power consumption of the CPU precisely enough, you can make inferences about which data values got written into the internal, hidden, otherwise invisible cache memory inside the CPU that the CPU thought was none of your business.

Quite intriguing, Doug!
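In code terms, the leakage signal the researchers describe is essentially a Hamming distance: the number of bits that flip during the overwrite. This little Python model is a cartoon of that idea, not a working attack, and the secret value is made up for the demo:

```python
# Simplified model of the Collide+Power leakage signal: the power used to
# overwrite a cache entry varies with the Hamming distance (the number of
# differing bits) between the attacker's old value and the victim's new one.

def hamming_distance(old, new):
    """Count the bit positions that must flip to turn old into new."""
    return bin(old ^ new).count("1")

VICTIM_SECRET = 0b10110101  # hypothetical byte the victim writes to the cache

# The attacker primes the cache line with chosen values; the (modelled)
# power cost of each overwrite then hints at the secret's bit pattern.
for probe in (0b00000000, 0b11111111, 0b10110101):
    cost = hamming_distance(probe, VICTIM_SECRET)
    print(f"probe {probe:08b}: {cost} bit flips")

# probe 00000000: 5 bit flips -> reveals the secret's Hamming weight
# probe 11111111: 3 bit flips
# probe 10110101: 0 bit flips -> a zero-cost overwrite pinpoints the value
```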


DOUG.  Outstanding.

OK, there are some mitigations.

That section, it starts off: “First of all, you do not need to worry,” but also nearly all CPUs are affected.


DUCK.  Yes, that’s interesting, isn’t it?

It says “first of all” (in normal text) “you” (in italics) “do not need to worry” (in bold). [LAUGHS]

So, basically, no one’s going to attack you with this, but maybe the CPU designers want to think about this in the future if there’s any way around it. [LAUGHS]

I thought that was an interesting way of putting it.


DOUG.  OK, so the mitigation is basically to turn off hyperthreading.

Is that how it works?


DUCK.  Hyperthreading makes this much worse, as far as I can see.

We already know that hyperthreading is a security problem because there have been numerous vulnerabilities that depend upon it before.

It’s where a CPU, say, with eight cores is pretending to have 16 cores, but actually they’re not in separate parts of the chip.

They’re actually pairs of sort of pseudo-cores that share more electronics, more transistors, more capacitors, than is perhaps a good idea for security reasons.

If you’re running good old OpenBSD, I think they decided hyperthreading is just too hard to secure with mitigations; might as well just turn it off.

By the time you’ve taken the performance hits that the mitigations require, you might as well just not have it.

So I think that turning off hyperthreading will greatly immunise you against this attack.

The second thing you can do is, as the authors say in bold: do not worry. [LAUGHTER]


DOUG.  That’s a great mitigation! [LAUGHS]


DUCK.   There’s a great bit (I’ll have to read this out, Doug)…

There’s a great bit where the researchers themselves found that to get any sort of reliable information at all, they were getting data rates of somewhere between 10 bits and 100 bits per hour out of the system.

I believe that at least Intel CPUs have a mitigation that I imagine would help against this.

And this brings us back to MSRs, those model-specific registers that we spoke about last week with Zenbleed, where there was a magic bit that you could turn on that said, “Don’t do the risky stuff.”

There is a feature you can set called RAPL filtering, and RAPL is short for running average power limit.

It’s used by programs that want to see how a CPU is performing for power management purposes, so you don’t need to break into the server room and put a power monitor onto a wire with a little probe on the motherboard. [LAUGHS]

You can actually get the CPU to tell you how much power it’s using.

Intel at least has this mode called RAPL filtering, which deliberately introduces jitter or error.

So you will get results that, on average, are accurate, but where each individual reading will be off.
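You can sketch that jitter idea in a few lines of Python (the wattage figures and noise range here are invented for the demo, not Intel’s actual parameters): each individual reading is deliberately perturbed, but the noise is zero-mean, so long-run averages stay useful for power management while single readings no longer leak fine-grained detail.

```python
# Sketch of the RAPL-filtering idea: deliberately jitter each power reading
# so that no single sample is trustworthy, while the long-run average still
# converges on the true value. All numbers are made up for illustration.

import random

random.seed(42)            # fixed seed so the demo is repeatable
TRUE_POWER = 35.0          # watts: the "real" value the CPU would report

def filtered_reading():
    # Zero-mean jitter: any one sample may be off by up to +/- 2 W.
    return TRUE_POWER + random.uniform(-2.0, 2.0)

samples = [filtered_reading() for _ in range(10_000)]
average = sum(samples) / len(samples)

print(f"one sample: {samples[0]:.2f} W")  # individually inaccurate
print(f"average   : {average:.2f} W")     # close to the true 35 W
```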


DOUG.  Let’s now turn our attention to this new SEC deal.

The Securities and Exchange Commission is demanding four-day disclosure limits on cybersecurity breaches:

SEC demands four-day disclosure limit for cybersecurity breaches

But (A) you get to decide if an attack is serious enough to report, and (B) the four-day limit doesn’t start until you decide something is important enough to report, Paul.

So, a good start, but perhaps not as aggressive as we would like?


DUCK.  I agree with your assessment there, Doug.

It sounded great when I first looked at it: “Hey, you’ve got this four-day disclosure if you have a data breach or a cybersecurity problem.”

But then there was this bit about, “Well, it has to be considered a material problem,” a legal term that means that it actually matters enough to be worth disclosing in the first place.

And then I got to that bit (and it’s not a very long press release by the SEC) that sort-of said, “As soon as you’ve decided that you really ought to report this, then you’ve still got four days to report it.”

Now, I imagine that, legally, that’s not quite how it will work, Doug.

Maybe we’re being a little bit harsh in the article?


DOUG.  You zoom in on ransomware attacks, saying that there are a few different types, so let’s talk about that… it’s important in determining whether this is a material attack that you need to report.

So what kind of ransomware are we looking at?


DUCK.  Yes, just to explain, I thought that was an important part of this.

Not to point fingers at the SEC, but this is something that doesn’t seem to have come out in the wash in many or any countries yet…

…whether just suffering a ransomware attack is inevitably enough to be a material data breach.

This SEC document doesn’t actually mention the “R-word” at all.

There’s no mention of ransomware-specific stuff.

And ransomware is a problem, isn’t it?

In the article, I wanted to make it clear that the word “ransomware”, which we still widely use, is not quite the right word anymore, is it?

We should probably call it “blackmailware” or just simply “cyberextortion”.

I identify three main types of ransomware attack.

Type A is where the crooks don’t steal your data, they just get to scramble your data in situ.

So they don’t need to upload a single thing.

They scramble it all in a way that they can provide you with the decryption key, but you won’t see a single byte of data leaving your network as a telltale sign that something bad is going on.

Then there’s a Type B ransomware attack, where the crooks go, “You know what, we’re not going to risk writing to all the files, getting caught doing that. We’re just going to steal all the data, and instead of paying the money to get your data back, you’re paying for our silence.”

And then, of course, there’s the Type C ransomware attack, and that is: “Both A and B.”

That’s where the crooks steal your data *and* they scramble it and they go, “Hey, if it’s not one thing that’s going to get you in trouble, it’s the other.”

And it would be nice to know where what I believe the legal profession calls materiality (in other words, the legal significance or the legal relevance to a particular regulation)…

…where that kicks in, in the case of ransomware attacks.


DOUG.  Well, this is a good time to bring in our Commenter of the Week, Adam, on this story.

Adam gives his thoughts about the various types of ransomware attack.

So, starting with Type A, where it’s just a straightforward ransomware attack, where they lock up the files and leave a ransom note to have them unlocked…

Adam says:

If a company is hit by ransomware, found no evidence of data exfiltration after a thorough investigation, and recovered their data without paying the ransom, then I would be inclined to say, “No [disclosure needed].”


DUCK.  You’ve done enough?


DOUG.  Yes.


DUCK.  You didn’t quite prevent it, but you did the next-best thing, so you don’t need to tell your investors….

The irony is, Doug, if you had done that as a company, you might want to tell your investors, “Hey, guess what? We had a ransomware attack like everyone else, but we got out of it without paying the money, without engaging with the crooks and without losing any data. So even though we weren’t perfect, we were the next best thing.”

And it actually might carry a lot of weight to disclose that voluntarily, even if the law said you didn’t have to.


DOUG.  And then, for Type B, the blackmail angle, Adam says:

That’s a tricky situation.

Theoretically, I would say, “Yes.”

But that’s likely going to lead to a lot of disclosures and damaged business reputations.

So, if you have a bunch of companies coming out and saying, “Look, we got hit by ransomware; we don’t think anything bad happened; we paid the crooks to keep them quiet; and we are trusting that they’re not going to spill the beans,” so to speak…

…that does create a tricky situation, because that could damage a company’s reputation, but had they not disclosed it, no one would know.


DUCK.  And I see that Adam felt the same way that both of you and I did about the business of, “You have four days, and no more than four days… from the moment that you think the four days should start.”

He rumbled that as well, didn’t he?

He said:

Some companies will likely adopt tactics to greatly delay deciding whether there is a material impact.

So, we don’t quite know how this will play out, and I’m sure the SEC doesn’t quite know either.

It may take a couple of test cases for them to figure out what’s the right amount of bureaucracy to make sure that we all learn what we need to know, without forcing companies to disclose every little IT glitch that ever happens and bury us all in a load of paperwork.

Which essentially leads to breach fatigue, doesn’t it?

If you’ve got so much bad news that isn’t terribly important just washing over you…

…somehow, it’s easy to miss the really important stuff that’s in amongst all the “did I really need to hear about that?”

Time will tell, Douglas.


DOUG.  Yes, tricky!

And I know I say this all the time, but we will keep an eye on this, because it will be fascinating to watch this unfold.

So, thank you, Adam, for sending in that comment.


DUCK.  Yes, indeed!


DOUG.  If you have an interesting story, comment or question you’d like to submit, we’d love to read on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure.

[MUSICAL MODEM]
