Category Archives: News

Paying ransomware crooks won’t reduce your legal risk, warns regulator

Paying money to ransomware criminals is a contentious issue.

After all, whether you know it in everyday language as extortion, blackmail or standover, a ransomware demand boils down to one thing: demanding money with menaces.

Usually, the attackers leave all your precious files where they are, so you can see them sitting there, giving the tantalising impression that you can reach out and access them whenever you want…

…but if you try to open any of them, you’ll find them useless, turned into the colourless digital equivalent of shredded cabbage.

That’s when you’re faced with the extortion, blackmail, standover, call it what you will: “We’ve got a program that will unscramble your files, and we’ve got the decryption key that’s unique to your network. We’ll sell you this rescue toolkit for what we consider a reasonable fee. Contact us to find out how much you’ll need to pay.”

Sometimes, the attackers also steal a tasty selection of your files first, typically uploading your trophy data to an encrypted cloud backup to which they alone hold the access codes.

They then add this into their extortion demands, warning you that if you try to recover the scrambled files yourself, for example by using your backups, they’ll put the stolen data to nefarious use.

They may threaten to leak information to the data protection regulator in your country, or sell the data on to other crooks, or simply dump the juiciest bits where anyone in the world can gorge on them at will.

There’s no doubt that this crime involves both demands and menace, as you can hear in this ransom message, where the crooks didn’t bother to disguise their tone or underlying threats:


Many ransomware gangs run their own “news websites” where they claim to publish “status updates” about companies that refused to pay, aiming to watch them squirm in a way that the criminals hope may “encourage” future victims to do a deal, and pay the blackmail money instead of risking exposure.

Also, ransomware criminals typically don’t break into your network and unleash the file scrambling part of their attack right away.

They may spend days or even weeks snooping around first, and one of the things they’re keen to find out is how you do your backups, so they can mess with them in advance.

The attackers aim to ruin your ability to recover on your own, and thereby to increase the chance that you will be stuck with doing a “deal” with them to get your business back on the rails again.

It’s not all about the data

But it’s not all about getting the data back and re-starting business operations.

It’s also about potential liability, or at least that’s what the UK data protection regulator thinks.

In an open letter to the legal community published late last week, the Information Commissioner’s Office (ICO), together with the National Cyber Security Centre (NCSC, a government advisory body that’s part of the secret intelligence community), wrote the following:

RE: The legal profession and its role in supporting a safer UK online.

[…] In recent months, we have seen an increase in the number of ransomware attacks and ransom amounts being paid and we are aware that legal advisers are often retained to advise clients who have fallen victim to ransomware on how to respond and whether to pay.

It has been suggested to us that a belief persists that payment of a ransom may protect the stolen data and/or result in a lower penalty by the ICO should it undertake an investigation. We would like to be clear that this is not the case.

As the ICO very baldly points out, echoing what we’ve found in our recent ransomware surveys (our emphasis below):

[P]ayment incentivises further harmful behaviour by malicious actors and does not guarantee decryption of networks or return of stolen data.

[…] For the avoidance of doubt the ICO does not consider the payment of monies to criminals who have attacked a system as mitigating the risk to individuals and this will not reduce any penalties incurred through ICO enforcement action.

By the way, if you’ve ever wondered just how readily today’s ransomware payments help to fund tomorrow’s attacks, keep in mind how the infamous REvil ransomware gang once casually dumped $1,000,000 in Bitcoin into an online crime forum.

This up-front payout served as a “lure” to attract criminal affiliates with desirable skills, notably including real-world experience of using and abusing mainstream backup software tools:

Our ransomware surveys already show that paying off the crooks almost certainly won’t save you money, not least because you still have to go through a recovery exercise that will take as much time as restoring in conventional ways, as well as paying the blackmail.

We also found that the decryption tools supplied by the criminals who attacked you in the first place are often unfit for purpose.

Some victims paid up and got nothing back at all, and very few victims actually managed to recover everything. (Colonial Pipeline allegedly and infamously paid $4,400,000 for a decryptor that was basically useless.)

Now, you also need to know that government regulators aren’t going to accept paying up as a legally valid sort of “we did our best and tried to make good” excuse.

Mitigation of risk, as the ICO refers to it, can’t be achieved by paying extortion demands, because the process of risk mitigation is supposed to go like this:

Where the ICO will recognise mitigation of risk is where organisations have taken steps to fully understand what has happened and learn from it, and, where appropriate, they have raised their incident with the NCSC, reported to Law Enforcement via Action Fraud, and can evidence that they have taken advice from or can demonstrate compliance with appropriate NCSC guidance and support.

What to do?

Combining our own survey findings with the ICO’s legal advice gives these four simple things to remember:

  • Paying up could get you into legal trouble. The ICO notes that paying ransomware demands is not automatically unlawful in the UK. If it’s likely to be the only hope of saving your business and keeping your staff in their jobs, it seems fair to consider paying up as a sort of “necessary evil”. But, as the ICO reminds us, paying up could still get you in trouble because of “relevant sanctions regimes (particularly those related to Russia).”
  • Paying up may be a total failure. There are no guarantees that the criminals will be able to help you recover your data, even if they genuinely want the process to work in order to act as an “advert” to future victims. As we noted above, some victims pay up and recover absolutely nothing, and very few victims who do pay up end up recovering everything. Half of those who pay up lose at least a third of their data anyway, and a third of them lose at least half. (And you don’t get to choose which half that is.)
  • Paying up generally increases your overall cost of recovery. The “recovery tools” aren’t instantaneous and automatic, so you need to add to the blackmail fee the operational costs of actually deploying and using the tools, assuming they work reliably in the first place. Those operational costs are likely to be at least as much as it would cost you to recover from your own backups, given that the overall process is not dissimilar.
  • Paying up will not reduce any data breach penalties. Giving money to the criminals who attacked you in the first place doesn’t count as “mitigating risk”, or as a reasonable precaution, so it can’t be used to argue that your penalty should be reduced, no matter what your legal advisors might think.

Simply put: paying up is not a good idea, should only ever be a last resort, and sometimes serves only to make a bad thing worse.



That didn’t last! Microsoft turns off the Office security it just turned on

Remember 1999?

Well, the Melissa virus just called, and it’s finding life tough in 2022.

It’s demanding a return to the freewheeling days of the last millennium, when Office macro viruses didn’t face the trials and tribulations that they do today.

In the 1990s, you could insert VBA (Visual Basic for Applications) macro code into documents at will, email them to people, or ask them to download them from a website somewhere…

…and then you could just totally take over their computer!

In fact, it was even better/worse than that.

If you created a macro subroutine with a name that mirrored one of the common menu items, such as FileSave or FilePrint, then your code would magically and invisibly be invoked whenever the user activated that option.

Worse still, if you gave your macro a name like AutoOpen, then it would run every time the document was opened, even if the user only wanted to look at it.

And if you installed your macros into a central repository known as the global template, your macros would automatically apply all the time.

Worst of all, perhaps, an infected document could implant macros into the global template, thus infecting the computer, and the same macros (when they detected they were running from the global template but the document you just opened was uninfected) could copy themselves back out again.

That led to regular “perfect storms” of fast-spreading and long-running macro virus outbreaks.

Macro viruses spread like crazy

Simply put, once you’d opened one infected document on your computer, every document you opened or created thereafter would (or could, at least) get infected as well, until you had nothing but infected Office files everywhere.

As you can imagine, at that point in the game, any file you sent to or shared with a colleague, customer, prospect, investor, supplier, friend, enemy, journalist, random member of the public…

…would contain a fully-functional copy of the virus, ready to do its best to infect them when they opened it, assuming they weren’t infected already.

And if that wasn’t enough on its own, Office macro malware could deliberately distribute itself, instead of waiting for you to send a copy to someone else, by reading your email address book and sending itself to some, many or all of the names in there.

If you had an address book entry that was an email group, such as Everyone, or All Friends, or All Global Groups, then every time the virus emailed the group, hundreds or thousands of infectious messages would go flying across the internet to all your colleagues. Many of them would soon mail you back as the virus got hold of their computer, too, and a veritable email storm would result.

The first macro malware, which spread by means of infected Word files, appeared in late 1995 and was dubbed Concept, because at that time it was little more than a proof-of-concept.

But it was quickly obvious that malicious macros were going to be more than just a passing headache.

Microsoft was slow to come to the cybersecurity party, carefully avoiding terms such as virus, worm, Trojan Horse and malware, resolutely referring to the Concept virus as nothing more than a “prank macro”.

A gradual lockdown

Over the years, however, Microsoft gradually implemented a series of functional changes in Office, by variously:

  • Making it easier and quicker to detect whether a file was a pure document, thus swiftly differentiating pure document files from template files with macro code inside. In the early days of macro viruses, back when computers were much slower than today, significant and time-consuming malware-like scanning was needed on every document file just to figure out if it needed scanning for malware.
  • Making it harder for template macros to copy themselves out into uninfected files. Unfortunately, although this helped to kill off self-spreading macro viruses, it didn’t prevent macro malware in general. Criminals could still create their own booby-trapped files up front and send them individually to each potential victim, just as they do today, without relying on self-replication to spread further.
  • Popping up a ‘dangerous content’ warning so that macros couldn’t easily run by mistake. As useful as this feature is, because macros don’t run until you choose to allow them, crooks have learned how to defeat it. They typically add content to the document that helpfully “explains” which button to press, often providing a handy graphical arrow pointing at it, and giving a believable reason that disguises the security risk involved.
  • Adding Group Policy settings for stricter macro controls on company networks. For example, administrators can block macros altogether in Office files that came from outside the network, so that users can’t click to allow macros to run in files received via email or downloaded from the web, even if they want to.

At last, in February 2022, Microsoft announced, to sighs of collective relief from the cybersecurity community, that it was planning to turn on the “inhibit macros in documents that arrived from the internet” option by default, for everyone, all the time.

The security option that used to require Group Policy intervention was finally adopted as a default setting.

In other words, as a business you were still free to use the power of VBA to automate your internal handling of official documents, but you wouldn’t (unless you went out of your way to permit it) be exposed to potentially unknown, untrusted and unwanted macros that weren’t from an approved, internal source.

As we reported at the time, Microsoft described the change thus:

VBA macros obtained from the internet will now be blocked by default.

For macros in files obtained from the internet, users will no longer be able to enable content with a click of a button. A message bar will appear for users notifying them with a button to learn more. The default is more secure and is expected to keep more users safe including home users and information workers in managed organizations.

We were enthusiastic, though we thought that the change was somewhat half-hearted, noting that:

We’re delighted to see this change coming, but it’s nevertheless only a small security step for Office users, because: VBA will still be fully supported, and you will still be able to save documents from email or your browser and then open them locally; the changes won’t reach older versions of Office for months, or perhaps years, [given that] change dates for Office 2021 and earlier haven’t even been announced yet; mobile and Mac users won’t be getting this change; and not all Office components are included. Apparently, only Access, Excel, PowerPoint, Visio, and Word will be getting this new setting.

Well, it turns out not only that our enthusiasm was muted, but also that it was short-lived.

Last week, Microsoft unchanged the change, and unblocked the block, stating that:

Following user feedback, we have rolled back this change temporarily while we make some additional changes to enhance usability. This is a temporary change, and we are fully committed to making the default change for all users.

Regardless of the default setting, customers can block internet macros through the Group Policy settings described in the article Block macros from running in Office files from the Internet.

We will provide additional details on timeline in the upcoming weeks.

What to do?

In short, it seems that sufficiently many companies not only rely on receiving and using macros from potentially risky sources, but also aren’t yet willing to change that situation by adapting their corporate workflow.

  • If you were happy with this change, and want to carry on blocking macros from outside, use Group Policy to enable the setting regardless of the product defaults.
  • If you weren’t happy with it, why not use this respite to think about how you can change your business workflow to reduce the need to keep transferring unsigned macros to your users?

It’s an irony that a cybersecurity change that a cynic might have described as “too little, too late” turns out, in real life, to have been “too much, too soon.”

Let’s make sure that we’re collectively ready for modest cybersecurity changes of this sort in future…


Apache “Commons Configuration” patches Log4Shell-style bug – what you need to know

Remember the Log4Shell bug that showed up in Apache Log4j late in 2021?

Log4j is one of the Apache Software Foundation’s many software projects (more than 350 at current count), and it’s a programming library that Java coders can use to manage logfiles in their own products.

Logfiles are a vital part of development, debugging, record keeping, program monitoring, and, in many industry sectors, of regulatory compliance.

Unfortunately, not all text you logged – even if it was sent in by an external user, for example as a username in a login form – was treated literally.

If you gave your name as MYNAME, it would be logged just like that, as the text string MYNAME, but any text wrapped in ${...} characters was treated as a command for the logger to run, which could cause what’s known as RCE, short for remote code execution.
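To illustrate the class of bug (this is a toy Python sketch, not Log4j’s actual code – the lookup handlers and function names here are invented for the example), compare a logger that naively expands `${...}` sequences in whatever it logs with one that records user-supplied text literally:

```python
import re

# Invented lookup handlers standing in for the kind of "live" lookups
# a vulnerable logger might perform on text wrapped in ${...}.
lookups = {
    "env": lambda key: f"<value of environment variable {key}>",
    "url": lambda key: f"<contents fetched from {key}>",
}

def naive_log(message):
    # Dangerous: any ${prefix:key} in attacker-supplied text is expanded.
    def expand(match):
        prefix, key = match.group(1), match.group(2)
        handler = lookups.get(prefix)
        return handler(key) if handler else match.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", expand, message)

def safe_log(message):
    # Safe: user-supplied text is recorded exactly as it was received.
    return message

user_name = "${url:https://example.com/evil}"
print(naive_log(f"login attempt by {user_name}"))  # the ${...} gets expanded
print(safe_log(f"login attempt by {user_name}"))   # the text is logged literally
```

In the toy version, “expansion” merely substitutes a placeholder string; in Log4Shell, the equivalent expansion could reach out over the network and run code.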

Recently, we saw a similar sort of bug called Follina, which affected Microsoft Windows.

There, the troublesome characters were $(...), with round brackets replacing squiggly ones, but with the same sort of side-effect.

In the Follina bug, a URL that contained a directory name with the string SOMETEXT in it would be treated just as it was written, but any text wrapped in $(...) would be run as a PowerShell command, once again causing a risk of remote code execution.

More trouble with brackets

Well, the bug CVE-2022-33980, which doesn’t have a catchy name yet, is a very similar sort of blunder in the Apache Commons Configuration toolkit.

The name’s quite a mouthful: Apache Commons is another Apache project that provides numerous Java utilities (sub-projects, if you like) covering a wide range of handy programming toolkits.

One of these is Commons Configuration, which lets Java apps work with configuration files of a wide range of different formats, including XML, INI, plist, and many more.

As the project itself says, “the Commons Configuration software library provides a generic configuration interface which enables a Java application to read configuration data from a variety of sources.”

Unfortunately, this software treats text wrapped in ${...} specially, too.

Instead of using the text literally, the following special “reprocessing” takes place, referred to rather confusingly in the jargon as interpolation:

  • ${script:STRING} runs STRING as a script via the Java scripting engine and uses the output of that code.
  • ${dns:STRING} looks up STRING using DNS.
  • ${url:STRING} reads the URL STRING and retrieves the text to use from there.

In other words, booby-trapped configuration data could, in theory, be used to run malicious code, leak data via DNS lookups, or fetch configuration settings from a rogue website.
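The general pattern – dispatching `${prefix:value}` to a table of lookup handlers – can be sketched in a few lines of Python (invented names, not the Commons Configuration API). The fix that the patched library applies is conceptually the same as the one shown here: the risky handlers simply aren’t in the table by default.

```python
import re

def interpolate(text, handlers):
    """Expand ${prefix:value} sequences using only the handlers supplied."""
    def resolve(match):
        prefix, value = match.group(1), match.group(2)
        handler = handlers.get(prefix)
        # Unknown or disabled prefixes are left untouched rather than resolved.
        return handler(value) if handler else match.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", resolve, text)

# Only a harmless lookup is enabled here; script/dns/url-style handlers,
# the dangerous ones, are deliberately absent from the table.
safe_handlers = {"const": lambda v: {"pi": "3.14159"}.get(v, "")}

print(interpolate("value = ${const:pi}", safe_handlers))     # expanded
print(interpolate("run = ${script:evil()}", safe_handlers))  # left as-is
```

The design point: interpolation itself isn’t the bug – interpolating untrusted input with powerful handlers enabled by default is.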

What to do?

According to the Commons Configuration team, this “interpolation” bug was introduced in version 2.4 (released in late 2018) and patched in version 2.8.0 (released 2022-07-05, which is Tuesday this week).

All updates going back to version 2.2 in 2017 are listed as “minor releases”, so we’re assuming that updating from any of the vulnerable versions 2.4, 2.5, 2.6 or 2.7 to the latest version ought to be uncontroversial.

So, if you have any Java software that uses the Apache Commons Configuration library, update as soon as you can!

Oh, and if you’re a programmer…

…whether you call it “command substitution”, “live rewriting”, “reprocessing” or “interpolation”, use it sparingly, and don’t turn it on by default for data you haven’t already verified that you can trust.

Am I vulnerable?

A quick way to look for the presence of a possibly-vulnerable Commons Configuration library on a computer is to search for filenames of the form commons-configuration2-*.jar, where * is a wildcard denoting “any text allowed here”.

On Linux/Unix, try:

$ find / -type f -name 'commons-configuration2-*.jar'

On Windows, try:

> DIR C:\commons-configuration2-*.jar /S

Vulnerable versions have the names:

commons-configuration2-2.4.jar
commons-configuration2-2.5.jar
commons-configuration2-2.6.jar
commons-configuration2-2.7.jar

Earlier or later versions don’t have the bug.

The latest, patched version is:

commons-configuration2-2.8.0.jar

If you find files with names like this:

commons-configuration-1.9.jar

…those are the old (version 1) flavour of the library, which doesn’t have this bug.
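If you’d rather script the search, here’s a cross-platform sketch in Python that walks a directory tree and flags JAR filenames in the vulnerable 2.4–2.7 range. (The starting directory is an example – point it at your own filesystem roots; the optional patch-number part of the pattern is a defensive assumption.)

```python
import os
import re

# Matches commons-configuration2-2.4.jar through commons-configuration2-2.7.jar,
# optionally with a patch number (e.g. 2.7.1), just in case.
VULNERABLE = re.compile(r"commons-configuration2-2\.([4-7])(\.\d+)?\.jar$")

def find_vulnerable_jars(root):
    """Return paths of possibly-vulnerable Commons Configuration JARs under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if VULNERABLE.match(name):
                hits.append(os.path.join(dirpath, name))
    return hits

# Scan from the current directory; use "/" (or a drive root on Windows)
# for a full sweep, as in the find/DIR commands above.
for path in find_vulnerable_jars("."):
    print("possibly vulnerable:", path)
```

Remember that this only finds JARs sitting loose on disk – a vulnerable copy bundled inside an application’s own archive won’t show up this way.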


Note. The name Apache refers to the entire Software Foundation and all its projects. It’s not the name of a webserver, in the same way that Microsoft isn’t the name of an operating system. So, even though websites based on the Apache Webserver, also known as httpd, are often referred to as “running Apache”, this bug doesn’t apply to the web server, which is written in C, not in Java.


S3 Ep90: Chrome 0-day again, True Cybercrime, and a 2FA bypass [Podcast + Transcript]

LISTEN NOW


With Paul Ducklin and Chester Wisniewski.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DUCK.  Chrome! Cybercrime! A missing cryptoqueen! Capturing 2FA tokens!

And the Curious Case Of Chester’s New Chums.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Hello everybody.

Once again, it’s Duck in the chair, because Doug is on vacation.

I am joined by my friend and colleague Chester Wisniewski…

Czesław, a very good day to you.


CHET.  Good day to you, Duck.

It’s good to be filling in for Doug again.

I like it when he takes vacation – we get to have an interesting chat planning out the podcast, and since it’s a little slow in the summertime, I’ve got the spare time and it’s really good to be back.


DUCK.  Well, sadly, it’s not slow on the 0-day front.

Once again, we’ve just had the latest Chrome update.

Google has put out three essentially separate security bulletins: one for Android; one for Windows and Mac; and one for Windows and Mac, but on the previous version, the “extended stable channel”.

No mention of Linux, but they all share one common bug, which is “CVE-2022-2294: Buffer overflow in WebRTC.”

Known to have been exploited in the wild, meaning the crooks got there first.

So, tell us more, Chester.


CHET.  Well, I can confirm, at least on the Linux side, that they did do a release.

I don’t know what’s in that release, but the version number at least matches the version number that we expect to see on Windows and Mac, which is 103.0.5060.114.

At any rate, on my Arch Linux running Chromium, that is the build number, and it matches the production Chrome release for Windows on my computer next to it.

So, at least we have version parity. We don’t know if we have bug parity.


DUCK.  Yes, and annoyingly, the Android version, which supposedly has the same patches that were mentioned in the others, is basically the same version number except that it ends in dot-71.

And of course, the 102 version… that’s completely different because it’s a completely different set of four numbers.

The only thing common to all of them is the zero in second position.

So it is quite confusing.


CHET.  Yes, considering it was discovered having been used in the wild, which means somebody beat Google to the punch.

And this particular functionality is especially important to Google, as they’re promoting their Google Meet platform, which is their primary version of… I’ve heard people refer to it as “Google Zoom”.

Google’s M-E-E-T platform, not the type of meat that you might have with dinner.


DUCK.  My mind was boggling for a little bit there!

To clarify, I found myself drifting towards Google Hangouts, which is apparently closing soon, and of course the late and, I think, unlamented Google Plus.


CHET.  Well, if you want to fully go down the Google Messaging platform rabbit hole of how many things they’ve invented and uninvented and merged and cancelled and then reinvented again, there’s a great article on Vox.com that you can read on that!

WebRTC… in essence, that’s the protocol that allows you to stream your webcam into platforms like Google Meet, and stream your microphone.

And I think it’s probably under more use than ever since the pandemic began.

Because many services may offer a fat client for enhanced screen sharing and these types of things, but also offer a web-only version, so you can access things like Zoom or Citrix and so on, often just through your browser.

So, I think this functionality is something that is very complex, which could lead to these types of vulnerabilities, and it also is under a lot of use these days.

I consider this one kind of the most important of the three bugs that you call out in the Naked Security story.


DUCK.  Yes, there’s CVE-2022-2294, -2295 and -2296.

They’re all bugs that you’d kind of hope we were done and dusted with many years ago, aren’t they?

A buffer overflow, a type confusion, and a use after free – so they’ve all basically got to do with memory mismanagement.


CHET.  And I thought Google was telling the world that all problems were solved by Go and Rust, and this suggests very little Go and Rust here.


DUCK.  Even with a very careful language that encourages correct programming, the specs can let you down, can’t they?

In other words, if you implement something correctly, but where the specifications aren’t quite right, or leave a loophole, or put files in the wrong place, or treat data in an improper way, you can still have bugs of low, medium or high severity, even with the greatest memory safety enforcement in the world.

So, luckily, there’s a simple solution, isn’t there?

For most people, Chrome will almost certainly have updated automatically.

But even if you think that’s happened, it is worthwhile – at least on Windows and Mac – to go to More > Help > About Google Chrome > Update Google Chrome, and either it will say, “You don’t need to, you’ve got the latest one,” or it’ll go, “Whoa, I haven’t done it yet. Would you like to jump ahead?”

And of course, you would!

In Linux, as you found, your distro provided the update, so that will be, I imagine, the route for most Linux users who have Chrome.

So, it’s not perhaps as bad as it sounds, but it’s something that, as we always say, “Do not delay, do it today.”

Onto the next…

Well, there are two stories, not one, but they’re both related to law enforcement busts.

One is a cybercriminal who pleaded guilty in the US, and the other is someone that the US would dearly love to get their hands on, but is missing somewhere, and has now joined the FBI’s Ten Most Wanted criminals worldwide – the only woman in the Top Ten.

Let’s start with her – that’s Dr Ruja Ignatova of Bulgaria, the “Missing Cryptoqueen”.

Now, that’s a story of a lifetime, isn’t it?


CHET.  Yes, it’s one of the things the cryptoworld seems to be introducing us to – it’s a little more inclusive of women.

There’s a lot of women also involved in the thieving and grafting, along with all the typical men that are involved in so many of the other stories that we cover.

Unfortunately, in this case, she allegedly created a new Bitcoin-like currency known as OneCoin, and allegedly convinced people to give her US$4 billion-with-a-B to invest in the nonexistent cryptocurrency, from everything I can read into this.


DUCK.  $4 billion… that’s what the FBI seems to think it can prove.

Other reports I’ve seen suggest that the real total may well be very much higher than that.


CHET.  It does sort of make spending $6 million on the picture of a smoking ape seem almost downright sensible…


DUCK.  Rather took me off my stride there. [LAUGHTER]


CHET.  There’s a lot of FOMO, or Fear Of Missing Out.


DUCK.  Absolutely.


CHET.  And I think this entire crime is driven by that FOMO: “Oh, I didn’t get in on Bitcoin when you could buy a pizza for a Bitcoin. So I want to get on the next big thing. I want to be an early investor in Tesla, Uber, Apple.”

I think people perceive these cryptocurrencies to somehow actually have an air of legitimacy that might parallel these real company success stories, as opposed to being a pipe dream, which is exactly what it is.


DUCK.  Yes, and like many pipes… up in smoke, Chester.

I think the thing with cryptocurrencies is when people look at the Bitcoin story, there was actually an extended period where it wasn’t as though bitcoin was “only worth $10”.

It was that bitcoin was essentially so valueless that, apparently, in 2010, a guy – intriguingly called SmokeTooMuch – tried to make the first essentially public sale of Bitcoin, and he had 10,000 of them.

I guess he just mined them, as you did back then, and said, “I want $50 for them.”

So, he’s valuing them at one half of a US cent each… and nobody was willing to pay that much.

Then Bitcoin went to $10, and then at one point, they were, what, $60,000 plus.

So, I guess there’s this idea that if you get in *even before* it’s like Apple shares… if you get in in the early days when it doesn’t really exist yet, then that’s like getting in not just early in Bitcoin, but *right at the very beginning*.

And then you don’t just make 10x your money or 100x your money… you make 1,000,000x your money.

And I think that, as you say, is the dream that many people are looking at.

And that means, I suspect, that it makes them more willing to invest in things that don’t exist… ironically, precisely because they don’t yet exist, so they really are getting in on the ground floor.

You still only get $100,000 in reward, apparently, for information leading to Ruja Ignatova’s conviction.

But she’s certainly up there: Top Ten Wanted!


CHET.  I promise, if I find out where she’s at, and I get the $100,000 reward, I will not gamble it on cryptocurrencies.

I can assure you of that.


DUCK.  So, Chester, now let us move on to the other law-and-order part of the podcast.

I know this is something that you specifically said you wanted to talk about, and not just because it includes the word “Desjardins”, which we spoke about last time.

This is Mr. Vachon-Desjardins, and we have spoken about him, or you have spoken about him, on the podcast before.

So tell us this story – it’s a fascinating and rather destructive one.


CHET.  Yes. I found it quite coincidental that you invited me on this week, when just randomly a couple of years ago, you also happened to invite me on in the week that I believe he was extradited.


DUCK.  No, that was this March this year when we last spoke about it!


CHET.  Was it?


DUCK.  Yes, I think when he had actually just landed in Florida…


CHET.  Yes! He had just been extradited, exactly!

He had been sent to the United States for prosecution, which is a quite common thing that we do here in Canada.

The US often has stricter laws in many cases, but more than that, the FBI [US federal law enforcement] does a really good job at getting the information together to prosecute these cases.

Not saying that the RCMP [Canadian federal law enforcement] is not capable of that, but the FBI is a little more experienced, so I think they often feel that the US will have a better crack at putting them behind bars.


DUCK.  Having said that, the RCMP had prosecuted him in Canada, and he had a close to seven year jail sentence.

And as you said last time, “We’ve let him out of prison temporarily. We’ve lent him to the Americans. And if he goes to prison there, when he finishes up his time, then he’ll come back and we will put him back in prison for the remainder of his seven years.”

It looks like he will be out of circulation for a while.


CHET.  Yes, I suspect so.

Although, in these types of non-violent crimes, when you’re cooperating with the authorities, they often will reduce sentences or let you out on parole early, that kind of stuff.

We’ll see what happens.

In fact, in his plea agreement, when he pled guilty in Florida, my understanding is it was noted that he was going to be cooperating with authorities on pretty much everything and anything he had access to that they desired… basically helping them build their case.

When we’re talking about these ransomware groups, I find this case particularly interesting because he’s Canadian and I’m in Canada.

But more than that, I think we have this perception that these crimes are committed by criminals in Russia, and they’re far away and they can never be touched, so there’s no point reporting these crimes because we can’t find these people – they’re too good at hiding; they’re on the dark web.

And the truth of the matter is some of them are in your backyard. Some of them are your neighbours. They’re in every country in the world.

Crime knows no boundaries… people are greedy everywhere, and are willing to commit these crimes.

And they’re well worth pursuing when we can pursue them, just as we ought to.


DUCK.  Absolutely.

In fact, if you don’t mind, I’ll read from the plea agreement, because I agree with you: the FBI does a fantastic job not just of doing these investigations, but of putting the information together – even in something which is a conspicuously formal legal document – in the kind of plain English that makes it easy for a court, for a judge, for a jury, and for anybody who wants to understand the ugly side of ransomware and how it works to learn a lot more.

These are very readable documents, even if you’re not interested in the legal side of the case.

And this is what they say:

“NetWalker operated as a Ransomware-as-a-Service system featuring Russia-based developers, and affiliates who resided all over the world. Under the Ransomware-as-a-Service model, developers were responsible for creating and updating the ransomware and making it available to the affiliates. The affiliates were responsible for identifying and attacking high-value victims with the ransomware. After a victim paid, developers and affiliates split the ransom. Sebastian Vachon-Desjardins was one of the most prolific NetWalker ransomware affiliates.”

That’s a fantastic summary of the whole ransomware-as-a-service model, isn’t it, with a practical example of somebody far away from Russia who is actually very active in making the whole system work.


CHET.  Absolutely.

He accounted for, I believe, more than 50% of the alleged money pocketed by the NetWalker gang.

When he was captured, he had a little over $20 million in cryptocurrencies from these ransoms… and I thought I read that the total amount of ransom believed to be collected by NetWalker was somewhere in the $40 million to $50 million range.

So it’s a significant amount of the profit – he was maybe the prime affiliate.


DUCK.  It’s clear, as you say, that he is facing a world of trouble…

…but that he is very definitely expected to rat out his former chums.

And maybe that will be a good thing?

Maybe they’ll be able to close down more examples of this kind of criminality, or more people involved in this prolific group.


CHET.  Maybe we should conclude this with a few more succinct words directly from the agreement, because I think that it really wraps this up well:

“The defendant is pleading guilty because he is, in fact, guilty.”

[LAUGHS]

So that’s a pretty clear statement that he’s not using any weasel words, that he’s taking full responsibility for what he did, which I think is really important for the victims to hear.

And additionally, they say:

“The defendant agrees to co-operate fully with the United States in the investigation and prosecution of other persons, including a full and complete disclosure of all relevant information, including production of any and all books, papers, documents and other objects in defendant’s possession or control.”

And I’m sure “other objects” might include things like cryptocurrency wallets, and chat forums, and things where the planning for all these dirty deeds was conducted.


DUCK.  Yes, and then the good news is that it was due to the seizure of a server, I believe, that they were able to work backwards towards him, amongst other people.

Let’s move on to the last part of the podcast that relates to a story you can also read on Naked Security…

That is about 2FA phishing of Facebook, something I was minded to write up because I myself received this scam.

When I went to investigate it, I thought, “That is one of the more believable fake websites I’ve ever seen.”

There was one spelling mistake, but I had to go looking for it; the workflow is quite believable; there are no obvious mistakes except the wrong domain name.

And when I looked at the time I got the email, wherever I was on the list of recipients – maybe not at the top, maybe in the middle, maybe at the bottom, who knows? – it was only 28 minutes after the crooks had originally registered the fake domain that they were using in that scam.

So, they are not asleep – everything happens at lightning speed these days.


CHET.  Exactly.

I’ve got a warning before I go into this, which is that we in no way want to suggest to people that they shouldn’t use multifactor authentication.

But this does remind me… I was cheating on you with another podcast this morning, and while I was on that other podcast, the topic of multifactor came up.

And one of the challenges we have with multifactor that just consists of “secret number codes”, is that the criminals can act as a sort of proxy-in-the-middle, where they can just ask you nicely for the string of numbers, and if you are tricked into giving it to them, it doesn’t really provide any extra layer of protection.

There is a distinct difference between using some sort of a security key, like a Titan key from Google or a Yubikey, or FIDO authentication using things like an Android smartphone…

There’s a difference between that, and something that displays six digits on the screen and says, “Give these to the website.”

The six digits on the screen is a major improvement over just using a password, but you still need to remain vigilant for these types of threats.


DUCK.  If the crooks have already lured you to the point where you’re willing to type in your username and your password, then you are going to expect that two-factor authentication code to arrive in an SMS; you’re going to expect to be consulting your app and to be retyping the code, aren’t you?

I’m not saying to people, “Stop using it,” because it definitely makes things harder for the crooks.

But it’s not a panacea – and, even more importantly, if you’ve got the second factor of authentication, it doesn’t mean you can get all casual with the first one.

The idea is it’s meant to take something that you’ve made as strong as you possibly can, e.g. by using a good password generated by a password manager, and then you add something that also has strength to it.

Otherwise you have half-FA plus half-FA equals 1FA all over again, don’t you?


CHET.  Yes, absolutely.

And there are two things to combat this type of an attack, and one is certainly Using That Password Manager.

The idea there, of course, is the password manager is validating that the page asking you for the password *is actually the one that you originally stored it for*.

So that’s your first warning sign: when it doesn’t offer up your Facebook password because the site is not in fact facebook.com, and you need to go searching through your password manager to find the Facebook password, that should be ringing alarm bells that something is wrong.

So, it’s kind of your first chance here.

And then if you, like myself, use a FIDO token wherever it’s supported (also known as U2F, or Universal Second Factor), that also verifies that the site asking you is in fact the site that you originally set up that authentication with.

Many sites, especially large sites that are heavily phished, like Gmail and Twitter, do support these little USB tokens that you can carry on your keyring, or Bluetooth tokens that you can use with your mobile phone if you happen to use the brand of mobile phone that doesn’t like you plugging tokens into it.

Those are an extra layer of security that are better than those six digits.

So, use the best thing you have available to you.

But when you get a hint such as, “That’s weird, my password manager is not auto-filling my Facebook password”… that’s your big flashing warning sign that something about this is not what it looks like.


DUCK.  Absolutely, because your password manager is not trying to be an artificially intelligent, sentient, “Hey, I can recognise that beautiful background photo I’ve seen so many times on the website.”

It doesn’t get fooled by appearance; it just says, “Am I being asked to put in a password for a website that I already know about?”

If not, then it can’t even try and help you, and like you say, that’s a perfect warning.

But it was the speed of this that interested me.

I know that everything happens super-quickly these days, but it was 28 minutes after the domain first went live that I received the email.


CHET.  Yes, this is another indicator that we do use in SophosLabs when we’re analysing things: “Oh, that’s weird, this domain didn’t exist an hour ago. How likely is it that it’ll show up in an email within an hour of creation?”

Because even on the best of days that I bought a new domain name, I didn’t get around to even configuring my mail server with an MX record for at least an hour. [LAUGHS]


DUCK.  Chester, let’s finish up with what I announced at the start as “The Curious Case Of Chester’s New Chums”. This is an intriguing sort of scammers-meet-Chester story that’s been happening to you in just the last 24 hours, isn’t it?


CHET.  Yes…

I have a certain type of follower, let’s say, and I can usually spot people following me that are bots quite easily… you have to be in a particular nerdy frame of mind to be interested in the things that I post on my Twitter account.

And anybody that is in that frame of mind, and wants to know what I’m thinking about can follow me on Twitter (@chetwisniewski).

But I block things that look suspicious to me, because I’ve been around the block a few times and know how information is often scraped by bots to lure people in with legitimate-sounding things.

When I see something suspicious, I block it.

Unfortunately, an acquaintance of mine was at the tragedy in the United States yesterday, on July Fourth, where there was a shooting, and he posted a tweet about how he fled with his daughters to safety.

Fortunately, he and his family are okay, but it was a very traumatic and emotional event for them, and, as a result, his tweet sort of had a moment, right?

Tens of thousands of retweets; hundreds of thousands of likes… and he’s not normally a celebrity kind of person that gets that kind of attention on Twitter.

And I responded with concern for his safety myself, from my Twitter account, and I didn’t put two-and-two together until we were planning this podcast…

Suddenly, I started getting very random likes on an old tweet that had no relevance to any situation that’s current.

I posted something about meeting people in San Francisco at the RSA conference.

Of course, that event was more than a month ago and is long over now, so that tweet is, in fact, completely uninteresting, even to the people it might once have been temporarily interesting to, who wanted to meet up with me at RSA… and yet it started getting all these likes.


DUCK.  Even to people who *did* meet up with you at RSA exactly. [LAUGHTER]


CHET.  Which wasn’t very many people, because after I got there and saw the COVID nightmare that was going on, I kind of thought better of meeting too many people at RSA.

But that tweet started getting random likes, and I started looking at the profiles of these people who are liking the tweet, and they were not my people… these are not people who would normally follow me.

One was professing how much love he had for different Nigerian soccer players, and another one was purporting to be a woman from New York City who was into the fashion scene and models and all this kind of stuff…


DUCK.  Right up your street, Chester! [LAUGHTER]


CHET.  Yes. [LAUGHS]

And when I looked at who these accounts were following, they followed a very random set of people that were not thematic.

Most of the people who follow me follow me because of security things I tweet about; they often follow lots of other IT people.

I’ll see that they follow different “IT celebrity” kind of people, or they follow a lot of tech companies… those are signs to me that they’re legitimate followers.

But these accounts: when I looked at them, it was like a scattershot of random people they were following.

There was no rhyme or reason to any of it, which is unlike most of us.

Most of us are into our favorite sporting teams, or whatever hobbies we have, and there’s always a theme running through the people we follow that you can spot very easily.


DUCK.  Yes – when you get to the “16th degree of separation”, chasing someone down the Twitter rabbit hole, it’s a pretty good bet that they don’t really move in your circles in any way whatsoever!


CHET.  Yes.

And what’s bizarre about this is that I’m not really sure what they’re doing, other than latching onto this horrible tragedy and trying to build some sort of reputation.

And my only guess was that perhaps they’re trying to get other people to follow back because they liked their tweet, or perhaps at least to like something that they’ve posted, to try to give themselves a sort of social media boost.

It’s just deplorable that people latch onto these tragedies to try to create anything other than some empathy and sympathy for the people involved.

Giving these accounts what they want may seem innocent enough… I know a lot of people that are like, “Oh, I always follow back.”

It’s quite dangerous to do this.

You’re building up reputations that make things look legitimate, that allow for the continued spread of disinformation and threats and scams.

That little like or that follow back actually matters in a very bad way.


DUCK.  I agree!

Chester, thank you so much for sharing that story about what happened to you on Twitter, and in particular – just like the Facebook 2FA Scam in 28 Minutes story – the speed with which it happened.

Presumably crooks are just trying to milk a tiny bit of sympathy from people who feel it’s maybe a time for being a bit more loving than usual… without thinking about what the long-term effects of essentially blessing somebody who doesn’t deserve it could be.

Thank you so much for stepping up for the whole podcast at short notice.

Thanks to everybody who listened.

And as usual, until next time…


BOTH.  Stay secure!

[MUSICAL MODEM]


OpenSSL fixes two “one-liner” crypto bugs – what you need to know

Just over a week ago, the newswires were abuzz with news of a potentially serious bug in the widely-used cryptographic library OpenSSL.

Some headlines went as far as describing the bug as a possibly “worse-than-Heartbleed flaw”, which was dramatic language indeed.

Heartbleed, as you may remember, was an incredibly high-profile data leakage bug that lurked unnoticed in OpenSSL for several years before being outed in a flurry of publicity back in 2014:

In fact, Heartbleed can probably be considered a prime early example of what we at Naked Security jokingly refer to as the BWAIN process, short for Bug With An Impressive Name.

That happens when the finders of a bug aim to maximise their media coverage by coming up with a PR-friendly name, a logo, a dedicated website, and even, in one memorable case, a theme tune.

Heartbleed was a bug that exposed very many public-facing websites to malicious traffic that said, greatly simplified: “Hey! Tell me you’re still there by sending back this message: ROGER. By the way, send the text back in a memory buffer that’s 64,000 bytes long.”

Unpatched servers would dutifully reply with something like: ROGER [followed by 64,000 minus 5 bytes of whatever just happened to follow in memory, perhaps including other people's web requests or even passwords and private keys].

As you can imagine, once news of Heartbleed got out, the bug was easily, quickly and widely abused by criminals and show-off “researchers” alike.

Bad, but not as bad as that

We don’t think these latest bugs reach that level of exploitability or immediate danger…

…but they’re certainly worth patching as soon as you can.

Intriguingly, both bugs fixed in this release are what we referred to in the headline as “one-liners”, meaning that changing or adding just a single line of code patched each of the holes.

In fact, as we’ll see, one of the patches involves changing a single assembler instruction, ultimately resulting in just a single changed bit in the compiled code.

The bugs are as follows:

  • CVE-2022-2274: Memory overflow in RSA modular exponentiation. Fortunately, this bug only exists for computers that support Intel’s special AVX512 instruction set, in OpenSSL builds that include special-purpose code for these chips. The programmer was supposed to copy N unsigned long integers (typically 32 or 64 bits each), but inadvertently copied N bits instead. The fix was to divide the total number of bits by the bit-size of each unsigned long, to compute the correct amount of data to copy.
  • CVE-2022-2097: Data leakage in AES-OCB encryption. When using Intel’s special AES acceleration instructions (widely present on most recent Intel processors), the programmer was supposed to encrypt N blocks of data by running a loop from 1 to N, but inadvertently ran it from 1 to N-1 instead. This means that the last cryptographic block (16 bytes) of an encrypted data buffer could come out with the last block of data still being the original plaintext.

The fixes are simple once you know what’s needed:


The modular exponentiation code now converts a count of bits to a count of integers, by dividing the bit-count by the number of bytes in an integer multiplied by 8 (the number of bits in a byte).


The AES-OCB encryption code now uses a JBE (jump if below or equal to) test at the end of its loop instead of JB (jump if below), which is the same sort of change as altering a C loop to say for (i = 1; i <= n; i++) {...} instead of for (i = 1; i < n; i++) {...}.

In the compiled code, this changes just a single bit of a single byte, namely by switching the binary opcode value 0111 0010 (jump if below) for 0111 0110 (jump if below or equal).


Fortunately, we’re not aware of the special encryption mode AES-OCB being widely used (its modern equivalent is AES-GCM, if you’re familiar with the many AES encryption flavours).

Notably, as the OpenSSL team points out, “OpenSSL does not support OCB based cipher suites for TLS and DTLS,” so the network security of SSL/TLS connections is unaffected by this bug.

What to do?

OpenSSL version 3.0 is affected by both of these bugs, and gets an update from 3.0.4 to 3.0.5.

OpenSSL version 1.1.1 is affected by the AES-OCB plaintext leakage bug, and gets an update from 1.1.1p to 1.1.1q.

Of the two bugs, the modular exponentiation bug is the more severe.

That’s because the buffer overflow means, in theory, that something as fundamental as checking a website’s TLS certificate before accepting a connection could be enough to trigger remote code execution (RCE).

If you are using OpenSSL 3 and you genuinely can’t upgrade your source code, but you can recompile the source you’re already using, then one possible workaround is to rebuild your current OpenSSL using the no-asm configuration setting.

Note that this isn’t recommended by the OpenSSL team, because it removes almost all assembler-accelerated functions from the compiled code, which may therefore end up noticeably slower, but it will eliminate the unwanted AVX512 instructions entirely.

To suppress the offending AES-OCB code alone, you can recompile with the configuration setting no-ocb, which ought to be a harmless intervention if you aren’t knowingly using OCB mode in your own software.
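For reference, a rebuild with those settings might look something like this (a sketch only – adjust targets and install paths for your own platform; `./config` auto-detects a build target):

```shell
# Workaround 1 (not recommended by the OpenSSL team): strip the
# assembler code, including the AVX512 paths, at the cost of speed.
./config no-asm
make && make test

# Workaround 2 (narrower): omit only the OCB mode code.
./config no-ocb
make && make test
```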

But the best solution is, as always: Patch early, patch often!

