
When cops hack back: Dutch police fleece DEADBOLT criminals (legally!)

Sadly, we’ve needed to cover the DEADBOLT ransomware several times before on Naked Security.

For almost two years already, this niche player in the ransomware cybercrime scene has been preying mainly on home users and small businesses in a very different way from most contemporary ransomware attacks.

If you were involved in cybersecurity about ten years ago, when ransomware first started to become a massive money-spinner for the cyberunderworld, you will remember with no fondness at all the “big name brands” of ransomware back then: CryptoLocker, Locky, TeslaCrypt, and many more.

Typically, the early players in the crime of ransomware relied on demanding just-about-affordable-if-you-skipped-going-to-the-pub-for-a-month-or-three blackmail payments from as many individuals as they could.

Unlike today’s major-league ransomware crooks, whom you could summarise as “aim to extort companies for millions of dollars hundreds of times”, the early players went down a more consumer-minded route of “blackmail millions of people for $300 each” (or $600, or $1000 – the amounts varied).

The idea was simple: by scrambling your files right there on your own laptop, the crooks didn’t need to worry about internet upload bandwidth and trying to steal all your files so they could sell them back to you later.

They could leave all your files sitting in front of you, apparently in plain sight, yet totally unusable.

If you tried to open a scrambled document with your word processor, for instance, you’d either see useless pages full of digital shredded cabbage, or a popup message apologising that the app didn’t recognise the file type, and couldn’t open it at all.

Computer works, data doesn’t

Usually, the crooks would go out of their way to leave your operating system and your apps intact, focusing on your data instead.

They didn’t actually want your computer to stop working completely, for several important reasons.

Firstly, they wanted you to see and feel the pain of how near but yet so far away your precious files were: your wedding photos, baby videos, tax returns, university course work, accounts receivable, accounts payable, and all the other digital data you’d been meaning to back up for months but hadn’t quite got round to yet.

Secondly, they wanted you to see the blackmail note they’d left IN HUGE LETTERS WITH DRAMATIC IMAGERY, installed as your desktop wallpaper so you couldn’t miss it, complete with instructions on how to acquire the cryptocoins you’d need to buy back the decryption key to unscramble your data.

Thirdly, they wanted to make sure you could still get online in your browser, first to conduct a futile search for “how to recover from XYZ ransomware without paying”, and then, as despondency and desperation set in, to get hold of a buddy you knew could help you with the cryptocurrency part of the rescue operation.

Unfortunately, the early players in this odious criminal plot, notably the CryptoLocker gang, turned out to be fairly reliable at replying quickly and accurately to victims who paid up, earning a sort of “honour amongst thieves” reputation.

This seemed to convince new victims that, for all that paying up burned a giant hole in their finances for the near future, and that it was a bit like doing a deal with the devil, it would very likely get their data back.

Modern ransomware attacks, in contrast, typically aim to put all the computers in entire companies (or schools, or hospitals, or municipalities, or charities) on the spot at the same time. But creating decryption tools that work reliably across a whole network is a surprisingly difficult software engineering task. In fact, getting your data back by relying on the crooks is a risky business.

In the 2021 Sophos Ransomware Survey, 1/2 of victims who paid up lost at least 1/3 of their data, and 4% of them got back nothing at all. In 2022, we found that the halfway point was even worse, with 1/2 of those who paid up losing 40% or more of their data, and only 4% of them getting all their data back.

In the infamous Colonial Pipeline ransomware attack, the company said it wasn’t going to pay up, then notoriously forked over $4,400,000 anyway, only to find that the decryption tool the criminals provided was too slow to be any use. So they ended up with all the recovery costs they would have had if they hadn’t paid the crooks, plus a $4.4m outgoing that was as good as flushed down the drain.

(Amazingly, and apparently due to poor operational cybersecurity by the criminals, the FBI ultimately recovered about 85% of the bitcoins paid out by Colonial. Don’t rely on that sort of result, however: such large-scale clawbacks are a rare exception, not the rule.)

A lucrative niche

The DEADBOLT crooks, it seems, have found a lucrative niche of their own, whereby they don’t need to break into your network and work their way onto all the computers on it, and they don’t even need to worry about sneaking malware onto your laptop, or any of the regular computers in your household, office, or both.

Instead, they use global network scans to identify unpatched NAS devices (network attached storage), typically those from major vendor QNAP, and directly scramble everything on your file server device, without touching anything else on your network.

The idea is that if you’re using your NAS as most people do at home or in a small business – for backups, and as primary storage for large files such as music, videos and images – then losing access to everything on your NAS is likely to be at least as catastrophic as losing all the files on all your laptop and desktop computers, or perhaps even worse.

Because you probably leave your NAS device turned on all the time, the crooks can break in whenever they like, including when you’re most likely to be asleep; they only need to attack one device; they don’t need to worry whether you’re using Windows or Mac computers…

…and by exploiting an unpatched bug in the device itself, they don’t need to trick you or anyone else in your network into downloading a suspicious file or clicking through to a dubious website to get their initial foothold.

The crooks don’t even need to worry about getting a message to you via email or your desktop wallpaper: they deviously rewrite the login page in your NAS device’s web interface, so as soon as you next try to log in, perhaps to find out why all your files are messed up, you get a faceful of blackmail demand.

Even more sneakily, the DEADBOLT crooks have figured out a way to deal with you that avoids any email correspondence (possibly traceable), requires no dark web servers (potentially complicated), and sidesteps any negotiation: it’s their way, or the data highway.

Simply put, each victim gets presented with a one-off Bitcoin address to which they are told to send BTC 0.03 (currently [2022-10-21] just under $600).

The transaction itself acts both as a message (“I have decided to pay up”), and as the payment itself (“and here are the funds”).

The crooks then send you $0 in return – a transaction that has no financial purpose, but that contains a 32-character comment. (Bitcoin transactions can contain additional data in a field known as OP_RETURN that doesn’t transfer any funds, but can be used to include comments or notes.)

Those 32 characters are hexadecimal digits that represent a 16-byte AES decryption key that’s unique to your scrambled NAS device.

You paste the hexadecimal code from the BTC transaction into the ransomware “login page”, and the process fires up a decryption program left behind by the crooks that unscrambles (you hope!) all your data.
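
Just to give a feel for how little data is involved, here’s a minimal Java sketch (our own illustration – the hex string below is a made-up placeholder, not a real DEADBOLT key) of how 32 hexadecimal characters of the sort that arrive in an OP_RETURN comment decode into a 16-byte AES key. (The HexFormat class needs Java 17 or later.)

import javax.crypto.spec.SecretKeySpec;
import java.util.HexFormat;

public class OpReturnKeyDemo {
    public static void main(String[] args) {
        // Made-up placeholder standing in for the 32 hex characters
        // in the OP_RETURN comment (NOT a real key!)
        String opReturnComment = "00112233445566778899aabbccddeeff";

        // 32 hexadecimal digits decode to exactly 16 bytes...
        byte[] keyBytes = HexFormat.of().parseHex(opReturnComment);

        // ...which is precisely the size of an AES-128 key
        SecretKeySpec aesKey = new SecretKeySpec(keyBytes, "AES");

        System.out.println("Decoded a " + keyBytes.length
            + "-byte " + aesKey.getAlgorithm() + " key");
    }
}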

Call the police!

But here’s a fascinating twist to this tale.

The Dutch police, working together with a company with cryptocurrency expertise, came up with a sneaky trick of their own to counteract the DEADBOLT criminals’ sneakiness.

They noticed that if a victim sent a Bitcoin payment to buy back the decryption key, the crooks apparently replied with the decryption key as soon as the BTC payment transaction hit the Bitcoin network in search of someone to “mine” it…

…rather than waiting until anyone in the Bitcoin ecosystem reported that they had actually mined the transaction and thus confirmed it for the first time.

In other words, to use an analogy, the crooks let you walk out of their store with the product before waiting for your credit card payment to go through.

And although you can’t explicitly cancel a BTC transaction, you can send two conflicting payments at the same time (what’s known in the jargon as a “double-spend”), as long as you’re happy that the first one to get picked up, mined, and “confirmed” is the one that will go through and ultimately get accepted by the blockchain.

The other transaction will ultimately be discarded, because Bitcoin doesn’t allow double-spending. (If it did, the system couldn’t work.)

Loosely speaking, once Bitcoin miners see that a not-yet-processed transaction involves funds that someone else has already “mined”, they simply stop working on the unfinished transaction, on the grounds that it’s now worthless to them.

There’s no altruism involved here: after all, if the majority of the network has already decided to accept the other transaction, and to embrace it into the blockchain as “the one the community accepts as valid”, the conflicting transaction that hasn’t gone through yet is worse than useless for mining purposes.

If you carry on trying to process the conflicting transaction, then even if you do successfully “mine” it in the end, no one will accept your second-past-the-post confirmation, because there’s nothing in it for them to do so…

…so you know in advance that you’ll never get any transaction fees or Bitcoin bonus for your redundant mining work, and thus you know up front that there is no point in wasting any time or electricity on it.

As long as no one person (or mining pool, or cartel of mining pools) ever controls more than 50% of the Bitcoin network, no one should ever be in a position to command enough time and energy to “deconfirm” an already-accepted transaction by creating a new chain of confirmations that outstrips all the existing ones.

Offer more money…

Given that we just mentioned transaction fees, you can probably see where this is going.

When a miner successfully confirms a transaction that ultimately gets accepted onto the blockchain (in fact, a bundle of transactions), they get a reward in newly-minted bitcoins (currently, the amount is BTC 6.25), plus all the fees offered for each transaction in the bundle.

In other words, you can incentivise miners to prioritise your transaction by offering to pay a bit more in transaction fees than everyone else…

…or if you aren’t in a hurry, you can offer a low transaction fee, and get slower service from the mining community.
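
Loosely speaking, you can think of the miners’ pool of unconfirmed transactions as a priority queue ordered by fee. This deliberately simplified Java sketch (our own analogy, not real Bitcoin node code – the transaction labels and fees are invented) shows why a zero-fee transaction naturally languishes at the back of the queue:

import java.util.Comparator;
import java.util.PriorityQueue;

public class MempoolDemo {
    // A toy transaction: just a label and the fee it offers
    record Tx(String label, double feeBtc) {}

    public static void main(String[] args) {
        // Miners generally pick the best-paying transactions first
        PriorityQueue<Tx> mempool = new PriorityQueue<>(
            Comparator.comparingDouble(Tx::feeBtc).reversed());

        mempool.add(new Tx("pay the crooks (zero fee)", 0.0));
        mempool.add(new Tx("everyday transfer", 0.0001));
        mempool.add(new Tx("claw-back double-spend (juicy fee)", 0.0005));

        // The zero-fee payment comes out last, giving the competing
        // "double-spend" plenty of time to be mined and confirmed first
        while (!mempool.isEmpty()) {
            System.out.println("Mine next: " + mempool.poll().label());
        }
    }
}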

In fact, if you really don’t care how long it takes, you can offer to pay zero bitcoins as a transaction fee.

Which is what the Dutch cops did for 155 victims from 13 different countries who had asked for help in getting their data back.

They sent out 155 payments from their own selection of BTC addresses to the crooks, all offering to pay transaction fees of zero.

The crooks, apparently relying on a scripted, automatic process, promptly sent back the decryption keys.

Once the cops had each decryption key, they immediately sent out a “double-spend” transaction…

…this time with a tempting fee offered in return for paying the very same funds that they originally offered to the crooks back to themselves instead!

Guess which transactions got the attention of the miners first? Guess which ones got confirmed? Guess which transactions came to nothing?

The proposed payments to the criminals got dropped like hot potatoes by the Bitcoin community, before the crooks got paid, but after they’d revealed the decryption keys.

One-time result

Great news…

…except, of course, that this trap (it’s not a trick if it’s lawfully done!) won’t work again.

Unfortunately, all the crooks have to do in future is to wait until they can see their payments are confirmed before replying with the decryption keys, instead of triggering immediately on the first appearance of each transaction request.

Nevertheless, the cops outwitted the crooks this time, and 155 people got their data back for nothing.

Or at least for close to nothing – there’s the small matter of the transaction fees that were necessary to make the plan work, though at least none of that money went directly to the crooks. (The fees go to the miners of each transaction.)

It may be a comparatively modest outcome, and it may be a one-off victory, but we commend it nevertheless!


Short of time or expertise to take care of cybersecurity threat response? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶


S3 Ep105: WONTFIX! The MS Office cryptofail that “isn’t a security flaw” [Audio + Text]

WHAT DO YOU MEAN, “DOESN’T MEET THE BAR FOR SECURITY SERVICING”?


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Breathtaking breaches, decryptable encryption, and patches galore.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I’m Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today, Sir?


DUCK.  Doug…I know, because you told me in advance, what is coming in This Week in Tech History, and it’s GREAT!


DOUG.  OK!

This week, on 18 October 1958, an oscilloscope and a computer built to simulate wind resistance were paired with custom aluminum controllers, and the game Tennis for Two was born.

Shown off at a three-day exhibition at the Brookhaven National Laboratory, Tennis for Two proved to be extremely popular, especially with high school students.

If you’re listening to this, you must go to Wikipedia and look up “Tennis for Two”.

There’s a video there for something that was built in 1958…

…I think you’ll agree with me, Paul, it was pretty incredible.


DUCK.  I would *love* to play it today!

And, like Asteroids and Battle Zone, and those specially remembered games of the 1980s…

…because it’s an oscilloscope: vector graphics!

No pixellation, no differences depending on whether a line is at 90 degrees, or 30 degrees, or 45 degrees.

And the sound feedback from the relays in the controllers… it’s great!

It’s unbelievable that this was 1958.

Harking back to a previous This Week in Tech History, it was at the cusp of the transistor revolution.

Apparently, the computational half was a mixture of thermionic valves (vacuum tubes) and relays.

And the display circuitry was all transistor-based, Doug.

So it was right in the mix of all technologies: relays, valves and transistors, all in one groundbreaking video game.


DOUG.  Very cool.

Check it out on Wikipedia: Tennis for Two.

Now let’s move on to our first story.

Paul, I know you to be very adept at writing a great poem…

…I’ve written a very short poem to introduce this first story, if you’ll indulge me.


DUCK.  So that’ll be two lines then, will it? [LAUGHS]


DOUG.  It goes a little something like this.

Zoom for Mac/Don’t get hijacked.

[VERY LONG SILENCE]

End poem.


DUCK.  Oh, sorry!

I thought that was the title, and that you were going to do the poem now.


DOUG.  So, that’s the poem.


DUCK.  OK.

[WITHOUT EMOTION] Lovely, Doug.


DOUG.  [IRONIC] Thank you.


DUCK.  The rhyme was spectacular!

But not all poems have to rhyme….


DOUG.  That’s true.


DUCK.  We’ll just call it free verse, shall we?


DOUG.  OK, please.


DUCK.  Unfortunately, this was a free backdoor into Zoom for Mac.

[FEELING GUILTY] Sorry, that wasn’t a very good segue, Doug.

[LAUGHS] You tread on someone else’s turf, you often come up short…


DOUG.  No, it’s good!

I was trying out poems this week; you’re trying out segues.

We’ve got to get out of our comfort zones every once in a while.


DUCK.  I assume that this was code that was meant to be compiled out when the final build was done, but accidentally got left in.

It’s only for the Zoom for Mac version, and it has been patched, so make sure you are up to date.

Basically, under some circumstances, when a video stream would start or the camera was activated by the app itself, it would inadvertently think that you might want to debug the program.

Because, hey, maybe you were a developer! [LAUGHS]

That’s not supposed to happen in release builds, obviously.

And that meant there was a TCP debugging port left open on the local network interface.

That meant that anybody who could pass packets into that port could use it – presumably any other locally connected user, so it wouldn’t need to be an administrator, or even you… even a guest user would be enough.

So, an attacker who had some kind of proxy malware on your computer that could receive packets from outside and inject them into the local interface could basically issue commands to the guts of the program.

And the typical things that debugging interfaces allow include: dump some memory; extract secrets; change the behaviour of the program; adjust configuration settings without going through the usual interface so the user can’t see it; capture all the audio without telling anybody, without popping up the recording warning; all of that sort of stuff.

The good news is Zoom found it by themselves, and they patched it pretty quickly.

But it is a great reminder that as we say so often, [LAUGHS] “There’s many a slip ‘twixt the cup and the lip.”


DOUG.  All right, very good.

Let us stay aboard the patch train, and pull into the next station.

And this story… perhaps the most interesting part of this story of the most recent Patch Tuesday was what Microsoft *didn’t* include?


DUCK.  Unfortunately, the patches that everybody was probably expecting – and we speculated in a recent podcast, “Well, it looks as though Microsoft’s going to make us wait yet another week until Patch Tuesday, and not do an out-of-band early release” – are those two Exchange zero-days of recent memory.

What became known as E00F, or Exchange Double Zero-day Flaw in my terminology, or ProxyNotShell as it’s perhaps somewhat confusingly known in the Twittersphere.

So that was the big story in this month’s Patch Tuesday: those two bugs spectacularly didn’t get fixed.

And so we don’t know when that’s going to happen.

You need to make sure that you have applied any mitigations.

As I think we’ve said before, Microsoft kept finding that the previous mitigations they suggested… well, maybe they weren’t quite good enough, and they kept changing their tune and adapting the story.

So, if you’re in doubt, you can go back to nakedsecurity.sophos.com, search for the phrase ProxyNotShell (all one word), and then go and read up on what we’ve got to say.

And you can also link to the latest version of Microsoft’s remediation…

…because, of all the things in Patch Tuesday, that was the most interesting, as you say: because it was not there.


DOUG.  OK, let’s now shift gears to a very frustrating story.

This is a slap on the wrist for a big company whose cybersecurity is so bad that they didn’t even notice they’d been breached!


DUCK.  Yes, this is a brand that most people will probably know as SHEIN (“she-in”), written as one word, all in capitals. (At the time of the breach, the company was known as Zoetop.)

And they’re what’s called “fast fashion”.

You know, they pile it high and sell it cheap, and not without controversy about where they get their designs from.

And, as an online retailer, you would perhaps expect they had the online retailing cybersecurity details down pat.

But, as you say, they did not!

And the office of the Attorney General of the State of New York in the USA decided that it was not happy with the way that New York residents who were among the victims of this breach had been treated.

So they took legal action against this company… and it was an absolute litany of blunders, mistakes and ultimately coverups – in a word, Douglas, dishonesty.

They had this breach that they didn’t notice.

This, at least in the past, used to be disappointingly common: companies wouldn’t realise they’d been breached until a credit card handler or a bank would contact them and say, “You know what, we’ve had an awful lot of complaints about fraud from customers this month.”

“And when we looked back at what they call the CPP, the common point of purchase, the one and only one merchant that every single victim seems to have bought something from is you. We reckon the leak came from you.”

And in this case, it was even worse.

Apparently another payment processor came along and said, “Oh, by the way, we found a whole tranche of credit card numbers for sale, offered as stolen from you guys.”

So they had clear evidence that there had been either a breach in bulk, or a breach bit-by-bit.


DOUG.  So surely, when this company was made aware of this, they moved quickly to rectify the situation, right?


DUCK.  Well, that depends on how you… [LAUGHING] I shouldn’t laugh, Doug, as always.

That depends on what you mean by “rectify”.


DOUG.  [LAUGHING] Oh, god!


DUCK.  So it seems that they *did* deal with the problem… indeed, there were parts of it that they covered up really well.

Apparently.

It seems that they suddenly decided, “Whoops, we’d better become PCI DSS compliant”.

Clearly they weren’t, because they’d apparently been keeping debug logs that had credit card details of failed transactions… everything that you are not supposed to write to disk, they were writing.

And then they realised that had happened, but they couldn’t find where they left that data in their own network!

So, obviously they knew they weren’t PCI DSS compliant.

They set about making themselves PCI DSS compliant, apparently, something that they achieved by 2019. (The breach happened in 2018.)

But when they were told they had to submit to an audit, a forensic investigation…

…according to the New York Attorney General, they quite deliberately got in the way of the investigator.

They basically allowed the investigators to see the system as it was *after* they fixed it, and welded it, and polished it, and they said, “Oh no, you can’t see the backups,” which sounds rather naughty to me.


DOUG.  Uh-huh.


DUCK.  And also the way they disclosed the breach to their customers drew significant ire from the State of New York.

In particular, it seems that it was quite obvious that the details of 39,000,000 users had in some way been made off with, including very weakly hashed passwords: a two-digit salt, and one round of MD5.

Not good enough in 1998, let alone 2018!

So they knew that there was a problem for this large number of users, but apparently they only set about contacting the 6,000,000 of those users who had actually used their accounts and placed orders.

And then they said, “Well, we’ve at least contacted all of those people.”

And *then* it turned out that they hadn’t actually really contacted all 6,000,000 users!

They had just contacted those of the six million who happened to live in Canada, the United States, or Europe.

So, if you’re from anywhere else in the world, bad luck!

As you can imagine, that did not go down well with the authorities, with the regulator.

And, I must admit… to my surprise, Doug, they were fined $1.9 million.

Which, for a company that big…


DOUG.  Yes!


DUCK.  …and making mistakes that egregious, and then not being entirely decent and honest about what had happened, and being upbraided for lying about the breach, in those words, by the Attorney General of New York?

I was kind of imagining they might have suffered a more serious fate.

Perhaps even including something that couldn’t just be paid off by coming up with some money.

Oh, and the other thing they did is that when it was obvious that there were users whose passwords were at risk… because they were deeply crackable due to the fact that it was a two-digit salt, which means you could build 100 precomputed dictionaries…


DOUG.  Is that common?

Just a two-digit salt seems really low!


DUCK.  No, you would typically want 128 bits (16 bytes), or even 32 bytes.

Loosely speaking, it doesn’t make a significant difference to the cracking speed anyway, because (depending on the block size of the hash) you’re only adding two extra digits into the mix.

So it’s not even as though the actual computing of the hashes takes any longer.

As far back as 2016, people using computers with eight GPUs running the “hashcat” program, I think, could do 200 billion MD5s a second.

Back then! (That amount is something like five or ten times higher now.)

So very, very eminently crackable.

But rather than actually contacting people and saying, “Your password is at risk because we leaked the hash, and it wasn’t a very good one, you should change it”, [LAUGHTER] they just said…

…they were very weaselly words, weren’t they?


DOUG.  “Your password has a low security level and may be at risk. Please change your login password.”

And then they changed it to, “Your password has not been updated for more than 365 days. For your protection, please update it now.”


DUCK.  Yes, “Your password has a low security level…”


DOUG.  “BECAUSE OF US!”


DUCK.  That’s not just patronising, is it?

That’s at or over the border into victim blaming, in my eyes.

Anyway, this did not seem to me to be a very strong incentive to companies that don’t want to do the right thing.


DOUG.  All right, sound off in the comments, we’d like to hear what you think!

That article is called: Fashion brand SHEIN fined $1.9 Million for lying about data breach.

And on to another frustrating story…

…another day, another cautionary tale about processing untrusted input!


DUCK.  Aaargh, I know what that’s going to be, Doug.

That’s the Apache Commons Text bug, isn’t it?


DOUG.  It is!


DUCK.  Just to be clear, that’s not the Apache Web Server.

Apache is a software foundation that has a whole raft of products and free tools… and they’re very useful indeed, and they are open source, and they’re great.

But we have had, in the Java part of their ecosystem (the Apache Web Server httpd is not written in Java, so let’s ignore that for now – don’t mix up Apache with Apache Web Server)…

…in the last year, we’ve had three similar problems in Apache’s Java libraries.

We had the infamous Log4Shell bug in the so-called Log4J (Logging for Java) library.

Then we had a similar bug in, what was it?… Apache Commons Configuration, which is a toolkit for managing all sorts of configuration files, say INI files and XML files, all in a standardised way.

And now in an even lower-level library called Apache Commons Text.

The bug is in the thing that in Java is generally known as “string interpolation”.

Programmers in other languages… if you use things like PowerShell or Bash, you’ll know it as “string substitution”.

It’s where you can magically make a sentence full of characters turn into a kind of mini-program.

If you’ve ever used the Bash shell, you’ll know that if you type the command echo USER, it will echo, or print out, the string USER and you’ll see, on the screen, U-S-E-R.

But if you run the command echo $USER, then that doesn’t mean echo a dollar sign followed by U-S-E-R.

What it means is, “Replace that magic string with the name of the currently logged in user, and print that instead.”

So on my computer, if you echo USER, you get USER, but if you echo $USER, you get the word duck instead.

And some of the Java string substitutions go much, much, much further than that… as anyone who suffered the joy of fixing Log4Shell over Christmas 2021 will remember!

There are all sorts of clever little mini-programs that you can embed inside strings that you then process with this string processing library.

So there’s the obvious one: to read the username, you put ${env: (for “read the environment”) user}… you use squiggly brackets.

It’s dollar-sign; squiggly bracket; some magic command; squiggly bracket that is the magic part.

And unfortunately, in this library, there was uncontrolled default availability of magic commands like: ${url:...}, which allows you to trick the string processing library into reaching out on the internet, downloading something, and printing out what it gets back from that web server instead of the string ${url:...}.

So although that’s not quite code injection, because it’s just raw HTML, it still means you can put all sorts of garbage and weird and wonderful untrusted stuff into people’s log files or their web pages.

There’s ${dns:...}, which means you can trick someone’s server, which might be a business logic server inside the network…

…you can trick it into doing a DNS look up for a named server.

And if you own that domain, as a crook, then you also own and operate the DNS server that relates to that domain.

So, when the DNS look up happens, guess what?

That look up terminates *at your server*, and might help you map out the innards of someone’s business network… not just their web server, but stuff deeper in the network.

And lastly, and most worryingly, at least with older versions of Java, there was… [LAUGHS] you know what’s coming here, Doug!

The command ${script:...}.

“Hey, let me provide you with some JavaScript and kindly run that for me.”

And you’re probably thinking, “What?! Hang on, this is a bug in Java. What has JavaScript got to do with it?”

Well, until comparatively recently… and remember, many businesses still use older, still-supported versions of the Java Development Kit.

Until recently, Java… [LAUGHS] (again, I shouldn’t laugh)… the Java Development Kit contained, inside itself, a full, working JavaScript engine, written in Java.

Now, there’s no relationship between Java and JavaScript except the four letters “Java”, but you could put ${script:javascript:...} and run code of your choice.

And, annoyingly, one of the things that you can do in the JavaScript engine inside the Java runtime is tell the JavaScript engine, “Hey, I want to run this thing via Java.”

So you can get Java to call *into* JavaScript, and JavaScript essentially to call *out* into Java.

And then, from Java, you can go, “Hey, run this system command.”

And if you go to the Naked Security article, you will see me using a suspect command to [COUGHS APOLOGETICALLY] pop a calc, Doug!

An HP RPN calculator, of course, because it is I doing the calculator popping…


DOUG.  It’s got to be, yes!


DUCK.  …this one is an HP-10.

So although the risk is not as great as Log4Shell, you can’t really rule it out if you use this library.

We have some instructions in the Naked Security article on how to find out whether you have the Commons Text library… and you might have it, like many people did with Log4J, without realising it, because it may have come along with an app.

And we also have some sample code there that you can use to test whether any mitigations that you’ve put in place have worked.


DOUG.  All right, head over to Naked Security.

That article is called: Dangerous hole in Apache Commons Text – like Log4Shell all over again.

And we wrap up with a question: “What happens when encrypted messages are only kinda-sorta encrypted?”


DUCK.  Ah, you’re referring to what was, I guess, an official bug report filed by cybersecurity researchers at the Finnish company WithSecure recently…

…about the built-in encryption that’s offered in Microsoft Office, or more precisely, a feature called Office 365 Message Encryption or OME.

It’s quite handy to have a little feature like that built into the app.


DOUG.  Yes, it sounds simple and convenient!


DUCK.  Yes, except… oh, dear!

It seems that the reason for this is all down to backwards compatibility, Doug…

…that Microsoft want this feature to work all the way back to people who are still using Office 2010, which has rather old-school decryption abilities built into it.

Basically, it seems that this OME process of encrypting the file uses AES, which is the latest and greatest NIST-standardised encryption algorithm.

But it uses AES in the wrong so-called encryption mode.

It uses what’s known as ECB, or electronic codebook mode.

And that is simply the way that you refer to raw AES.

AES encrypts 16 bytes at a time… by the way, it encrypts 16 bytes whether you use AES-128, AES-192, or AES-256.

Don’t mix up the block size and the key size – the block size, the number of bytes that get churned up and encrypted each time you turn the crank handle on the cryptographic engine, is always 128 bits, or 16 bytes.

Anyway, in electronic codebook mode, you simply take 16 bytes of input, turn the crank handle around once under a given encryption key, and take the output, raw and unprocessed.

And the problem with that is that every time you get the same input in a document aligned at the same 16-byte boundary…

…you get exactly the same data in the output.

So, patterns in the input are revealed in the output, just like they are in a Caesar cipher or a Vigenère cipher.

Now, it doesn’t mean you can crack the cipher, because you’re still dealing with chunks that are 128 bits wide at a time.

The problem with electronic code book mode arises precisely because it leaks patterns from the plaintext into the ciphertext.

Known-plaintext attacks are possible when you know that a particular input string encrypts in a certain way, and for repeated text in a document (like a header or a company name), those patterns are reflected.
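
A quick aside for programmers: here’s a minimal, self-contained Java sketch (our own demo, using a made-up key and message, and nothing to do with OME’s actual code) that shows two identical plaintext blocks encrypting to two identical ciphertext blocks under AES in ECB mode:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.HexFormat;

public class EcbLeakDemo {
    public static void main(String[] args) throws Exception {
        // Made-up 16-byte demo key (AES-128)
        SecretKeySpec key = new SecretKeySpec(
            "0123456789ABCDEF".getBytes(), "AES");

        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
        ecb.init(Cipher.ENCRYPT_MODE, key);

        // Two identical 16-byte blocks of plaintext...
        byte[] ct = ecb.doFinal("ATTACK AT DAWN!!ATTACK AT DAWN!!".getBytes());

        // ...come out as two identical 16-byte blocks of ciphertext
        HexFormat hex = HexFormat.of();
        System.out.println(hex.formatHex(ct, 0, 16));
        System.out.println(hex.formatHex(ct, 16, 32));
    }
}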

And although this was reported as a bug to Microsoft, apparently the company has decided it’s not going to fix it because it “doesn’t meet the bar” for a security fix.

And it seems that the reason is, “Well, we would be doing a disservice to people who are still using Office 2010.”


DOUG.  Oof!


DUCK.  Yes!


DOUG.  And on that note, we have a reader comment for this week on this story.

Naked Security Reader Bill comments, in part:

This reminds me of the ‘cribs’ that the Bletchley Park codebreakers used during the Second World War. The Nazis often ended messages with the same closing phrase, and thus the codebreakers could work back from this closing set of encrypted characters, knowing what they likely represented. It is disappointing that 80 years later, we seem to be repeating the same mistakes.


DUCK.  80 years!

Yes, it is disappointing indeed.

My understanding is that other cribs that Allied code breakers could use, particularly for Nazi-enciphered texts, also dealt with the *beginning* of the document.

I believe this was a thing for German weather reports… there was a religious format that they followed to make sure they gave the weather reports in exactly the same way every time.

And weather reports, as you can imagine, during a war that involves aerial bombing at night, were really important things!

It seems that those followed a very, very strict pattern that could, on occasion, be used as what you might call a little bit of a cryptographic “loosener”, or a wedge that you could use to break in in the first place.

And that, as Bill points out… that is exactly why AES, or any cipher, in electronic codebook mode is not satisfactory for encrypting entire documents!


DOUG.  All right, thank you for sending that in, Bill.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!


Women in Cryptology – USPS celebrates WW2 codebreakers

The US Postal Service just issued a commemorative stamp to remember the service of some 11,000 women cryptologists during World War 2.

Like their Bletchley Park counterparts in the UK, these wartime heroes didn’t finish the war with any sort of hero’s welcome back into civilian life.

Indeed, they got no public recognition at all for the amazing physical and intellectual effort they put into decrypting and decoding enemy intelligence.

Make no mistake, this work helped enormously towards the ultimate Allied victory over both the Nazis in Europe and the Imperial Japanese in the Pacific.

As the US Post Office puts it:

Sworn to secrecy under penalty of treason, the women cryptologists of World War II remained silent about their crucial and far-reaching contributions for decades. Today, they are widely considered STEM pioneers, especially because their wartime work coincided with the development of modern computer technology. Their contributions opened the door for women in the military and have helped shape intelligence and information security efforts for future generations.

What did you do in the war, Mom?

You can just imagine the sort of conversations that many of these women must have had with their friends and families once peace broke out at the end of 1945:

Q. What did you do in the war, Mom?

A. Oh, y’know, a bit of this and that.

Q. Like what, Mom?

A. Oh, clerical work, mainly. Just a desk job.

Q. But what did you actually *do*, Mom?

A. Oh, adding, subtracting, writing notes, that sort of thing.

Q. Must have been pretty boring!

In fact, the pressure of being a cryptographer during World War 2 was enormous, given that stealing a march on the enemy figuratively, by decrypting their plans up front, was vital to stealing a march on them literally.

Battles could be won, or better yet avoided; bombing raids could be diverted or disrupted; unarmed merchant ships carrying vital supplies could be spared from decimation by submarines; and much, much more.

A desk job in name only

And although, strictly speaking, cryptology was a desk job, it wasn’t your usual 9-to-5 sort of work.

In the early 1940s, Mavis Batey, a woman cryptologist at Bletchley Park in England, famously made a cryptographic breakthrough in unscrambling a mysterious Enigma cipher-machine message from Italy that said, simply, TODAY'S THE DAY MINUS THREE.

Clearly, they were on to something big… but they still had to figure out what it was, and that left just three days to do it in:

[W]e worked for three days. It was all the nail-biting stuff of keeping up all night working. One kept thinking: ‘Well, would one be better at it if one had a little sleep or shall we just go on?’ — and it did take nearly all of three days. Then a very, very large message came in.

Batey’s US counterparts primarily faced a different set of challenges to the UK cryptologists, notably including the Japanese cipher machine known as PURPLE.

The PURPLE device was home-grown, based on telephone switches rather than the proprietary wired disks of the Nazis’ prized Enigma, which was a commercial product.

But shortcuts in PURPLE’s design (it encrypted 20 letters of the Roman alphabet in one way, and the remaining 6 in another, making it more predictable), plus the perspicacity of cryptologists such as Genevieve Grotjan, who served with the US Army Signal Intelligence Service, led to spectacular successes in reading Japanese secrets.

In the words of the Postal Service:

They deciphered Japanese fleet communications, helped prevent German U-boats from sinking vital cargo ships, and worked to break the encryption systems that revealed Japanese shipping routes and diplomatic messages.

“The other side isn’t smart enough”

Fortunately for the Allied forces in the Pacific theatre of war, the Japanese seem to have fallen into the same trap of self-belief that the Nazis did with their encryption devices.

The Japanese military commanders couldn’t bring themselves to accept, or apparently even to assume as a precaution, that the enemy might be smart enough to crack the cipher, and carried on using it right to the end.

So, as the French might say, “To the Women Cryptologists of World War 2: Chapeau!”

You can buy commemorative sheets and first-day covers directly from the USPS…

…and you might also like to have a crack (see what we did there?) at a little decryption puzzle that’s posed on what’s called the selvedge, or selvage, of the stamp sheets. (The selvedge is, quite literally, the part around the edge of the stamp sheet that holds the unused stamps together.)

Here it is (the same cipher is used for all four parts):

ZRPH QF UB SWRORJLVWV RIZRUOGZDULL / FLSKHU / DQDOBCH / VHFUHW

Let us know in the comments if you solve it (we’ll redact correct answers until everyone has had time to have a go).
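
If you’d rather let a computer do the donkey work, and assuming the puzzle is a simple shift cipher of the Caesar sort mentioned above (we’re not saying either way!), brute force is trivial: try all 25 possible shifts and eyeball the output for the row that reads as English. Here’s a minimal Java sketch that prints every candidate, so we haven’t given the answer away:

public class SelvedgePuzzle {
    public static void main(String[] args) {
        String ct = "ZRPH QF UB SWRORJLVWV RIZRUOGZDULL"
                  + " / FLSKHU / DQDOBCH / VHFUHW";

        // Try every possible shift; only A-Z get shifted, so spaces
        // and slashes pass straight through unchanged
        for (int shift = 1; shift < 26; shift++) {
            StringBuilder pt = new StringBuilder();
            for (char c : ct.toCharArray()) {
                if (c >= 'A' && c <= 'Z') {
                    pt.append((char) ('A' + (c - 'A' + 26 - shift) % 26));
                } else {
                    pt.append(c);
                }
            }
            System.out.printf("shift %2d: %s%n", shift, pt);
        }
    }
}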

For hints on how to solve it, have a read of our popular article on cryptographic history.


Zoom for Mac patches sneaky “spy-on-me” bug – update now!

Popular and ubiquitous (software isn’t always both of those things!) cloud meeting company Zoom recently announced an oops-that-wasn’t-supposed-to-happen bug in the Mac version of its software.

The security bulletin is, forgivably, written in the typically staccato and jargon-soaked style of bug-hunters, but the meaning is fairly clear.

The bug is denoted CVE-2022-28762, and is detailed in Zoom Bulletin ZB-22023:

When camera mode rendering context is enabled as part of the Zoom App Layers API by running certain Zoom Apps, a local debugging port is opened by the Zoom client.

Where would you like to go today?

A “debugging port” typically refers to a listening network connection, usually a TCP socket, that handles debugging requests.

In the same way that an email server usually listens on TCP port 25, waiting for remote email clients to “call in” over the network and request permission to deliver incoming messages, debugging ports listen on a port of their own choosing (often configurable, though sometimes only in an undocumented way) for incoming connections that want to issue debug commands.

Unlike an email server, however, which accepts requests relating to message delivery (e.g. MAIL FROM and RCPT TO), debugging connections usually provide a much more intimate sort of interaction with the app you’re connecting to.

Indeed, debugging ports generally allow you not only to find out about the configuration and internal state of the app itself, but also to issue commands directly to the app, including the sort of security-sapping commands that aren’t available to regular users going via the regular user interface.

An email server, for instance, will typically let you send a message to its TCP port for a username of your choice, but it won’t let you send commands that reconfigure the server itself, and it won’t let you extract secret information such as server statistics or other people’s messages.

In contrast, those are exactly the sort of “features” that debugging ports usually do allow, so that developers can tweak and monitor the behaviour of their app while they’re trying to fix problems, without needing to go through the regular user interface.

(You can see how this sort of “side-channel” into the guts of an application would be especially handy when you’re trying to debug the user interface itself, given that the act of using the UI to debug the UI would almost certainly interfere with the very measurements you were trying to make.)

Notably, debugging ports typically let you get a sort of “internal view” of the app itself, such as: peeking into areas of memory that would never usually be exposed to users of the app; grabbing data snapshots that could contain confidential data such as passwords and access tokens; and triggering audio or video captures without alerting the user…

…all without logging into the app or service in the first place.

In other words, debugging ports are a necessary evil for use during development and testing, but they aren’t supposed to be activated, or ideally even to be activatable, during regular use of the app, because of the obvious security holes they introduce.

No password needed

Loosely speaking, if you’ve got access to the TCP port on which the debugger is listening, and you can create a TCP connection to it, that’s all the authentication you need to take over the app.

And that’s why debugging ports are typically only enabled under carefully controlled circumstances, when you know you actually want to allow a developer to be able to wander round right inside the application, enjoying what is effectively unregulated and potentially dangerous superpower access.

Indeed, many software products are deliberately built in two different flavours: a debug build, where debugging can be turned on if desired, and a release build in which the debugging features are omitted altogether so they can’t be activated at all, whether by accident or by design.

Google’s Android phones include a debug mode, whereby you can plug in a USB cable and dig into the phone (albeit not with full root powers) from your laptop via what’s known as the ADB, short for Android Debug Bridge. To enable debugging at all, you first need to click on Settings > About Phone > Build Number seven times (really!) in a row. Only then does the option to turn debugging on even appear in the menus, where you can activate it at Settings > System > Advanced > Developer Options > USB debugging. Then, when you plug in and try to connect from your laptop, you have to authorise the connection via a warning popup on the phone itself. You can certainly do this on purpose, if you have physical access to an unlocked phone, but it’s unlikely to happen by mistake.

For additional security, debugging ports are often set up so they won’t accept connections that come in from other computers (in technical terms, they listen on the “localhost” interface only).

This means an attacker seeking to misuse an incorrectly enabled debugging interface would need a foothold on your computer first, such as some sort of proxy malware that itself accepts connections via the internet, and then relays its network packets to the “localhost” network interface.
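
To illustrate the difference in Java (our own generic sketch, nothing to do with Zoom’s actual code, and with a made-up port number), binding to the loopback address rather than to all interfaces comes down to a single constructor argument:

import java.net.InetAddress;
import java.net.ServerSocket;

public class DebugPortDemo {
    public static void main(String[] args) throws Exception {
        int port = 9999; // made-up debug port number

        // Bound to the loopback interface: only processes running on
        // this computer can connect...
        ServerSocket localOnly = new ServerSocket(
            port, 50, InetAddress.getLoopbackAddress());
        System.out.println("Listening on " + localOnly.getLocalSocketAddress());

        // ...whereas the one-argument form binds to the wildcard
        // address, accepting connections from anywhere on the network:
        //     ServerSocket anywhere = new ServerSocket(port);

        localOnly.close();
    }
}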

Despite the need for some sort of local access in the case of CVE-2022-28762, however, Zoom gave this bug a CVSS “severity score” of 7.3/10 (73%), and an urgency rating of High.

Local TCP network connections are typically designed to work across user and process boundaries, so an attacker wouldn’t need to be logged in as you (or as an administrator) to abuse this bug – any process, even a program running under a very limited guest account, might be able to spy on you at will.

Furthermore, because software commands issued via a debugging port typically operate independently of an app’s regular user interface, you probably wouldn’t see any giveaway signs that your Zoom session had been hijacked this way.

If an attacker were activating the app via more conventional Mac remote control channels such as Screen Sharing (VNC), you would at least have a chance of spotting the attacker moving your mouse pointer around, clicking menu buttons, or typing in text…

…but via a debugging interface, which is essentially a deliberate back door, you might be blissfully unaware (and perhaps even unable to detect) that an attacker was snooping on you very personally, using your webcam and your microphone.

What to do?

Fortunately, Zoom’s own security team spotted what we’re assuming was a build-time blunder (a feature left enabled that should have been suppressed), and promptly updated the buggy Mac software.

Update your macOS Zoom Client to version 5.12.0 or later and the debugging port will stay closed when you use Zoom.

On a Mac, go to the main zoom.us menu and choose Check for Updates... to see whether you’ve got the latest version.


Dangerous hole in Apache Commons Text – like Log4Shell all over again

Java programmers love string interpolation features.

If you’re not a coder, you’re probably confused by the word “interpolation” here, because it’s been borrowed as programming jargon where it’s not a very good linguistic fit…

…but the idea is simple, very powerful, and sometimes spectacularly dangerous.

In other programming ecosystems it’s often known simply as string substitution, where string is shorthand for a bunch of characters, usually meant for displaying or printing out, and substitution means exactly what it says.

For example, in the Bash command shell, if you run the command:

$ echo USER

…you will get the output:

USER

But if you write:

$ echo ${USER}

…you will get something like this instead:

duck

…because the magic character sequence ${USER} means to look in the environment (a memory-based collection of data values typically storing the computer name, current username, TEMP directory, command path and so on), retrieve the value of the variable USER (by convention, the current user’s login name), and use that instead.

Similarly, the command:

$ echo cat /etc/passwd

…prints out exactly what’s on the command line, thus producing:

cat /etc/passwd

…while the very similar-looking command:

$ echo $(cat /etc/passwd)

…contains a magic $(...) sequence, with round brackets instead of squiggly ones, which means to execute the text inside the brackets as a system command, collect up the output, and write that out as a continuous chunk of text instead.

In this case, you’ll get back a slightly garbled dump of the username file (despite the name, no password data is stored in /etc/passwd any more), something like this:

root:x:0:0::/root:/bin/bash bin:x:1:1:bin:/bin:/bin/false
daemon:x:2:2:daemon:/sbin:/bin/false adm:x:3:4:adm:/var/log:/bin/false
lp:x:4:7:lp:/var/spool/lpd:/bin/false [...TRUNCATED...]

The risks of untrusted input

As you can imagine, allowing untrusted input, such as data submitted in a web form or content extracted from an email, to be processed by a part of your program that performs substitution or interpolation can be a cybersecurity nightmare.

If you aren’t careful, simply preparing a text message to be printed out to a logfile could trigger a whole load of unwanted side-effects in your app.

These could include, at increasing levels of danger:

  • Accidentally leaking data that was only ever supposed to be in memory. Any string interpolation that extracts data from environment variables and then writes it to disk without permission could put you in trouble with your local data security regulators. In the Log4Shell incident, for example, attackers made a habit of trying to access environment variables such as AWS_ACCESS_KEY_ID, which contain cryptographic secrets that aren’t supposed to get logged or sent anywhere except to specific servers as a proof of authentication.
  • Triggering internet connections to external servers and services. Even if all an attacker can do is to trick you into looking up the IP number of a servername using DNS, you’ve nevertheless just been coerced into “calling home” to a DNS server that the attacker controls, thus potentially leaking information about the internal structure of your network.
  • Executing arbitrary system commands picked by someone outside your network. If the string interpolation lets attackers trick your server into running a command of their choice, then you have created an RCE hole, short for remote code execution, which typically means the attackers can exfiltrate data, implant malware or otherwise mess with the cybersecurity configuration on your server at will.

As you no doubt remember from Log4Shell, unnecessary “features” in an Apache programming library called Log4J (Logging For Java) suddenly made all these scenarios possible on any server where an unpatched version of Log4J was installed.



If you can’t read the text clearly here, try using Full Screen mode, or watch directly on YouTube. Click on the cog in the video player to speed up playback or to turn on subtitles.


Not just internet-facing servers

Worse, problems such as the Log4Shell bug aren’t neatly confined only to servers that are directly at your network edge, such as your web servers.

When Log4Shell hit, the initial reaction from lots of organisations was to say, “We don’t have any Java-based web servers, because we only use Java in our internal business logic, so we think we’re immune to this one.”

But any server to which user data was ultimately forwarded for processing – even secure servers that were off-limits to connections from outsiders – could be affected if that server [A] had an unpatched version of Log4J installed, and [B] kept logs of data that originated from outside.

A user who pretended their name was ${env:USER}, for example, would typically get logged by the Log4J code under the name of the server account doing the processing, if the app didn’t take the precaution of checking for dangerous characters in the input data first.

Sadly, history repeated itself in July 2022, when an open source Java toolkit called Apache Commons Configuration turned out to have similar string interpolation dangers.

Third time unlucky

And history is repeating itself again in October 2022, with a third Java source code library called Apache Commons Text picking up a CVE for reckless string interpolation behaviour.

This time, the bug is denoted as follows:

CVE-2022-42889: Apache Commons Text prior to 1.10.0 allows RCE when applied to untrusted input due to insecure interpolation defaults.

Commons Text is a general-purpose text manipulation toolkit, described simply as “a library focused on algorithms working on strings”.

Even if you are a programmer who hasn’t knowingly chosen to use it yourself, you may have inherited it as a dependency – part of the software supply chain – from other components you are using.

And even if you don’t code in Java, or aren’t a programmer at all, you may have one or more applications on your own computer, or installed on your backend business servers, that include components written in Java.

What went wrong?

The Commons Text toolkit includes a handy Java component known as a StringSubstitutor object, created with a Java command like this:

StringSubstitutor interp = StringSubstitutor.createInterpolator();

Once you’ve created an interpolator, you can use it to rewrite input data in handy ways, such as like this:

String str = "You have-> ${java:version}";
String rep = interp.replace(str);

   Example output: You have-> Java version 19

String str = "You are-> ${env:USER}";
String rep = interp.replace(str);

   Example output: You are-> duck

The replace() function processes its input string as if it’s a kind of simple software program in its own right, copying the characters one-by-one except for a variety of special embedded ${...} commands that are very similar to the ones used in Log4J.

Examples from the documentation (derived directly from the source code file StringSubstitutor.java) include:

Programming function   Example
--------------------   ----------------------------------
Base64 Decoder:        ${base64Decoder:SGVsbG9Xb3JsZCE=}
Base64 Encoder:        ${base64Encoder:HelloWorld!}
Java Constant:         ${const:java.awt.event.KeyEvent.VK_ESCAPE}
Date:                  ${date:yyyy-MM-dd}
DNS:                   ${dns:address|apache.org}
Environment Variable:  ${env:USERNAME}
File Content:          ${file:UTF-8:src/test/resources/document.properties}
Java:                  ${java:version}
Script:                ${script:javascript:3 + 4}
URL Content (HTTP):    ${url:UTF-8:http://www.apache.org}
URL Content (HTTPS):   ${url:UTF-8:https://www.apache.org}

The dns, script and url functions are particularly dangerous, because they could lead to untrusted data, received from outside your network but processed or logged on one of the business logic servers inside your network, doing the following:

  • dns: Look up a server name and replace the ${...} string with the value returned. If attackers use a domain name that they themselves own and control, then this lookup will terminate at a DNS server of their choosing. (The owner of a domain name is, in fact, obliged to provide what’s known as definitive DNS data for that domain.)
  • url: Look up a server name, connect to it using HTTP or HTTPS, and use what’s sent back instead of the string ${...}. The danger posed by this behaviour depends on what the replacement string is used for.
  • script: Run a command of the attacker’s choosing. We were only able to get this function to work with older versions of Java, because there’s no longer a JavaScript engine built into Java itself. But many companies and apps still use old-but-still-supported Java versions such as 1.8 (JDK 8) and 11.0 (JDK 11), on which the dangerous ${script:javascript:...} remote code execution interpolation trick works just fine.

String str = "DNS lookup-> ${dns:address|nakedsecurity.sophos.com}";
String rep = interp.replace(str);

   Output: DNS lookup-> 192.0.66.227

String str = "Stuff sucked from web-> ---BEGIN---${url:UTF8:https://example.com}---END---";
String rep = interp.replace(str);

   Output: Stuff sucked from web-> ---BEGIN---<!doctype html>
   <html>
   <head>
       <title>Example Domain</title>
       [. . .]
   </head>
   <body>
   <div>
       <h1>Example Domain</h1>
       [. . .]
   </div>
   </body>
   </html>---END---

String str = "Run some code-> ${script:javascript:6*7}";
String rep = interp.replace(str);

   Output: Run some code-> 42

What to do?

  • Update to Commons Text 1.10.0. In this version, the dns, url and script functions have been turned off by default. You can enable them again if you want or need them, but they won’t work unless you explicitly turn them on in your code.
  • Sanitise your inputs. Wherever you accept and process untrusted data, especially in Java code, where string interpolation is widely supported and offered as a “feature” in many third-party libraries, make sure you look for and filter out potentially dangerous character sequences from the input first, or take care not to pass that data into string interpolation functions. (There’s a sketch of one possible filter after this list.)
  • Search your network for Commons Text software that you didn’t know you had. Searching for files with names that match the pattern commons-text*.jar (the * means “anything can match here”) is a good start. The suffix .jar is short for java archive, which is how Java libraries are delivered and installed; the prefix commons-text denotes the Apache Commons Text software components, and the text in the middle covered by the so-called wildcard * denotes the version number you’ve got. You want commons-text-1.10.0.jar or later.
  • Track the latest news on this issue. Exploiting this bug on vulnerable servers doesn’t seem to be quite as easy as it was with Log4Shell. But we suspect, if attacks are found that cause trouble for specific Java applications, that the bad news of how to do so will travel fast. You can keep up-to-date by keeping your eye on this @sophosxops Twitter thread.
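
On the sanitisation front (the second point in the list above), here’s a minimal sketch of one possible precaution – our own made-up filter, not an official Apache API – that simply breaks up the magic ${ marker in untrusted text so the interpolation trigger never survives intact:

public class Defang {
    // Our own blunt-but-effective filter (an assumption, not an official
    // Apache Commons API): split up the magic "${" marker so that string
    // interpolation can never be triggered by untrusted input.
    public static String defang(String untrusted) {
        return untrusted.replace("${", "$_{");
    }

    public static void main(String[] args) {
        String evil = "Run some code-> ${script:javascript:6*7}";

        // Prints: Run some code-> $_{script:javascript:6*7}
        System.out.println(defang(evil));
    }
}

It’s a blunt instrument, but if your application can’t simply reject input containing ${ outright, failing closed like this is safer than hoping the interpolator never sees it.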

Don’t forget that you may find multiple copies of the Commons Text component on each computer you search, because many Java apps bring their own versions of libraries, and of Java itself, in order to keep precise control over what code they actually use.

That’s good for reliability, and avoids what’s known in Windows as DLL hell or dependency disaster, but not quite as good when it comes to updating, because you can’t simply update a single, centrally managed system file and thus patch the entire computer at once.

