“PwnKit” security bug gets you root on most Linux distros – what to do

Researchers at Qualys have revealed a now-patched security hole in a very widely used Linux security toolkit that’s included in almost every Linux distro out there.

The bug is officially known as CVE-2021-4034, but Qualys has given it a funky name, a logo and a web page of its own, dubbing it PwnKit.

The buggy code forms part of the Linux Polkit system, a popular way of allowing regular apps, which don’t run with any special privileges, to interact safely with other software or system services that need or have administrative superpowers.

For example, if you have a file manager that lets you take care of removable USB disks, the file manager will often need to negotiate with the operating system to ensure that you’re properly authorised to access those devices.

If you decide you want to wipe and reformat the disk, you might need root-level access to do so, and the Polkit system will help the file manager to negotiate those access rights temporarily, typically popping up a password dialog to verify your credentials.

If you’re a regular Linux user, you’ve probably seen Polkit-driven dialogs – indeed the text-based Polkit man page gives an old-school ASCII-art rendition of the way they typically look.

Polkit as an alternative sudo

What you might not know about Polkit is that, although it’s geared towards adding secure on-demand authentication for graphical apps, it comes with a handy command-line tool called pkexec, short for Polkit Execute.

Simply put, pkexec is a bit like the well-known sudo utility, where sudo is short for Set UID and Do a Command, inasmuch as it allows you to switch temporarily to a different user ID, typically root, or UID 0, the all-powerful superuser account.

In fact, you use pkexec in much the same way as you do sudo, adding pkexec to the start of a command line that you don’t have the right to run in order to get pkexec to launch it for you, assuming that Polkit thinks you’re authorised to do so:

# Regular users are locked out of the Polkit configuration directory...

$ ls -l /etc/polkit-1/rules.d/
/bin/ls: cannot open directory '/etc/polkit-1/rules.d/': Permission denied

# Using an account that hasn't been authorised to run "root level"
# commands via the Polkit configuration files...

$ pkexec ls -l /etc/polkit-1/rules.d/
==== AUTHENTICATING FOR org.freedesktop.policykit.exec ====
Authentication is needed to run `/usr/bin/ls' as the super user
Authenticating as: root
Password: polkit-agent-helper-1: pam_authenticate failed: Authentication failure
==== AUTHENTICATION FAILED ====
Error executing command as another user: Not authorized

This incident has been reported.

# After adding a Polkit rule to permit our account to do "root" stuff,
# we get automatic, temporary authorisation to run as the root user...

$ pkexec ls -l /etc/polkit-1/rules.d/
total 20
-rw-r--r-- 1 root root 360 Dec 31 2021 10-enable-powerdevil-discrete-gpu.rules
-rw-r--r-- 1 root root 512 Dec 31 2021 10-enable-session-power.rules
-rw-r--r-- 1 root root 812 Dec 31 2021 10-enable-upower-suspend.rules
-rw-r--r-- 1 root root 132 Dec 31 2021 10-org.freedesktop.NetworkManager.rules
-rw-r--r-- 1 root root 404 Dec 31 2021 20-plugdev-group-mount-override.rules
-rw-r--r-- 1 root root 542 Dec 31 2021 30-blueman-netdev-allow-access.rules

# And if we put no command and no username on the command line, pkexec
# assumes that we want a shell, so it runs our preferred shell (bash),
# making us root (UID 0) until we exit back to the parent shell...

$ pkexec
bash-5.1# id
uid=0(root) gid=0(root) groups=0(root),...
bash-5.1# exit
$ id
uid=1042(duck) gid=1042(duck) groups=1042(duck),...

As well as checking its access control rules (alluded to in the file listing above), pkexec also performs a range of other “security hardening” operations before it runs your chosen command with added privileges.

For example, consider this program, which prints out a list of its own command line arguments and environment variables:

/*----------------------------ENVP.C---------------------------*/

#include <stdio.h>
#include <string.h>

void showlist(char* name, char* list[]) {
   int i = 0;
   while (1) {
      /* Print both the address where the pointer    */
      /* is stored and the address that it points to */
      printf("%s %2d @ %p [%p]",name,i,list,*list);
      /* If the pointer isn't NULL then print the  */
      /* string that it actually points to as well */
      if (*list != NULL) { printf(" -> %s",*list); }
      printf("\n");
      /* List ends with NULL, so exit if we're there */
      if (*list == NULL) { break; }
      /* Otherwise move on to the next item in list */
      list++; i = i+1;
   }
}

/* Command-line C programs almost all start with a boilerplate */
/* function called main(), to which the runtime library        */
/* supplies an argument count, the arguments given, and the    */
/* environment variables of the process.                       */
/* argv[] lists the arguments; envp[] lists the env. variables */

int main(int argc, char* argv[], char* envp[]) {
   showlist("argv",argv);
   showlist("envp",envp);
   return 0;
}

If you compile this program and run it, you’ll see something like this, with a laundry list of environment variables that reflect your own preferences and settings:

$ LD_LIBRARY_PATH=risky-variable GCONV_PATH=more-risk ./envp first second
argv 0 @ 0x7fff2ec882c8 [0x7fff2ec895b8] -> ./envp
argv 1 @ 0x7fff2ec882d0 [0x7fff2ec895bf] -> first
argv 2 @ 0x7fff2ec882d8 [0x7fff2ec895c5] -> second
argv 3 @ 0x7fff2ec882e0 [(nil)]
envp 0 @ 0x7fff2ec882e8 [0x7fff2ec895cc] -> GCONV_PATH=more-risk
envp 1 @ 0x7fff2ec882f0 [0x7fff2ec895e1] -> LD_LIBRARY_PATH=risky-variable
envp 2 @ 0x7fff2ec882f8 [0x7fff2ec89600] -> SHELL=/bin/bash
envp 3 @ 0x7fff2ec88300 [0x7fff2ec89610] -> WINDOWID=25165830
envp 4 @ 0x7fff2ec88308 [0x7fff2ec89622] -> COLORTERM=rxvt-xpm
[...]
envp 38 @ 0x7fff2ec88418 [0x7fff2ec89f07] -> PATH=/opt/redacted/bin:/home/duck/...
envp 39 @ 0x7fff2ec88420 [0x7fff2ec89fb9] -> MAIL=/var/mail/duck
envp 40 @ 0x7fff2ec88428 [0x7fff2ec89fcd] -> OLDPWD=/home/duck
envp 41 @ 0x7fff2ec88430 [0x7fff2ec89fe8] -> _=./envp
envp 42 @ 0x7fff2ec88438 [(nil)]

Note two things:

  • The pointer arrays for the arguments and environment variables are contiguous in memory. The NULL pointer at the end of the argument array, shown at memory address 0x7fff2ec882e0 as [(nil)], is followed immediately by the pointer to the first environment string (GCONV_PATH) at 0x7fff2ec882e8. Pointers are 8 bytes each on 64-bit Linux, and the argv and envp pointer lists run from 0x7fff2ec882c8 to 0x7fff2ec88438 in contiguous steps of 8 bytes each time. There is no unused memory between argv[] and envp[].
  • Both the argv list and the envp list are entirely under your control. You get to choose the arguments and to set any environment variables you like, including adding ones on the command line to use when running this command only. Some environment variables, such as LD_PRELOAD and LD_LIBRARY_PATH, can be used to modify the behaviour of the program you’re executing, including quietly and automatically loading additional commands or executable modules.

Let’s run the command again as root, by using pkexec:

$ LD_LIBRARY_PATH=risky-variable GCONV_PATH=more-risk pkexec ./envp first second
argv 0 @ 0x7ffdf900fc98 [0x7ffdf9010eec] -> /home/duck/Articles/pwnkit/./envp
argv 1 @ 0x7ffdf900fca0 [0x7ffdf9010f0e] -> first
argv 2 @ 0x7ffdf900fca8 [0x7ffdf9010f14] -> second
argv 3 @ 0x7ffdf900fcb0 [(nil)]
envp 0 @ 0x7ffdf900fcb8 [0x7ffdf9010f1b] -> SHELL=/bin/bash
envp 1 @ 0x7ffdf900fcc0 [0x7ffdf9010f2b] -> LANG=en_US.UTF-8
envp 2 @ 0x7ffdf900fcc8 [0x7ffdf9010f3c] -> LC_COLLATE=C
envp 3 @ 0x7ffdf900fcd0 [0x7ffdf9010f49] -> TERM=rxvt-unicode-256color
envp 4 @ 0x7ffdf900fcd8 [0x7ffdf9010f64] -> COLORTERM=rxvt-xpm
envp 5 @ 0x7ffdf900fce0 [0x7ffdf9010f77] -> PATH=/usr/sbin:/usr/bin:/sbin:/bin:/root/bin
envp 6 @ 0x7ffdf900fce8 [0x7ffdf9010fa4] -> LOGNAME=root
envp 7 @ 0x7ffdf900fcf0 [0x7ffdf9010fb1] -> USER=root
envp 8 @ 0x7ffdf900fcf8 [0x7ffdf9010fbb] -> HOME=/root
envp 9 @ 0x7ffdf900fd00 [0x7ffdf9010fc6] -> PKEXEC_UID=1000
envp 10 @ 0x7ffdf900fd08 [(nil)]

This time, you will notice that:

  • The command name (argv[0]) has been converted to a full pathname. The pkexec program does this right at the outset, to avoid ambiguity when the program runs with superuser powers. Note that this conversion happens before the underlying Polkit system intervenes to check whether you’re allowed to run the chosen program, and thus before any password prompts appear.
  • The list of environment variables has been trimmed and adjusted for security reasons. In particular, the operating system itself automatically prunes several known-bad environment variables from any program, such as pkexec, that has the privilege to promote other software to run as root. (In technical jargon, this means any program with the setuid bit set, of which pkexec is an example.)

Beware the buffer overflow

So far, so good.

Except that Qualys discovered that if you deliberately launch the pkexec program in such a way that the value of its own argv[0] parameter (by convention, set to the name of the program itself) is blanked out and set to NULL…

…then in the process of converting the command name you want to run (./envp above) into a full pathname (/home/duck/Articles/pwnkit/./envp), the pkexec startup code will perform a buffer overflow.

For security reasons, pkexec ought to detect that it was unusually launched with no command line arguments at all, not even its own name, and refuse to run.

Instead, pkexec blindly looks at what it thinks is argv[1] (usually, this would be the name of the command you are asking it to run as root), and tries to find that program on your path.

But if argv[0] was already NULL, then there are no command line arguments, and what pkexec thinks is argv[1] is actually envp[0], the first environment variable, because the argv[] and envp[] arrays are directly adjacent in memory.

So, if you set your first environment variable to be the name of a program that can be found on your PATH, and then run pkexec with no command arguments at all, not even argv[0], then the program will combine your path with the value of the environment variable it mistakenly thinks is the name of the program you want to run…

…and write that “more secure” version of the “filename” back into what it thinks is argv[1], ready to run the chosen program via its full pathname, rather than a relative one.

Unfortunately, the modified string written into argv[1] actually ends up in envp[0], which means that a rogue user could, in theory, exploit this argv-to-envp buffer misalignment to reintroduce dangerous environment variables that the operating system itself had already taken the trouble to expunge from memory.

Full elevation of privilege

To cut a long story short, Qualys researchers discovered a way to use a dangerously “reintroduced” environment variable of this sort to trick pkexec into running a program of their choice before the program got as far as verifying whether their account was entitled to use pkexec at all.

Because pkexec is a “setuid-root” program (this means that when you launch it, it magically runs as root rather than under your own account), any subprogram you can coerce it into launching will inherit superuser privileges.

This means that any user who already has access to your system, even if they’re logged in under an account with almost no power at all, could, in theory, use pkexec to promote themselves instantly to user ID 0: the root, or superuser, account.

The researchers wisely didn’t provide working proof-of-concept code, although as they wryly point out:

We will not publish our exploit immediately; however, please note that this vulnerability is trivially exploitable, and other researchers might publish their exploits shortly after the patches are available.

What to do?

  • Patch early, patch often. Many, if not most, Linux distros should have an update out already. You can (safely) run pkexec --version to check the version you’ve got. You want 0.120 or later.
  • If you can’t patch, consider demoting pkexec from its superpower privilege. If you remove the setuid bit from the pkexec executable file then this bug will no longer be exploitable, because pkexec won’t automatically launch with superuser powers. Anyone trying to exploit the bug would simply end up with the same privilege that they already had.

FIND AND FIX PKEXEC – HOW TO USE THE WORKAROUND

Finding pkexec on your path:

$ which pkexec
/usr/bin/pkexec <-- probable location on most distros

Checking the version you have. Below 0.120 and you are probably vulnerable, at least on Linux:

$ /usr/bin/pkexec --version
pkexec version 0.120 <-- our distro already has the updated Polkit package

Checking the file mode bits. Note that the letter s in the first column stands for setuid, and means that when the file runs, it will automatically execute under the account name listed in column three as the owner of the file; in this case, that means root. In terminals with colour support, you may see the filename emphasised with a bright red background:

$ ls -l /usr/bin/pkexec
-rwsr-xr-x 1 root root 35544 2022-01-26 02:16 /usr/bin/pkexec*

Changing the setuid bit. Note how, after demoting the file by “subtracting” the letter s from the mode bits, the first column no longer contains an S-for-setuid marker. On a colour terminal, the dramatic red background will disappear too:

$ sudo chmod -s /usr/bin/pkexec
Password: ***************
$ ls -l /usr/bin/pkexec
-rwxr-xr-x 1 root root 35544 2022-01-26 02:16 /usr/bin/pkexec* <-- setuid bit removed

Turning setuid back on. If you need to re-enable the root-acquiring powers of pkexec before getting the latest update, or if updating the Polkit package doesn’t restore the setuid bit automatically, you can use the chmod +s ... command (in a similar way to how you used -s above) in order to “add back” the letter s to the mode bits:

$ sudo chmod +s /usr/bin/pkexec
Password: ***************
$ ls -l /usr/bin/pkexec
-rwsr-xr-x 1 root root 35544 2022-01-26 02:16 /usr/bin/pkexec* <-- setuid bit restored

Tax scam emails are alive and well as US tax season starts

Many countries have taxation forms with names that have entered the general vocabulary, notably the abbreviations of documents that employers are obliged to provide to their staff to show how much money they were paid – and, most importantly, how much tax was already withheld and paid in on the employee’s behalf.

In the UK, for example, the form name P45 is often used as a synonym for getting fired, given that it’s a final tax summary that you get when you leave a job, willingly or otherwise.

In South Africa, you get an IRP5 at the end of the tax year – an archaic term that we are guessing is short for Inland Revenue/Personal, Form #5, even though the South African tax office hasn’t been called the Inland Revenue for nearly 25 years.

In the USA, the earnings form is a W-2, short for Wages and Tax Statement, Version 2. (It seems that there used to be a form W-1, but it was superseded back in the 1950s.)

Here at Naked Security, we know the names of these forms, amongst numerous others, because they often show up in tax scam emails, presumably to give those messages an air of realism.

Anyway, given that it’s the last week in January, and thus that US tax filing season is about to get underway, we weren’t surprised to receive a tax-related scam email today, and to see the W-2 form mentioned explicitly.

We were, however, intrigued by the “less is more” nature of today’s phishing message: there was no traditional call to action, just a simple request for further information.

Phishing without links

Usually, when we write about tax scams, we’re warning about traditional phishing campaigns where the idea is to trick you into “logging in” to a bogus site where your tax office account details and password get captured by cybercriminals.

Sometimes, the crooks use the high-pressure tactic of warning you that you could get into trouble if you don’t act right away (and who would willingly undertake a tax office audit?); often, however, the scam relies on the lure of a refund, like one we received via text message a year ago.

But, as regular readers will know, quite a few cybercrime groups are moving away from pure-play “technohacks” these days, such as email scams that rely entirely on you clicking a fake link.

Instead, many cybercriminals are adopting the “human led” approach that has served criminals such as advance fee fraudsters and romance scammers so well over the years.

Ransomware scammers, for example, used to rely heavily on automatically catching out hundreds or thousands of independent victims at a time by spamming out links or attachments that directly unleashed the ransomware and then demanded somewhere from $300 to $1000 from anyone who got hit.

These days, the human-led approach means that although ransomware criminals still rely on scrambling hundreds or thousands of computers in a single attack, there’s rarely any obvious or widespread spam campaign that gives away the attack in advance.




These days, ransomware criminals typically break into (or buy their way into) your network very quietly, and then carefully plan for an attack that’s co-ordinated and kicked off manually, at a time to suit the crooks and to disadvantage you.

Similarly, tech support scammers are increasingly relying on persuading you to call them, rather than bombarding the world with spammy links or phishy attachments and then trying to filter out the people or computers that seem to respond.

Many victims are willing to call the scammers back – they often provide a convenient toll-free number, so it doesn’t even cost you anything – because it feels like a low-risk approach.

After all, hackers can’t directly push malware onto your computer or inject an exploit into your browser if you’re just talking to them.

Of course, the crooks use that to their own advantage, often giving you a level of personal attention and hand-holding that you wish you could get from other IT vendors…

…at which point, the criminals don’t need an exploit to run code on your computer, because they’ll helpfully and patiently talk you through doing that job all by yourself: they sneakily trick you into creating a cybersecurity problem for yourself under the guise of fixing one.



A little politeness goes a long way

Today’s tax scammers have done a “let’s ask nicely” job, carefully avoiding links and attachments, and presumably hoping that someone on their mailing list will be willing to reply in the hope of investigating what feels like a new business opportunity:

I actually intend to change cpa for my 2021 tax return, Would like to know if your firm is open to accept new clients for the next tax year, All my documents are completed, all I am yet to have is just my W2.

Kindly advise on how to proceed and if I can send forth all the available documents and whats are your fees for individual returns

[REDACTED]
Managing Director

(CPA is short for Certified Public Accountant, the US equivalent of what people in many Commonwealth countries refer to as a CA, or Chartered Accountant.)

On one hand, the fact that many scammers are avoiding links and attachments these days suggests that we are, as a digital society, learning to be more cautious before blindly believing in unsolicited websites or files.

On the other hand, we need to remember that engaging with a scammer in any way at all is the first step that any cybercrook wants you to take.

What to do?

Not least because it’s Data Privacy Week this week, and Data Privacy Day on Friday 28 January 2022, always keep in mind our simplest advice when deciding whether to engage with people you don’t already know online:

  • Be aware before you share. Every little bit you give away about yourself makes it easier for a scammer to charm you, threaten you, or entice you into an online relationship you didn’t ask for in the first place.
  • If in doubt, don’t give it out. If it feels like a scam, back yourself and assume that it is.
  • No reply is often a good reply. Never feel compelled to reply out of politeness or completeness. It’s easier to stay out of a wheedler’s clutches if you don’t open the door for a reply-to-your-reply.
  • Listen to friends and family. Especially when money is involved – whether it’s you sending it to a romance scammer who falsely claims to love you, or receiving it from newfound “business associates” who have fraudulently pitched you a “job” in their organisation.

Stay safe online, everyone!


Alleged carder gang mastermind and three acolytes under arrest in Russia

Russian news agency Tass reported over the weekend that the “purported founder” of a notorious cybercrime group known as Infraud Organisation has been arrested.

Naked Security first wrote about law enforcement action against this crime crew almost four years ago, back in February 2018, when the US Department of Justice (DOJ) unleashed indictments against 36 defendants alleged to be part of what the DOJ described at the time as:

[A] cybercriminal enterprise engaged in the large-scale acquisition, sale, and dissemination of stolen identities, compromised debit and credit cards, personally identifiable information, financial and banking information, computer malware, and other contraband.

As a side-effect of the American indictment, 13 people were arrested in seven different countries: Australia, France, Italy, Kosovo, Serbia, the UK and the US.

The DOJ claimed to have evidence at the time that Infraud Org, operating under the unrepentant motto In Fraud We Trust, was responsible for more than $500 million in actual losses, and more than $2 billion in what law enforcement referred to as “intended losses”.

The 36 defendants went by an eclectic range of online nicknames, including Best4Best, Goldenshop, Guapo1988, Moneymafia, Moviestar, Renegade11, Secureroot, Skizo, Validshop and Zo0mer.

One of those indicted back in 2018 was a certain Andrey Sergeevich Novak, also known as Unicc, also known as Faaxxx, also known as Faxtrod.

Novak, claims this weekend’s Tass report, which quotes an “informed source”, is one of four suspects now under arrest in Russia.

He was allegedly arrested two months ago – the report implies that he’s still in custody – along with three other members of the group whom Tass describe as “detained under house arrest”. (We’re assuming that the US equivalent would be that Novak didn’t make bail, while the other three did.)

None of the latter three were listed by name in the 2018 US indictment, although six of the 36 defendants were entered simply as John Doe, US jargon for “name unknown”.

“The investigation continues,” states Tass, “to establish the other members of the international hacking group.”


Cryptocoin broker Crypto.com says 2FA bypass led to $35m theft

Maltese cryptocoin broker Foris DAX MT Ltd, better known by its domain name Crypto.com, experienced a multi-million dollar “bank robbery” earlier this month.

According to a brief security report published yesterday, 483 customers experienced ghost withdrawals totalling just over 4800 Ether tokens, just over 440 Bitcoin tokens, and just over $66,000 in what are listed only as “other cryptocurrencies”.

Using approximate conversion rates for 17 January 2022 (ETH1=$3300 and BTC1=$43,000), which is when the spurious transactions were spotted, puts the total loss due to this heist at about $35,000,000.

What went wrong?

Crypto.com claims that “all accounts found to be affected were fully restored”, which we assume to mean that customers with phantom withdrawals were reimbursed by Crypto.com itself.

Details of how the crooks pulled off the attack aren’t given in the report, which says simply that “transactions were being approved without the 2FA authentication control being inputted by the user.”

What the report doesn’t explain, or even mention, is whether 2FA codes were entered by someone – albeit not by customers themselves – in order to authorise the fraudulent withdrawals, or whether the 2FA part of the authentication process was somehow bypassed entirely.

This means we can’t easily tell how or why the 2FA process didn’t work properly, though several possible explanations spring to mind.

If you’re interested in looking at how your own 2FA system might fail, you will need to consider a long list of possibilities, including:

  • A fundamental flaw in the underlying 2FA system. For example, an SMS-based system of one-time numeric codes that was based on a defective random number generator might produce guessable sequences that could allow attackers to predict the right code to enter for some or all users.
  • A breach of the 2FA authentication database. For example, an app-based code generator system typically relies on a shared secret known as a seed, which can’t be stored as a hash like a regular password. Both client and server must have access to the plaintext of the seed at login time, so a server-side breach could give an attacker the details needed to compute the one-time code sequences for some or all users.
  • Poor coding in the online login process. A badly-configured authentication server might inadvertently allow the client-side login request to manipulate the configuration settings used, for example by including undocumented HTTP headers or adding special URL parameters that unexpectedly override existing security precautions.
  • Weak internal controls to detect risky behaviour by support or IT staff. For example, overly helpful (or wilfully corrupt) insiders might not be subjected to peer review, or second sign-off, for critical account changes. This is how the infamous Twitter hack of 2020 happened: high-profile accounts such as Joe Biden, Elon Musk, Barack Obama, Bill Gates, Apple and others were taken over due to helpful support staff allowing the attackers to alter the email addresses used to secure those accounts.
  • Fail-open behaviour in the authentication process. Access control systems sometimes need to fail closed, for example so that no one can sneak in if the system breaks, and sometimes need to fail open, for example so that no one gets locked in during an evacuation emergency. If a system breaks in a way its designers didn’t anticipate, it can end up in the wrong failure mode, such as unlocked for everyone when it should be shut down entirely.

What happened next?

Crypto.com claims that it has “migrated to a completely new 2FA infrastructure”, apparently out of “an abundance of caution”.

We’ve never quite understood what the words “an abundance of caution” are supposed to mean, given that cybersecurity overreactions can be as costly and as counterproductive as underreactions, but it seems to be a must-say phrase in contemporary breach reports, as if thoughtfully taking appropriate precautions is no longer good enough.

After all, if the root cause of your 2FA failure was reason (1) above – an intrinsic shortcoming in the 2FA system itself – then making a root-and-branch change by swapping it for a whole new 2FA technology seems appropriate.

But if the root cause was reason (5) above – support staff too easily able to authorise account resets – then changing the underlying 2FA technology might make little or no difference.

What to do?

  • If you’re a Crypto.com customer, you’ll need to re-configure your account to use the new system. Notably, there’s apparently now a 24-hour sunrise period for adding new accounts for balance transfers. This is intended to add extra time for you to spot, or to be warned about, unexpected account changes attempted by crooks.
  • If you’re looking at adding 2FA to your own online services, don’t just test the obvious parts of the system. Make sure you consider all points of interaction with the rest of your system, and consider hiring penetration testers to probe for unexpected types of failure.
  • If you’re in PR or marketing, make the whole company practise how it will react if a breach should occur. This doesn’t imply you are expecting to fail. But it does mean that if you get caught out, the legally and morally necessary process of communicating with your unfortunate customers won’t suck up planning time that would be better spent on researching and properly fixing the problem.

S3 Ep66: Cybercrime busts, wormable Windows, and the crisis of featuritis [Podcast + Transcript]

LISTEN NOW

You can listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.

READ THE TRANSCRIPT


DOUG AAMOTH. Romance, scams, bugs, worms and REvil ransomware.

All that and more on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug; he is Paul…


PAUL DUCKLIN. Well done, Doug… right way round this week!


DOUG. Why, thank you! Last week I got mixed up…


DUCK. You didn’t confuse yourself.


DOUG. But it took 65 episodes for me to make that mistake, so I’m pretty proud of myself.

It might happen again, but…


DUCK. That’s right: we’re on Route 66 today!


DOUG. We are.


DUCK. That’s quite a big deal, Doug.


DOUG. Yes.

And just like Route 66, we have a lot of attractions to look at this week – a full docket.


DUCK. [LAUGHS AT SEGUE] I love your work!


DOUG. We do like to start the show with a Fun Fact.

And usually the fun fact is related to the This Week in Tech History segment.

Not this week, though, because it’s been very cold here and a lot of people have been wearing winter hats. I was looking at a group of people and I thought, “What are those pom-poms on the top of the hats? Where did that come from?”

So I looked it up, and if you ever wonder why some winter hats have those fluffy pom-poms on top, apparently they were worn by French sailors in the olden days to protect their heads from banging against the low ceilings of ships while out at sea.

They were especially effective in rough waters.

So if you have a pom-pom on the top of your hat, you have a French sailor to thank for it.


DUCK. Oh, so it was actually padding?


DOUG. Yes.

I have a very low ceiling in our basement and laundry room, so maybe I’ll go put a pom-pom hat on and walk around and see if it helps, because I hit my head quite a bit.


DUCK. You could just duct tape a mouse mat… you know, put it on top of your head and duct tape it under your chin.


DOUG. [LAUGHS] I don’t know if they had neoprene back in those days, but it’s a good idea.

Well, let’s talk about… we’ve got a lot of stories to cover.

The first one: we have effectively ended ransomware with the alleged bust of the REvil ransomware crew in Russia.

[SARCASTIC] It’s the end of ransomware as we know it, right?


DUCK. Well, even the Federal Security Service of Russia, the FSB, didn’t actually say that, though they did actually do a bust.

There’s been a lot of criticism in the past…

…Russians, I think, like the Germans and French, and a whole load of countries, don’t extradite their own citizens. So, if you want to get people in those countries prosecuted for crimes they committed against another country, you basically have to provide the suspects’ own country with the evidence it needs.

And there’s a lot of criticism that Russia didn’t seem very willing to do that.

In this case, it looks like they were: apparently, 25 street addresses got raided in a variety of different cities.

They mentioned 14 people being targeted, though they don’t say how many of those ultimately got arrested.

But there were some arrests; plus 20 fancy cars towed away, apparently bought with the proceeds of crime. And as we’ve said before, there’s probably a bunch of forensic data in the average fancy car these days, in terms of the entertainment system, satnav, phones built into the car and all that sort of stuff.

And they got something like $6,000,000 to $7,000,000 in rubles, US dollars, euros and cryptocoins.

So the FSB was quite bullish about what it had achieved, stating that, as a result of the raid, “this cyber gang ceased to exist and its criminal infrastructure was neutralized.” So, that’s REvil.

They didn’t say, ” It’s the end of ransomware as we know it,” because obviously it isn’t.

There are two problems, even if REvil really has sunk without a trace now: [a] there are plenty of other ransomware gangs where REvil came from; and [b], sadly, there are plenty of other types of cybercrime involving crooks that have little or no interest in ransomware, but are still capable of doing plenty of evil, albeit that they're not REvil.


DOUG. Yes, Sir!

But a step in the right direction nonetheless?


DUCK. Yes, I don’t think we can complain!

But it’s still all about: patch early, patch often; don’t let your guard down; prevention is better than cure; and invest in your users.


DOUG. We’ve got more advice in our State of Ransomware 2021 report, which is linked to in the article called REvil Ransomware crew allegedly busted in Russia, says FSB on nakedsecurity.sophos.com.

Let’s just shimmy right along to another bust: a romance scammer who targeted almost 700 women has gotten 28 months in jail.


DUCK. As the National Crime Agency of the UK points out, in respect of romance scams in general: "We want to encourage all those who think they've been a victim of romance fraud to not feel embarrassed or ashamed, but please report it."

The National Crime Agency can’t make a case where somebody hasn’t told them, “Hey, I sent money to this person and I now think I shouldn’t have,” because if they insist that they sent the money willingly, and they don’t consider that they were defrauded, then I guess fraud hasn’t happened.

And that’s the problem with a cybercrime like this.


DOUG. Yes, we do have one heartbreaking comment on the article, and another uplifting comment at the very end.

One of our readers thinks his mom is being scammed, and she’s not reacting well to her family trying to talk her out of it; and then we have another one where they caught a scammer red-handed, which was kind of an interesting story.


DUCK. Unfortunately, these crimes don’t just leave people brokenhearted and destitute. They can also leave you with a giant rift in your family circle.

That guy said, “My mom quit talking to me because I don’t believe this is the love of her life.”

The only advice we can really give is that if you have even an inkling that you might be in a scam, no matter how heart-rending it’s going to be to have to admit that, don’t “show the hand” to your friends and family if they’re trying to warn you.

They might be wrong, but they could very, very well be right… so give them a fair hearing.


DOUG. OK, we’ve got advice in the article, and a helpful video called Romance Scams: What to do?.

We talked about listening to your friends and family if they try to warn you; we also have things like: consider reporting it to the police; don’t blame yourself if you get reeled in; look for a support group; and most importantly, get out as soon as you realize it’s a scam.


DUCK. Yes, my advice there, very particularly, is: don’t say to the scammer, “Oh, I’m beginning to suspect you. I’ll give you one last chance to prove yourself.”

Remember, if they are a scammer, they’ve already reeled you in this far.

Do you think they’re going to have too much trouble with one little objection that you’re bringing now?

If you’ve decided it’s a scam, don’t tell them – just cut contact and then go and look for a local support group.

And, by the way, be very careful, if you break off connection with the scammers, if you suddenly get contacted by somebody claiming to represent a support group, or law enforcement, or a company that can help you get your scammed money back.

Because that is the classic “counter-scam”.

When the crooks realize you really have decided they’re scammers, then they come in trying to pretend to be the antiscammers!

There are numerous cases of people getting scammed twice. If you’re going to withdraw from a scam, only deal with people you actually know, and can meet, and that you can trust face to face.

Don’t just take help from anyone who comes up offering it online – it could be the scammers coming back.


DOUG. [SAD] Wonderful. The joys of human behaviour.

That is Romance scammer who targeted 670 women gets 28 months in jail on nakedsecurity.sophos.com.

We shift from human worms to Windows worms: a wormable Windows HTTP hole.

What do we need to know about this, Paul?


DUCK. This was a fascinating start to 2022, wasn't it? It was one of the many security bugs fixed in this month's Patch Tuesday…


DOUG. That was a big one!


DUCK. I think there were 102 bugs!

But one of them didn’t seem too harmful at first, perhaps because it didn’t say, “This bug is in the Microsoft web server that everyone knows.”

It was just described as HTTP protocol stack remote code execution vulnerability, or CVE-2022-21907.

So you kind of think, “Oh, it’s some low level code thing; probably doesn’t apply to me, because I’m not running IIS.”

And in that sense, it was a little bit like the trouble we had with Log4j, where everyone said, “I don’t have any Java servers.”

And we said: no, it’s not about servers; it’s about apps that are written in Java.

“I don’t have many of those…” Are you sure?

“Well, I do have some of them, but not many of them run Log4j…” Are you sure?

And then, as we’ve said on a couple of previous podcasts, when people would go looking for Log4j, they’d find, “Golly, there’s a lot more of it than I thought.”

The problem here is very similar, namely that HTTP.sys is a low-level driver that provides HTTP services to any program that needs to accept and answer web requests, *including IIS*.

In fact, IIS is implemented on top of this HTTP.sys, but it’s just one of dozens, or hundreds, or thousands of applications you could have that might use this thing.

Any program you have, whether you realise it or not, that contains some kind of web console, or web interface, or web port you can connect to, could be at risk of this bug if you haven't patched.

And what got everyone excited is, as Microsoft said in their Frequently Asked Questions list for this particular patch, “Is this wormable?”, meaning could somebody use it to write a self-spreading virus…


DOUG. Yes!


DUCK. They really did just put that one word!


DOUG. [LAUGHS] “Yes. Full stop.”


DUCK. And they said, “Microsoft recommends prioritising the patching of affected servers.”

Now, my opinion is that the wording of that was somewhat unfortunate, because it leads you to infer that this only affects servers. Where else would you have an HTTP service listening than on a server?

Of course, the answer is: loads and loads of programs these days use HTTP as their GUI, as their interface, don’t they? Many have a web console, even if they’re programs designed for an end user.

The bug is a function of a low-level driver in Windows itself, and that’s what needs to be patched.

I guess the good-news part of that is, once you’ve done this patch, every program that depends on HTTP.sys is implicitly patched along with it because they all rely on the same low-level driver.


DOUG. Okay, what… playing Devil’s advocate. What should I do if I’m not able to patch right away for some reason?


DUCK. I came up with a fix which worked in my limited testing. Very simple.

You just go into your registry (we’ve got a script on Naked Security that shows you how to do this), and you change what’s called the “start code” for the HTTP Windows service from the value 3, which means start when needed, to the value 4.

You just have to know that 4 means disabled; can’t start.

And that essentially fixes this problem, because no software can actually fire up this driver, therefore nothing can actually use it, therefore the bug can’t be tickled.
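That registry change can be sketched as a .reg file. (This is an illustrative sketch, not the Naked Security script itself; the HTTP service path and the Start codes shown are the standard Windows ones, but back up your registry and verify on your own system before relying on it.)

```
Windows Registry Editor Version 5.00

; Set the HTTP service's Start value to 4 (disabled).
; 3 = SERVICE_DEMAND_START (start when needed), 4 = SERVICE_DISABLED.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP]
"Start"=dword:00000004
```

To undo the workaround after patching, set the value back to dword:00000003 and reboot.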

The flipside of that is no software can use this HTTP service, so if it turns out that you *do* have an app where, without you realising it, part of its administration relies on a web-based console or a web based API… then that’s not going to work either.

So this is not a permanent solution; it’s just a workaround.

You ultimately need to fix this HTTP.sys file as part of the Patch Tuesday update.


DOUG. OK, that is Wormable Windows HTTP hole – what you need to know on nakedsecurity.sophos.com.

Now, it’s time for This Week in Tech History.

Lest you think we’d only talk about worms once in this episode… this week, on 20 January 1999, the world was introduced to the HAPPY99 worm, also known as Ska or Iworm. HAPPY99 was reported by several anti-virus vendors to be a pretty big pain in the neck.


DUCK. Believe me, it was jolly huge.

And it had a trick that you will grudgingly like, Doug.

The crooks did what you call the “B thing”: best/brilliant.

They avoided making spelling mistakes or typos or writing bad English.

They avoided all those problems simply by having no text.


DOUG. Aaargh.


DUCK. Brilliantly simple, isn’t it?


DOUG. Arrrrrgh!


DUCK. If you have zero characters, then you must, ipso facto, have zero spelling mistakes, typos, grammos, et cetera.

It simply arrived; it was an executable; it said HAPPY99.EXE; and if you ran it, it showed you a little fireworks display.


DOUG. [DOWNCAST] Yes, indeed.

All right, well, we’ve got two Serious Security articles lined up.

The first is about a Linux full disk encryption bug that has been fixed. But what happened before it was fixed?


DUCK. Usually, on Linux, when you’re doing full disk encryption – that’s the stuff that makes sure that if someone steals your laptop once it’s powered off, the disk’s just shredded cabbage unless you put in a password…

…on Linux, you’re probably using a thing called LUKS, Linux Unified Key Setup. And to help you manage LUKS, there’s a program called cryptsetup.

Unfortunately, as often happens with full disk encryption because it’s so useful, cryptsetup has an awful lot of features – probably a lot more than you would ever imagine you needed.

And one of the things that cryptsetup can do is reencrypt your disk in place – the option is called reencrypt.

What it means is that instead of just changing the password that decrypts the master encryption key, it actually decrypts and reencrypts your whole hard drive *while you’re using it*, so you don’t have to decrypt the whole thing and risk having it unencrypted for a while.

It all sounds fantastic, except that what the cryptsetup team did is: they figured, “Hey, we could use the same code if someone needs to decrypt the disk,” like they actually want to remove the encryption for some reason.

Or if they’ve got a disk that somehow never was encrypted and now they want to add encryption back.

So they thought, “Well, those are just special cases of reencrypt. So let’s fudge the system instead of writing them as separate utilities.”

Let’s just do them as “deviant cases” of reencryption…


DOUG. [LAUGHS]


DUCK. To cut a long story short, it turns out that if you’re using decrypt or encrypt functions, rather than the reencrypt function, then cryptsetup doesn’t take sufficient care about what you might call the metadata – the temporary data – that records how far it’s got.

So, somebody who has access to your computer *but does not know your password* can modify your hard disk and basically trick the system into thinking, “Oh, I was in the middle of decryption, but it broke halfway through.”

If you tried to do that when the person was *reencrypting*, it would go, “Uh oh! Someone’s been tampering with your disk: you need to investigate!”.

But those checks, if you were using the pure *decrypt*, were not made.

So somebody could get your computer while you weren’t looking, fiddle with it, and then when you rebooted and actually put in your password, at least some part of the disk might get decrypted.

And you wouldn’t realise, but you’d end up with at least one little bit of your disk decrypted.

Which means that if you’re relying on full disk encryption to say to the regulator, “By the way, if this laptop is stolen, I can promise you there is no plaintext data on here at all”…

…well, you might not be telling the truth, because there might be a small, medium or large chunk of data that *did* get decrypted without you realising it.

And it gets worse!

What a person could do is this: they could decrypt a chunk of your disk and then come back later. If you haven’t noticed, they could dig around in that decrypted data, which is no longer integrity protected; it’s just plaintext.

They could make some cunning modifications: maybe they could change a file name, or, if they could find fragments of something that looked like your browsing history, they could insert browsing history that made you look like a very naughty person indeed.

Then they could run the bug backwards! They could say, “You need to reencrypt this stuff.”

And the next time you booted and put in your password, your disk would “heal itself” by reencrypting the stuff that had inadvertently been decrypted, but *with unauthorized changes in it*.


DOUG. Ooooooooooh.


DUCK. So this sounds like, “Well, that’s not really a bug, is it?”

But what it means is that somebody with your worst interests at heart (say, somebody who wants to gaslight you), if they have access to your computer when you’re not looking, they could, *without ever having to find your password*, stitch you up with data on your disk that’s encrypted with your password.

So they could say, “How on Earth could I have done that? I don’t know the password. I can prove I don’t know the password, beyond reasonable doubt, at any rate. If it’s encrypted with your password, well, then *you* must have done it.”


DOUG. Yes.


DUCK. And this was a little loophole that meant that assumption didn’t necessarily hold…

…and therefore you should get the latest version of the cryptsetup program, because it adds the checks that should have been in the pure decrypt and pure encrypt functions.

It adds integrity checks that make sure that nobody tries to trigger decryption or encryption without actually having known the password in advance.

If you have cryptsetup, the version you want is 2.4.3 or later.
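A quick way to check whether you're on a fixed release, sketched in shell. (The installed version is hard-coded here for illustration; in practice you'd substitute the number printed by `cryptsetup --version`.)

```shell
# Compare an installed cryptsetup version against the fixed release (2.4.3).
# Hard-coded sample value for illustration; substitute the real output of
#   cryptsetup --version    (prints something like "cryptsetup 2.4.1")
installed="2.4.1"
required="2.4.3"

# sort -V orders version strings numerically; if the smaller of the two
# is the required version, the installed one is new enough.
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
    echo "OK: cryptsetup $installed includes the fix"
else
    echo "Upgrade needed: cryptsetup $installed is older than $required"
fi
```

With the sample value above, this prints the "Upgrade needed" message; swap in 2.4.3 or later and it reports OK.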


DOUG. All right, you can learn more about that – the article is on Naked Security at Serious Security: Linux full disk encryption bug fixed – patch now.

Well, it feels good to be getting back into a rhythm, a cadence, where another week goes by…

…and we now have an Apple bug to talk about.


DUCK. [LAUGHS] I was wondering where you were going with that, Doug!

Yes, this is an Apple bug. And annoyingly, it’s a bug in Safari, or perhaps more importantly, in WebKit, which is what you might call the browser engine that Safari uses.


DOUG. [IRONICALLY] Then I believe I’ll just go download Firefox for my iPad and I’ll be just fine, Paul. Right?


DUCK. Well, that’s the problem. If it’s not macOS, but rather iOS or iPadOS, Apple requires all web browsing apps to use WebKit.

So in iOS and iPadOS, you don’t really have a workaround. Or more importantly, if you think, “Oh, I’ll just go and get Firefox,” it won’t save you from this bug.


DOUG. So what actually causes the problem here?


DUCK. It is Featureitis and Complexity Considered Harmful yet again, Doug.


DOUG. What, again?


DUCK. Again, again.


DOUG. This is becoming a theme!


DUCK. As our listeners will surely know, what’s called stateful HTTP data – in other words, things that your browser remembers so that when you go back to a website, the website can tell that it’s you coming back…

Obviously, that’s good for tracking, but it’s also good for things like, “Should I use the big fonts or the small fonts? Should I be in mobile phone mode or desktop mode?” All of those things that you want to retain between one website visit and the next.

…traditionally, those were handled by data objects called cookies.

And without cookies, we could never have had websites that allowed you to login, because the website wouldn’t be able to remember, “Hey, this is the same person coming back.”

But it turns out that cookies are inefficient, because when you send cookies, you have to send all the cookies ever set by a website, every time you visit any page on the website, even if that page doesn’t need them.

And therefore most browsers have a strict limit on how much cookie data you could have.

So guess what happened? The browser people got together and they said, “Hey, let’s have a thing called web storage,” which is like big cookies that you can access with JavaScript. You only access web storage with JavaScript from a particular web page, when you know you need the data.

So you had cookies and web storage; two different technologies. One did not require JavaScript; one did require JavaScript. One was limited in how much state data it could save; the other was much more flexible, and let you save much bigger objects.

But even web storage wasn’t good enough, Doug, because the funny thing seems to be that the more we embrace the cloud, the more we expect our browser to behave as if it were a locally installed application.

So, along came a thing called IndexedDB, which is, if you like, a THIRD type of cookie.

We’ve got cookies that go in the web headers; we’ve got web storage, which is a kind of loose, informal little mini-database that JavaScript can access; and we’ve got IndexedDB, which is nearly-but-not-quite a browser-side SQL database.

It doesn’t actually use SQL, but it lets you store much larger chunks of data – such as whole documents or whole sets of documents, if you’re doing a content management system; or massive images, if you’re writing a cloud-based image processing program, for example.

You’ve got cookies for small amounts of data; web storage where you need a bit more; and IndexedDB where you want significant amounts of structured data.

Because when two things can do something badly…


DOUG. [LAUGHS]


DUCK. …three things can do it even [PAUSES] better, apparently.

And the problem is – it’s really tiny – that on Safari, or on WebKit, there’s a special function called indexedDB.databases that, when you call it, gives you a list of all the currently active IndexedDB databases known to the browser.

But it gives any web page, any tab, any window, any website, access to the full list of database *names*.

It enforces the Same Origin Policy that says that website X cannot read the IndexedDB databases of website Y, so a website can only access its own cookie, its own web storage, and its own IndexedDB data.

But all tabs can access the list of database *names*, which, as tiny as it sounds, turns out to be a step too far.
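The shape of the leak is easier to see with a tiny mock. This sketch is plain Node-runnable JavaScript imitating the buggy behaviour, not the real WebKit code or the genuine browser API: database *contents* stay walled off per origin, but the *name list* is shared with everyone.

```javascript
// A simplified model of the bug (hypothetical stand-in, not real WebKit code):
// database contents honour the Same Origin Policy, but the name list
// returned by indexedDB.databases() was effectively global across tabs.

const globalNames = new Set();      // shared across every "tab" (the bug)
const perOriginData = new Map();    // contents keyed by origin (SOP intact)

function openDatabase(origin, name, data) {
  globalNames.add(name);                                   // leaks to everyone
  if (!perOriginData.has(origin)) perOriginData.set(origin, new Map());
  perOriginData.get(origin).set(name, data);               // private to origin
}

// What any page could read: its own data, plus *everyone's* database names.
function databases() {
  return [...globalNames];                                 // the over-shared part
}
function readOwnData(origin, name) {
  return perOriginData.get(origin)?.get(name);             // SOP-protected part
}

// A logged-in user visits a mainstream site, which gives its database a
// telltale name (origins and names here are made up for illustration):
openDatabase('https://accounts.example', 'LoginDb-user1234567890', { secret: 42 });

// A snooping page on a different origin cannot read the contents...
console.log(readOwnData('https://snoop.example', 'LoginDb-user1234567890'));
// ...but it can still enumerate the telltale name:
console.log(databases());
```

The first lookup comes back undefined (the Same Origin Policy held), yet the second call still hands the snooping page the revealing database name – which is the whole problem.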

Because as the researchers who found this – it’s a company called FingerprintJS; they go looking for browser anomalies…

…as they discovered, lots of mainstream websites, when they create one of these IndexedDB databases for their own use, give it a bit of a telltale name.

They don’t just call it blah or db; they’ll name it in a way that indicates what service it belongs to.

This is like saying to a crook, “I’ve locked you out of all the data on my computer, but I *will* let you download a list of all my filenames.”


DOUG. Ah hah!


DUCK. You can imagine that there are a lot of secrets in your file names, in your so called metadata.

And the other thing the researchers discovered – they particularly looked into this for Google, but this is not really Google’s fault; not blaming Google…

…apparently Google uses your Google User ID as a database name, which is some random string of characters.

Now, that doesn’t tell somebody who can list that unique identifier *who* you are. A crook with a website that’s abusing this function won’t know that “Doug” is represented by this particular hexadecimal string.

But every time “Doug” visits their website, even if “Doug” has tracking protection on that tries to stop them figuring where you’ve been…

…you’ll come back *with the same Google User ID* if you’re still logged into Google.

So they won’t know who you are, but they will know that it’s the same person coming back over and over again – without setting any cookies or doing anything devious of their own.

It’s almost as though this IndexedDB database list can act like a kind of a supercookie: I don’t know who it is, but I do know it’s the same person every time.

That’s information that you probably never intended to give out.

And that’s why this bug is important, considering the effort that browser makers over the years have put into eliminating all this treachery that people could do with so-called supercookies – that’s where crooks use things like “which websites have you visited using HTTPS instead of HTTP?” as a way of tracking who you are, or “which fonts do you have installed?”, or “what screen resolution are you using?”

All those things that people would dubiously use to try and fingerprint you as an individual user can be done with IndexedDB.

And as we’ve lamented many times before, Apple aren’t saying when they’re going to fix this.

But the reason that FingerprintJS wrote about it now is that they can see, from the open source components in WebKit, that Apple programmers seem to be looking at this, and they’re beginning to merge in a whole load of changes which will fix it.

So there is a patch to Safari/WebKit probably coming soon… but Apple doesn’t believe in telling you that it’s coming.

You just have to assume that it is. So watch this space.


DOUG. OK, we’ll keep an eye on that! That is Serious Security: Apple Safari leaks private data via database API – what you need to know, on nakedsecurity.sophos.com.

And it is that time of the show: the Oh! No! of the week.

Reddit user diligentcockroach700 writes…


DUCK. Does that mean there are 699 diligent cockroaches before him or her?


DOUG. I know! Imagine trying to secure that username!


DUCK. [PRETENDING TO BE A SUPPORT BOT] “Other usernames you might like…”

DOUG. Yes, 700: a lot of cockroaches; they’re hard to kill.

[TELLING THE STORY] Back in the 1980s, I was working for a telecomms company in the UK. We had a Digital Equipment Corporation PDP-11 that I was in charge of, which was in an environmentally controlled room.

One Monday morning, I got to the office to find the computer was completely dead. I rushed into the computer room to find ladders, pots of paint, paint brushes, and a giant dustsheet completely covering the PDP-11, which by now was so hot it was almost glowing.

(If no one’s ever seen one of those, it’s about the size of a refrigerator that you put in your kitchen… it’s a giant computer.)

Apparently the Office Services Department had decided the room needed decorating, but didn’t bother to tell anybody.

I shut the power off to the computer, removed the dust sheet, and left it to cool down.

Later, I tried to reboot it, but it wouldn’t work. We ended up having to call in DEC engineers from the US and replace most of the fried internals. My manager made the Office Services Department pay the several-thousand-pound bill out of their budget.

So, imagine – a giant computer the size of a refrigerator – how hot that would get…


DUCK. [LAUGHS]


DOUG. …and then putting a painter’s tarp over it to paint the room it was in.


DUCK. That was the world’s most expensive paint!


DOUG. Uh huh!

Well, I’m guessing that by now they’ve probably replaced that PDP-11 with something a little bit more svelte.


DUCK. Probably ten times more powerful, like a Raspberry Pi Zero.


DOUG. Or a cellphone!

Anyway, if you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any of our articles, or you can hit us up on social @nakedsecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you…

Until next time…


BOTH. …stay secure!

[MUSICAL MODEM]

