Instagram scammers as busy as ever: passwords and 2FA codes at risk

We monitor a range of email addresses related to Naked Security, so we receive a regular (a word we are using here to mean “unrelenting”) supply of real-world spams and scams.

Some of our email addresses are obviously directly associated with various Sophos-related social media accounts; others are more general business-oriented addresses; and some are just regular, consumer-style emails.

As a result, we like to think that our personal scam supply is a reliably representative sample of what the crooks are up to…

…and, as you’ve probably noticed yourself, even though we see all the “old favourites” pretty much all the time, we often see bursts of one specific scam topping our personal prevalence charts.

At one point, sextortion scams were in the #1 spot (that odious sort of message turned into a real deluge in 2019 and 2020).

Then home delivery and parcel scams went wild for a while; then we had a flurry of Docusign ripoffs.

Right now, however, our scam feed is awash with a variety of frauds targeting Instagram, Instagram, and Instagram.

Instagram scams of many sorts

In the past few days, we’ve had bogus Instagram warnings, complete with Instagram branding, in each of these categories:

  • Fake warning: Community guidelines violation. Proposed solution: Contact us to find the content that needs to be removed to clear the block.
  • Fake warning: Copyright infringement. Proposed solution: Dispute the claim and cancel the strike against you by filling in the form.
  • Fake warning: Suspicious login alert. Proposed solution: If this wasn’t you, click through now to secure your account.

Although most of the examples we’ve received were old-style username-and-password phishes, one went on to request our 2FA code as well.

Even though 2FA codes are typically valid for only a few minutes, cybercriminals no longer simply collect phishing data to use later.

Many cybergangs use manual or automatic techniques that alert them as soon as victims visit their phishing sites, allowing the crooks to react in real time.

If they can trick you into handing over a 2FA code as well as your password, they will try that password-and-2FA code combination immediately, knowing that, if they’re quick enough, they’re likely to get their attempt in before the 2FA code expires.

While this is not exactly exciting or unexpected news, it’s a reminder that these scams are almost certainly still delivering results for the cybercriminals – potentially giving them instant access to established, trusted social media accounts in moments.

And although these scams usually aren’t too hard to spot…

…the crooks are getting better and better at making them easier to miss.

It’s easy to miss the warning signs and fall into the trap if you’re in a hurry, or if you’re distracted by other events (and who isn’t ATM?), or if you’re a delightful, trusting person who thinks, “Oh, there’s obviously been some mistake. Surely just the matter of a moment to sort it out, thanks to the handy and official-looking form provided.”

What to look for

Here’s what the fake warnings we’ve received have looked like; if you have friends or family whom you think might be tricked by this sort of message, please share this article with them so they know that they’re one of millions of people receiving the same fraudulent messages.

It’s often easier to convince people near and dear to you if it’s someone else behind the advice you’re offering – if nothing else, it sounds less “preachy” or judgmental if someone they don’t know is saying it.

And, sometimes, pictures are worth 1000 words, so here’s what they looked like.

1. Fake “Suspicious login alert” sample:

2. Fake “Community guidelines violation” sample:

3. Fake “Copyright infringement” sample:

What happens if you click through?

Here’s an example of the sort of follow-up pages that you’d see if you clicked through – this is the “suspicious login” sequence:

And here’s the fake “copyright appeal” – take note of the website name in these images, where what looks like an upper-case I (eye) is actually a lower-case L (ell):

Finally, here’s the fake “community violation”, complete with a phishing page that tries to grab your 2FA code (or one of your backup codes if you don’t have your phone handy) for the crooks to try to break into your account right away, in real time:

What to do?

  • Don’t click “helpful” links in emails or other messages. Learn in advance how to handle Instagram complaints or security warnings, so you know the procedure before you need to follow it. Do the same for the other social networks and content delivery sites you use. If you already know the right URL to use, you never need to rely on any links in any emails, whether those emails are real or fake.
  • Think before you click. The emails above are only vaguely likely, so we hope you wouldn’t believe them in the first place (see point 1), but if you do click through by mistake, don’t be in a hurry to go further. The fraudulent sites above had HTTPS certificates (padlocks) and server names that included text such as “lnstagram” (note: with an L, not an I!), but they clearly weren’t hosted on the genuine Instagram site. A few seconds to stop and double-check the site details is time well spent.
  • Use a password manager if you can. Password managers help to prevent you putting the right password into the wrong site, because they can’t suggest a password for a site they’ve never seen before.
  • Watch our video below for additional advice. Early in 2021, we presented a Facebook Live talk looking at the history and evolution of this type of scam. If you have any friends who rely on social media to generate income, and who might be worried about getting cut off from their accounts, show them the video to protect them from tricks like these.
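The “lnstagram” trick mentioned in point 2 is worth a closer look. Here’s a minimal Python sketch (ours, purely illustrative – not Sophos or Instagram code, and the function name is invented) of why an exact, case-insensitive match against the known-good domain beats eyeballing the address bar:

```python
# Purely illustrative sketch. In many sans-serif fonts, a lower-case L
# ("l") renders almost identically to an upper-case I ("I"), so
# "lnstagram" passes a quick visual check even though it is a
# completely different hostname.
REAL = "instagram.com"

def is_real_instagram(hostname: str) -> bool:
    # Exact, case-insensitive match against the known-good domain, or a
    # subdomain of it -- effectively what a password manager does before
    # offering to fill in your credentials.
    h = hostname.lower()
    return h == REAL or h.endswith("." + REAL)

print(is_real_instagram("www.instagram.com"))  # True
print(is_real_instagram("lnstagram.com"))      # False: starts with an ell
```

A computer, unlike your eyes, never confuses an ell with an eye, which is one reason the password manager advice above works.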

Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.

[embedded content]


Did we learn nothing from Y2K? Why are some coders still stuck on two digit numbers?

If you use Mozilla Firefox or any Chromium-based browser, notably Google Chrome or Microsoft Edge, you’ll know that the version numbers of these products are currently at 97 and 98 respectively.

And if you’ve ever looked at your browser’s User-Agent string, you’ll know that these version numbers are, by default, transmitted to every web page you visit, as a kind of handy hint to say, “Look who’s coming to dinner.”

In an ideal world, the User-Agent header would be entirely redundant, given that websites are supposed to float disinterestedly above such petty details as which operating system you have, what CPU it’s running on, how many bits it works with, what graphics system you’re using, and which brand of browser you’ve chosen.

But here on Planet Earth, some websites need to know these details in order to adapt their behaviour accordingly, and many websites love to know them because…

…well, because from data like this you can mine information; from information you can infer knowledge; and knowledge, as the saying goes, is power.

What is your browser giving away about you?

If you’ve never seen your browser’s headers in real life, there are two easy ways to do so.

The first is to use your browser’s Developer Tools (try Ctrl-Shift-I), open the Network tab and then visit a website – the content of each outgoing HTTP request, including headers, and its related HTTP response, gets logged for you and can be examined at your leisure.

After loading the page, click on one of the requests, choose the Headers tab and scroll to the Request Headers section:

The second fun way is to watch from the other end of the connection by pretending to be a web server.

Install the Nmap toolkit from nmap.org, open up a command prompt (or a shell, or a terminal window, if you prefer those terms), and use the ncat command to listen for incoming local network connections, say on port 7777.

Then put the URL http://127.0.0.1:7777/ into your browser’s address bar, to tell your browser to connect to the listening ncat process, where the HTTP request will be received exactly as transmitted, and the headers therefore printed out on the screen in the order they arrived.

Here’s the current version of Firefox (97.0.1 on 2022-02-25T16:00Z) calling an ncat pseudo-webserver:

$ ncat -vv 127.0.0.1 -l 7777
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on 127.0.0.1:7777
Ncat: Connection from 127.0.0.1.
Ncat: Connection from 127.0.0.1:54810.
GET / HTTP/1.1
Host: 127.0.0.1:7777
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1

(You’ll need to hit Ctrl-C in the ncat window to close the connection, otherwise your browser will sit there indefinitely, waiting for an HTTP reply that never comes.)

The current version of Edge, based on Chromium, is (by chance, not by design) one ahead, at 98:

$ ncat -vv 127.0.0.1 -l 7777
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on 127.0.0.1:7777
Ncat: Connection from 127.0.0.1.
Ncat: Connection from 127.0.0.1:54738.
GET / HTTP/1.1
Host: 127.0.0.1:7777
Connection: keep-alive
sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="98", "Microsoft Edge";v="98"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9

As you can see, there are plenty of different version numbers and other details that an interested web server could extract from those headers: a single-digit Mozilla number (5); a three-digit AppleWebKit and Safari number (537); two- and four-digit components in the Edg designator (56, 1108).

How hard could it possibly be for a modern website – one that is probably complex enough that it has funky JavaScript menus, third-party analytics and tracker add-ins, high-resolution images and perhaps even video and audio content – to make sense of a simple text string with an obvious textual pattern, such as the data you see in the User-Agent strings above?

Hard enough, apparently, that both the Firefox and Chromium communities have been fretting about what to do when their respective browsers reach version 100, and the first part of any multi-part version number switches from two digits to three.

Amazingly, though thankfully quite rarely, there really are still websites that will get flummoxed when the switchover happens, and will make millennium-bug style blunders by failing to figure out the version number at all.

Some sites, indeed, are still making Y2K-type calendar miscalculations by “figuring out” that any number of 100 or more “computes” as less than 99, or 98, or 97, or presumably any other positive integer all the way down to 1.

Because the header processing is done on the server, we can only guess at exactly what sort of bugs exist on servers with this problem. Some servers might call v100 browsers “old” when they actually mean “we hit a parsing problem, so we’re blaming you and falling back on our default error page”. Others might interpret the string “100” as 10, if they simply chop off the end of the string to limit it to the usual two characters, or as 00 if they truncate it from the other end. Or they might end up with zero as a sort of uninitialised default, meaning “we hit an error but didn’t realise”. Because both 0 and 10 are a lot less than 97 or 98, the server might therefore settle for the convenient assumption that you haven’t updated your browser for a decade, rather than accepting that there could be a server-side bug and giving you the benefit of the doubt.
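To make the blunder concrete, here’s a minimal sketch (ours, not taken from any real server) of the two-digit assumption failing on a three-digit version number:

```python
import re

# A User-Agent string from a hypothetical Firefox 100.
ua = ("Mozilla/5.0 (X11; Linux x86_64; rv:100.0) "
      "Gecko/20100101 Firefox/100.0")

major_text = re.search(r"Firefox/(\d+)", ua).group(1)   # "100"

# BUGGY: assume the major version is always exactly two characters,
# so "100" gets chopped to "10" -- a browser "a decade out of date".
buggy_major = int(major_text[:2])

# CORRECT: take all the digits the browser actually sent.
correct_major = int(major_text)

print(buggy_major, correct_major)   # 10 100
```

One extra character in the regex capture, and the whole problem vanishes, which is rather the point of the section above.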

Surely some mistake?

We’d largely ignored this issue, which Firefox and Chrome alike have been testing for since 2021 by providing experimental settings for testers that made the browser report a major version of 100 ahead of time.

Firefox even has a special “compatibility” setting (visit the URL about:compat to see these), kicked off some three months ago, to build a list of known websites that might need lying to when version 100 rolls around for everyone.

Chromium browsers, likewise, introduced a special flag dubbed force-major-version-to-100 (visit chrome://flags or edge://flags to find it) so testers could try out a version number of 100 ahead of time.

Indeed, Chromium browsers even have a special flag called force-minor-version-to-100, so that instead of, say, 98.0.4758.102, as you saw above, you’ll get something like 98.100.4758.102 (or the slightly weird hybrid version number 98.100.1108.56 on Edge) instead.

That “minor version” flag was put there specifically to test the viability of a third special flag workaround, which will be available when version 100 comes along: the “surely we don’t need something this silly in 2022” option called force-major-version-to-minor:

We turned this on to try it out. (It’s not enabled by default.)

We didn’t think a hack of this sort would be useful, or even necessary, but we were forced, if you will pardon the poor pun, to explore this new option…

…when we noticed that the release notes for the latest developer version of Microsoft Edge, which came out last night, specifically mentioned that the new release:

Enabled a management policy from Chromium to force the major version number to the minor position in the user agent string, which is a temporary policy to freeze the major version number at 99 and place the actual version number in the minor position, for example turning version 101.0.0.0 into 99.101.0.0.
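In code terms, the transformation described in those release notes looks something like this (our own Python sketch; the function name is invented):

```python
def freeze_major_at_99(version: str) -> str:
    # Sketch of the described policy: pin the major version at 99 and
    # shift the true major number into the minor position, keeping the
    # remaining components as they were.
    parts = version.split(".")
    return ".".join(["99", parts[0]] + parts[2:])

print(freeze_major_at_99("101.0.0.0"))   # 99.101.0.0
```

The result is a version string that stays two digits wide in the position that fragile parsers care about, while preserving the real version number for anyone who knows where to look.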

Edge-dev, as the Developer channel version is known, runs one major version ahead of Edge Beta, which runs one version ahead of Edge Stable, which is what most people use, especially on business computers.

Because Edge Stable is now at 98 (see above), that means Edge-dev is already at 100, as you can see from ncat here, when we visited with the latest Edge-dev version:

$ ncat -vv -l 7777
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on :::7777
Ncat: Listening on 0.0.0.0:7777
Ncat: Connection from 127.0.0.1.
Ncat: Connection from 127.0.0.1:54746.
GET / HTTP/1.1
Host: 127.0.0.1:7777
Connection: keep-alive
sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="100", "Microsoft Edge";v="100"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4867.0 Safari/537.36 Edg/100.0.1169.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9

The Edge-dev team clearly seem to think that there are still sufficiently many websites out there that aren’t Y2K, sorry, v100 ready that the Chromium “fallback plan”, as it’s known, hatched back in December 2021, can be considered vital rather than merely useful:

What could go wrong?

The website webcompat.com, which is monitored by Mozilla volunteers, amongst others, has a GitHub page where you can report numerous types of incompatibility, including web bugs relating to V1H problems.

(We’ve dubbed this the V1H bug, using H to stand for hecto-, from the Greek word for 100, as in hectopascals, or hPa, used as a standard unit for barometric pressure, or hectare, denoting a land area of 100m x 100m, echoing the way that Y2K used K for kilo-, meaning 1000.)

We installed Edge-dev, and tried one of the sites recently reported in the Webcompat V1H list, namely daimler.com, which redirected us to a Mercedes-Benz page that decided our three-digit browser version was way out of date, rather than brand new:

With Edge Stable, currently at v98, the site worked fine, with the Mercedes-Benz redirect showing us a page to inform us that the company Daimler AG has, since the start of this month, been renamed to Mercedes-Benz Group AG.

Ironically, perhaps, the daimler.com site didn’t do any better when we activated the force-major-version-to-minor option, making the browser seem to be v99 with a minor identifier of 100, which suggests that a three-digit minor version number is beyond its comprehension.

What to do?

  • If you’re a web user, the transition will probably be like Y2K: most sites will work fine, and many will never have had this as a potential bug anyway. But if there are problems with sites you need to reach, at least you know that there are workarounds anticipated by the browser makers to help you out.
  • If you’re a web programmer, this sort of thing really shouldn’t be a problem for you. After all, if three-digit version numbers are beyond your grasp, what impression does that give your visitors about the reliability of how you might process other variable-length data such as payment amounts, credit card details, postcodes and other personal information?

There are still a few weeks left before the general public starts calling you with Chrome 100, or Edge 100, or Firefox 100, so test your own web properties before it’s too late.

Now you know how!


S3 Ep71: VMware escapes, PHP holes, WP plugin woes, and scary scams [Podcast + Transcript]

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. VMware holes, PHP flaws, WordPress bugs, and sextortion.

All that and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everyone.

I am Doug Aamoth; he is Paul Ducklin…


DUCK. That is I!


DOUG. We have a lot to cover today.

Paul, we love to start the show with a Fun Fact… and I don’t know if you at one point or another were a Nokia man?


DUCK. [SINGS THE NOKIA TUNE] Didda-loo-doo/Didda-loo-doo/Didda-loo-doo/DOOO.


DOUG. So good!


DUCK. You know that “dah-dah-dah dit-dit dah-dah-dah”. the tone for Nokia SMS, is just Morse code for “SMS”?


DOUG. Yes, Morse code for SMS – I did know that!

As I was researching these Fun Facts, because this fact will tie into our This Week in Tech History segment…

…this is a fun fact about the old-school Nokia phones, which had a reputation for excellent durability.

And, among those, the Nokia 3310 handset, circa September 2000, is believed to be the most durable of the bunch.

So if you had a Nokia handset around that time, odds are it was the 3310 and it was indestructible.

I myself was a Nokia 6110 man – that came out in 1998.

I remember buying it – I was working at a computer superstore called Circuit City; I was selling computers and I bought it on the employee discount.

There was a cellphone plan for $50 that gave you 70 daytime minutes and 200 nighttime and weekend minutes.

For $50 – and I thought that was a great deal.


DUCK. I had the…was it the one after? The 6210?


DOUG. That was a good one.


DUCK. And that went missing.

So, I thought, “I’m just going to get a cheap phone.”

And I got… I think it was the 8210, the little tiny one.


DOUG. Yes.


DUCK. Like the 3310, but even smaller.

And I must say, Doug, that phone had the best voice quality of any phone I’ve had before or since.


DOUG. Yes – isn’t that weird?

It’s gotten worse somehow.


DUCK. And it had the world’s weirdest camera.

I think it was 200 kilopixels, that camera.


DOUG. [LAUGHS]


DUCK. I took pictures of beaches I’d visit, and afterwards I was unable to recognise what the photograph even was.

I’d infer they were beaches because they’d be somewhat browny-coloured at the bottom and mostly blue at the top.

What a hopeless camera!

But then people didn’t buy phones as cameras, did they?


DOUG. No.


DUCK. They bought them to make telephone calls!

And at that, I must say, it was a superstar.

And how often did you have to charge your phone, Doug?


DOUG. Once a month maybe?

Let’s talk about this VMware story.

This is interesting because as I was reading it, I was like, “What’s the big deal?”

And then you explained what the big deal was – I was perfectly lured into the trap of it not being a big deal.


DUCK. Well, the problem with the main bug that we’re talking about here – or main bugs: CVE-2021-22040 and CVE-2021-22041 – is that although you need to be a local administrator (basically, to have root access already) in order to exploit this bug…

…that root access can be inside a guest virtual machine on the shared computer, not the host.

And of course, if you’ve got a virtual machine in the cloud, you might not know who is running the other VMs on that physical server.

Even if it’s a non-cloud virtual machine server and it’s in your company, you might have several different departments who are each expected to keep their data private from each other – for GDPR reasons, or just common sense reasons.

You need to assume that on any virtual machine server, say one that has 10 VMs running on it, there are going to be 11 different administrator accounts and passwords: one for the host, and one for each of the guests.

And the whole idea is, as the host administrator, you’re not supposed to have to worry about the guests.

If they’re untrustworthy, they’re untrustworthy *only inside their own VM*.

So, the problem with these bugs is they could lead to what are called “guest VM escapes”.

In other words, somebody who has root access inside one of the pseudo-servers could somehow escape from it and manipulate either: other guest computers which might not belong to them, or, worse, the host server, which almost certainly means that they could then reach in and fiddle with all the other guests as well.


DOUG. So, we’ve got a patch, and if you can’t patch, we have a temporary workaround.

So, two ways to sidestep this issue at the moment?


DUCK. Yes.

The patch is the right way to do it, because it’s not just these guest escape bugs that you’re patching.

There are a whole load of other bugs as well – they don’t seem to be as serious or severe, but why patch one thing when you can patch seven things at the same time?

But, like you said, if you can’t quite do that yet, for the guest escape bugs there is a workaround.

Unfortunately, as I understand it, it basically means you get no USB simulation or no USB access in your guest VMs.

So, if you have a guest VM where you expect to be able to emulate USB devices, for example, they’re not going to work.


DOUG. OK, that is: VMware fixes holes that could allow virtual machine escapes on nakedsecurity.sophos.com.

And next we’ve got a PHP flaw.


DUCK. Yes, it’s one of those things where you find yourself thinking, “OK, so somebody who is really naughty and who takes the proof-of-concept could crash my PHP process, and that could stop my web server responding until the relevant process gets restarted.”

Is that a big deal?

But the crash is actually caused by deliberately forcing memory mismanagement.

In particular, a memory “use-after-free”, which is where you basically go and poke a knitting needle into somebody else’s memory and potentially modify it in a completely untrusted way.


DOUG. When Mozilla issues patches, they are quick to point out that, when a bug shows evidence of memory corruption, they say you should “presume that with enough effort, some of these bugs could have been exploited to run arbitrary code.”


DUCK. Yes, that is quite right!

Because although that might be difficult to achieve, you can imagine that the payback, for a cyber crook who manages to figure it out, could be huge.

Once they know where to start looking, it’s a heck of a lot easier for the Bad Guys to reverse-engineer the exploit from the patch than it is for them to figure it out as a zero-day attack in the first place.

So, I always warm to Mozilla when they put that in basically every security update they do.

They could spend days or weeks on each one of them, to prove that it really is exploitable or not…

…instead, they just go, “You know, we’ve patched these and we’re assuming that, if somebody wanted to, and you didn’t patch, they could be exploited in the future.”

So, be warned.

And the irony, Doug, is that it was essentially incorrectly-managed input in a routine that was supposedly all about input validation.


BOTH. [LAUGHTER]


DUCK. We shouldn’t really laugh!


DOUG. No…

Fortunately, if you’re a PHP user, the fix is as simple as updating and patching.

We’ve also got some advice for programmers.

We do like to say, “Validate Thine Inputs”, just in case… but there’s other advice here as well.


DUCK. In the article, I’ve put two diffs (that means “code differences”) comparing the previous version and the fixed version.

In this case, the function deals with checking the validity of what are called “floating point” numbers or “decimals”, like 2.5, or 3.14159 (that’s pi), or whatever.

And another thing you can do is you can say, “Oh, and I want to make sure whether the number provided falls within a certain range.”

For example, where somebody is giving a scaling factor, you might want that scaling factor to run from -1 to +1.

And it turns out that, under some circumstances, if you send input that fails the check, then instead of the check just failing, what happens is this: the code frees up the memory that you’re using to store the number, and then it’s supposed to immediately reallocate new storage to store the validated number.

In one of the places, that happens correctly.

Basically, the programmer does what you might call, “Look for oncoming traffic. If clear, step into the road, and cross in one go.”

In one of the places that’s the order they do it.

In the other place, they’ve managed to get the three lines of code in the wrong order…


DOUG. [LAUGHS]


DUCK. …and they basically go, “Step into the road. Then check if there are any cars coming…


DOUG. [LAUGHS]


DUCK. …and if there aren’t/you’re still alive, complete the crossing.”

And it looks to me as though what happened is that in one part of the code, the range checking was added, and then someone said, “Oh, we should put that in this other, similar part of the code.”

And they copied the line that does the error checking, but they pasted it into the wrong place – *between* the “memory free” and the “memory reallocate”, instead of either before both of them or after both of them.

(Obviously it should be before, because the idea is, if it’s an error, you’re just going to bail out immediately.)

So the fix was: moving one line of code down, one line in the file.

So, the advice to programmers is this…

Particularly in C, it is easy to make allocation and deallocation errors, and they *all* matter, so when you’re doing a code review, you need to check them all.

And the second thing is – I think if I were to refactor or rewrite this code… because it’s a frequent idiom in this particular module, “free up the memory and allocate a new block”, why do that in two lines of code?

Why not create a function called something like “free_up_­and_reallocate_­in_one_go()”?

That way the programmer who comes after you can’t copy and paste a line of code in between two lines and break things.

Because there’s only one line, they can only paste ahead of it or after it.

And in this case, either of those would have worked out OK.
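As an aside, the road-crossing analogy can be sketched in a few lines of Python – purely illustrative, since the real code is C inside the PHP interpreter, and both function names here are invented (a Python list stands in for the allocated storage):

```python
def store_correct(value, lo, hi, storage):
    # Check first ("look for oncoming traffic")...
    if not (lo <= value <= hi):
        return storage            # bail out; old storage untouched
    # ...then "free" and "reallocate" in one uninterrupted step
    # ("cross the road in one go").
    storage = None
    storage = [value]
    return storage

def store_buggy(value, lo, hi, storage):
    storage = None                # "free" -- step into the road...
    if not (lo <= value <= hi):   # ...THEN look for traffic
        return storage            # error path returns with storage gone
    storage = [value]
    return storage

print(store_correct(9.9, -1.0, 1.0, [0.0]))  # [0.0]: rejected, data intact
print(store_buggy(9.9, -1.0, 1.0, [0.0]))    # None: the use-after-free analogue
```

In Python you merely end up with None; in C, the error path leaves a dangling pointer to freed memory, which is the whole problem.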

So, the devil is in the details, as they say, Douglas.


DOUG. Very good.

All right, that is: Irony alert! PHP fixes security flaw in input validation code on nakedsecurity.sophos.com.

And now we have a WordPress plugin bug.

OK, it’s a bug – we can talk about the bug, but the way the company *handled* the bug was really impressive, Paul.


DUCK. It’s not the most dramatic bug in the world…

..but it could be problematic, as the company that created it explained.

And so I thought it was worth reminding people who are WordPress users, and who have these particular plugins – there’s a free version called Updraft, and the premium version, the paid version, called Updraft Plus… if you’re using those, they’re backup plugins that help you look after the content of your site.

So, if someone messes something up, you can restore it.

But the bug could have bad consequences.

The problem is this.

Anybody who has a login on your site (so it’s not an “unauthenticated bug”, but with many sites, you might have the administrators, and you might have dozens or even hundreds of contributors who are allowed to upload and put articles in there, and then somebody else has to approve them)…

Any user who can log into your site could, in theory, if they knew how this bug worked, could just get the whole backup of your whole site in one lump.

Anyway, when I read the report from the Updraft team, I thought, “My goodness, although this is a somewhat modest bug, and it was quickly fixed, if only more security reports were like this one!”

Clear; written in plain English; no excuses; and a genuine and believable apology at the end.

Even if you don’t use this plugin, you might want to go and read this report, because I think it’s a good example of how you can do security reports well, and perhaps win back trust.

With a less considered response, you might have had exactly the opposite reaction, and actually made your customers feel worse off.


DOUG. All right, that is: WordPress backup plugin maker Updraft says you “should update”.

And it is time in the show for This Week in Tech History.

Well, we talked about Nokia earlier in the show, and this week, on 23 February 2005, we said hello to the first mobile phone virus.

It was a worm called Cabir that affected the Symbian operating system, which was popular on Nokia phones.

The worm spread via Bluetooth to nearby handsets, and didn’t actually do true damage, other than affecting battery life thanks to constant Bluetooth polling.

And it was believed to be released by its creators more as a proof-of-concept, or a warning that mobile malware was indeed possible.

So, Paul, where were you when Cabir broke out?


DUCK. Well, I was still a Nokia user!

This wasn’t followed by an absolute deluge of mobile phone malware, possibly to the collective relief of the cybersecurity industry and users.

But it was a reminder to all of us that, well, “Here’s another operating system that you have to know something about.”

And, boy, Symbian was kind of complicated, Doug!


DOUG. I remember, yes!


DUCK. Do you think Android has lots of variants?

Well, with Symbian it was the same sort of thing – it was a fascinating and complicated ecosystem, Symbian.

[QUIZZICAL] And then, for better or for worse, it just disappeared…


DOUG. I think I read somewhere that, at its height, it was on 70% of handsets…


DUCK. Which is why, when malware like Cabir came out, there was that sense of, “My golly, if the crooks really figure this out, and they figure out how to make money out of this, we’re all doomed, because everyone’s vulnerable!”

By good fortune, we got a few years to think about it before mobile malware did become the sort of problem it is today…

…which I think took more powerful phones.

Suddenly the crooks could go, “Hey, I don’t have to strip down my malware. I don’t have to write this super-miniature thing that doesn’t really do anything. I can just use the same techniques that I would when I’m writing a regular app. I’ll just be naughty about it.”


DOUG. All right.

If we stay in the “old school” for a little while here, we’ve got a new sextortion scam that uses an old-school tactic that I, for one, have not seen in quite some time. But it’s back!


DUCK. You mean, “Let’s send the entire text of the email not as an attachment, not as a link that has to be downloaded, but as a pre-rendered, decent-resolution inline image?”


DOUG. Exactly!


DUCK. It’s an old technique designed to present difficulties, particularly to mail scanning or content scanning software that relies on things like linguistic analysis.

You convert the text, in advance, into an image, so that anyone who wants to do any kind of text scanning or natural language processing on it – or, indeed, to look for any links that are in there – has to do some kind of text recognition first.

This is not only error-prone, it also adds a whole load of extra computational complexity to preprocess every image into text.

But it died out among the crooks, I think, because when you have an image, you can’t easily have a clickable link in it.

On the other hand, if what you want to do is scare the person, present what looks like an official document and say, “Read this, think about what we’re saying to you, and email us back and maybe you won’t be charged with these serious criminal offenses allegedly related to viewing of online porn”…

“Contact us: maybe you have a good explanation as to why this might have happened innocently. We’re all ears. Oh, by the way, you have 72 hours.”

So, it’s a nasty trick because, in this case, an image is absolutely fine.

You’re not looking for the person to click on something like “I reject your copyright infringement notice”, like the copyright scammers do.

The verbiage is: “It’s your choice. But there may be something you wish to say in your defence. Here’s the email address.”

In truth, you should spot this scam… because to those of us who have had any number of these before, they all look the same.

They’ve all got the same dramatic story in them.

They’ve all got crazy mistakes, if you know what to look for.

Like this one: the investigating officer you have to email isn’t just Sergeant So-and-so or Inspector So-and-so.

It is the Director General of the French Police!

And the email, amazingly, comes from a person called Jean Luc Godard [LAUGHS].

He is in his 90s now, I believe, and he is a very famous neo-marxist French cinematographer…


DOUG. [LAUGHS]


DUCK. He’s very well known, and made some amazing films, so I was surprised to find that, in his dotage, he had gone from being a dramatic filmmaker to a serving police officer.

But there you go – I think the thing the crooks are looking out for here is this: they don’t want a million people to respond, do they, to scams like this?

They just want to send out a million copies of the message.

The pretext here in the message is that, “Obviously, we didn’t want to put the gory details of the evidence in this email message, but you may want to contact us to try and clear this up.”

And you imagine that the crooks… their goal is that they want to draw you in.

They want those people who are scared enough, or vulnerable enough, or uncertain enough that they will actually type in the lengthy email address and that they will reply.

Then, they’re looking for what you might call a “long game” scam, where they’ll be in contact with this person over and over and over.

So, the lack of a call-to-action link, the lack of a “click here; put in your password right now so we can drain your credit card”…

…that doesn’t matter.

The crooks are looking for people whom they can scam for a long period of time, in a human-led attack.

They don’t want a million people to respond!

They just want people who have self-selected as those who were terribly scared.

And, as you can imagine, given the subject matter, those vulnerable, easily scared people aren’t likely to turn to friends and family for help, are they?

“Hey, we’ve got you for cyberporn offenses. This is serious.”

You might think, “Well, I’ve been to some websites that I don’t think were dodgy, but who knows what they’re connected with? Who knows what they’ve got? I’d better find out.”

You’re probably going to think, “Maybe I should reply and just see what’s going on,” rather than, “I should ask my granddaughter, or my uncle, or my parents, or my best friend.”

And that’s really what these crooks are banking on.

And the other thing with the image, of course: it lets them make it look like it’s a scanned-in, official, printed document.

Because there’s an “APPROVED” stamp on it; there’s a stamp with someone’s signature.


DOUG. So, we have some advice for the good people here.

We talked a little bit about it…

* How likely does the message really seem?

We talked about that.

* If in doubt, don’t give it out!

We say that a lot – that’s probably a good place to start.

* Don’t be afraid to check with a trusted source.

That’s good, because if someone were to come to me as a trusted source, I would do the next thing in your advice, which would be:

* Check online for similar messages reported by other people.

So, every time someone comes to me and says, “I was on Facebook and I saw that this is happening,” I go and Google it and I say, “No, that is a scam.”


DUCK. Yes, you’re quite right, Doug!

Because the reason why we write these stories up is precisely so that there is the kind of evidence that you mentioned there.

So if you think, “Well, I wonder if anyone else is getting these?”, and then you search…

…that won’t necessarily catch the crooks out, but sometimes it *will* catch them out.

Because usually it *does* show up that this was obviously a campaign where somebody is accusing [AMAZED TONE] 100 million people, at the same time, of exactly the same crime.

What’s the chance of that?

And so that’s a way that you can set your own mind at rest instead of allowing fear, uncertainty and doubt to eat away at you.


DOUG. All right, that is: French speakers blasted by sextortion scams with no text or links on nakedsecurity.sophos.com.

And, as the sun begins to set on our show for this week, we have our Oh! No! segment.

We have a reader question for you, Paul, in regards to our article Google announces zero-day in Chrome browser: update now.

Reader Diane MP asks a fair question:

“What’s the casual, mildly proficient user supposed to do? Checking my Chrome version gives me a number that does not resemble the required one on Google Play. I just get ‘already installed’. If experts can’t figure out the complexities of this threat and how to protect against it, well, maybe it’s time for people like me to just move on from the computer era. I’ve been wanting to start painting again anyway, and my phone works just fine independent of the Internet.”

Diane… I would say, Diane, don’t go!

Don’t let this kill your joy for being connected to everyone else in the world.

But, Paul, what do you say to that?

The frustration of constantly updating and knowing which version you’re supposed to be running?


DUCK. Yes, I was very sympathetic to that comment.

I can’t remember exactly how I responded – I think I just said, “Look, here’s the official way that you find out, and it works on your mobile phone and on a regular browser on your laptop.”

So I’m very sympathetic to Diane… I figure, Diane, if you’re listening, maybe take more time to paint!


DOUG. I was going to say, Diane, “Do paint, but don’t move on from the computer.”


DUCK. I wouldn’t throw your phone away!

But sadly, it is true that sometimes even companies that pride themselves on being able to find needles in haystacks and present them to us, to our amazement, like the “next video” Google recommends to you and you think, “How did they know?”…

Yet, when it comes to giving you a version number, it’s like extra-super-complicated!

And unfortunately, for every app that you update, and for every app you use, there tends to be a different way to find the one true version number; a different way to look up online what the real, true, current, patched version number is.

Sometimes you just have to run around the houses a bit, trying to work out what the right version is, just to see whether you’ve got it.

Or come to a site like nakedsecurity.sophos.com… to be honest, that’s one of the things we aim to do: where there are simple answers, we’ll try and give them to you – and then, if there are anomalies, or exceptions, or weird things around the edge, you can ask for our comments in the Naked Security community.

And we will do our best to answer for you if we can.


BOTH. [LAUGHTER]


DUCK. But the fact that sometimes it’s a hassle for us to find out…


DOUG. …yes, it’s not just you, Diane.

No, it is absolutely frustrating and confusing for everyone!


DUCK. Fortunately, you will find on Naked Security, when you ask questions like that, people will chip in.


DOUG. Hang in there, Diane!

Well, if you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles like Diane did; or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH. Stay secure!

[MUSICAL MODEM]


Apple AirTag anti-stalking protection bypassed by researchers

When the Apple AirTag hit the market in 2021, it immediately attracted the attention of hackers and reverse engineers.

Could AirTags be jailbroken? Could AirTags be simulated? Could the AirTag ecosystem be used for purposes beyond Apple’s own imagination (or at least beyond its intentions)?

We soon found ourselves writing up the answer to the “jailbreak” question, given that a researcher with the intriguing handle of LimitedResults figured out a way to subvert the chip used in the AirTag (an nRF52832 microcontroller, if you want to look it up) into booting up with debugging enabled:

Using this trick, another researcher going by @ghidraninja was able not only to dump the AirTag firmware, but also to modify and reupload the firmware data, thus creating an unofficially modified but otherwise functional AirTag.

To prove the point, @ghidraninja altered just one text string inside an AirTag, modifying the URL found.apple.com so it pointed not at Apple’s lost device reporting portal, but at a YouTube video (you know what’s coming) of Rick Astley singing Never Gonna Give You Up.

Anyone finding @ghidraninja’s AirTag and trying to report it lost…

…would get rickrolled instead.

Covert message delivery

A few days after the rickroll business, we were writing up another AirTag hack that documented how to create Bluetooth messages that could hitch a ride on Apple’s AirTag network.

Researcher Fabian Bräunlein, from Berlin, Germany, figured out a way to use almost any low-power Bluetooth system-on-a-chip, such as the well-known and inexpensive ESP32, as a message generator to send free (but very low bandwidth) messages via iPhones that just happen to be nearby.

ESP32 development system showing diminutive size.
The code on this board simply blinks the blue light. (Red denotes power.)

Every two seconds, regular AirTags broadcast an identifier via low-energy Bluetooth; any passing iPhones in the vicinity that are AirTag-enabled and happen to pick up these broadcast messages co-operatively relay them back to Apple’s AirTag backend, where they’re saved for later lookup.

To protect your privacy, the pseudorandom sequence is keyed, or “seeded”, using a shared secret that is known only to the AirTag and the owner who originally paired their Apple device with it, and the identifier that’s broadcast isn’t the actual data generated in the sequence, but a hash of it.

This means that only the AirTag’s owner can check whether their AirTag called home to Apple, because only the owner knows what magic identification code would have been generated, and therefore only the owner can calculate the hash to look up in Apple’s database, which is essentially an anonymous, crowd-sourced record of AirTag broadcasts.

Additionally, the identifier used by any AirTag is updated every 15 minutes, following a pseudorandom sequence that only the AirTag and its owner can construct (or reconstruct later), so that AirTag sightings, even though they’re stored anonymously, can’t be matched up in Apple’s database simply by looking for repeated broadcast hashes.
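For readers who like to see the idea in code, here’s a minimal Python sketch of a keyed, rotating-identifier scheme of the kind described above. It’s purely illustrative: Apple’s actual key derivation and hashing are different, and every name and parameter here is our own invention.

```python
import hashlib
import hmac

ROTATION_SECONDS = 15 * 60  # identifiers change every 15 minutes

def identifier_for_interval(shared_secret: bytes, interval: int) -> bytes:
    """Derive the pseudorandom value for one 15-minute interval.

    Only someone holding shared_secret (the tag and its owner) can
    compute this, so sightings can't be linked across intervals.
    """
    return hmac.new(shared_secret, interval.to_bytes(8, "big"),
                    hashlib.sha256).digest()

def broadcast_hash(shared_secret: bytes, timestamp: int) -> str:
    """What the tag broadcasts: a hash of the interval value, not the
    value itself."""
    interval = timestamp // ROTATION_SECONDS
    value = identifier_for_interval(shared_secret, interval)
    return hashlib.sha256(value).hexdigest()

# The owner can recompute the same hash to look up sightings in the
# crowd-sourced database; an eavesdropper who sees broadcasts from two
# different intervals can't tell they came from the same tag.
secret = b"example-pairing-secret"
h1 = broadcast_hash(secret, 0)
h2 = broadcast_hash(secret, ROTATION_SECONDS)  # next interval
assert h1 != h2                                # identifiers rotate
assert broadcast_hash(secret, 10) == h1        # same interval, same hash
```

The point of the two-layer construction is that the database only ever holds hashes, while the shared secret needed to predict them never leaves the owner’s devices.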

Bräunlein figured out how to use an ESP32 device to create correctly-anonymised broadcast messages that Apple’s network would relay and store.

Each of his “not-actually-an-AirTag” messages was encoded so that it included:

  • A unique value (repeated in each transmission in a batch) to denote a related series of data packets, which we’ll collectively refer to as D;
  • A sequence number (incremented by one every time) to denote a specific bit position in the current hidden message, which we’ll call X; and
  • A single bit (either zero or one) added at the end to make each transmitted identifier even or odd, denoting the value of bit X in packet series D, which we’ll call bitval(D,X).

Because he could precompute the hashes of all possible messages for any sequence, he could see which identifiers actually turned up in each sequence (he sent each message several times to increase the chance of it getting picked up and anonymously rebroadcast to Apple).

If he only ever spotted identifiers where bitval(D,X) came out as zero, he’d know that his special-purpose ESP32 device was signalling that the Xth bit of D was zero; if he saw only identifiers with bitval(D,X) == 1, he’d know that the Xth bit of D was one.

If neither sort of message showed up, that would mean the Xth bit of the hidden message had been lost, but thanks to the sequence numbers, the rest of the message could nevertheless be recovered. If any evens turned up, then there could never be any corresponding odds; if any odds arrived, there could never be any corresponding evens, so the presence of one value and the absence of the other reliably signalled the intended setting for each bit in the hidden message.
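Here’s a short Python sketch of that even/odd bit-encoding, written from the description above rather than from Bräunlein’s actual Send My code; the function names and packet format are illustrative only.

```python
# Each "broadcast" carries a message ID D, a bit index X, and the bit
# value itself; the receiver recovers each bit from whichever parity
# was observed at that position.

def encode_message(msg_id: int, payload: bytes) -> list:
    """Turn a payload into one (D, X, bitval) broadcast per bit."""
    bits = "".join(f"{byte:08b}" for byte in payload)
    return [(msg_id, x, int(bit)) for x, bit in enumerate(bits)]

def decode_message(observed: list, msg_id: int, nbits: int) -> str:
    """Rebuild the bit string from whichever broadcasts made it through.

    A position with no surviving broadcast is marked '?'; because each
    position is either all-even or all-odd, a single sighting per
    position is enough to fix its value.
    """
    bits = ["?"] * nbits
    for d, x, bitval in observed:
        if d == msg_id and x < nbits:
            bits[x] = str(bitval)
    return "".join(bits)

sent = encode_message(42, b"\xA5")                 # bits 10100101
lossy = [b for i, b in enumerate(sent) if i != 3]  # lose one broadcast
print(decode_message(lossy, 42, 8))                # → 101?0101
```

In the real attack, of course, the bit value wasn’t sent in the clear: it was folded into the broadcast identifier’s parity, and the receiver matched precomputed hashes against Apple’s database, but the recovery logic is the same.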

As you can imagine, the bandwidth of this “network”, which he humorously dubbed Send My, was poor: about 20 bits per second in throughput, with a waiting time of up to an hour for collected messages to wend their way to Apple’s servers.

Nevertheless, it did represent an essentially undetectable covert channel for tiny devices with tiny batteries to piggy-back onto Apple’s Find My network in an entirely innocent-looking way – no “giveaway” connections to Wi-Fi or the mobile phone network were needed.

Once more unto the breach

Well, Bräunlein is back in the AirTag news with a similar sort of “bogus but apparently innocent AirTag message” trick, this time designed not to sneak arbitrary data back via Apple’s network, but instead to deliver covert location information while preventing Apple’s network from generating its expected privacy warnings.

This one is cheekily dubbed Find You, and its primary purpose is to demonstrate the limits of Apple’s own “anti-find-you” programming, known as Tracker Detect, that’s now built into the AirTag network.

Apple’s system aims to provide basic protection against other people’s AirTags being hidden on your person, in your luggage, or on your car, and then used to keep tabs on you.

That’s because the anonymous, privacy-preserving system that’s supposed to ensure that only you can track your own AirTags if they’re lost or stolen…

…can be turned against you when it’s someone else privately tracking their tags that were neither lost nor stolen, but instead deliberately placed where their location data would denote your whereabouts.

Two main protections exist:

  • AirTags that haven’t been in direct connection with their owners’ devices recently start emitting an irritating noise through their small internal speaker. This not only helps genuinely lost AirTags get noticed, but also draws attention to AirTags in your vicinity (e.g. in your handbag/purse or rucksack/backpack) that shouldn’t be there.
  • AirTags that remain in your vicinity for some time but don’t belong to you pop up a warning on your device. While you’re walking around you’d expect to come across a random bunch of AirTags, but if a single tag that isn’t yours sticks with you when there aren’t lots of other tags coming and going, you’ll be warned.

Clearly, the first mitigation is far from perfect: you’re unlikely to hear an AirTag that’s attached to your car, for example; and, sadly, there’s a creepy online market for second-hand AirTags in which the speaker is broken. (By “broken”, we mean “deliberately and deviously disconnected or damaged to allow silent operation”.)

The second mitigation, of course, not only relies on you regularly checking for stalker alerts, but also relies on Apple’s software reliably deciding that a suspicious device is “standing out from the crowd” to a degree that’s worth alerting about.

Indeed, the entire crowd-sourced nature of the Find My network relies on participants listening out for, detecting, and anonymously reporting on AirTags that pass by, so that the genuine owner really can try to track them down (without knowing who submitted the report, of course) via Apple’s Find My portal.

In other words, turning every nearby appearance of every unknown AirTag into a “suspicious event” instead of simply quietly and anonymously calling home with it would not only drive you nuts with false alarms, but also stop the Find My system from working as intended.

Instead, as Apple puts it:

If any AirTag, AirPods, or other Find My network accessory separated from its owner is seen moving with you over time, you’ll be notified.

Weakness in numbers

You can probably guess where this is going.

Bräunlein already knows how to create non-Apple-generated Find My messages that Apple’s network nevertheless accepts, in order to relay data of his choice.

This time, he simply created a plentiful and varied supply of non-Apple-generated Find My messages, and broadcast them to trick Apple’s “moving with you over time” detector into ignoring devices that were, indeed, right there with you.

Simply put, Bräunlein shrouded his location reports in what resembled “crowd noise”, thus disguising the usually-repetitive nature of AirTags being abused for stalking purposes.

Over a five-day period, Bräunlein had a volunteer carry one of his ESP32 “bogus message” generators, seeding his device with identification sequences for 2000 different simulated AirTags that broadcast every 30 seconds at random.

Because Bräunlein could identify any and all of these pseudo-AirTags in the system, given that he knew all their identification seeds, he could reliably track his volunteer…

…but he didn’t need to buy 2000 different AirTags and try to hide them all where they wouldn’t get spotted (or heard).

At no time during those five days of testing did Apple’s Tracker Detect system warn the volunteer of suspicious, repeated appearances of any of the pseudo-tags.

With each pseudo-tag broadcasting only every 30 seconds (not every 2 seconds as a regular one would); with 2000 pseudo-tags to choose from; and with tag identifiers changing by design every 15 minutes, we’re guessing that there wasn’t enough repetition or any obvious pattern for Apple’s stalker detection software to latch onto.
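To get a feel for why the detector had so little to go on, here’s a rough Python simulation using the numbers from the article (2000 pseudo-tags, 30-second broadcasts, 15-minute identifier rotation); the identifier derivation below is a stand-in, not Apple’s real scheme.

```python
import hashlib
import random

N_TAGS = 2000
ROTATION = 15 * 60  # seconds before an identifier rotates

def identifier(tag_seed: int, t: int) -> str:
    """Illustrative rotating identifier: changes every 15 minutes."""
    return hashlib.sha256(f"{tag_seed}:{t // ROTATION}".encode()).hexdigest()

def sightings(step: int, pick_tag, hours: int = 1) -> dict:
    """Count how often each distinct identifier is observed."""
    counts = {}
    for t in range(0, hours * 3600, step):
        ident = identifier(pick_tag(), t)
        counts[ident] = counts.get(ident, 0) + 1
    return counts

random.seed(1)

# A genuine AirTag: one device, broadcasting every 2 seconds.
real = sightings(step=2, pick_tag=lambda: 0)

# The attack device: every 30 seconds, one of 2000 pseudo-tags at random.
fake = sightings(step=30, pick_tag=lambda: random.randrange(N_TAGS))

print(max(real.values()))  # each real identifier repeats 450 times an hour
print(max(fake.values()))  # pseudo-tag identifiers barely repeat at all
```

A real tag hands a detector 450 consecutive sightings of each identifier; the attack device spreads its 120 hourly broadcasts across thousands of possible identifiers, so almost nothing repeats.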

What to do?

Unfortunately, there’s not much you can do to detect this sort of trickery at the moment, though we don’t doubt that Apple will revise its threat-detection modelling and its stalker detection code in the light of Bräunlein’s report.

Bräunlein does mention a free app from the Technical University of Darmstadt, called AirGuard, that can give you a bit more insight into fake trackers of this sort.

AirGuard can reveal a full list of AirTags or pseudo-AirTags seen near you; even though they all look different, the broadcasts generated by Bräunlein’s multi-tag reporting device nevertheless show up somewhat suspiciously – enough, perhaps, to encourage you to search for unexpected electronic gizmos (and their batteries) hidden in your vicinity.

However, the AirGuard app is only available for Android, so if you’re using Apple AirTags with Apple phones and laptops, this won’t work for you.


WordPress backup plugin maker Updraft says “You should update”…

WordPress plugins need to be kept up-to-date just as keenly as WordPress itself…

…especially if those plugins are designed to help you look after the entirety of your WordPress site data.

That’s why we thought we’d write about a recent warning from the creators of Updraft and Updraft Plus, free and premium plugins respectively, dedicated to backing up, restoring and cloning WordPress sites.

As you can imagine, a security bug in a backup plugin that could allow an attacker to download a site backup without authorisation means, in theory, that your entire site, and all its accompanying data, could end up getting stolen in one go.

That, apparently, is the nature of CVE-2022-23303, a bug found and reported in the Updraft plugin by a security researcher at Automattic, the company behind the WordPress brand.

You can verify the connection between WordPress and Automattic from this site: we’re hosted by WordPress VIP [2022-02-22], as you can see by looking at the headers of our web replies (X-Powered-By: WordPress VIP <https://wpvip.com>); and then by looking up the administrative and technical contacts for the wpvip.com domain in the Whois database (Admin and Tech Org: Automattic, Inc.).

High-quality response

Actually, as well as acting as a gentle reminder to Updraft users to make sure they’re up-to-date (at the time of writing: 1.22.4 for the free version; 2.22.4 for Premium users), we thought we’d cover this patch as a positive example of how to deal with a cybersecurity flaw.

In our opinion, Updraft got several important things right in the update bulletin that it published on its blog:

  • The report was timely. The patch was available and written up within two days of the bug being responsibly disclosed by Automattic.
  • The report didn’t mince its words. The opening paragraph states, “The short version is: you should update. To get the details, read on.”
  • The report described the bug in plain English, and was clear about the risk posed. Simply put, any authorised user of your site, even one who usually only uploads articles for editing and approval by others, might be able to clone your whole site, including making off with all your non-public data.
  • The company offered a credible apology. Rather than leading with weasel words about how the bug wasn’t in the wild, or talking it down by emphasising that it didn’t allow totally unauthenticated access, the report explained the situation first, reiterated the importance of updating anyway, and presented its apparently genuine regret at the end.
  • The report was written by someone in the know. Rather than leaving the published verbiage to PR or marketing, the report was written by one of Updraft’s lead developers.

Try reading our satirical take on data breach notifications, written as a humorous article a few years ago, and then comparing it with Updraft’s security report.

We think you’ll agree that following up a cybersecurity blunder by telling the simple truth in plain English is not only genuinely helpful, but also more likely to persuade your customers to trust you in the future.

If nothing else, an open and explanatory security report shows that you’ve actually learned something positive from the incident, and thus reinforces any claims you may make about doing better next time.

What to do?

  • If you’re an Updraft or Updraft Premium user, make sure you have at least version 1.22.4 or 2.22.4 respectively. Even if you consider yourself low-risk through having no or few unprivileged users to worry about, update anyway. As Updraft correctly points out, although an active attack would depend on “a hacker reverse-engineering the changes in the latest [..] release to work it out, […] you should certainly not rely upon this taking long, but should update immediately.”
  • If you run a website of your own, whether it’s based on WordPress or not, practise how you would respond if you came across a data-threatening bug like this one. Preparing how you would respond if you were to fail is not the same as simply preparing to fail. In fact, being aware of the work you’d need to do in the event of a critical bug or a data breach is a good incentive for learning how to defend against such problems in the first place.
