“Dirty Pipe” Linux kernel bug lets anyone write to any file

Max Kellermann, a coder and security researcher for German content management software creators CM4all, has just published a fascinating report about a Linux kernel bug that was patched recently.

He called the vulnerability Dirty Pipe, because it involves insecure interaction between a true Linux file (one that’s saved permanently on disk) and a Linux pipe, which is a memory-only data buffer that can be used like a file.

Very greatly simplified, if you have a pipe that you are allowed to write to and a file that you aren’t…

…then, sometimes, writing into the pipe’s memory buffer may inadvertently also modify the kernel’s temporary in-memory copies – the so-called cache pages – of various parts of the disk file.

Annoyingly, even if the file is flagged as “read only” by the operating system itself, modifying its underlying kernel cache is treated as a “write”.

As a result, the modified cache buffer is flushed back to disk by the kernel, permanently updating the contents of the stored file, despite any operating system permissions applied to it.

Even a physically unwritable file, such as one on a CD-ROM or an SD card with the write-enable switch turned off, will appear to have been modified for as long as the corrupted cache buffers are kept in memory by the kernel.

Which versions are affected?

For those running Linux who want to cut to the chase and check if they’re patched, Kellermann reports that this bug was introduced (at least in its current, easily exploitable form) in kernel 5.8.

That means three officially supported kernel flavours are definitely at risk: 5.10, 5.15 and 5.16.

The bug was patched in 5.10.102, 5.15.25 and 5.16.11, so if you have a version that is at or above one of those, you’re OK.

Apparently, Android is affected too, and although a fix for the vulnerability was incorporated into the kernel source code on 2022-02-24, neither Google’s Android Security Bulletin for March 2022, nor the company’s Pixel-specific notes, mention this bug, now dubbed CVE-2022-0847.

Of all the many officially supported Android handsets we’ve surveyed so far, however, only the Pixel 6 series seems to use kernel 5.10, with most devices back on one of the older-but-apparently-not-vulnerable Linux 5.4 or 4.x versions.

User-friendly log files

Intriguingly, Kellermann discovered the vulnerability due to intermittent corruption of HTTP log files on his company’s network.

He had a server process that would regularly take daily logfiles, compressed using the Unix-friendly gzip utility, and convert them into monthly logfiles in the Windows-friendly ZIP format for customers to download.

ZIP files support, and typically use, gzip compression internally, so that raw gzip files can actually be used as the individual components inside a ZIP archive, as long as ZIP-style control data is added at the start and end of the file, and in between each internal gzipped chunk.

So, to save both time and CPU power, Kellermann was able to avoid temporarily decompressing each day’s logfile for each customer, only to recompress it immediately into the all-inclusive ZIP file.

He ended up creating a writable Linux pipe to which he could export the all-in-one ZIP archive, and then he’d read from each gzip file in turn, sending them one-by-one into the output pipe, with the needed headers and trailers inserted at the right points.

For extra efficiency, he used the special Linux function splice(), which tells the kernel to read data from a file and write it into a pipe directly from kernel memory, which avoids the overhead of a traditional read()-and-then-write() loop.

Reading-and-writing using the traditional read() and write() functions means copying data from the kernel into a memory buffer assigned by the user, and then copying that buffer straight back into the kernel, so the data is copied around in memory at least twice, even though it’s not modified in the process. To avoid this overhead, splice() and its companion function sendfile() were introduced to Linux for use when programmers wanted to move data unaltered between two file system objects. For large files on a busy server, this is faster and reduces load.
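
To make the difference concrete, here’s a minimal C sketch of both approaches; the function names and buffer size are our own, purely for illustration, and error handling is pared to the bone:

    /* A minimal sketch, for illustration only: the same copy done two
       ways. Assumes fd_in is a regular file and fd_out/pipe_out are
       writable descriptors. */
    #define _GNU_SOURCE             /* for splice() on glibc */
    #include <fcntl.h>
    #include <unistd.h>

    ssize_t copy_with_read_write(int fd_in, int fd_out)
    {
        char buf[65536];            /* user-space staging buffer */
        ssize_t n, total = 0;
        while ((n = read(fd_in, buf, sizeof buf)) > 0) {
            if (write(fd_out, buf, (size_t)n) != n)
                return -1;          /* treat short writes as errors */
            total += n;
        }
        return n < 0 ? -1 : total;
    }

    ssize_t copy_with_splice(int fd_in, int pipe_out, size_t len)
    {
        /* No user-space buffer: the kernel wires the file's cached
           pages straight into the pipe. One end must be a pipe. */
        return splice(fd_in, NULL, pipe_out, NULL, len, 0);
    }

The splice() version never touches the data in user space at all – which is exactly why the kernel’s cache pages get involved, and where this bug crept in.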

Corrupted bytes, 8-at-a-time

Occasionally, however, Kellermann would find that the last 8 bytes of one of the original gzip files would get corrupted, even though he was only ever reading from these files.

All his output was going into the writable “output pipe” used to create the combined ZIP file.

There was nothing in his code that even tried to write to any of the input files, which were opened “read only” and should therefore have been protected by the operating system anyway.

One telltale he spotted was that the corrupted 8 bytes almost always showed up in the last gzip file of any month, and were always 50 4B 01 02 1E 03 14 00 in hexadecimal.

Threat researchers will recognise 50 4B 01 02 right away, because 50 4B comes out as PK in ASCII characters, short for Phil Katz, the creator of the ZIP file format.

Also commonly seen in malware analysis involving ZIP files are those bytes 01 02 immediately after the PK – that’s a special marker that denotes “what follows is a block of data in the end-of-archive ZIP trailer”.

The bytes 1E 03, in case you’re wondering, denote that the file was created on a Unix-like system (0x03), following zipfile specification 3.0 (0x1E is 30 in decimal, interpreted as version 3.0). The 14 00 after that denotes that PKZIP 2.0 or later is needed to uncompress it (0x14 is 20 in decimal, interpreted as 2.0).
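
If you’d like to check that arithmetic for yourself, here’s a small C sketch (ours, for illustration) that decodes those 8 bytes exactly as interpreted above:

    /* Decode the 8 corrupted bytes as the start of a ZIP central
       directory entry: a 4-byte signature followed by two 16-bit
       little-endian version fields. For illustration only. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char b[8] = { 0x50, 0x4B, 0x01, 0x02,    /* "PK\1\2" */
                               0x1E, 0x03, 0x14, 0x00 };

        unsigned madeby = b[4] | (b[5] << 8);   /* 0x031E */
        unsigned needed = b[6] | (b[7] << 8);   /* 0x0014 */

        printf("made by: host %u, spec version %u.%u\n",
               madeby >> 8, (madeby & 0xFF) / 10, (madeby & 0xFF) % 10);
        printf("needs PKZIP version %u.%u or later\n",
               needed / 10, needed % 10);
        return 0;
    }

Compile and run it, and out come host 3 (Unix), spec version 3.0 and PKZIP 2.0, matching the decoding above.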

In other words, Kellermann was able to infer that the data bleeding into the very end of occasional “read only” gzip files was always the start of the additional data that he was adding at the end of his writable, all-in-one ZIP file.

No matter how carefully he perused his own code, however, he couldn’t see how he could perpetrate this corruption with a bug of his own, even if he wanted to.

After all, the side-effect of the bug was that his software ended up corrupting 8 bytes at the end of a file that the kernel itself was supposed to stop him writing to anyway.

As he writes of his feelings at the time:

Blaming the Linux kernel (i.e. somebody else’s code) for data corruption must be the last resort. That is unlikely. The kernel is an extremely complex project developed by thousands of individuals with methods that may seem chaotic; despite of this, it is extremely stable and reliable. But this time, I was convinced that it must be a kernel bug.

But with perseverance, he was able to create two minimalist programs, with just three and five lines of code respectively, that reproduced the misbehaviour in a way that could only be blamed on the kernel.
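
We haven’t reproduced Kellermann’s actual code here, but the general shape of the trigger is simple enough to sketch in C. On a patched kernel the sequence below is harmless; on a vulnerable one, the final write() could bleed into the kernel’s cached copy of the read-only file (the filename is made up for illustration):

    /* The general shape of the trigger, paraphrased for illustration
       (this is NOT Kellermann's reproducer). The real trigger also
       involves priming the pipe's internal buffer flags by filling
       and draining it first, which we've omitted here. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int p[2];
        int fd = open("/tmp/target.txt", O_RDONLY);   /* read-only! */

        if (fd < 0 || pipe(p) < 0) {
            perror("setup");
            return 1;
        }

        /* Connect one byte of the file's cached data to the pipe... */
        if (splice(fd, NULL, p[1], NULL, 1, 0) < 0) {
            perror("splice");
            return 1;
        }

        /* ...then write into the pipe. The bug meant this data could
           end up in the file's page cache, and thus back on disk. */
        write(p[1], "AAAAA", 5);
        return 0;
    }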

Following that, he was able to construct a proof-of-concept attack that allows an unprivileged user to modify even a well-locked-down file such as your list of trusted SSH keys, or the list of “known good” digital signatures you’re willing to connect to for updates.

Even worse, it seems that this bug, given its low-level nature, can be used inside a virtualised container (where any running program is not supposed to have write access to any objects outside its “sandbox” or “jail”) to modify files that would usually be off limits.

The good news, of course, is that Kellermann’s careful digging led not only to uncovering the bug and understanding its cause, but also to helping the community devise a patch to close the hole.

What to do?

  • If you’re a Linux 5.x user. Check your kernel version: simply run the command uname -r to print your kernel release. You want 5.10.102, 5.15.25 or 5.16.11 (or above). If your distro uses older kernel versions with its own security patches “backported”, check with your distro maker for details. (For a programmatic version of the same check, see the sketch just after this list.)
  • If you’re an Android user. We don’t quite know what to tell you at the time of writing [2022-03-08T17:00Z], given the variety of kernel versions in use by various products and vendors. So far, the only mainstream device we’re aware of that has kernel 5.10 is the Google Pixel 6 series, apparently at 5.10.43, and with no mention of a fix for this bug in the latest Pixel Update Bulletin. A few other devices (e.g. Google Pixel 5) are on kernel 5.4, with the majority of the rest from Google and other vendors alike still on a 4.x version. You can view your kernel version at Settings > About phone > Android version > Kernel version – if you are on 5.10, we suggest you keep your eye on the Android Security Bulletins overview page.
  • If you’re a programmer hunting bugs. When tracking down otherwise inexplicable behaviour, develop the art of creating the smallest “repro” you can – that’s the jargon term for code that reliably reproduces the incorrect behaviour so that others can investigate more easily. Eliminate as many possible variables and uncertainties from the bug-hunting equation as you can before handing it over to someone else. They will thank you for it.
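
For the curious, here’s a hedged C sketch of that version check (uname -r gives you the same answer from the shell); the comparison logic simply encodes the version numbers listed above, and the backporting caveat still applies:

    /* Report the running kernel release and compare it against the
       fixed versions listed above. Remember: distros that backport
       fixes may report an "older" release string that is
       nonetheless patched. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        int maj = 0, min = 0, rev = 0;

        if (uname(&u) != 0) { perror("uname"); return 1; }
        sscanf(u.release, "%d.%d.%d", &maj, &min, &rev);
        printf("kernel release: %s\n", u.release);

        if (maj < 5 || (maj == 5 && min < 8))
            printf("predates the bug in its easily exploitable form\n");
        else if ((maj == 5 && min == 10 && rev >= 102) ||
                 (maj == 5 && min == 15 && rev >= 25)  ||
                 (maj == 5 && min == 16 && rev >= 11)  ||
                 (maj == 5 && min > 16) || maj > 5)
            printf("at or above the official fixes\n");
        else
            printf("possibly vulnerable - check with your distro\n");
        return 0;
    }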

Adafruit suffers GitHub data breach – don’t let this happen to you

Popular open-source computer hardware company Adafruit Industries accidentally exposed customer data…

…via the GitHub account of a former employee.

As you’ve probably figured out already, Adafruit is named after Ada Lovelace, a nineteenth-century British intellectual who was a computer programmer long before any programmable computers existed.

As mysterious as that might sound, the story is both uplifting and disappointing in equal measure. In the 1830s, British inventor Charles Babbage designed a general-purpose computer that he dubbed the Analytical Engine. While he was busy trying to construct the device, Ada started wrestling with how it might be used. She outlined numerous programming principles that today we take for granted, such as loops and subroutines for commonly repeated computations, essentially coding various algorithms that would run on it. She even began pondering the issues of artificial intelligence and whether computing machines might ever truly be considered capable of independent thought and creativity. (Her considered conclusion, dubbed Lady Lovelace’s Objection by twentieth-century computer scientist Alan Turing, was: “No.”)
Unfortunately, Babbage’s computer – which was, of nineteenth-century necessity, entirely mechanical – turned out to be unbuildable: the lathes and milling machines of the day just weren’t up to the precision required to allow its many cogs and levers to operate reliably in unison. The cumulative effects of backlash in the mechanism meant that it never worked, so the Victorian age never acquired giant steampunk computers, and Ada’s code was never executed on an actual device.

The company sells a wide range of miniature open-source hardware boards and kits for hobbyists and professionals alike. (Think Raspberry Pi and Arduino, along with loads more custom hardware that’s even smaller and even funkier.)

What happened?

According to Adafruit’s public report:

The inadvertent disclosure involved an auditing data set used for employee training becoming public, on a GitHub repository associated with an inactive former employee’s account who was learning data analysis. The repository contained some names, email addresses, shipping/billing addresses and/or whether orders were placed successfully via credit card processor and/or PayPal, as well as details for some orders. There were no user passwords or financial information such as credit cards in the data analysis set.

Reading between the lines of the company’s notification, it sounds as though the leaked data had been sitting around in public for at least two years, given that the database entries exposed don’t go past 2019.

(Unfortunately, the report doesn’t say who reported the leaked data, when it was discovered, how obviously exposed it was, when the ex-employee concerned left the company, when the data was extracted from the company’s live data, or how many customers or records were involved.)

Adafruit claims that it got onto the job of removing the offending information within 15 minutes of hearing about the problem, contacting the ex-employee to get the data deleted, and kicking off an analysis to try to figure out who else might have seen it, and what they might have done with it.

From the report, it sounds as though the results of the forensic analysis were inconclusive – the company wasn’t able to specify with certainty whether the data was accessed or not, but it did comment: “[W]e are unaware of any actual misuse of the information”.

Nevertheless, Adafruit published a reminder that breaches of this sort, once reported, do provide a powerful pretext for cybercriminals.

The company is warning customers to watch out for apparently believable phishing campaigns that “warn” potential “victims” to take corrective action such as resetting their passwords via a handily-supplied but fake website, and for bogus callers claiming to be offering “official support” and requesting personal information “for confirmation”:

As a reminder, for your security, we will never send you a link to reset your password as part of a security alert, our customer support team will never contact you asking for your password. If you receive an email of this nature, or otherwise suspect that someone is attempting to gain access to your account or solicit your personal information, or have any other questions about this process, please contact us at security@adafruit.com.

If phishing criminals do have access to actual names, addresses and order details from a company database breach, then their fraudulent emails can be made even more believable by including genuine historical data as believable but bogus “proof” that their scam warnings are real.

What to do?

  • If you’re a customer and you bought any Adafruit products before 2020. Take Adafruit’s own advice and be aware of possible phishing attempts that try to scare you with “urgent actions” allegedly necessitated by this breach.
  • If you’ve had a breach at your own company. By all means, use Adafruit’s official report as a partial example of how to respond, but try to include in your notification some of the content that Adafruit omitted. Firstly, offer a genuine apology – after all, if you aren’t sorry, why should customers think you care more about their security now than you did before? Secondly, if you have conducted or are still in the process of conducting a forensic analysis, as Adafruit claims, be clear what you have already found (including admitting what you don’t know and are unlikely ever to discover), or provide a date by which you expect to follow up with further details.
  • If you need test or training data to use outside your live systems. Don’t simply skim off real data and distribute it outside your secure servers for those purposes. Numerous tools exist both for redacting genuine data so that it reflects reality without revealing personal details, and for generating realistic but artificial data that is suitable for training.
  • If you’re entrusted with access to genuine company data. Don’t copy it or upload it anywhere other than official company locations. Especially don’t upload it to personal cloud accounts, such as GitHub storage – even if your motivations are honest and your intentions impeccable – where the company can’t fulfil its own data protection obligations, and can’t reliably revoke your access to it if you leave.

Firefox patches two in-the-wild exploits – update now!

Mozilla has published Firefox 97.0.2, an “out-of-band” update that closes two bugs that are officially listed as critical.

Mozilla reports that both of these holes are already actively being exploited, making them so-called zero-day bugs, which means, in simple terms, that the crooks got there first:

We have had reports of attacks in the wild abusing [these] flaw[s].

Access to information about the bugs is still restricted to Mozilla insiders, presumably to make it harder for attackers to get at the technical details of how to exploit these security holes.

Assuming that the existing zero-day exploits are not widely known (these days, true zero-days are often jealously guarded by their discoverers because they’re considered both scarce and valuable), temporarily limiting access to the source code changes does provide some protection against copycat attacks.

As we’ve mentioned many times before on Naked Security, finding and exploiting a zero-day hole when you know where to start looking, and what to start looking for, is very much easier than discovering such a bug from scratch.

The bugs are listed as:

  • CVE-2022-26485. Use-after-free in XSLT parameter processing. This bug has apparently already been exploited for remote code execution (RCE), implying that attackers with no existing privileges or accounts on your computer could trick you into running malware code of their choice simply by luring you to an innocent-looking but booby-trapped website.
  • CVE-2022-26486. Use-after-free in WebGPU IPC Framework. This bug has apparently already been exploited for what’s known as a sandbox escape. This sort of security hole can typically be abused on its own (for example, to give an attacker access to files that are supposed to be off limits), or in combination with an RCE bug to allow implanted malware to escape from the security confines imposed by your browser, thus making an already bad situation even worse.

Use-after-free bugs occur when one part of a program signals its intention to stop using a chunk of memory that was allocated to it…

…but carries on using it anyway, thus potentially trampling on data that other parts of the program are now relying on.

In the best case, a use-after-free bug typically leads to corrupted data or to a program crash, either of which can be considered a security problem in its own right.

In the worst case, a use-after-free leads to remote code execution, where the data that’s trampled on is wilfully modified by the attackers to trick the program into running untrusted code from outside.
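
Here’s a deliberately contrived C illustration of the pattern – nothing to do with Firefox’s actual code, just the shape of the flaw:

    /* A contrived use-after-free, for illustration only. The second
       strcpy() writes through a pointer whose memory has already been
       handed back to the allocator, so it may trample data that some
       other part of the program is now relying on. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(32);
        if (p == NULL) return 1;

        strcpy(p, "still mine");
        free(p);                /* we told the allocator we're done... */

        strcpy(p, "oops");      /* ...but used the memory anyway:
                                   undefined behaviour, and a classic
                                   lever for attacker-controlled data */
        return 0;
    }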

What to do?

Go to the About Firefox dialog to check your current version.

If you are out of date then Firefox will offer to fetch the update and then present a [Restart Firefox] button; click the button, or exit and restart the browser, to deploy the update.

The version numbers you want are: Firefox 97.0.2 (if you are using the regular release), or Firefox 91.6.1 ESR (if you are using the extended support release), or Firefox 97.3.0 for Android.

If you’re on Android, check for updates via the Play Store.

If you’re a Linux user where Firefox is managed by your distro, check with your distro creator.

Note that if you are not yet on the latest major version (97.0 for regular Firefox, or 91.6 for the Extended Support Release), you may need to complete the update in multiple stages, so be sure to re-visit the About Firefox dialog after each update has been installed, to make sure you have finished all needed update-and-restart cycles.


S3 Ep72: AirTag stalking, web server coding woes and Instascams [Podcast + Transcript]


With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. AirTag hacking, Y2K… [AMAZED] wait, Y2K?!?!!

And Instagram scams.

All that and more on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug; he is Paul.

And Paul, we’ve got a great line up today, and I love starting the show with a Fun Fact.

And I don’t know if you’re a fan of the Bard, Bill Shakespeare, but I spotted a quote on the Shakespeare Quote of the Day website…

…as you know, the Bard has a way with words, and although I’m not entirely sure which play this line comes from, I thought it was interesting and informative in these trying times.

The quote is as follows: “An SSL error has occurred and a secure connection to the server cannot be made.”

Beautiful.


DUCK. Wow!

When that one’s on at the Globe [Theatre] in London, I think I might go!

Quite a lot of history in that, isn’t there?

Because, of course, if you were to modernise it, you’d say: “A TLS error has occurred.”


DOUG. Yes.


DUCK. Obviously, back in the 16th and 17th centuries… it was still SSL back then.


DOUG. Let us talk about something new, then something old, then something kind-of in the middle.

So, we start with this AirTag story… Apple AirTags.

Now, my impression of how these work is: You buy this $29 device, which has got a Bluetooth Low Energy signal inside it, and then wherever it is, it leverages iPhones around it to relay the signal of this AirTag back to a central server somewhere, where only the location of AirTags that you own will be shown to you.

Yet it’ll use anyone else’s iPhone that’s nearby.


DUCK. Apple calls it Find My.

So, you put the AirTag in your rucksack… “Find My rucksack.”

And it sounds like a surveillance nightmare!

You’ve got all these devices (A) identifying themselves, (B) relying on other people knowing where they are so they can call home and dob them into Apple, and (C) Apple knowing where every individual tag is at every moment.

But it is actually much more secure than that…

…because Apple knows where AirTags are, but not which ones they are, because they use a randomly generated code that changes every 15 minutes.

And since you, the owner of the AirTag, are the only person who knows the magic code that gives you the object to look up in Apple’s database, it means that *you* can check whether your AirTag turned up anywhere and was called in by anybody.

But neither Apple nor the person who called home with your AirTag’s identifier can put two and two together.

So, it’s actually quite a clever system.


DOUG. OK, then there’s the anti-stalking feature, which is…

….someone puts *their* AirTag into *my* backpack.


DUCK. Yes, that’s the naughty side of it, isn’t it?

They are the only person who can track that AirTag, for privacy and anonymity reasons, but if they deliberately put that AirTag into your bag, then actually they’re tracking *you*.


DOUG. And my iPhone will say, “Hey, your phone keeps relaying someone else’s AirTag location. You might want to check it out.”

Right? Is that how it works?


DUCK. Pretty much, Doug.

The easiest way to think of it is to use Apple’s own words.

This is called Tracker Detect, and the idea is:

If any AirTag, AirPod or other Find My network accessory separated from its owner is seen moving with you over time, you’ll be notified.

So, Apple can’t tell you who’s tracking you, because there could be an innocent explanation.

But it’s a good indication that you might want to go looking through your bag to try and find this electronic item that you did not put there!


DOUG. And there’s another built in protection, is there not?


DUCK. Yes.

The AirTag knows if it hasn’t called its own registered “phone mothership” lately, and if it hasn’t been near your phone for a while, it will start emitting a high-pitched, annoying beeping noise.

And the idea is that this lets you discover AirTags that you’re wondering, “Where on Earth has that jolly thing gone?”

Like those 1990s whistle-me key rings…


DOUG. [LAUGHS]


DUCK. …and this is quite a good idea.


DOUG. [LAUGHS] It is…


DUCK. If you’ve lost your AirTag where it actually can’t see your phone but it’s still in your house, it’ll make a noise, and you’ll go, “Oh, golly, it’s down the back of the stove”, and you’ll dig it out with a stick.

But it also means that if someone plants an AirTag on you, it’s supposed to basically give itself away.


DOUG. OK, and it’s a good thing that there are two of those features for a little redundancy.

Because, as you say in the article, people are selling black market AirTags with the speaker disconnected.


DUCK. Yes – it’s a regular AirTag, but when it decides that it needs to warn everybody that it’s not where it should be, you won’t be able to hear it.

So, we know that the noise doesn’t necessarily solve the problem, because noise can be silenced by snipping a little wire.

But the other question is, “What about this Tracker Detect feature that warns you when there are rogue or unexpected AirTags that keep popping up more frequently than you might reasonably expect?”


DOUG. And so we get to the meat of our story!


DUCK. Indeed, Doug!

This research is from Fabian Bräunlein.

He figured, “I wonder how sensitive Apple’s Tracker Detect is to what you might call ‘noise in the system’.”

And so he built a fake AirTag that pretended to be 2000 different AirTags at the same time.

He was doing his broadcasts only every 30 seconds, and he had 2000 different device code sequences to cycle through.

And he found, with a volunteer who agreed to do this, that over a five-day period, he was able to generate consistent location messages that, of course, he could receive because he knew how to look them up in Apple’s privacy-preserving network…

…but without triggering the Tracker Detect warning.

Because, obviously, none of his pseudo-AirTags were ever visible often enough to trip Apple’s warning that, “Hey, someone seems to be following you around.”

I don’t think he’s expecting Apple to come up with a magic solution… there might not be one.

But it is just an important reminder that, sometimes, when you build privacy-preserving cryptography and anonymity into a network, then it does also lend itself to types of abuse that are quite hard to track, in exactly the same way as we find with technologies like TOR [The Onion Router].

So, it’s an interesting observation on the tussle between privacy and law enforcement, if you like.


DOUG. All right, we will keep an eye on that!

That is: Apple AirTag anti-stalking protection bypassed by researchers, on nakedsecurity.sophos.com.

And, Paul, we are on episode 72 of the podcast since I joined you in this venture, and I never thought we would be talking about Y2K this much!

It seems like we were just talking about Y2K… why are we talking about it again?


DUCK. [IRONIC] Well, it’s only been 22 years, Doug, and lessons sometimes take a lot longer to learn.

The headline in the article on Naked Security is a little bit of a joke: it isn’t actually Y2K- or date-related, but it *is* “number precision” related.

It turns out that, pretty much by coincidence, both Firefox and the Chromium series of browsers will go from version 99 to version 100 in the next few weeks or months.

Well, that means that a version number, which gets sent out in User-Agent strings and which gets parsed, recognized and used for who knows what purposes by web servers all over the world…

…it means that a two-digit number is suddenly going to become a three-digit number.

And *surely*, Doug, *surely* no web servers are going to trip up over the fact that 99 is followed by 100?

I mean, how hard can that be?


DOUG. What could possibly go wrong?


DUCK. But it turns out that an admittedly small, but nevertheless worryingly non-zero, number of web servers *do* have a problem with this!

Like this one… I don’t mean to pick on them; I just did this because they’re already on the official list that Mozilla programmers are building into a list of known exceptions “just in case”.

This was daimler.com.

I went there with the developer version of Edge, which is already on version 100 because it’s two versions ahead of the regular one.

And, Doug, daimler.com told me, “Your browser is a classic”, with a cute picture of an old, classic 1980s Merc-Benz.

It didn’t have a little picture of a Lynx browser running, which would have impressed me….


DOUG. [LAUGHS]


DUCK. …and yet when I visited with the regular version of Edge, which is still at version 98, it went, “Hello, visitor”, like nothing was wrong.

And it did make me stop to think… [SQUEAKY VOICE] seriously!?!

Choking because a number is carried over from 99 to 100? In the year 2022? Given what we learned in the year 1999?

But surprises never cease, Doug.


DOUG. So, one theory is that it’s taking the version number and, since it can only handle two digits, it’s truncating either the first digit or the last digit.

So it’s either zero-zero or ten, and it thinks you’re running a browser from decades ago.
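
[Note. A two-digit parse really does behave like that. In C, for example, a hypothetical server-side check such as sscanf(version, "%2d", &v) stops after two digits, so the string “100” comes out as v == 10, as if the browser were a decade out of date.]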


DUCK. Is it about ten or twelve years since Firefox went to version 10? I forget… but quite a long time!

So, this is one of those mystifying bugs: it shouldn’t have happened.


DOUG. All right, we have some advice for both web users and web programmers.

And my favourite, of course, is the advice you give to web programmers, which is [LAUGHS]… we’ll get to that.

But if you’re a user?


DUCK. You don’t really have to do anything; that’s the good part.

And there isn’t much you can do.

But if, when your browser gets to version 100, there are some sites you absolutely need to visit and suddenly you can’t, and it’s telling you, “Your browser is too ancient”, this is something you might want to investigate.

And there are some workarounds that both Mozilla and the Chromium crews are looking at.

So just be aware of this… that is all I’m saying.


DOUG. OK.

And if you’re a web programmer, you say, “Why…” [MUTTERS, LAUGHS]; “why are you having…”; basically, “Find a new job.”


DUCK. [AGHAST] I didn’t say that, Doug!


DOUG. [CONCILIATORY] I know, I know…


DUCK. [PAUSE] I thought it… but I didn’t say it.


DOUG. [LAUGHS]


DUCK. What can you say?

I just wrote, “If you’re a web programmer, then this shouldn’t be a problem.”

If you sit down, and you look in the mirror, and you think, “You know what, some of my code… maybe I have made too many hard-coded assumptions in there”…

…then you need to rethink your programming practices.

Imagine if this does happen to your web server.

What kind of an impression does it give about your attention to detail?

I think the average user who’s thinking a little bit about cybersecurity is going to go, “You know what? If they can’t tell the difference between 99 and 100, how good are they going to be when they come to processing 16-digit credit card numbers?”


DOUG. Or my username?

Or my password?

Or my Social Security number?


DUCK. Exactly!

So, it’s not a very good look if you’ve got this problem.

I can think of better ways of advertising how strongly your company thinks of cybersecurity as a value!


DOUG. All right: Did we learn nothing from Y2K? Why are some coders still stuck on two-digit numbers?, on nakedsecurity.sophos.com.

It’s time for our This Week in Tech History segment, and this week, on 02 March 1969, the Concorde supersonic airliner made its first flight, before eventually spinning up commercial service in 1976.

The plane was able to cross the Atlantic in about half the time of a normal flight, all for the meagre sum of around $13,000 in today’s money for a round-trip ticket.

The Concorde operated until 2003, when it was eventually retired due to low demand and perceived danger after an unfortunate crash in July of 2000.

And Paul, you have some great Concorde stories, although you have not ridden on it….


DUCK. [WISTFUL] No, but I was tempted.

One of the Air France aircraft, unfortunately, as you say, crashed due to debris left on the runway, I think.

So, they were taken out of service and then eventually they were allowed to resume.

But I think the zest had gone out of it because [STAGE WHISPER] to be honest, they’re not very green (how can I put it?), for reasons we will discuss in a moment.

So, there was a chance, a very brief chance of a few months, when you could actually get a surprisingly inexpensive one-way ride.

Basically, they blast you to New York from London and you arrive before you take off!

You take off at 10:30, I think, and you arrive at 09:30 in the morning; then they just fly you back on a regular plane.

You’re doing it so that you can sit, Doug, in a commercial passenger jetliner that has jet engines with reheat… or as you Americans perhaps more poetically put it, afterburner!

Can you imagine: a commercial airliner…


DOUG. Amazing!


DUCK. …”Oh, we need 20% more power”, WOARRRGH!

And it could exceed Mach 2!

55,000 feet, and you’d be going faster than 2000 kilometres an hour!


DOUG. Amazing.


DUCK. As far as I know, Concorde had half the thrust of an A380, but its maximum landing weight – obviously, once it has burned off all that fuel – was somewhere around about one-quarter of an Airbus A380.

So, when it came to power to weight ratio… !!!?!?!

I did see it come in to land twice…

…and, Doug, it’s just so different to any other plane you’ve seen that isn’t a jet fighter or something.

Modern planes are normally really long and really wide; this is really long and super thin.

It looks like something you might take into the pub in small scale and throw at a dartboard… just incredible!

But I suppose we shan’t see that kind of thing again.

And given how much fuel it needed to transport 100 people across the Atlantic Ocean… maybe that is actually not such a bad thing.


DOUG. Yes.

Well, Concorde, we hardly knew ye…

…but something we know very well: Instagram scams.


DUCK. Oh, dear!


DOUG. And there are three new ones; not one; not two; but *three* that have been clogging our inboxes here, Paul!


DUCK. Yes.

I know we’ve talked about them before, and we write about them fairly regularly on Naked Security… but these were various messages; three different types of scam.

I don’t know whether it’s the same crooks, but the modus operandi is the same in terms of: there’s an email; you go to a dodgy page; and they’re looking for your details.

But the point is that crooks are trying lots of different *ways* of doing it.

One was a supposed “Community guidelines” violation.

And, of course, there’s a proposed solution, very convenient: “Just contact us. We’ll let you know the content that violates the guidelines. You can remove it and your account will be fine.”

The second one was the well known “Copyright infringement” scam.

And the proposed solution is: “If this is wrong, you can just click the button, fill in the form, show to us that it’s not copyright, and the strike against you will be removed.”

And the last one, which was quite a nasty one in my opinion, was “Suspicious login alert.”

You get those from lots of sites these days, don’t you?

Was this you logging in from X?

In this case, it claimed to be Vienna in Austria, although they made rather a mistake there!

They called the city “Vienna”, but they called the country “Osterreich”. [Note. Correct spelling is Österreich or Oesterreich].

So, the name of the city was in English while the name of the country was in German, but mis-spelled.

And the map they had behind it was, in fact, Riyadh.


DOUG. [LAUGHS] Riyadh!


DUCK. So, they didn’t quite get it right.

But, by choosing Vienna (slash Riyadh)…

…presumably they know they’re mailing it to people in the UK, so they know that you know that it’s not you.

So, they’re giving you a reason to click the button.

Of course, they’re all scams that want your username and your password.

And in one of the cases, they also said, “Now put in your two-factor authentication code as well.”

Instead of getting your username and password for later and then selling them on, or coming back tomorrow…

…basically, today’s generation of crooks, increasingly they’re going, “Give us your username; give us your password; and give us the 2FA code.”

And even though they’ve only got a minute, or a couple of minutes, to use it, they’ve got someone standing by to do just that, or they’ve got a computer standing by to do just that.

And they’re actually doing the intervention and the account takeover in near-real-time.


DOUG. Yes, that’s scary, because then they own the account!


DUCK. Yes.

Now, some of these, you should spot them… like “Vienna/Oesterreich”, the mix of languages.

And there are some grammatical mistakes.

One of them, interestingly, had a domain name that looked like Instagram, but the first “I” was actually lowercase “L”, which in most browsers comes out looking like an uppercase “I”, so it looked like the word “Instagram”.

There should be enough in each of these for you to spot that it doesn’t look right.


DOUG. Yes, I would give these a B for badness – these are not as good as I like to see out of a well-crafted scam.

But I can see… especially the “Copyright infringement” one.

I could see people just hammering that button and going, “I did *not* do this. I am outraged. I’m offended!”


DUCK. Yes, I agree.

And that’s the one where the URL starts with… it’s actually “Lnstagram”, but it looks like “Instagram”.

It just says, “Please enter your username”, and then the crooks actually go to your account and fetch your publicly-visible login icon, and they add that into the next page, just for a little bit of verisimilitude.

They’re making it look believable.

And then, of course, they ask you for your password twice.

I think that’s because, these days, at least some people have got in the habit of: “Put in the wrong password first time, and if they accept it, then you know it’s a scam.”

Then, the crooks give you a nice cheery message: “We will contact you back in 48 hours.”

And then there’s a help button that gives you… it’s not grammatically perfect, but they give you a perfectly reasonable help page, don’t they?

And there’s nothing outright and obviously bad about this.


DOUG. Yes, that one’s not bad.


DUCK. There’s no deep threat, just, “Look, you can help yourself if you want to”, and then at the end they go, “Fine, we’ll sort this out for you.”


DOUG. What can people do to avoid such scams in the future?

First, we have: Don’t click “helpful” links in emails or other messages.


DUCK. Indeed!

If you’ve practised beforehand, “Where do I go to check who’s logged into my account recently? Where do I go to counter a copyright notice or to look it up?”…

…if you know the link yourself, then you never need to click on links in emails, *even if they’re emails that Instagram sent you*.

And if you never click on the links in the emails, then you can never be caught out.


DOUG. And then we’ve used this one before, but it is pertinent as ever: Think before you click.


DUCK. Yes.

That’s easy to say, and it’s obvious to say…

…but the reason that this article is mostly pictures, and not many words, is that it’s a great way to practise looking for the “less likely” tall tales.


DOUG. And then my personal favorite… if you’re doing it right, you should have no idea what your password is for any site you have an account on: Use a password manager if you can.


DUCK. Yes!

Because in this case, if you set up your password manager carefully, where you know you have carefully typed in “i-n-s-t-a-g-r-a-m DOT com”…

…that is how your password manager will remember the workflow needed for Instagram logins.

It will invent the password.

And it means that if ever you go to a website that looks like Instagram – even if it is a pixel-perfect copy of the Instagram login page; even if it has a URL that is different in only one character – your password manager will go, “Nope, I don’t know that one.”


DOUG. And then finally, we have a great video that you can watch… starring our friend Paul.


DUCK. Admittedly, this video is from about a year ago, but we talk about the things you can watch out for, and actually show you, “This is how it unfolds.”

Which was the same idea as this article: we took a series of screenshots of what would happen if you went right through, from go to woe, in three different scams.

If not for you, at least so you can show your friends and family.


DOUG. All right, that is: Instagram scammers as busy as ever: passwords and 2FA codes at risk, on nakedsecurity.sophos.com.

And, as the sun slowly begins to set on our show for this week, we shall turn to one of our readers in our Oh! No! segment.

On the Y2K story we discussed earlier, Naked Security reader 4caster comments:

Until retirement in 2001, I worked for the Meteorological Office, a client of Sophos, which I have always used at home ever since.

Thank you, 4caster!

The Met Office took great care with Y2K, so communications continued to work seamlessly except for planned failures of some ancient and obsolete automatic weather stations on North Sea platforms.

However, at 00:00 on 29 February 2000, all the UK military airfield weather reports stopped being transmitted.

Some [PAUSE] idiot long before had been told that there is no leap day at the turn of a century, and programmed the system accordingly.

People can cater for the ‘known unknowns’, but it’s the ‘unknown unknowns’ that catch us out.


DUCK. Yes, indeed!

And the irony is, if that person had never heard of the fact that there are exceptions to the “is the year divisible by 4” rule for leap years…

…they probably wouldn’t have had this bug.

If they’d been double-slack, they would have got away with it!

Because, of course, any year that’s divisible by 4, in our modern calendar, is a leap year.

Except when it’s a century, *except* that you don’t make the correction every *fourth* century.

So if they’d actually done nothing, and gone, “Oh well, every year divisible by 4 is a leap year”…

…you can imagine somebody saying, “No, no, no! You’ve got it wrong, you’ve got it wrong: there’s an exception.”

And so, in trying to fix the bug, they actually introduced one!
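
[Note. The full Gregorian rule, in C: a year y is a leap year if (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0). The year 2000 is divisible by 400, so it was a leap year after all – apply only the “centuries aren’t leap years” exception and you wrongly skip 2000-02-29.]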


DOUG. [LAUGHS] That’s the worst!


DUCK. That’s another reminder that sometimes half-fixing a problem can actually be worse than doing nothing about it at all.

So, a job worth doing, Douglas, is worth doing well!


DOUG. Excellent advice, and I agree with you.

If you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or hit us up on social: @NakedSecurity.

That is our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH. Stay secure!

[MUSICAL MODEM]


Ransomware with a difference: “Derestrict your software, or else!”

Just over a year ago, graphics card behemoth Nvidia announced an unexpected software “feature”: anti-cryptomining code baked into the drivers for its latest graphics processing units (GPUs).

Simply put, if the driver software thinks you’re using the GPU to perform calculations related to Ethereum cryptocurrency calculations, it cuts the execution speed of your code in half.

This restriction isn’t meant to protect you from yourself, for example to limit hardware damage if you try to drive the GPU too hard and cause it to overheat dangerously.

This is all about managing supply and demand.

Unfortunately for keen gamers, who love powerful GPUs because they improve their gaming experience with faster and more realistic graphics, cryptocurrency mining syndicates love good GPUs even more.

That’s because GPUs greatly accelerate the mining of Ethereum-based cryptocurrencies, with calculation speeds (or hashrates, as they are known in the jargon) anywhere from five to ten times higher than a normal CPU can manage for the same amount of electricity.

Even more unfortunately for gamers, who might buy just one or two GPUs at a time, mining syndicates use their purchasing power to buy up GPUs in bulk.

This, in turn, encourages scalpers to buy in bulk too, aiming to sell their “second hand” cards well above new retail prices when official supplies run out.

Nvidia decided to appease its many avid gaming fans – surely the company’s most loyal long-term GPU customers, given that they actually want graphics cards for doing graphics – by splitting its processor card line in two.

Mining XOR Gaming

As Nvidia said last year:

To address the specific needs of Ethereum mining, we’re announcing the NVIDIA CMP [Cryptocurrency Mining Processor] product line for professional mining. CMP products, which don’t do graphics, are […] optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and thus don’t impact the availability of GeForce GPUs to gamers.

The idea is that GeForce GPUs run at full speed if used for graphics, but if used for Ethereum mining are deliberately hobbled by Nvidia’s Lite Hash Rate system, or LHR for short.

Public opinion at the time of the announcement was sharply divided, as a quick look at the many comments on last year’s article will reveal.

Naked Security readers reacted in many ways.

A gamer called Trillian said, “Good on Nvidia!”

Others claimed this LHR behaviour was unfair because they used their GPU cards for a mix of gaming and mining (intermingled, intriguingly, with comments from readers who claimed those claims were made up).

And a commenter called J Riley Castine was even more critical, wanting to know, “How is such a move […] not a violation of anti-trust laws?”

Exit light, enter night

Well, it looks as though this year-old community divide over LHR has spilled over into outright cybercrime.

Popular technology website Tom’s Hardware, amongst numerous others, is reporting that cybercrime gang Lapsus$ claims to have hacked Nvidia and stolen a terabyte’s worth of data…

…only to issue what amounts to an unusual ransomware demand: Remove the Lite Hash Rate limiter, or else!

According to an IM screenshot posted by Tom’s Hardware, the alleged hackers wrote:

Hello,

We decided to help mining and gaming community, we want nvidia to push an update for all 30 series firmware that remove every lhr limitations otherwise we will leak hw folder.

If they remove the lhr we will forget about hw folder (it’s a big folder) We both know lhr impact mining and gaming.

Thanks.

The hw folder (hw is short for “computer hardware”) alluded to above is the claimed 1TB of allegedly stolen data, apparently including card schematics, driver and firmware code, internal documentation, and more.

Ironically, in the same message thread, these hackers also claim to be selling their own “LHR unlocker” for some Nvidia cards, although the underground market for such a cracking tool would clearly evaporate if Nvidia were to remove the LHR restrictions for everyone.

Perhaps the alleged existence of this darkweb LHR unlocker is supposed to make Nvidia feel even more pressurised, on the grounds that an LHR bypass could be made public anyway, so the company might as well go along with the blackmail demand?

What to do?

It’s hard to know what to believe when messages of this sort start circulating.

Did the hackers actually get in to start with? Did they really manage to steal the information they’re claiming? Was this a conventional ransomware attack, aiming at both stealing and scrambling data for extra leverage? If so, and we therefore assume that the data scrambling part was thwarted, why should we believe any of the boasts in the messages? Do the crooks really have an LHR unlocker of their own to add to the drama?

We may never know the answers to these questions, but we can learn from the allegations anyway, which reiterate the importance of defence-in-depth.

Defence-in-depth not only involves multiple layers of proactive protection aimed at early threat detection and prevention, but ideally also needs ongoing threat assessment and response, in order to figure out what really happened if anomalies are detected.

As the self-styled Nvidia hackers say:

We were into nvidia systems for about a week, we fastly escalated to admin of a lot of systems. We grabbed 1TB of data.

Whether that’s true or not in this case, it does describe the nature of many modern cyberattacks, which aren’t simply automated “smash, grab and run” sallies any more.

Modern cyberintrusions typically involve human-led network exploration, privilege escalation, and data exfiltration, often over an extended period.

Intruders with administrator powers often introduce backdoors along the way, or add extra network accounts for themselves, thus giving themselves a quiet and easy way back in next time…

…if you don’t take the trouble to seek-and-destroy the boobytraps they left behind this time.



