
DEADBOLT ransomware rears its head again, attacks QNAP devices

Yes, ransomware is still a thing.

No, not all ransomware attacks unfold in the way you might expect.

Most contemporary ransomware attacks involve two groups of criminals: a core gang who create the malware and handle the extortion payments, and “members” of a loose-knit clan of “affiliates” who actively break into networks to carry out the attacks.

Once they’re in, the affiliates then wander around the victim’s network, getting the lie of the land for a while, before abruptly and often devastatingly scrambling as many computers as they can, as quickly as they can, typically at the worst possible time of day.

The affiliates typically pocket 70% of the blackmail money for any attacks they conduct, while the core criminals take an iTunes-like 30% cut of every attack carried out by every affiliate, without ever needing to break into anyone’s computers themselves.

That’s how most ransomware attacks happen, anyway.

But regular readers of Naked Security will know that some victims, notably home users and small businesses, end up getting blackmailed via their NAS devices, short for network attached storage.

Plug-and-play network storage

NAS boxes, as they are colloquially known, are miniature, preconfigured servers, usually running Linux, that are typically plugged directly into your router, and then act as simple, fast, file servers for everyone on the network.

No need to buy Windows licences, set up Active Directory, learn how to manage Linux, install Samba, or get to grips with CIFS and other network file system arcana.

NAS boxes are “plug-and-play” network attached storage, and popular precisely because of how easily you can get them running on your LAN.

As you can imagine, however, in today’s cloud-centric era, many NAS users end up opening up their servers to the internet – often by accident, though sometimes on purpose – with potentially dangerous results.

Notably, if a NAS device is reachable from the public internet, and the embedded software, or firmware, on the NAS device contains an exploitable vulnerability, you could be in real trouble.

Crooks could not only run off with your trophy data, without needing to touch any of the laptops or mobile phones on your network, but also modify all the data on your NAS box…

…including directly rewriting all your original files with encrypted equivalents, with the crooks alone knowing the unscrambling key.

Simply put, ransomware attackers with direct access to the NAS box on your LAN could derail almost all your digital life, and then blackmail you directly, just by accessing your NAS device, and touching nothing else on the network.

The infamous DEADBOLT ransomware

That’s exactly how the infamous DEADBOLT ransomware crooks operate.

They don’t bother attacking Windows computers, Mac laptops, mobile phones or tablets; they just go straight for your main repository of data.

(You probably turn off, “sleep”, or lock most of your devices at night, but your NAS box probably quietly runs 24 hours a day, every day, just like your router.)

By targeting vulnerabilities in the products of well-known NAS vendor QNAP, the DEADBOLT gang aims to lock everyone on your network out of their digital lives, and then to squeeze you for several thousand dollars to “recover” your data.

After an attack, when you next try to download a file from the NAS box, or to configure it via its web interface, you might see something like this:

In a typical DEADBOLT attack, there’s no negotiation via email or IM – the crooks are blunt and direct, as you see above.

In fact, you generally never get to interact with them using words at all.

If you don’t have any other way to recover your scrambled files, such as a backup copy that’s not stored online, and you’re forced to pay up to get your files back, the crooks expect you simply to send them the money in a cryptocoin transaction.

The arrival of your bitcoins in their wallet serves as your “message” to them.

In return, they “pay” you the princely sum of nothing, with this “refund” being the sum total of their communication with you.

This “refund” is a payment that is worth $0, submitted simply as a way of including a bitcoin transaction comment.

That comment is encoded as 32 hexadecimal characters, which represent 16 raw bytes, or 128 bits – the length of the AES decryption key you will use to recover your data:
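As a minimal sketch of that arithmetic – our own illustration, using a made-up comment value rather than a real transaction – here’s how those 32 hex characters map onto 16 key bytes:

  // A minimal sketch (our own illustration, with a made-up comment
  // value, not a real transaction) of how 32 hex characters turn
  // into the 16 raw bytes of an AES-128 key.
  const comment = "00112233445566778899aabbccddeeff"; // hypothetical

  if (!/^[0-9a-fA-F]{32}$/.test(comment)) {
    throw new Error("expected exactly 32 hex characters");
  }

  // Each pair of hex digits encodes one byte: 32 chars -> 16 bytes.
  const keyBytes = Uint8Array.from(
    comment.match(/../g).map(pair => parseInt(pair, 16))
  );

  console.log(keyBytes.length * 8); // 128 (bits)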

The DEADBOLT variant pictured above even included a built-in taunt to QNAP, offering to sell the company a “one size fits all decryption key” that would work on any affected device:

Presumably, the crooks above were hoping that QNAP would feel guilty enough about exposing its customers to a zero-day vulnerability that it would pony up BTC 50 (currently about $1,000,000 [2022-09-07T16:15Z]) to get everyone off the hook, instead of each victim paying up BTC 0.3 (about $6000 now) individually.

DEADBOLT rises again

QNAP has just reported that DEADBOLT is doing the rounds again, with the crooks now exploiting a vulnerability in a QNAP NAS feature called Photo Station.

QNAP has published a patch, and is understandably urging its customers to ensure they’ve updated.

What to do?

If you have a QNAP NAS product anywhere on your network, and you are using the Photo Station software component, you may be at risk.

QNAP’s advice is:

  • Get the patch. Via your web browser, log in to the QNAP control panel on the device and choose Control Panel > System > Firmware Update > Live Update > Check for Update. Also update the apps on your NAS device using App Center > Install Updates > All.
  • Block port-forwarding in your router if you don’t need it. This helps to prevent traffic from the internet from “reaching through” your router in order to connect and log in to computers and servers inside your LAN.
  • Turn off Universal Plug and Play (uPnP) on your router and in your NAS options if you can. The primary function of uPnP is to make it easy for computers on your network to locate useful services such as NAS boxes, printers, and more. Unfortunately, uPnP often also makes it dangerously easy (or even automatic) for apps inside your network to open up access to users outside your network by mistake.
  • Read up on QNAP’s specific advice on securing remote access to your NAS box if you really need to enable it. Learn how to restrict remote access only to carefully-designated users.


Chrome and Edge fix zero-day security hole – update now!

Just three days after Chrome’s previous update, which patched 24 security holes, none of which were known to be exploited in the wild…

…the Google programmers announced the release of Chrome 105.0.5195.102, where the last of the four numbers in the quadruplet jumps up from 52 (on Mac and Linux) or 54 (on Windows).

The release notes confirm, in the clipped and frustrating “indirect statement made in the passive voice” bug-report style that Google seems to have borrowed from Apple:

 CVE-2022-3075: Insufficient data validation in Mojo. Reported by Anonymous on 2022-08-30 [...] Google is aware of reportsrts [sic] that an exploit for CVE-2022-3075 exists in the wild.

Microsoft has put out an update, too, taking its browser, which is based on Chromium, to Edge 105.0.1343.27.

Following Google’s super-brief style, Microsoft wrote merely that:

 This update [Edge 105.0.1343.27] contains a fix for CVE-2022-3075, which has been reported by the Chromium team as having an exploit in the wild

As always, our translation of security holes written up in this non-committal way is: “Crooks or spyware vendors found this vulnerability before we did, have figured out how to exploit it, and are already doing just that.”

EoP or RCE?

We’d love to be able to determine, given that the bug relates to the incorrect handling of input data, whether this bug leads to a worrying security outcome such as EoP, short for elevation of privilege, or if it can be abused for a more disastrous result such as full-blown RCE, short for remote code execution.

EoP typically means that crooks need a malware foothold to start with, so EoP bugs usually can’t be exploited to break in in the first place.

They’re still vital to patch, because a crook who’s sneaking round your computer under cover of a limited user such as GUEST will often bring along an EoP exploit to “promote” themselves so they have root or sysadmin powers, aiming to turn what might otherwise have been a modest risk on a single computer into a total compromise of your whole network.

RCE exploits, on the other hand, are commonly used either to get a beachhead inside a network to initiate an attack, or to jump repeatedly from computer to computer once inside, or both.

Once again, the brevity of Google’s report means that, even though the bug is rated High and not Critical, we’re going to invite you to infer that we’re talking about RCE here, and therefore to assume that a determined attacker could use this bug to implant malware from scratch.

Mojo and IPC

Mojo, in case you’re wondering, is a Google code library for what’s known as IPC, short for inter-process communication.

These days, for security reasons, browsers generally don’t run as a single, monolithic operating system process.

Loosely speaking, a process can consist of multiple threads, which are essentially “sub-processes” inside the main process, by means of which a single program can quietly get on with doing two things at the same time, such as printing out a document while you’re scrolling through it, or carrying out a spelling check in the background.

Splitting a single-process application into threads is more convenient (by which we mean “is much quicker and easier, but way less secure”) than splitting it into separate processes, because all the threads inside a process have access to the same chunk of memory.

That means that threads can interact and share data much more easily, because they can simply dip directly into the same common pool of data, including checking the current configuration settings, exchanging memory addresses, sharing file handles, re-using cached images directly from RAM, and much more.

On the other hand, sharing one big memory space means that a bug in one part of the program, such as the thread that is busily rendering and displaying your first browser tab, could trample on or affect code that’s busy with other things, such as the threads handling the rest of the tabs you have open.

As a result, modern browsers generally split themselves into numerous separate processes, for example so that each tab is handled in an independent process, thus preventing one runaway tab from trivially leeching data such as cookies and access tokens from other tabs related to completely different websites.

Inter-process communication

This means you need a secure and reliable way of shuffling data between the separate processes of the browser.

Instead of tab A and tab B simply consulting a common block of memory M in the main browser thread, the independent processes for tab A and tab B need to be supplied with their own copies of the data they’ll need.

And that’s where you need an aptly named inter-process communication system, or IPC.

Any processes that shuffle data between themselves via IPC need to agree on how to construct that data correctly for sending, and how to deconstruct it safely at the other end.

The jargon term for this is serialisation and deserialisation, because you’re taking chunks of data, possibly plucked out of content already stored in numerous different areas of memory, and converting those chunks into a structured list of “here is your very own record of the data items, the types and the values of the stuff you need to know”.

Once serialised, the data can then be transmitted to another process – perhaps via a shared block of memory, or over a communication pipe at the operating system level, via a network link, or even tapped out in Morse code for anyone to pick up – in such a way that the receiver can make sense of the data, and unpack it independently, without needing to know anything about the current or future internal state of the sender’s process.

For example, if A sends B a blob of 128 bytes, is that two 32-bit integers and two 64-bit floating point numbers (4+4+8+8 = 24 bytes so far), followed by the single byte 0x67 (103 in decimal), followed by 103 bytes of ASCII text (4+4+8+8+1+103 = 128 bytes overall)?

Or is it a UTF-8 text message of exactly 120 bytes, padded with zeros if necessary to fill out the space, followed by two 32-bit numbers that denote the width and height of the on-screen window in which to display it?
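Here’s a minimal sketch of the deserialisation problem – our own example, not Mojo’s actual API – unpacking the first interpretation of that 128-byte blob, assuming little-endian byte order:

  // A minimal sketch, not Mojo's actual API: unpacking a 128-byte
  // blob laid out as two 32-bit integers, two 64-bit floats, a
  // one-byte string length, and up to 103 bytes of ASCII text.
  // We assume little-endian byte order throughout.
  function parseBlob(buf) { // buf is a 128-byte ArrayBuffer
    const view = new DataView(buf);
    const intA = view.getUint32(0, true);    // bytes 0..3
    const intB = view.getUint32(4, true);    // bytes 4..7
    const fltA = view.getFloat64(8, true);   // bytes 8..15
    const fltB = view.getFloat64(16, true);  // bytes 16..23
    const len  = view.getUint8(24);          // byte 24: text length

    // Never trust the declared length: at most 103 bytes remain.
    if (len > buf.byteLength - 25) {
      throw new RangeError("declared text length overruns the buffer");
    }
    const text = new TextDecoder("utf-8").decode(
      new Uint8Array(buf, 25, len)
    );
    return { intA, intB, fltA, fltB, text };
  }

Note the length check: as we’ll see below, trusting that byte blindly is exactly where things go wrong.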

When sender and receiver disagree

As you can imagine, misinterpreting the data you receive via IPC, or failing to check that it makes sense before relying on it, could have serious consequences.

In the first example, if the string-length byte denotes a size bigger than the amount of data left (e.g. 0xFF instead of 0x67), then blindly trusting that erroneous size byte will cause you to read past the end of the buffer.

In the second example, if process A forgets about the width and height data and sends a full 128 bytes of UTF-8 text instead, then blindly “decoding” two 32-bit numbers at the end will produce incorrect values, perhaps even dangerously so.

If you multiply those incorrectly encoded numbers together to work out how many bytes of storage to allocate for the on-screen window, you are probably heading towards memory mismanagement problems somewhere down the line.

Ideally, senders will validate their IPC data outputs before transmitting them, and receivers will independently re-validate their IPC inputs before consuming and using them, but [a] that doesn’t always happen and [b] even when it does, you could still end up in trouble if you have inconsistent validation procedures at each end.

In other words, “insufficient data validation” of IPC data exchanged by co-operating processes is always a bug, and could end up being serious, as in this case.

What to do?

Patch early, patch often!

In Chrome, check that you’re up to date by clicking Three dots > Help > About Google Chrome, or by browsing to the special URL chrome://settings/help.

The Chrome version you are looking for (or Chromium version, if you’re using the non-proprietary, open source flavour) is: 105.0.5195.102 or later.

In Edge, it’s Three dots > Help and feedback > About Microsoft Edge.

The Edge version you’re after is: 105.0.1343.27 or later.

Google’s release notes also list an update to the Extended Stable Channel, which you might be using if you’re on a computer provided by work – like Mozilla’s Extended Support Release or ESR, it’s an official version that lags behind on features but keeps up with security patches, so you aren’t forced to adopt new features just to get patched.

The Extended Stable version you want is: 104.0.5112.114.

Google has also just announced a Chrome for iOS update, available (as always) via the App Store.

There’s no mention of whether the iOS version was affected by CVE-2022-3075, but the version you’re after, in any case, is 105.0.5195.100.

(We’re guessing that by iOS, Google means both iOS and iPadOS, now shipped as different variants of Apple’s underlying mobile operating system.)

Nothing in the release notes so far [2022-09-05T13:45Z] about Android – check in Google Play to see if you’re up to date.


Peter Eckersley, co-creator of Let’s Encrypt, dies at just 43

We don’t often write obituaries on Naked Security, but this is one of the times we’re going to.

You might not have heard of Peter Eckersley, PhD, but it’s very likely that you’ve relied on a cybersecurity innovation that he not only helped to found, but also to build and establish across the globe.

In fact, if you’re reading this article right on the site where it was originally published, Sophos Naked Security, you’re directly reaping the benefits of Peter’s work right now.

If you click on the padlock in your browser [2022-09-0T22:37:00Z], you’ll see that this site, like our sister blog site Sophos News, uses a web certificate that’s vouched for by Let’s Encrypt, now a well-established Certificate Authority (CA).

Let’s Encrypt, as a CA, signs TLS cryptographic certificates for free on behalf of bloggers, website owners, mail providers, cloud servers, messaging services…

…anyone, in fact, who needs or wants a vouched-for encryption certificate, subject to some easy-to-follow terms and conditions.

Remember that web certificates can’t, and don’t, vouch for the actual content that you ultimately serve up. But they do, and they can, provide evidence that you have demonstrated in some way that you actually control the internet domains that you claim to own, without which everyone could casually claim to be someone else, and anyone could easily phish or snoop on almost everyone.

A “wild idea” made real

As one of Peter’s former colleagues, Seth Schoen, wrote earlier today on the Let’s Encrypt community forum:

I’m devastated to report that Peter Eckersley […], one of the original founders of Let’s Encrypt, died earlier this evening [2022-09-02] at CPMC Davies Hospital in San Francisco.

Peter was the leader of EFF’s contributions to Let’s Encrypt and ACME over the course of several years during which these technologies turned from a wild idea into an important part of Internet infrastructure. […] You can find a very abbreviated version of this history in the Let’s Encrypt paper, to which Peter and I both contributed.

Peter had apparently revealed recently that he had been diagnosed with cancer – he turned just 43 shortly before midsummer’s day this year (or perhaps, given that he was originally from Melbourne in Australia, we should say midwinter’s day).

Making a confoundingly complex process simple, yet trustworthy

Let’s Encrypt wasn’t the first effort to try to build a free-as-in-freedom and free-as-in-beer infrastructure for online encryption certificates, but the Let’s Encrypt team was the first to build a free certificate signing system that was simple, scalable and solid.

As a result, the Let’s Encrypt project was soon able to gain the trust of the browser-making community, to the point of quickly getting accepted as an approved certificate signer (a trusted-by-default root CA, in the jargon) by most mainstream browsers.

Indeed, part of Let’s Encrypt’s appeal (and perhaps even its primary importance) is not just that you don’t have to pay a fee to get web certificates signed, but also that the whole process of generating, signing, validating, deploying and renewing certificates is free and easy (automatic, in fact!), yet safe and well thought out.

Before Let’s Encrypt, many website owners didn’t bother with HTTPS at all, and in many cases, especially for home users, charities, small businesses or hobbyists, the chief hassle wasn’t always the cost (though if you had several sites to protect, cost quickly became a big deal).

One of the chief hassles with HTTPS, until Let’s Encrypt came along, was… well, simply put, the hassle of it all.

The hassle of understanding the jargon, of generating the right sort of keypairs and certificates, of submitting the needed certificate signing requests, of actually paying the fee to have them processed, and of deploying them once the signing was done.

And then doing the same thing again, year after year, so that your keys and certificates didn’t expire and leave your visitors facing certificate warnings, or your website getting blocked.

Winning over the world

At first, the efforts of Let’s Encrypt weren’t universally popular, and some of the most vocal opponents (ironically, considering what Let’s Encrypt set out to do in terms of freedom and simplicity) came from the midst of those same hassled home users, hobbyists and boutique site operators whom we mentioned above.

A vigorous minority were somehow convinced that HTTPS was a con, a conspiracy, a cult…

…a coterie of cryptographic crusaders who were committed to compelling us all to use encryption, whether we wanted it or not.

Even for material that we wanted to make public! Even for content that was as boring and as uncontroversial as eating cornflakes for breakfast! Extra complexity with no obvious purpose! We never asked the “experts” to push HTTPS on us in the first place, not even for free!

Thanks to the perseverance, personality and persuasiveness of Peter Eckersley and his co-creators, however, we don’t hear those complaints much on Naked Security any more.

After all, end-to-end encryption of web traffic isn’t only about keeping the actual content you’re viewing confidential.

It’s also about keeping confidential the fact that you chose to view it (and when and where you did so), which really isn’t anyone else’s business.

It’s about preventing anyone who wants to from casually setting up a fake website that says it belongs to someone else, even to a well-known brand.

It’s about inhibiting the casual, continuous, warrantless surveillance of your web traffic by governments and cybercriminals alike.

And it’s about making it difficult for other internet users to fiddle with the content you’re reading along the way, or to tamper with the replies you send back, thus undetectably turning what you see and what you say into fake news, or stealing your passwords, or trashing your online reputation, or taking over your online accounts.

Ethics and safety of AI

In recent years, Peter founded the AI Objectives Institute, with the aim of ensuring that we pick the right social and economic problems to solve with AI:

We often pay more attention to how those goals are to be achieved than to what those goals should be in the first place. At the AI Objectives Institute, our goal is better goals.

To borrow the very words that Peter himself wrote to conclude his personal obituary for the late activist Aaron Swartz, who was a close friend…

Peter Eckersley, may you read in peace.

And thanks for Let’s Encrypt.

It really has brought HTTPS to where it belongs – everywhere.


S3 Ep98: The LastPass saga – should we stop using password managers? [Audio + Text]


With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  LastPass breached, Airgapping breached, and “Sanitizing” Chrome.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody, I’m Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how do you do today, Sir?


DUCK.  I’m very cheery, thank you, Doug.

Well, I’ve got a big smile on my face.


DOUG.  Great.


DUCK.  Just because!


DOUG.  I’ve got something that will put an extra-big smile on your face.

We’re going to talk about This Week in Tech History…

…on 20 August 1990, the Computer Misuse Act went into effect in your home country, the UK.

The Act was meant to punish three types of offences: unauthorised access to computer material; unauthorised access meant to facilitate further offences; and unauthorised modification of computer material.

And the Act was spurred in part by two men accessing British Telecom’s voicemail system, including the personal mailbox of Prince Philip.

Paul, where were you when the Computer Misuse Act was enacted?


DUCK.  Well, I wasn’t actually living in the UK at that time, Doug.

But, all over the world, people were interested in what was going to happen in the UK, precisely because of that “Prestel Hacking” court case.

The two perpetrators were (actually, I don’t think I can call them that, because their conviction was overturned) Robert Schifreen and Stephen Gold.

[Stephen] actually died a few years ago – silentmodems.com is a suitable-for-work memento to him.

They were tried for, I think, forging and uttering, which is where you create something fake and then convince someone it’s true, which was felt to be a bit of a legal stretch.

And although they were convicted and fined, they went to appeal and the court said, “No, this is nonsense, the law doesn’t apply.”

It was pretty obvious that, although sometimes it’s better to try and make old laws apply to new situations, rather than just churning out new legislation all the time, in this case, where computer intrusions were concerned…

…perhaps taking analogues from the old physical days of things like “forging” and “breaking and entering” and “theft” just weren’t going to apply.

So that’s exactly what happened with the Computer Misuse Act.

It was meant to usher in rather different legislation than simply trying to say, “Well, taking data is kind of like stealing, and breaking into a computer is kind of like trespass.”

Those things didn’t really add up.

And so the Computer Misuse Act was famously meant to cross the bridge into the digital era, if you like, and begin to punish cybercrime in Britain.


DOUG.  And the world’s toughest segue here to our first story!

We go from the Computer Misuse Act to talking about static analysis of a dynamic language like JavaScript.


DUCK.  That’s what you might call an anti-segue: “Let’s segue by saying there is no segue.”


DOUG.  I try to pride myself on my segues and I just had nothing today.

There’s no way to do it. [LAUGHTER]


DUCK.  I thought it was pretty good…

Yes, this is a nice little story that I wrote up on Naked Security, about a paper that was presented recently at the 2022 USENIX Conference.

It’s entitled: Mining Node.js Vulnerabilities via Object Dependence Graph and Query.

JavaScript bugs aplenty in Node.js ecosystem – found automatically

And the idea is to try to reintroduce and to reinvigorate what’s called static analysis, which is where you just look at the code and try to intuit whether it has bugs in it.

It’s a great technique, but as you can imagine, somewhat limited.

There’s nothing quite like testing something by using it.

Which is why, for example, in the UK, where there’s an annual safety test for your car, a lot of it is inspection…

…but when it comes to the brakes, there’s actually a machine that spins up the wheels and checks that they really *do* slow things down properly.

So, static analysis has sort-of fallen out of favour, if you like, because according to some schools of thought, it’s a bit like trying to use, say, a simple spelling checker on a document to judge whether it is actually correct.

For example, you put a scientific paper into a spelling checker, and if none of the words are misspelled, then the conclusions must be true… clearly, that’s not going to work.

So, these chaps had the idea of trying to update and modernise static analysis for JavaScript, which is quite tricky because in dynamic languages like JavaScript, a variable could be an integer at one moment and a string the next, and you can add integers and strings and it just automatically works things out for you.

So a lot of the bugs that you can identify easily with classic static analysis?

They don’t apply with dynamic languages, because they’re meant to allow you to chop and change things at runtime, so what you see in the code is not necessarily what you get at runtime.
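Here’s a tiny illustration of the problem – our own example, not one from the paper:

  // Our own tiny example, not code from the paper: the same variable
  // legally changes type as the program runs, so a static reading of
  // the source can't pin down what it will hold at any given moment.
  let x = 42;                // x is a number
  x = x + 1;                 // still a number: 43
  x = "the answer is " + x;  // now a string: "the answer is 43"
  x = x.length;              // now a number again: 16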

But the [researchers] proved that there is what you might call “life in the old dog yet”, because they were able to take 300,000 packages from the NPM repository, and using their automated tools, fairly briskly I think, they found about 180 bugs, of which somewhere around 30 actually ended up getting CVEs.

And I thought this was interesting, because you can imagine – in a world of supply-chain attacks where we’re taking massive amounts of code from things like NPM, PyPI, RubyGems, PHP Packagist – it’s hard to subject every possible package to full dynamic analysis, compile it, run it and test it… before you even begin to decide, “Do I trust this package? Do I think that this development team is up to scratch?”

It’s nice to have some more aggressive tools that allow you to find bugs proactively in the giant, convoluted, straggly web of complication that is contemporary supply-chain source code dependencies.


DOUG.  Well, that’s great! Great work everybody!

I’m very proud of these researchers, and this is a good addition to the computing community.

And speaking of an addition to the computing community, it seems that the “airgap” has been breached so badly that you might as well not even use it.

Am I right, Paul?

Breaching airgap security: using your phone’s gyroscope as a microphone


DUCK.  Sounds like you’ve read the PR stuff, Doug!


DOUG.  [LAUGHING] I can’t deny it!


DUCK.  Regular Naked Security readers and podcast listeners will know what’s coming next… Ben-Gurion University of the Negev in Israel.

They have a team there who specialise in looking at how data can be leaked across airgaps.

Now, an airgap is where you actually want to create two deliberately separate networks for security purposes.

A good example might be, say, malware research.

You want to have a network where you can let viruses loose, and let them roam around and try stuff…

…but you don’t want them to be able to escape onto your corporate network.

And the best way to do that is not to try and set all kinds of special network filtering rules, but just say, “You know what, we’re actually going to have two separate networks.”

Thus the word airgap: there’s no physical interconnection between them at all, no wire connecting network A to network B.

Now, clearly, in a wireless era, things like Wi-Fi and Bluetooth are a disaster for segregated networks.

[LAUGHTER]

There are ways that you can regulate that.

For example, let’s say you say, “Well, we are going to let people take mobile phones into the secure area – it’s not a *super* secure area, so we’ll let them take their mobile phones”, because they might need to get a phone call from home or whatever.

“But we’re going to insist on their phones, and we’re going to verify that their phones, are in a specific lockdown condition.”

And you can do that with things like mobile device management.

So, there are ways that you can actually have airgapped networks, separate networks, but still be a little bit flexible about the devices that you let people bring in.

The problem is that there are all sorts of ways that an untrustworthy insider can seem to work perfectly *within* the rules, seem to be 100% compliant, yet have gone rogue and exfiltrate data in sneaky ways.

And these researchers at Ben-Gurion University of the Negev… they’re great at PR as well.

They’ve done things in the past like LANTENNA, which is where they use a LAN cable as a sort of radio transmitter that leaks just enough electromagnetic radiation from the wire inside the network cabling that it can be picked up outside.

And they had the FANSMITTER.

That was where, by varying the CPU load deliberately on a computer, you can make the fan speed up and slow down.

And you can imagine, with a microphone even some distance away, you can kind of guess what speed a fan is doing on a computer on the other side of the airgap.

Even if you only get a tiny bit of data, even if it’s just one bit per second…

…if all you want to do is surreptitiously leak, say, an encryption key, then you might be in luck.

This time, they did it by generating sounds on the secure side of the airgap in a computer speaker.

But computer speakers in most computers these days, believe it or not, can actually generate frequencies high enough that the human ear can’t hear them.

So you don’t have a giveaway that there’s suddenly this suspicious squawking noise that sounds like a modem going off. [LAUGHTER]

So, that’s ultrasonic.

But then you say, “Well, all the devices with microphones that are on the other side of the airgap, they’re all locked down, nobody’s got a microphone on.”

It’s not allowed, and if anyone were found with a mobile phone with a microphone enabled, they’d instantly be sacked or arrested or prosecuted or whatever…

Well, it turns out that the gyroscope chip in most mobile phones, because it works by detecting vibrations, can actually act as a really crude microphone!

Just enough to be able to detect the difference between, say, two different frequencies, or between two different amplitudes at the same frequency.

They were able to exfiltrate data using the gyroscope chip in a mobile phone as a microphone…

… and they did indeed get as low as one bit per second.

But if all you want to do is extract, say, an AES key or an RSA private key, which might be a few hundred or a few thousand bits, well, you could do it in minutes or hours using this trick.

So, airgaps are not always what they seem, Doug.

It’s a fascinating read, and although it doesn’t really put your home network at great risk, it’s a fun thing to know about.

If you have anything to do with running secure networks that are meant to be separate, and you want to try and protect yourself against potentially rogue insiders, then this is the sort of thing that you need to be looking at and taking into account.


DOUG.  OK, very good.

Moving right along, we are fans around here of saying “validate thine inputs” and “sanitise thine inputs”, and the newest version of Chrome has taken away the joy we will get from being able to say “sanitise thine inputs”, because it’s just going to do it automatically.

Chrome patches 24 security holes, enables “Sanitizer” safety system


DUCK.  Well, that’s great, it means we can say, “Sanitise thine inputs has become easier”!

Yes, Chrome 105 is the latest version; it just came out.

The reason we wrote it up on Naked Security is it patches no fewer than 24 security holes – one Critical, I think, with eight or nine of them considered High, and more than half of them are down to our good friends, memory mismanagement flaws.

Therefore it’s important, even though none of them are zero-days this time (so there’s nothing that we know that the crooks have got onto yet)…

…with 24 security holes fixed, including one Critical, the update is important on that account alone.

But what’s interesting is this is also the version, as you’re saying, which Google has turned on a feature called “Sanitizer”.

It’s been knocking around in browsers in the background experimentally for about a year.

In Firefox, it’s off by default – that’s not to say you can’t turn it on, but you have to go into special settings to enable it.

The Google crew have decided, “We’re going to put it on by default in our browser”, so I don’t doubt that Firefox will follow suit.

And the idea of this “Sanitizer”…

…it doesn’t fix any problems automatically on its own.

It’s just a new programming function that you, as a Web programmer, can use when you generate HTML and shove it into a web page…

…instead of just setting some variable in JavaScript that makes the stuff appear on the web page, there’s now a special function called setHTML, which will take that HTML and subject it to a whole load of “sanitise thine input” checks by default.

Notably, if there’s anything in there like script tags (even if what you are creating comes from mashing together a whole load of variables – so, something that wouldn’t show up in static analysis, for example), then by the time it comes to setting that in the browser, anything that is considered risky will simply be removed.

The page will be created without it.

So rather than trying to say, “Well, I see you put some angle brackets and then [the word] script – you don’t really want to do that, so I’ll change the angle bracket to ampersand-LT-semicolon, so instead of *being* an angle bracket, it *displays* as an angle bracket, so it’s a display character, not a control character”…

What the Sanitizer does, it says, “That shouldn’t be there”, and it actually strips it out automatically.

By default, the idea is if you use this function, you should be a lot safer than if you don’t.

And it means you don’t have to knit your own sanitisation checking every time you’re trying to process stuff.

You can rely on something that’s built into the browser, and knows what sort of things the browser thinks are important to remove automatically.

So the things to look out for are a new JavaScript function called setHTML and a JavaScript object called Sanitizer.

And we’ve got links to Google’s pages and to MDN Web Docs in the article on Naked Security.

So, if you’re a Web programmer, be sure to check this out – it’s interesting *and* important.
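Here’s a minimal sketch of the new API in action – our own example, based on the Sanitizer API as shipped in Chrome 105, not code from the show; the element ID is hypothetical:

  // A minimal sketch, based on the Sanitizer API as shipped in
  // Chrome 105 – our own example, not code from the show. The
  // element ID is hypothetical.
  const untrusted =
    'Hi! <img src="x" onerror="stealData()"><script>stealData()</script>';

  const target = document.querySelector("#comment");

  // setHTML() pushes the input through the built-in sanitizer:
  // the <script> element and the onerror handler are stripped out
  // entirely, rather than merely escaped for display.
  target.setHTML(untrusted, { sanitizer: new Sanitizer() });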


DOUG.  OK, very good.

Also interesting and important: LastPass has been breached, and according to some reports on the web (I’m paraphrasing the band REM here), “It’s the end of the world as we know it.”

LastPass source code breach – do we still recommend password managers?


DUCK.  When this news first broke, Doug, I wasn’t really inclined to write this up on Naked Security at all.

I figured, “This is really embarrassing negative PR for LastPass”, but as far as I can tell, it was their source code and their proprietary stuff, their intellectual property, that got stolen.

It wasn’t customer data, and it certainly wasn’t passwords, which aren’t stored in the cloud in plaintext anyway.

So, as bad as it was, and as embarrassing as it was, for LastPass, my take on it was, “Well, it’s not an incident that directly puts their customers’ online accounts or passwords at risk, so it’s a battle they have to fight themselves, really.”


DOUG.  That’s important to point out, because a lot of people, I think, who don’t understand how password managers work – and I wasn’t totally clear on this either… as you write in the article, your local machine is doing the heavy lifting, and all the decoding is done *on your local machine*, so LastPass doesn’t actually have access to any of the things you’re trying to protect anyway.


DUCK.  Exactly.

So, the reason why I did ultimately write this up on Naked Security is that I received a lot of messages in comments, and emails, and on social media, from people who either weren’t sure, or people saying, “You know what, there’s an awful lot of guff floating around on social media about what this particular breach means.”

LastPass and other password managers have had security problems before, including bugs in the code that *could* have leaked passwords, and those got some publicity, but somehow they didn’t quite attract the attention of this: [DRAMATIC] “Oh golly, the crooks have got their source code!”

There was a lot of misinformation, I think, a lot of FUD [fear, uncertainty, doubt] flying around on social media, as you say.

People going, “Well, what do you expect when you entrust all your plaintext passwords to some third party?”

Almost as though the messages on social media where people say, “Well, that’s the problem with password managers. They’re not a necessary evil at all, they are an *unnecessary* evil. Get rid of them!”

So that’s why we wrote this up on Naked Security, as a sort of question and answer session, dealing with the key questions people are asking.

Obviously, one of the questions that I asked, because I couldn’t really avoid it, is: “Should I give up on LastPass and switch to a competitor?”

And my answer to that is: that’s a decision you have to make for yourself.

But if you’re going to make the decision, make sure you make it for the right reasons, not for the wrong reasons!

And, more importantly, “Should I give up on password managers altogether? Because this is just proof that they can never possibly be secure because of breaches.”

And as you say, that represents a misunderstanding about how any decent password manager works, where the master password that unlocks all your sub-passwords is never shared with anybody.

You only ever put it in on your own computer, and it decrypts the sub-passwords, which you then have to share with the site that you’re logging into.

Basically, the password manager company doesn’t know your master password, and doesn’t store your master password, so it doesn’t have your master password to lose.

And that’s important, because it means not only can the master password not be stolen from the password manager site, it also means that even if law enforcement show up there and say, “Right, show us all the person’s passwords,” they can’t do that either.

All they’re doing is acting as a storage location for, as you say, an encrypted BLOB.

And the idea is that it only ever should be decrypted on your device after you’ve put in your master password, and optionally after you’ve done some kind of 2FA thing.

So, as you say, all the live decryption and heavy lifting is done by you, with your password, entirely in the confines of your own device.
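Here’s a minimal sketch of what that local unlocking might look like – our own illustration using the browser’s standard WebCrypto API, not LastPass’s actual code or parameters:

  // A minimal sketch – our own illustration of the general idea,
  // not LastPass's actual code or parameters. The master password
  // never leaves this function: it derives a key that decrypts the
  // stored vault blob right here on the user's device.
  async function unlockVault(masterPassword, salt, iv, encryptedVault) {
    const enc = new TextEncoder();

    const baseKey = await crypto.subtle.importKey(
      "raw", enc.encode(masterPassword), "PBKDF2", false, ["deriveKey"]
    );

    const aesKey = await crypto.subtle.deriveKey(
      { name: "PBKDF2", salt, iterations: 100000, hash: "SHA-256" },
      baseKey,
      { name: "AES-GCM", length: 256 },
      false,
      ["decrypt"]
    );

    // Decryption happens locally; only you ever see the plaintext.
    const plaintext = await crypto.subtle.decrypt(
      { name: "AES-GCM", iv }, aesKey, encryptedVault
    );
    return JSON.parse(new TextDecoder().decode(plaintext));
  }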


DOUG.  Very helpful!

So the big question, “Do we still recommend using password managers?”… I think we can safely say, “Yes.”


DUCK.  Yes, there is a last question, which I guess is a more reasonable one: “Does suddenly having all the source code, which they didn’t have before, put the crooks at such a significant advantage that it’s game over for LastPass?”


DOUG.  Well, that is a great segue to our reader question!

If I may spike it over the net here in volleyball style…


DUCK.  Oh, yes.


DOUG.  On the LastPass article, Naked Security reader Hyua comments, in part: “What if the attackers somehow managed to modify the source code? Wouldn’t it become very risky to use LastPass? It’s like a SaaS service, meaning we can’t just not update our software to prevent the corrupted source code from working against us.”


DUCK.  Well, I don’t think it’s just software-as-a-service, because there is a component that you put on your laptop or your mobile phone – I must say, I’m not a LastPass user myself, but my understanding is you can work entirely offline if you wish.

The issue was, “What if the crooks modified the source code?”

I think we have to take LastPass at its word at the moment: they’ve said that the source code was accessed and downloaded by the crooks.

I think that if the source code had been modified and their systems had been hacked… I’d like to think they would have said so.

But even if the source code had been modified (which would essentially be a supply-chain attack), well…

…you would hope, now LastPass knows that there’s been a breach, that their logs would show what changes had been made.

And any decent source code control system would, you imagine, allow them to back out those changes.

You can be a little bit concerned – it’s not a good look when you’re a company that’s supposed to be all about keeping people from logging in inappropriately, and one of your developers basically gets their password or their access token hacked.

And it’s not a good look when someone jumps in and grabs all your intellectual property.

But my gut feeling is that’s more of a problem for LastPass’s own shareholders: “Oh golly, we were keeping it secret because it was proprietary information. We didn’t want competitors to know. We wanted to get a whole lot of patents,” or whatever.

So, there might be some business value in it…

…but in terms of “Does knowing the source code put customers at risk?”

Well, I think it was another commenter on Naked Security who said, [IRONIC] “We’d better hope that the Linux source code doesn’t get leaked anytime soon, then!”

Which I think pretty much sums up that whole issue exactly.


DOUG.  [LAUGHS]

All right, thank you for sending in that comment, Hyua.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


URGENT! Apple slips out zero-day update for older iPhones and iPads

Well, we didn’t expect this!

Our much-loved iPhone 6+, now nearly eight years old but in pristine, as-new condition until a recent UDI (unintended dismount incident, also known as a bicycle prang, which smashed the screen but left the device working fine otherwise), hasn’t received any security updates from Apple for almost a year.

The last update we received was back on 2021-09-23, when we updated to iOS 12.5.5.

Every subsequent update for iOS and iPadOS 15 has understandably reinforced our assumption that Apple had dropped iOS 12 support for evermore, and so we relegated the old iPhone to background duty, solely as an emergency device for maps or phone calls while on the road.

(We figured that another crash would be unlikely to wreck the screen any further, so it seemed a useful compromise.)

But we’ve just noticed that Apple has decided to update iOS 12 again after all.

This new update applies to the following models: iPhone 5s, iPhone 6, iPhone 6 Plus, iPad Air, iPad mini 2, iPad mini 3, and iPod touch 6th generation. (Before iOS 13.1 and iPadOS 13.1 came out, iPhones and iPads used the same operating system, referred to as iOS for both devices.)

We didn’t receive a Security Advisory email from Apple, but an alert Naked Security reader who knows we still have that old iPhone 6+ let us know about Apple Security Bulletin HT213428. (Thanks!)

Simply put, Apple has published a patch for CVE-2022-32893, which is one of the two mysterious zero-day bugs that received emergency patches on most other Apple platforms earlier in August 2022:

Malware implantation

As you will see in the article just above, there was a WebKit remote code execution bug, CVE-2022-32893, by means of which a jailbreaker, a spyware peddler, or some devious cybercriminal could lure you to a booby-trapped website and implant malware on your device, even if all you did was glance at an otherwise innocent-looking page or document.

Then there was a second bug in the kernel, CVE-2022-32894, by which said malware could extend its tentacles beyond the app it just compromised (such as a browser or a document viewer), and get control over the innards of the operating system itself, thus allowing the malware to spy on, modify or even install other apps, bypassing Apple’s much vaunted and notoriously strict security controls.

So, here’s the good news: iOS 12 isn’t vulnerable to the kernel-level zero-day CVE-2022-32894, which almost certainly rules out the risk of total compromise of the operating system itself.

But here’s the bad news: iOS 12 is vulnerable to the WebKit bug CVE-2022-32893, so that individual apps on your phone definitely are at risk of compromise.

We’re guessing that Apple must have come across at least some high-profile (or high-risk, or both) users of older phones who were compromised in this way, and decided to push out protection for everyone as a special precaution.

The danger of WebKit

Remember that WebKit bugs exist, loosely speaking, at the software layer below Safari, so that Apple’s own Safari browser isn’t the only app at risk from this vulnerability.

All browsers on iOS, even Firefox, Edge, Chrome and so on, use WebKit (that’s an Apple requirement if you want your app to make it into the App Store).

And any app that displays web content for purposes other than general browsing, such as in its help pages, its About screen, or even in a built-in “minibrowser”, is also at risk because it will be using WebKit under the covers.

In other words, just “avoiding Safari” and sticking to a third-party browser is not a suitable workaround in this case.

What to do?

We now know that the absence of an update for iOS 12, when the latest emergency patches came out for more recent iPhones, was not down to the fact that iOS 12 was already safe.

It was simply down to the fact that an update wasn’t available yet.

So, given that we now know that iOS 12 is at risk, and that exploits against CVE-2022-32893 are being used in real life, and that there is a patch available…

…then it’s an urgent matter of Patch Early/Patch Often!

Go to Settings > General > Software Update, and check that you have iOS 12.5.6.

If you haven’t yet received the update automatically, tap Download and Install to begin the process right away.

