
Chrome patches 24 security holes, enables “Sanitizer” safety system

Google’s latest Chrome browser, version 105, is out, though the full version number is annoyingly different depending on whether you are on Windows, Mac or Linux.

On Unix-like systems (Mac and Linux), you want 105.0.5195.52, but on Windows, you’re looking for 105.0.5195.54.

According to Google, this new version includes 24 security fixes, though none of them are reported as “in-the-wild”, which means that there weren’t any zero-days patched this time.

Nevertheless, there’s one vulnerability dubbed Critical, and a further eight rated High.

Of the flaws that were fixed, just over half of them are down to memory mismanagement, with nine listed as use-after-free bugs, and four as heap buffer overflows.

Memory bug types explained

A use-after-free is exactly what it says: you hand back memory to free it up for another part of the program, but carry on using it anyway, thus potentially interfering with the correct operation of your app.

Imagine, for instance, that the part of the program that thinks it now has sole access to the offending block of memory receives some untrusted input, and carefully verifies that the new data is safe to use…

…but then, in the instant before it starts using that validated input, your buggy “use-after-free” code interferes, and injects stale, unsafe data into the very same part of memory.

Suddenly, bug-free code elsewhere in the program behaves as if it were buggy itself, thanks to the flaw in your code that just invalidated what was in memory.

Attackers who can figure out a way to manipulate the timing of your code’s unexpected intervention may be able not only to crash the program at will, but also to wrest control from it, thus causing what’s known as remote code execution.

And a heap buffer overflow refers to a bug where you write more data to memory than will fit in the space that was originally allocated to you. (Heap is the jargon term for the collection of memory blocks that are currently being managed by the system.)

If some other part of the program has a memory block that just happens to be near to or next to yours in the heap, then the superfluous data that you just wrote out won’t overflow harmlessly into unused space.

Instead, it will corrupt data that’s in active use somewhere else, with similar consequences to what we just described for a use-after-free bug.

The “Sanitizer” system

Happily, as well as fixing misfeatures that weren’t supposed to be there at all, Google has announced the arrival of a new feature that adds protection against a class of browser flaws known as cross-site scripting (XSS).

XSS bugs are caused by the browser inserting untrusted data, say from a web form submitted by a remote user, directly into the current web page, without checking for (and removing) risky content first.

Imagine, for instance, that you have a web page that offers to show me what a text string of my choice looks like in your funky new font.

If I type in the sample text Cwm fjord bank glyphs vext quiz (a contrived but vaguely meaningful mashup of English and Welsh that contains all 26 letters of the alphabet in just 26 letters, in case you were wondering), then it’s safe for you to put that exact text into the web page you create.

In JavaScript, for example, you could rewrite the body of the web page like this, inserting the text that I supplied without any modification:

document.body.innerHTML = "<p style='font-family:funky;'>Cwm fjord bank glyphs vext quiz"

But if I cheated, and asked you to “display” the text string Cwm fjord<script>alert(42)</script> instead, then it would be reckless for you to do this…

document.body.innerHTML = "<p style='font-family:funky;'>Cwm fjord<script>alert(42)</script>"

…because you would be allowing me to inject untrusted JavaScript code of my choosing directly into your web page, where my code could read your cookies and access data that would otherwise be off-limits.

So, to make what’s known as sanitising thine inputs easier, Chrome has now officially enabled support for a new browser function called setHTML().

This can be used to push new HTML content through a feature called the Sanitizer first, so that if you use this code instead…

document.body.setHTML("<p style='font-family:funky;'>Cwm fjord<script>alert(42)</script>")

…then Chrome will scan the proposed new HTML string for security problems first, and automatically remove any text that could pose a risk.

You can see this in action via the Developer tools by running the above setHTML() code at the Console prompt, and then retrieving the actual HTML that ended up in the document body.
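
In rough outline (this is a sketch rather than a verbatim Console session, and the exact output format will vary between Chrome versions), the exchange looks something like this:

// Run the example at the Developer tools Console prompt:
document.body.setHTML("<p style='font-family:funky;'>Cwm fjord<script>alert(42)</script>")

// Then read back what actually ended up in the page:
document.body.innerHTML

// Expect the <script>...</script> element to be missing from the string you get back;
// whether other details, such as the style attribute, survive depends on the
// Sanitizer's default configuration.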

Note how the script tag has been “sanitised” from the HTML ultimately inserted into the page.

Even though we explicitly put a <script> tag in the input that we passed to the setHTML() function, the script code was automatically purged from the output that was created.

If you genuinely need to add potentially dangerous text into an HTML element, you can add a second argument to the setHTML() function that specifies various types of risky content to block or allow.

By default, if this second argument is omitted as above, then the Sanitizer operates at its maximum security level and automatically purges all dangerous content that it knows about.
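
As a rough sketch of how that looks (the Sanitizer API was still a draft at the time, so the exact option names below are taken from the draft documentation and may have changed since), you construct a Sanitizer object with your own configuration and pass it in as the second argument:

// Draft Sanitizer API - these option names may differ in your browser version:
const mySanitizer = new Sanitizer({ allowElements: ["p", "b", "i", "em", "strong"] })

// Pass the customised Sanitizer in as the second argument to setHTML():
document.body.setHTML(
    "<p style='font-family:funky;'>Cwm fjord<script>alert(42)</script>",
    { sanitizer: mySanitizer }
)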

What to do?

  • If you’re a Chrome user. Check that you’re up to date by clicking Three dots > Help > About Google Chrome, or by browsing to the special URL chrome://settings/help.
  • If you’re a web programmer. Learn about the new Sanitizer and setHTML() functionality by reading advice from Google and the MDN Web Docs.

By the way, if you’re on Firefox, Sanitizer is available, but isn’t yet turned on by default. You can turn it on to learn more about it by going to about:config and toggling the dom.security.sanitizer.enabled option to true.


JavaScript bugs aplenty in Node.js ecosystem – found automatically

Here’s an interesting paper from the recent 2022 USENIX conference: Mining Node.js Vulnerabilities via Object Dependence Graph and Query.

We’re going to cheat a little bit here by not digging into and explaining the core research presented by the authors of the paper (some mathematics, and a knowledge of operational semantics notation, are desirable when reading it), which is a method for the static analysis of source code that they call ODGEN, short for Object Dependence Graph Generator.

Instead, we want to focus on the implications of what they were able to discover in the Node Package Manager (NPM) JavaScript ecosystem, largely automatically, by using their ODGEN tools in real life.

One important fact here is, as we mentioned above, that their tools are intended for what’s known as static analysis.

That’s where you aim to review source code for likely (or actual) coding blunders and security holes without actually running it at all.

Testing-it-by-running-it is a much more involved process that generally takes longer to set up, and longer to carry out.

As you can imagine, however, so-called dynamic analysis – actually building the software so you can run it and expose it to real data in controlled ways – generally gives much more thorough results, and is much more likely to expose arcane and dangerous bugs than simply “looking at it carefully and intuiting how it works”.

But dynamic analysis is not only time consuming, but also difficult to do well.

By this, we really mean to say that dynamic software testing is very easy to do badly, even if you spend ages on the task, because it’s easy to end up with an impressive number of tests that are nevertheless not quite as varied as you thought, and that your software is almost certain to pass, no matter what.

Dynamic software testing sometimes ends up like a teacher who sets the same exam questions year after year, so that students who have concentrated entirely on practising “past papers” end up doing as well as students who have genuinely mastered the subject.

A straggly web of supply chain dependencies

In today’s huge software source code ecosystems, of which global open source repositories such as NPM, PyPI, PHP Packagist and RubyGems are well-known examples, many software products rely on extensive collections of other people’s packages, forming a complex, straggly web of supply chain dependencies.

Implicit in those dependencies, as you can imagine, is a dependency on each dynamic test suite provided by each underlying package – and those individual tests generally don’t (indeed, can’t) take into account how all the packages will interact when they’re combined to form your own, unique application.

So, although static analysis on its own isn’t really adequate, it’s still an excellent starting point for scanning software repositories for glaring holes, not least because static analysis can be done “offline”.

In particular, you can regularly and routinely scan all the source code packages you use, without needing to construct them into running programs, and without needing to come up with believable test scripts that force those programs to run in a realistic variety of ways.

You can even scan entire software repositories, including packages you might never need to use, in order to shake out code (or to identify authors) whose software you’re disinclined to trust before even trying it.

Better yet, some types of static analysis can be used to look through all your software for bugs caused by similar programming blunders that you just found via dynamic analysis (or that were reported through a bug bounty system) in one single part of one single software product.

For example, imagine a real-world bug report that came in from the wild based on one specific place in your code where you had used a coding style that caused a use-after-free memory error.

A use-after-free is where you are certain that you are finished with a certain block of memory, and hand it back so it can be used elsewhere, but then forget it’s not yours any more and keep using it anyway. Like accidentally driving home from work to your old address months after you moved out, just out of habit, and wondering why there’s a weird car in the driveway.

If someone has copied-and-pasted that buggy code into other software components in your company repository, you might be able to find them with a text search, assuming that the overall structure of the code was retained, and that comments and variable names weren’t changed too much.

But if other programmers merely followed the same coding idiom, perhaps even rewriting the flawed code in a different programming language (in the jargon, so that it was lexically different)…

…then text search would be close to useless.
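
As a contrived JavaScript illustration (we’ve used a command injection flaw here rather than a memory bug, because that’s the sort of blunder you’re more likely to meet in Node.js code), here are two hypothetical snippets that are lexically quite different, yet share exactly the same data-flow problem, namely that untrusted input ends up inside a shell command:

// Variant A: user-supplied input concatenated straight into a shell command
const { exec } = require("child_process");
function zipit(filename) {
    exec("zip archive.zip " + filename);   // filename comes from the user
}

// Variant B: different names, different style, same underlying flaw
const cp = require("child_process");
const compress = (f) => cp.exec(`zip archive.zip ${f}`);

A text search for the code in Variant A won’t turn up Variant B, but an analysis that tracks how user-supplied data flows into the argument of exec() ought to flag both.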

Wouldn’t it be handy?

Wouldn’t it be handy if you could statically search your entire codebase for existing programming blunders, based not on text strings but instead on functional features such as code flow and data dependencies?

Well, in the USENIX paper we’re discussing here, the authors have attempted to build a static analysis tool that combines a number of different code characteristics into a compact representation denoting “how the code turns its inputs into its outputs, and which other parts of the code get to influence the results”.

The process is based on the aforementioned object dependence graphs.

Hugely simplified, the idea is to label source code statically so that you can tell which combinations of code-and-data (objects) in use at one point can affect objects that are used later on.

Then, it should be possible to search for known-bad code behaviours – smells, in the jargon – without actually needing to test the software in a live run, and without needing to rely only on text matching in the source.

In other words, you may be able to detect if coder A has produced a similar bug to the one you just found from coder B, regardless of whether A literally copied B’s code, followed B’s flawed advice, or simply picked up the same bad workplace habits as B.

Loosely speaking, good static analysis of code, despite the fact that it never watches the software running in real life, can help to identify poor programming right at the start, before you inject your own project with bugs that might be subtle (or rare) enough in real life that they never show up, even under extensive and rigorous live testing.

And that’s the story we set out to tell you at the start.

300,000 packages processed

The authors of the paper applied their ODGEN system to 300,000 JavaScript packages from the NPM repository to filter those that their system suggested might contain vulnerabilities.

Of those, they kept packages with more than 1000 weekly downloads (it seems they didn’t have time to process all the results), and determined by further examination those packages in which they thought they’d uncovered an exploitable bug.

In those, they discovered 180 harmful security bugs, including 80 command injection vulnerabilities (that’s where untrusted data can be passed into system commands to achieve unwanted results, typically including remote code execution), and 14 further code execution bugs.

Of these, 27 were ultimately given CVE numbers, recognising them as “official” security holes.

Unfortunately, all those CVEs are dated 2019 and 2020, because the practical part of the work in this paper was done more than two years ago, but it’s only been written up now.

Nevertheless, even if you work in less rarified air than academics seem to (for most active cybersecurity responders, fighting today’s cybercriminals means finishing any research you’ve done as soon as you can so you can use it right away)…

…if you’re looking for research topics to help against supply chain attacks in today’s giant-scale software repositories, don’t overlook static code analysis.

Life in the old dog yet

Static analysis has fallen into some disfavour in recent years, not least because popular dynamic languages like JavaScript make static processing frustratingly hard.

For example, a JavaScript variable might be an integer at one moment, then have a text string “added” to it perfectly legally albeit incorrectly, thus turning it into a text string, and might later end up as yet another object type altogether.

And a dynamically generated text string can magically turn into a new JavaScript program, compiled and executed at runtime, thus introducing behaviour (and bugs) that didn’t even exist when the static analysis was done.
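
As a tiny sketch of what we mean (hypothetical Node.js code, with the untrusted data arriving on the command line):

// A static analyser reading this source can't know in advance what the eval()ed
// string will contain at runtime, and therefore what code will actually run:
const record = {};                    // some object we intend to update
const fieldName = process.argv[2];    // untrusted, runtime-supplied data
eval("record." + fieldName + " = 42;");   // behaviour depends entirely on fieldName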

But this paper suggests that, even for dynamic languages, regular static analysis of the repositories you rely upon can still help you enormously.

Static tools can not only find latent bugs in code you’re already using, even in JavaScript, but also help you to judge the underlying quality of the code in any packages you’re thinking of adopting.


LEARN MORE ABOUT PREVENTING SUPPLY-CHAIN ATTACKS

This podcast features Chester Wisniewski, Principal Research Scientist at Sophos, and it’s full of useful and actionable advice on dealing with supply chain attacks, based on the lessons we can learn from giant attacks in the past, such as Kaseya and SolarWinds.

If no audio player appears above, listen directly on Soundcloud.
You can also read the entire podcast as a full transcript.


LastPass source code breach – do we still recommend password managers?

As you no doubt already know, because the story has been all over the news and social media recently, the widely-known and widely-used password manager LastPass last week reported a security breach.

The breach itself actually happened two weeks before that, the company said, and involved attackers getting into the system where LastPass keeps the source code of its software.

From there, LastPass reported, the attackers “took portions of source code and some proprietary LastPass technical information.”

We didn’t write this incident up last week, because there didn’t seem to be a lot that we could add to the LastPass incident report – the crooks rifled through their proprietary source code and intellectual property, but apparently didn’t get at any customer or employee data.

In other words, we saw this as a deeply embarrassing PR issue for LastPass itself, given that the whole purpose of the company’s own product is to help customers keep their online accounts to themselves, but not as an incident that directly put customers’ online accounts at risk.

However, over the past weekend we’ve had several worried enquiries from readers (and we’ve seen some misleading advice on social media), so we thought we’d look at the main questions that we’ve received so far.

After all, we regularly recommend that our readers and podcast listeners consider using a password manager, even though we’ve also written up numerous security blunders in password manager tools over the years.

So, we’ve put together six questions-and-answers below, to help you make an informed decision about the future of password managers in your own digital life.

Q1. What if my password manager gets hacked?

A1. That’s a perfectly reasonable question: if you put all your password eggs in one basket, doesn’t that basket become a single point of failure?

In fact, that’s a question we’ve been asked so often that we have a video specifically to answer it (click on the cog while playing to turn on subtitles or to speed up playback):

[embedded content]


Q2. If I use LastPass, should I change all my passwords?

A2. If you want to change some or all of your passwords, we’re not going to talk you out of it.

(One handy thing about a password manager, as we explain in the video above, is that it’s much quicker, easier and safer to change passwords, because you’re not stuck with trying to concoct and remember dozens of new and complicated text strings in a hurry.)

By all accounts, however, this security incident has nothing to do with the crooks getting at any of your personal data, least of all your passwords, which aren’t stored on LastPass’s servers in a usable form anyway. (See Q5.)

This attack doesn’t appear to involve a vulnerability in or an exploit against the LastPass software by which crooks could attack the encrypted passwords in your password vault, or to involve malware that knows how to insinuate itself into the password decryption process on your own computers.

Furthermore, it doesn’t involve the theft of any personally identifiable “real life” customer information such as phone numbers, postcodes or individual ID numbers that might help attackers to persuade online services into resetting your passwords using social engineering tricks.

Therefore, we don’t think you need to change your passwords. (For what it’s worth, neither does LastPass.)


Q3. Should I give up on LastPass and switch to a competitor?

A3. That’s a question you will have to answer for yourself.

As we said above, as embarrassing as this incident is for LastPass, it seems that no personal data was breached and no password-related data (encrypted or otherwise) was stolen, only the company’s own source code and proprietary information.

Did you ditch Chrome when Google’s recent in-the-wild zero-day exploit was announced? Or Apple products after the latest zero-day double play? Or Windows after any Patch Tuesday update in which zero-day bugs were fixed?

If not, then we’re assuming that you are willing to judge a company’s likely future cybersecurity trustworthiness by how it reacted last time a bug or a breach occurred, especially if the company’s blunder didn’t directly and immediately put you at risk.

We suggest that you read the LastPass incident report and FAQ for yourself, and decide on that basis whether you are still inclined to trust the company.


Q4. Doesn’t stolen source code mean that hacks and exploits are bound to follow?

A4. That’s a reasonable question, and the answer isn’t straightforward.

Generally speaking, source code is much easier to read and understand than its compiled, “binary” equivalent, especially if it is well-commented and uses meaningful names for things like variables and functions inside the software.

As a somewhat synthetic but easy-to-follow example, compare the Lua source code on the left below with the compiled bytecode (like Java, Lua runs in a virtual machine) on the right:

Left: Readable, commented source code.
Right: Compiled Lua bytecode, as executed at runtime.

In theory, therefore, having the source code ought to make it quicker and easier to determine exactly how the software works, including spotting any programming blunders or cybersecurity mistakes, so vulnerabilities ought to be easier to find, and exploits quicker to devise.

In practice, it’s true that acquiring source code to go along with the compiled binaries you are trying to reverse engineer will rarely, if ever, make the job more difficult, and will often make it easier.

Having said that, you need to remember that Microsoft Windows is a closed-source operating system, and yet many, if not most, of the security holes fixed each month on Patch Tuesday were reverse engineered directly from precompiled binaries.

In other words, keeping your source code secret should never be considered to be a vital part of any cybersecurity process.

You also need to remember that many projects rely explicitly on making their source code public, not merely so that anyone can scrutinise it, but also so that anyone who wants can use it, modify it and contribute for the greater good of all.

Yet even mainstream open-source projects with liberal usage licences, and with potentially many eyes on that source code over many years, have required critical security patches for bugs that could have been spotted many times over, but weren’t.

Lastly, many proprietary software projects these days (examples include Google’s Chrome browser; Apple’s iOS operating system; the Sophos XG firewall; and thousands more widely-used hardware and software tools) nevertheless make extensive use of numerous open-source components.

Simply put, most contemporary closed-source projects include significant parts for which source code can be downloaded anyway (because licensing demands it), or can be inferred (because licensing requires its use to be documented, even if some modifications to the code were subsequently made).

In other words, this source code leak may help potential attackers slightly, but almost certainly [a] not as much as you might at first think and [b] not to the point that new exploits will become possible that could never have been figured out without the source code.


Q5. Should I give up on password managers altogether?

A5. The argument here is that if even a company that prides itself on providing tools to lock up your personal and corporate secrets more securely can’t lock up its own intellectual property safely, surely that’s a warning that password managers are a “fool’s errand”?

After all, what if the crooks break in again, and next time it’s not the source code they get hold of, but every individual password stored by every individual user?

That’s a worry – you might almost call it a meme – that’s regularly seen on social media, especially after a breach of this sort: “What if the crooks had downloaded all my passwords? What was I thinking, sharing all my passwords anyway?”

Those would be genuine concerns if password managers worked by keeping exact copies of all your passwords on their own servers, where they could be extracted by attackers or demanded by law enforcement.

But no decent cloud-based password managers work that way.

Instead, what’s stored on their servers is an encrypted database, or “blob” (short for binary large object) that is only ever decrypted after being transferred to your device, and after you’ve provided your master password locally, perhaps with some sort of two-factor authentication involved to reduce the risk of local compromise.

No passwords in your password vault are ever stored in a directly usable form on the password manager’s servers, and your master password is ideally never stored at all, not even as a salted-and-stretched password hash.

In other words, a reliable password manager company doesn’t have to be trusted not to leak your passwords in the event of a hack of its databases, or to refuse to reveal them in the event of a warrant from law enforcement…

…because it couldn’t reveal them, even if it wanted to, given that it doesn’t keep a record of your master password, or any other passwords, in any database from which it could extract them without your agreement and collaboration.

(The LastPass website has a description and a diagram – admittedly a rather basic one – of how your passwords are protected from server-side compromise by not being decrypted except on your own device, under your direct control.)
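
Loosely speaking (this is a minimal sketch of the general client-side approach, not a description of LastPass’s actual implementation, and the function name, parameters and iteration count below are purely illustrative), the idea is something like this:

// Hypothetical sketch: derive the vault key from the master password on the
// client, then decrypt the downloaded blob locally. The server only ever sees
// the encrypted blob - never the master password or the derived key.
async function openVault(masterPassword, salt, iv, encryptedBlob) {
    const baseKey = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(masterPassword), "PBKDF2", false, ["deriveKey"]
    );

    // "Salt-and-stretch" the master password into an AES key:
    const vaultKey = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 310000, hash: "SHA-256" },
        baseKey,
        { name: "AES-GCM", length: 256 },
        false, ["decrypt"]
    );

    // Decrypt the vault blob locally; the plaintext never leaves this device:
    const plaintext = await crypto.subtle.decrypt(
        { name: "AES-GCM", iv }, vaultKey, encryptedBlob
    );
    return JSON.parse(new TextDecoder().decode(plaintext));
}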


Q6. Remind me again – why use a password manager?

A6. Let’s summarise the benefits while we’re about it:

  • A good password manager simplifies good password use for you. It turns the problem of choosing and remembering dozens, or perhaps even hundreds, of passwords into the problem of choosing one really strong password, optionally reinforced with 2FA. There’s no longer any need to cut corners by using “easy” or guessable passwords on any of your accounts, even ones that feel unimportant.
  • A good password manager won’t let you use the same password twice. Remember that if crooks recover one of your passwords, perhaps due to a compromise at a single website you use, they will immediately try the same (or similar) passwords on all the other accounts they can think of. This can greatly magnify the damage done by what might otherwise have been a contained password compromise.
  • A good password manager can choose and remember hundreds, even thousands, of long, pseudo-random, complex, completely different passwords. Indeed, it can do this just as easily as you can remember your own name. Even when you try really hard, it’s difficult to choose a truly random and unguessable password yourself, especially if you’re in a hurry, because there’s always a temptation to follow some sort of predictable pattern, e.g. left hand then right hand, consonant then vowel, top-middle-bottom row, or name of cat with -99 on the end.
  • A good password manager won’t let you put the right password in the wrong site. Password managers don’t “recognise” websites just because they “look right” and have the correct-looking logos and background images on them. This helps to protect you from phishing, where you fail to notice that the URL isn’t quite right, and put your password (and even your 2FA code) into a bogus site instead.

Don’t jump to conclusions

So, there’s our advice on the issue.

We’re staying neutral about LastPass itself, and we’re not specifically recommending any password manager product or service out there, including LastPass, above or below any other.

But whatever decision you make about whether you’ll be better off or worse off by adopting a password manager…

…we’d like to ensure that you make it for well-informed reasons.

If you have any more questions, please ask in the comments below – we’ll do our best to answer promptly.


Firefox 104 is out – no critical bugs, but update anyway

Recent updates to Apple Safari and Google Chrome made big headlines because they fixed mysterious zero-day exploits that were already being used in the wild.

But this week also saw the latest four-weekly Firefox update, which dropped as usual on Tuesday, four weeks after the last scheduled full-version-number-increment release.

We haven’t written about this update until now because, well, because the good news is…

…that although there were a couple of intriguing and important fixes rated High, there weren’t any zero-days, or even any Critical bugs this month.

Memory safety bugs

As usual, the Mozilla team assigned two overarching CVE numbers to bugs that they found-and-fixed using proactive techniques such as fuzzing, where buggy code is automatically probed for flaws, documented, and patched without waiting for someone to figure out just how exploitable those bugs might be:

  • CVE-2022-38477 covers bugs that affect only Firefox builds based on the code of version 102 and later, which is the codebase used by the main version, now updated to 104.0, and the primary Extended Support Release version, which is now ESR 102.2.
  • CVE-2022-38478 covers additional bugs that exist in the Firefox code going back to version 91, because that’s the basis of the secondary Extended Support Release, which now stands at ESR 91.13.

As usual, Mozilla is plain-speaking enough to make the simple pronouncement that:

Some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code.

ESR demystified

As we’ve explained before, Firefox Extended Support Release is aimed at conservative home users and at corporate sysadmins who prefer to delay feature updates and functionality changes, as long as they don’t miss out on security updates by doing so.

The ESR version numbers combine to tell you what feature set you have, plus how many security updates there have been since that version came out.

So, for ESR 102.2, we have 102+2 = 104 (the current leading-edge version).

Similarly, for ESR 91.13, we have 91+13 = 104, to make it clear that although version 91 is still back at the feature set from about a year ago, it’s up-to-the-moment as far as security patches are concerned.

The reason there are two ESRs at any time is to provide a substantial double-up period between versions, so you are never stuck with taking on new features just to get security fixes – there’s always an overlap during which you can keep using the old ESR while trying out the new ESR to get ready for the necessary switchover in the future.

Trust-spoofing bugs

The two specific and apparently-related vulnerabilities that made the High category this month were:

  • CVE-2022-38472: Address bar spoofing via XSLT error handling.
  • CVE-2022-38473: Cross-origin XSLT Documents would have inherited the parent’s permissions.

As you can imagine, these bugs mean that rogue content fetched from an otherwise innocent-looking site could end up with Firefox tricking you into trusting web pages that you shouldn’t.

In the first bug, Firefox could be lured into presenting content served up from an unknown and untrusted site as if it had come from a URL hosted on a server that you already knew and trusted.

In the second bug, web content from an untrusted site X shown in a sub-window (an IFRAME, short for inline frame) within a trusted site Y…

…could end up with security permissions “borrowed” from parent window Y that you would not expect to be passed on (and that you would not knowingly grant) to X, including access to your webcam and microphone.

What to do?

On desktops or laptops, go to Help > About Firefox to check if you’re up-to-date.

If not, the About window will prompt you to download and activate the needed update – you are looking for 104.0, or ESR 102.2, or ESR 91.13, depending on which release series you are on.

On your mobile phone, check with Google Play or the Apple App Store to ensure you’ve got the latest version.

On Linux and the BSDs, if you are relying on the version of Firefox packaged by your distribution, check with your distro maker for the latest version they’ve published.

Happy patching!


S3 Ep97: Did your iPhone get pwned? How would you know? [Audio + Text]

LISTEN NOW

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Bitcoin ATMs attacked, Janet Jackson crashing computers, and zero-days galore.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how do you do?


DUCK.  I’m very well, Douglas.

Welcome back from your vacation!


DOUG.  Good to be back in the safety of my own office, away from small children.

[LAUGHTER]

But that’s another story for another time.

As you know, we like to start the show with some Tech History.

This week, on 24 August 1995, the song “Start Me Up” by the Rolling Stones was unleashed, under licence, as the theme tune that launched Microsoft Windows 95.

As the song predicted, “You make a grown man cry,” and some Microsoft haters have been crying ever since.

[WISTFUL] I liked Windows 95…

…but as you say, you did need to start it up several times, and sometimes it would start itself.


DUCK.  Start me up?!

Who knew where *that* was going to lead?

I think we had an inkling, but I don’t think we envisaged it becoming Windows 11, did we?


DOUG.  We didn’t.

And I do like Windows 11 – I’ve got few complaints about it.


DUCK.  You know what?

I actually went and hacked my window manager on Linux, which only does rectangular windows.

I added a little hack that puts in very slightly rounded corners, just because I like the way they look on Windows 11.

And I’d better not say that in public – that I used a Windows 11 visual feature as the impetus…

…or my name will be dirt, Douglas!


DOUG.  Oh, my!

All right, well, let’s not talk about that anymore, then.

But let us please stay on the theme of Tech History and music.

And I can ask you this simple question…

What do Janet Jackson and denial-of-service attacks have in common?


DUCK.  Well, I don’t think we’re saying that Janet Jackson has suddenly been outed as evil haxxor of the early 2000s, or even the 1990s, or even the late 80s…


DOUG.  Not on purpose, at least.


DUCK.  No… not on purpose.

This is a story that comes from no less a source than ueberblogger at Microsoft, Raymond Chen.

He writes the shortest, sharpest blogs – explaining stuff, sometimes a little bit counterculturally, sometimes even taking a little bit of a dig at his own employer, saying, “What were we thinking back then?”

And he’s so famous that even his ties – he always wears a tie, beautiful coloured ties – even his ties have a Twitter feed, Doug.

[LAUGHTER]

But Raymond Chen wrote a story going back to 2005, I think, where a Windows hardware manufacturer of the day (he doesn’t say which one) contacted Microsoft saying, “We’re having this problem that Windows keeps crashing, and we’ve narrowed it down to when the computer is playing, through its own audio system, the song Rhythm Nation”.

A very famous Janet Jackson song – I quite like it, actually – from 1989, believe it or not.

[LAUGHTER]

“When that song plays, the computer crashes. And interestingly, it also crashes computers belonging to our competitors, and it’ll crash neighbouring computers.”

They obviously quickly figured, “It’s got to do with vibration, surely?”

Hard disk vibration, or something like that.

And their claim was that it just happened to match up with the so called resonant frequency of the hard drive, to the point that it would crash and bring down the operating system with it.

So they put an audio filter in that cut out the frequencies that they believed were most likely to cause the hard disk to vibrate itself into trouble.


DOUG.  And my favorite part of this, aside from the entire story…

[LAUGHTER]

…is that there is a CVE *issued in 2022* about this!


DUCK.  Yes, proof that at least some people in the public service have a sense of humour.


DOUG.  Love it!


DUCK.  CVE-2022-23839: Denial of service brackets (device malfunction and system crash).

“A certain 5400 rpm OEM disk drive, as shipped with laptop PCs in approximately 2005, allows physically proximate attackers to cause a denial-of-service via a resonant frequency attack with the audio signal from the Rhythm Nation music video.”

I doubt it was anything specific to Rhythm Nation… it just happened to vibrate your hard disk and cause it to malfunction.

And in fact, as one of our commenters pointed out, there’s a famous video from 2008 that you can find on YouTube (we’ve put the link in the comments on the Naked Security article) entitled “Shouting at Servers”.

It was a researcher at Sun – if he leaned in and shouted into a disk drive array, you could see on the screen that there was a huge spike in recoverable disk errors.

A massive, massive number of disk errors when he shouted in there, and obviously the vibrations were putting the disks off their stride.


DOUG.  Yes!

Excellent weird story to start the show.

And another kind of weird story is: A Bitcoin ATM skim attack that contained no actual malware.

How did they pull this one off?


DUCK.  Yes, I was fascinated by this story on several accounts.

As you say, one is that the customer accounts were “leeched” or “skimmed” *without implanting malware*.

It was only configuration changes, triggered via a vulnerability.

But also it seems that either the attackers were just trying this on, or it was more of a proof-of-concept, or they hoped that it would go unnoticed for ages and they’d skim small amounts over a long period of time without anyone being aware.


DOUG.  Yes.


DUCK.  It was noticed, apparently fairly quickly, and the damage apparently was limited to – well, I say “just” – $16,000.

Which is three orders of magnitude, or 1000 times, less than the typical amounts that we usually need to even start talking about these stories.


DOUG.  Pretty good!


DUCK.  $100 million, $600 million, $340 million…

But the attack was not against the ATMs themselves. It was against the Coin ATM Server product that you need to run somewhere if you’re a customer of this company.

It’s called General Bytes.

I don’t know whether he’s a relative of that famous Windows personality General Failure…

[LAUGHTER]

But it’s a Czech company called General Bytes, and they make these cryptocurrency ATMs.

So, the idea is you need this server that is the back-end for one or more ATMs that you have.

And either you run it on your own server, in your own server room, under your own careful control, or you can run it in the cloud.

And if you want to run it in the cloud, they’ve done a special deal with hosting provider Digital Ocean.

And if you want, you can pay them a 0.5% transaction fee, apparently, and they will not only put your server in the cloud, they’ll run it for you.

All very well.

The problem is that there was what sounds like an authentication bypass vulnerability in the Coin ATM Server front end.

So whether you’d put in super complicated passwords, 2FA, 3FA, 12FA, it didn’t seem to matter. [LAUGHTER]

There was a bypass that would allow an unauthorised user to create an admin account.

As far as I can make out (they haven’t been completely open, understandably, about exactly how the attack worked), it looks as though the attackers were able to trick the system into going back into its “initial setup” mode.

And, obviously, one of the things when you set up a server, it says, “You need to create an administrative account.”

They could get that far, so they could create a new administrative account and then, of course, they could come back in as a newly minted sysadmin… no malware required.

They didn’t have to break in, drop any files, do an elevation-of-privilege inside the system.

And in particular, it seems that one of the things that they did is…

…in the event that a customer inadvertently tried to send coins to the wrong, or a nonexistent, perhaps even maybe a blocked wallet, in this software, the ATM operators can specify a special collection wallet for what would otherwise be invalid transactions.

It’s almost like a sort of escrow wallet.

And so what the crooks did is: they changed that “invalid payment destination” wallet identifier to one of their own.

So, presumably their idea was that every time there was a mistaken or an invalid transaction from a customer, which might be quite rare, the customer might not even realise that the funds hadn’t gone through if they were paying for something anonymously…

But the point is that this is one of those attacks that reminds us that cybersecurity threat response these days… it’s no longer about simply, “Oh well, find the malware; remove the malware; apply the patches.”

All of those things are important, but in this case, applying the patch does prevent you getting hacked in future, but unless you also go and completely revalidate all your settings…

…if you were hacked before, you will remain hacked afterwards, with no malware to find anywhere.

It’s just configuration changes in your database.


DOUG.  We have an MDR service; a lot of other companies have MDR services.

If you have human beings proactively looking for stuff like this, is this something that we could have caught with an MDR service?


DUCK.  Well, obviously one of the things that you would hope is that an MDR service – if you feel you’re out of your depth, or you don’t have the time, and you bring in a company not just to help you, but essentially to look after your cybersecurity and get it onto an even keel…

…I know that the Sophos MDR team would recommend this: “Hey, why have you got your Coin ATM Server open to the whole Internet? Why don’t you at least make it accessible via some intermediate network where you have some kind of zero-trust system that makes it harder for the crooks to get into the system in the first place?”

It would have a more granular approach to allowing people in, because it looks as though the real weak point here was that these attackers, the crooks, were able just to do an IP scan of Digital Ocean’s servers.

They basically just wandered through, looking for servers that were running this particular service, and then presumably went back later and tried to see which of them they could break into.

It’s no good paying an MDR team to come in and do security for you if you’re not willing to try to get the security settings right in the first place.

And, of course, the other thing that you would expect a good MDR team to do, with their human eyes on the situation, aided by automatic tools, is to detect things which *almost look right but aren’t*.

So yes, there are lots of things you can do, provided that: you know where you should be; you know where you want to be; and you’ve got some way of differentiating the good behaviour from the bad behaviour.

Because, as you can imagine, in an attack like this – aside from the fact that maybe the original connections came from an IP number that you would not have expected – there’s nothing absolutely untoward.

The crooks didn’t try and implant something, or change any software that might have triggered an alarm.

They did trigger a vulnerability, so there will be some side effects in the logs…

…the question is, are you aware of what you can look for?

Are you looking regularly?

And if you find something anomalous, do you have a good way to respond quickly and effectively?


DOUG.  Great.

And speaking of finding stuff, we have two stories about zero-days.

Let’s start with the Chrome zero-day first.


DUCK.  Yes, this story broke in the middle of last week, just after we recorded last week’s podcast, and it was 11 security fixes that came out at that time.

One of them was particularly notable, and that was CVE-2022-2856, and it was described as “Insufficient validation of untrusted input in Intents.”

An Intent. If you’ve ever done Android programming… it’s the idea of having an action in a web page that says, “Well, I don’t just want this to display. When this kind of thing occurs, I want it to be handled by this other local app.”

It’s the same sort of idea as having a magical URL that says, “Well, actually, what I want to do is process this locally.”

But Chrome and Android have this way of doing it called Intents, and you can imagine anything that allows untrusted data in a web page to trigger a local app to do something with that untrusted data…

…could possibly end very badly indeed.

For example, “Do this thing that you’re really not supposed to do.”

Like, “Hey, restart setup, create a new administrative user”… just like we were talking about in the Coin ATM Server.

So the issue here was that Google admitted that this was a zero-day, because it was known to have been exploited in real life.

But they didn’t give any details of exactly which apps get triggered; what sort of data could do the triggering; what might happen if those apps got triggered.

So, it wasn’t clear what Indicators of Compromise [IoCs] you might look for.

What *was* clear is that this update was more important than the average Chrome update, because of the zero-day hole.

And, by the way, it also applied to Microsoft Edge.

Microsoft put out a security alert saying, “Yes, we’ve had a look, and as far as we can see, this does apply to Edge as well. We’ve sort-of inherited the bug from the Chromium code base. Watch this space.”

And on 19 August 2022, Microsoft put out an Edge update.

So whether you have Chromium, Chrome, Edge, or any Chromium related browser, you need to go make sure you’ve got the latest version.

And you’d imagine that anything dated 18 August 2022 or later probably has this fix in it.

If you’re searching release notes for whatever Chromium-based browser you use, you want to search for: CVE-2022-2856.


DOUG.  OK, then we’ve got a remote code execution hole in Apple’s WebKit HTML rendering software, which can lead to a kernel execution hole…


DUCK.  Yes, that was a yet more exciting story!

As we always say, Apple’s updates just arrived when they arrived.

But this one suddenly appeared, and it only fixed these two holes, and they’re both in the wild.

One, as you say, was a bug in WebKit, CVE-2022-32893, and the second one, which is -32894, is, if you like, a corresponding hole in the kernel itself… both fixed at the same time, both in the wild.

That smells like they were found at the same time because they were being exploited in parallel.

The WebKit bug to get in, and the kernel bug to get up, and take over the whole system.

When we hear fixes like that from Apple, where all they’re fixing is web-bug-plus-kernel-bug at the same time: “In the wild! Patch now!”…

…your immediate thought is, uh-oh, this could allow jailbreaking, where basically all of Apple’s security strictures get removed, or spyware.

Apple hasn’t said much more than: “There are these two bugs; they were found at the same time, reported by an anonymous researcher; they’re both patched; and they apply to all supported iPhones, iPads and Macs.”

And the interesting thing is that the latest version of macOS, Monterey… that got a whole operating system-level patch right away.

The previous two supported versions of Mac (that’s Big Sur and Catalina, macOS 11 and macOS 10.15 respectively)… they didn’t get operating system-level patches, as though they weren’t vulnerable to the kernel exploit.

But they *did* get a brand new version of Safari, which was bundled in with the Monterey update.

This suggests that they’re definitely at risk of this WebKit takeover.

And, as we’ve said before, Doug, the critical things about critical bugs in Apple’s WebKit are two-fold:

(1) On iPhones and iPads, all browsers and all Web rendering software, if it is to be allowed into the App Store, *must use WebKit*.

Even if it’s Firefox, even if it’s Chrome, even if it’s Brave, whatever browser it is… they have to rip out any engine that they might use, and insert the WebKit engine underneath.

So just avoiding Safari on iPhones doesn’t get you around this problem. That’s (1).

Number (2) is that many apps, on Mac and on iDevices alike, use HTML as a very convenient, and efficient, and beautiful-looking way of doing things like Help Screens and About Windows.

Why wouldn’t you?

Why build your own graphics when you can make an HTML page which will scale itself to fit whatever device you have?

So, lots of apps *that aren’t Web browsers* may use HTML as part of their screen display “language”, if you like, notably in About Screens and Help Windows.

That means they probably use an Apple feature called WebView, which does the HTML rendering for them.

And WebView is based on WebKit, and WebKit has this bug!

So, this is not just a browser-only problem.

It could, in theory, be exploited against any app that just happens to use HTML, even if it’s only the About screen.

So, those are the two critical problems with this particular critical problem, namely: (1) the bug in WebKit, and, of course, (2) on Monterey and on iPhones and iPads, the fact that there was a kernel vulnerability as well, that presumably could be exploited in a chain.

That meant not only could the crooks get in, they could climb up the ladder and take over.

And that’s very bad indeed.


DOUG.  OK, that leads nicely into our reader question at the end of every show.

On the Apple double zero-day story, reader Susan asks a simple but excellent question: “How would a user know if the exploits had both been executed on their phone?”

How would you know?


DUCK.  Doug… the tricky thing in this case is you probably wouldn’t.

I mean, there *might* be some obvious side-effect, like your phone suddenly starts crashing when you run an app that’s been completely reliable before, so you get suspicious and you get some expert to look at it for you, maybe because you consider yourself at high risk of somebody wanting to crack your phone.

But for the average user, the problem here is Apple just said, “Well, there’s this bug in WebKit; there’s this bug in the kernel.”

There are no Indicators of Compromise provided; no proof-of-concept code; no description of exactly what side-effects might get left behind, if any.

So, it’s almost as though the only way to find out exactly what visible side-effects these bugs might leave behind permanently, that you could go and look for…

…would be essentially to rediscover these bugs for yourself, and figure out how they work, and write up a report.

And, to the best of my knowledge, there just aren’t any Indicators of Compromise (or any reliable ones) out there that you can go and search for on your phone.

The only way I can think of that would let you go back to essentially a “known good” state would be to research how to use Apple’s DFU system (which I think stands for Device Firmware Update).

Basically, there’s a special key-sequence you press, and you need to tether your device with a USB cable to a trusted computer, and basically it reinstalls the whole firmware… the latest firmware (Apple won’t let you downgrade, because they know that people use that for jailbreaking tricks). [LAUGHS]

So, it basically downloads the latest firmware – it’s not like an update, it’s a reinstall.

It basically wipes your device, and installs everything again, which gets you back to a known-good condition.

But it is sort of like throwing your phone away and buying a new one – you have to set it up from the start, so all your data gets wiped.

And, importantly, if you have any 2FA code generation sequences set up in there, *those sequences will be wiped*.

So, make sure, before you do a Device Firmware Update where everything is going to get wiped, that you have ways to recover accounts or to set up 2FA fresh.

Because after you do that DFU, any authentication sequences you may have had programmed into your phone will be gone, and you will not be able to recover them.


DOUG.  OK. [SOUNDING DOWNCAST] I…


DUCK.  That wasn’t a very good answer, Doug…


DOUG.  No, that has nothing to do with this – just a side note.

I upgraded my Pixel phone to Android 13, and it bricked the phone, and I lost my 2FA stuff, which was a real big deal!


DUCK.  *Bricked* it [MADE IT FOREVER UNBOOTABLE] or just wiped it?

The phone’s still working?


DOUG.  No, it doesn’t turn on.

It froze, and I turned it off, and I couldn’t turn it back on!


DUCK.  Oh, really?


DOUG.  So they’re sending me a new one.

Normally when you get a new phone, you can use the old phone to set up the new phone, but the old phone isn’t turning on…

…so this story just hit a little close to home.

Made me a little melancholy, because I’m now using the original Pixel XL, which is the only phone I had as a backup.

And it is big, and clunky, and slow, and the battery is not good… that’s my life.


DUCK.  Well, Doug, you could nip down to the phone shop and buy yourself an Apple [DOUG STARTS LAUGHING BECAUSE HE’S AN ANDROID FANBUOY] iPhone SE 2022!


DOUG.  [AGHAST] No way!

No! No! No!

Mine’s two-day shipping.


DUCK.  Slim, lightweight, cheap and gorgeous.

Much better looking than any Pixel phone – I’ve got one of each.

Pixel phones are great, but…

[COUGHS KNOWINGLY, WHISPERS] …the iPhone’s better, Doug!


DOUG.  OK, another story for another time!

Susan, thank you for sending in that question.

It was a comment on that article, which is great, so go and check that out.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]

