
S3 Ep49: Poison PACs, pointless alarms and phunky bugs [Podcast]

[00’18”] Sign up free for our Security SOS Week 2021!
[02’54”] Overlooked security flaw leaves web code vulnerable.
[13’51”] A home alarm system that almost anyone can turn off.
[25’06”] Some fascinating Firefox bugs fixed.
[31’02”] Oh! No! When you grab your laptop… but it’s not yours.

With Paul Ducklin and Doug Aamoth.

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


Windows zero-day MSHTML attack – how not to get booby trapped!

Details are scarce so far, but Microsoft is warning Office users about a bug that’s dubbed CVE-2021-40444, and described as Microsoft MSHTML Remote Code Execution Vulnerability.

The bug doesn’t have a patch yet, so it’s what’s known as a zero-day, shorthand for “the Good Guys were zero days ahead of the Bad Guys with a patch for this vulnerability.”

In other words: the crooks got there first.

As far as we can tell, the treachery works like this:

  1. You open a booby-trapped Office file from the internet, either via an email attachment or by downloading a document from a criminal-controlled web link.
  2. The document includes an ActiveX control (embedded add-on code) that ought not to have unrestricted access to your computer.
  3. The ActiveX code activates the Windows MSHTML component, used for viewing web pages, exploits a bug in it to give itself the same level of control that you yourself would have right from the Windows desktop, and uses it to implant malware of the attacker’s choice.

MSHTML isn’t a full-on browser, like Internet Explorer or Edge, but is a part of the operating system that can be used to create browsers or browser-like applications that need or want to display HTML files.

Even though HTML is most closely associated with web browsing, many apps other than browsers find it useful to be able to render and display web content, for example as a convenient and good-looking way to present documentation and help files, or to let users fill in and submit support tickets.

This “stripped down minibrowser” concept can be found not only on Windows but also on Google’s Android and Apple’s iOS, where the components Blink and WebKit respectively provide the same sort of functionality as MSHTML on Microsoft platforms.

Mozilla products such as Firefox and Thunderbird are based on a similar idea, known as Gecko.

On iOS, interestingly, Apple not only uses WebKit as the core of its own browser, Safari, but also mandates the use of WebKit in browsers or browser-like apps from all other vendors. That’s why Firefox on iOS is the only version of that product that doesn’t include Gecko – it has no choice but to use WebKit instead.

HTML isn’t just for browsing

What this means is that HTML rendering bugs don’t just affect your browser and your browsing activity, so there may be many more ways for cybercriminals to poke a virtual stick into buggy web rendering code, and thereby to probe for exploits, than just sending you a dodgy web link.

Even if there’s a bug that they can’t quite control closely enough to take over your browser of choice, they may be able to find other applications in which the vulnerability can not only be used to crash the app, but also be exploited to grab control of it and implant malware.

That’s what CVE-2021-40444 seems to do, with the attack being delivered via Office files loaded into Word, Excel and so on, rather than by web pages viewed directly in your browser.

What to do?

  • Avoid opening documents you weren’t expecting. Don’t be tempted to look at content just because an email or a document happens to align with your interests, your line of work, or your current research. That doesn’t prove that the sender actually knows you, or that they can be trusted in any way – that information is probably publicly available via your work website or your own social media posts.
  • Don’t be tempted to break out of Office Protected View. By default, Office documents received via the internet (whether by email or web) open in a way that prevents active content such as Visual Basic macros and ActiveX controls from running. If you see a yellow bar at the top of the page, warning you that potentially dangerous parts of the document were not activated, resist clicking the [Enable Editing] button, especially if the text of the document itself “advises” you to!
  • Consider enforcing Protected View permanently for all external content. System administrators can enforce network-wide settings that prevent anyone from using the [Enable Editing] option to escape from Protected View in Office. Ideally, you should never need to trust so-called active content in external documents, and you sidestep a wide range of attacks if you prevent this happening altogether.
  • Disable ActiveX controls that use the MSHTML web renderer. Sysadmins can enforce this with a network-wide registry setting that stops ActiveX controls that arrive in new documents from working at all, regardless of whether the document is opened in Protected View or not. This workaround specifically prevents the CVE-2021-40444 vulnerability from being exploited.
  • Keep your eyes peeled for a patch from Microsoft. Next Tuesday (2021-09-14) is the September 2021 Patch Tuesday date; let’s hope Microsoft gets a full-blown fix ready by or before then!

Poisoned proxy PACs! The NPM package with a network-wide security hole…

Not long ago, independent software developer Tim Perry, creator of the HTTP Toolkit for intercepting and debugging web traffic…

…decided to add proxy support to his product, which, like lots of software these days, is written using Node.js.

ICYMI, Node.js is the project that took the JavaScript language out of your browser and turned it into a full-blown application development system in its own right, a bit like Java (which is unrelated to JavaScript, by the way, for all that the names sound similar).

As well as the JavaScript core, which uses the V8 JavaScript engine from Google’s Chromium project, Node.js software typically also relies on NPM, the Node package manager, and the NPM registry, a truly enormous repository of open-source Node tools and programming libraries.

The NPM registry covers everything from basic text formatting to full-on facial recognition, and almost anything in between.

Instead of writing all of the code in your project yourself, or even most of it, you simply reference the add-on packages you want to use, and NPM will fetch them for you, along with any additional packages that your chosen package needs…

…and all the packages that those packages need, following the dependency turtles all the way down until every piece of add-on code needed to complete the jigsaw has been located and installed automatically.

Alphabet soup

As you can imagine, this is a potential security nightmare.

Adding just one package to your own project may require a slew of additional packages, each of which may have been written by a different person whom you don’t know, have never met, and probably never will.

This alphabet soup is known as your software’s dependency tree, and we have written about the risky side-effects of this approach to software construction before, noting that:

You may find you can write a five-line JavaScript program that is elegantly simple, but only if your Node Package Manager drags in tens or even hundreds of thousands of lines of other people’s software. Automatically. From all over the internet. And keeps it updated. Automatically, from all over the internet.

Proxy support brings trouble

Perry rediscovered this risk recently, when he decided to use a popular NPM package called Proxy-Agent to provide the proxy support he wanted in his HTTP Toolkit product.

Fortunately, Perry didn’t just blindly fetch, install and start using Proxy-Agent and its entire dependency tree without doing a review of the newly-acquired components in his project.

Thus he came across a security flaw, now dubbed CVE-2021-23406, in a Proxy-Agent dependency called Pac-Resolver, a subcomponent that helps your code handle the process of PAC, or proxy auto-configuration (see sidebar below).

A proxy server is one that makes outgoing connections on your behalf, typically for security (e.g. to filter web traffic), for performance (e.g. to keep local copies of files that get downloaded often, or to regulate bandwidth usage during busy periods), or for both. You connect to the proxy and tell it where you want to go; it makes the onward connection for you, collects the replies, and returns them to you.

Many corporate networks are configured so that certain outbound connections, notably HTTP requests, are only possible via a designated proxy server. This ensures that everyone inside the network sends their traffic through the proxy, instead of going directly to external sites.

Numerous corporate-style tools exist to help computers on a network locate their official internal proxies automatically, including PAC, short for proxy auto-configuration, and WPAD, short for web proxy auto-discovery.

Believe it or not

PAC files, believe it or not, aren’t just data-only lists of IP numbers or server names where your network’s official proxy servers are located.

Because they were intended to be ingested and used inside your browser, PAC files were deliberately designed to be more flexible than just a static data list.

Indeed, a PAC file consists of JavaScript that can dynamically determine whether a proxy is needed, and if so where to find it on the network.
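A minimal PAC file might look like this (the hostnames here are invented for illustration); the browser calls the `FindProxyForURL()` function for every request it makes. Real PAC engines also supply traditional helper functions such as `shExpMatch()`, but plain string methods keep this sketch self-contained:

```javascript
// Minimal PAC file sketch: the browser calls this for every URL it fetches.
function FindProxyForURL(url, host) {
  // Keep internal traffic direct...
  if (host.endsWith(".internal.example.com")) {
    return "DIRECT";
  }
  // ...and send everything else via the corporate proxy, falling back
  // to a direct connection if the proxy is unreachable.
  return "PROXY proxy.example.com:8080; DIRECT";
}
```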

As Perry notes, the PAC file format dates back a quarter of a century, and first appeared as a “feature” in the Netscape browser:

PAC files provide a way to distribute complex proxy rules, as a single file that maps a variety of URLs to different proxies. They’re widely used in enterprise environments, and so often need to be supported in any software that might run in an enterprise environment. [… A PAC file is] a JavaScript file you have to execute to connect to the internet, which is loaded remotely, often insecurely and/or from a location that can be silently decided by your local network. 1996 was truly a simpler time. What could go wrong?

Of course, Perry wasn’t planning on running PAC files inside the somewhat limited strictures of a browser, but as part of his HTTP Toolkit software, which runs as a regular application, potentially giving the JavaScript it launches much more reach and power than script code gets inside a browser.

He therefore decided to take a look at how the programmers of the proxy configuration code he’d chosen had addressed the security implications of fetching and running external JavaScript.

He discovered that the code used a Node component called vm, short for virtual machine, which lets you set up a new JavaScript instance, or context, that won’t interfere with code running in other Node contexts in your application.

This is a handy precaution if you want to have two parts of your code doing separate things in such a way that they can’t trample on each other by mistake.

In the words of the vm library documentation:

The vm module enables compiling and running code within V8 Virtual Machine contexts. […] JavaScript code can be compiled and run immediately or compiled, saved, and run later. A common use case is to run the code in a different V8 Context. This means invoked code has a different global object than the invoking code.

Safety is good, but security is better

Perry realised that the original programmer, whose code he had now adopted, was using the vm library as much for programmatic security as for safety, apparently assuming that a new vm instance was not only separate from other vm instances in the application, but also strictly sandboxed in its own little secluded JavaScript world.

However, as the vm documentation makes clear, in loud, boldfaced type:

The vm module is not a security mechanism. Do not use it to run untrusted code.

Perry quickly worked out how to use a regular JavaScript programming technique to run code inside the new vm instance that had full access to the external data of his main Node.js application.

Technically, that constitutes an RCE bug in the proxy configuration process, where RCE is short for remote code execution.

Loosely speaking, RCE means that untrusted content fetched from an untrusted source can deliberately do something treacherous that isn’t supposed to be allowed, without any warnings or popup dialogs showing up first.

Is this really a problem?

As some commenters on Perry’s discovery pointed out, exploiting this bug typically means altering a private network’s official proxy PAC file to include booby-trapped JavaScript.

But if you already have the power to alter an organisation’s proxy setup, then you can simply redirect everyone on the network to a fake proxy anyway, with or without any JavaScript bugs in the equation…

…and if you can silently redirect every browser on the network, then surely you already have more than enough cybercriminal control to wreak havoc on the organisation?

Therefore, some commenters argued, CVE-2021-23406 is little more than a storm in a teacup.

Except that redirecting everyone’s browsers via a fake proxy, as risky as this might ultimately be, simply isn’t as dangerous as having the power to run an arbitrary program on every computer on the network as a side-effect of the proxy configuration…

…while leaving the original proxy configuration unchanged, so that everything else still seems to be working as usual.

Hacking a network by overtly reconfiguring every computer to start using a different proxy server is much more likely to produce troublesome side-effects that will get noticed, reported and investigated.

Contemporary cybercrooks like to stay “under the radar” by avoiding changes that regular users might notice even if they weren’t being watchful for cybersecurity incidents.

What to do?

  • Do you have any Node.js software that uses Pac-Resolver, Pac-Proxy-Agent, or Proxy-Agent? If so, ensure you have version 5.0.0 or later of these packages.
  • Do you regularly review the many Node.js modules on which your products rely? If not, make plans to do so. This means factoring the extra time and expertise into your software release process. “Move fast and break things” might be a usable motto for prototypes and in-house experiments, but it’s a reckless way to build shipping products.
  • Do you explore the security limitations of libraries you use? If not, then you should do so. Creating a new JavaScript VM instance sounds as though it ought to improve security, because each “virtual machine” runs separately, but that’s not the same as running in a security-controlled sandbox or walled garden – and the documentation clearly says so.
  • Do you assume that widely-used packages can be treated as secure? If so, then don’t. CVE-2021-23406 is not the sort of bug that is likely to show up in regular use, unlike a buffer overflow that may reveal itself through unexpected crashes. Some bugs are only found because someone decided to take a careful look, as Tim Perry did here.

For what it’s worth, Perry notes that the packages in this story receive about 3,000,000 downloads a week, so popularity alone is no guarantee of correctness.

Never forget, when it comes to so-called supply chain bugs of this sort, that you can outsource the coding, but you can’t outsource the responsibility.


S3 Ep48: Cryptographic bugs, cryptocurrency nightmares, and lots of phishing [Podcast]

[02’00”] Security code flushes out security bugs.
[15’48”] Recursion: see recursion.
[26’34”] Phishing (and lots of it).
[33’09”] Oh! No! The Windows desktop that got so big it imploded.

With Paul Ducklin and Doug Aamoth.

Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher, Overcast and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


Pwned! The home security system that can be hacked with your email address

A researcher at vulnerability and red-team company Rapid7 recently uncovered a pair of risky security bugs in a digital home security product.

The first bug, reported back in May 2021 and dubbed CVE-2021-39276, means that an attacker who knows the email address against which you registered your product can effectively use your email as a password to issue commands to the system, including turning the entire alarm off.

The affected product comes from the company Fortress Security Store, which sells two branded home security setups, the entry-level S03 Wifi Security System, which starts at $130, and the more expensive S6 Titan 3G/4G WiFi Security System, starting at $250.

The intrepid researcher, Arvind Vishwakarma, acquired an S03 starter system, which includes a control panel, remote control fobs, a door or window sensor, a motion detector, and an indoor siren.

(The company also sells additional fobs and sensors, outdoor sirens, which are presumably louder, and “pet-immune” motion detectors, which we assume are less sensitive than the regular ones.)

Unfortunately, it didn’t take much for Vishwakarma to compromise the system, and figure out how to control it without authorisation, both locally and remotely.

RESTfulness explained

Like many modern Internet of Things (IoT) products, the Fortress Security products make use of cloud-based servers on the internet for control and monitoring purposes, accessing the Fortress cloud via what’s known in the jargon as a web API, short for application programming interface.

And, like most modern web APIs, Fortress uses a programming style known as REST, short for representational state transfer, where data is sent to the API via HTTP POST commands, and retrieved using GET commands, using a set of static, well-defined URLs that act programmatically as web-based “function calls”.

In the old days, web APIs often embedded the data they wanted to send and receive into the URL itself, along with any needed data, including passwords or authentication tokens, tacked on the end as parameters, like this:

GET /functions/status?id=username&password=W3XXgh889&req=activate&value=1 HTTP/1.1
Host: example.com

But constructing a unique URL every time is troublesome for servers, and tends to leak sensitive data into logfiles, because URL access histories are often stored for troubleshooting and other purposes, so URLs ought to avoid having query-specific data coded into them.

(Cautious firewalls and security-conscious web servers do their best to redact sensitive information from URLs before logging them, but it’s better not to have confidential data in the URL at all.)

RESTful APIs, as they are commonly called, use a consistent list of URLs to trigger specific functions, and typically package their uploaded and downloaded parameters into the rest of the HTTP request, thus keeping the URLs free of potentially confidential data.
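That principle can be sketched in a few lines of Node.js. Nothing is sent here, and the endpoint path and token are invented for illustration; the point is simply that the secret travels in a header and the parameters in the body, so the URL that ends up in logfiles contains nothing sensitive:

```javascript
// Sketch only: build (but don't send) a RESTful request, with the secret
// in a header and the parameters in the body. Path and token are made up.
function buildRestfulRequest(token, params) {
  return {
    method: 'POST',
    path: '/functions/status',            // static URL – safe to log
    headers: {
      'Authorization': 'Bearer ' + token, // secret travels here...
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(params),         // ...and the parameters here
  };
}

const req = buildRestfulRequest('W3XXgh889', { req: 'activate', value: 1 });
console.log(req.path.includes('W3XXgh889')); // false: nothing leaks via the URL
```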

For example, if you are using Sophos Intelix (free accounts available!) to do live threat lookups, you first need to authenticate with your username and password, which gives you a session password called an access_token that works for the next 60 minutes.

There’s just one RESTful authentication URL, and your username and password are sent in the request itself (Intelix only accepts HTTPS requests, to ensure that the data is exchanged securely), like this:

POST https://api.labs.sophos.com/oauth2/token HTTP/1.1
Host: api.labs.sophos.com
Authorization: [REDACTED - username and password supplied here]
Content-Type: application/x-www-form-urlencoded
Connection: close

grant_type=client_credentials

The Fortress API works in a similar way, but Vishwakarma quickly discovered that he didn’t need to go through an authentication stage up front.

Instead, he was able to send a request along these lines:

POST https://fortress.[REDACTED].com/api? HTTP/1.1
Host: fortress.[REDACTED].com
Content-Type: application/json
Connection: close

{ cmd=GETINFO [...], user_name="XXXXXX" }

Even though he only supplied a user_name in the request, he received a reply that contained a JSON string labelled IMEI, an acronym usually used in the context of mobile phones and short for international mobile equipment identity.

Every phone, whether it has a SIM card inserted or not, has an IMEI that is burned into the device by the manufacturer (dial the magic phone number *#06# to see yours), and mobile phone operators use it to track your physical device on the network and to blocklist stolen phones.

IMEI considered harmful

Because the IMEI (pronounced eye-me, rhymes with blimey) is unique to your device, you’re strongly advised not to reveal it to anyone else.

You certainly shouldn’t use your IMEI as if it were a username or a public identifier, and mobile apps in curated online markets such as Google Play and the Apple App Store aren’t allowed to collect them, because IMEI-grabbing apps are considered malicious by default.

Even though the entry-level S03 home security product doesn’t take a SIM card, and only works via a Wi-Fi network, it seems that Fortress nevertheless uniquely identifies each device with a numeric code it refers to as an IMEI.

(We’re guessing that this is so that the S03 can share source code with the more expensive S6 Titan product, which has a SIM card slot and therefore a built-in IMEI of its own.)

Unfortunately, this IMEI is used not just as a username, which would be ill-advised on its own, but as a full-blown password that can be used as a permanently valid authentication token in future requests to the Fortress web API.

In other words, simply by knowing your Fortress user_name, Vishwakarma could acquire your device’s IMEI, and then simply by knowing your IMEI, he could issue authenticated commands to your device, for example like this:

POST https://fortress.[REDACTED].com/api? HTTP/1.1
Host: fortress.[REDACTED].com
Content-Type: application/json
Connection: close

{ cmd=ACTIVATE [...], imei="KNOWNVALUE", op=0, user_name="XXXXXX" }

In the above command, the data item op appears to stand for operand, the name commonly given to the data that’s supplied to a computer function or machine code instruction. (In an assembler code line such as ADD RAX,42, the values RAX and 42 are the operands.)

And the value zero given as an operand to the command ACTIVATE, as shown above, does exactly what you might expect: it turns the alarm off!

Of course, to figure out the KNOWNVALUE of the IMEI/password for your account, an attacker would first need to know the values of XXXXXX.

Sadly, as Vishwakarma found out, XXXXXX is simply your email address, or more precisely the email address you used when setting up the system.

In short: guess email address ==> get permanent authentication code ==> deactivate alarm remotely at will.

Fobs defeated, too

Vishwakarma also took a look at the security of the keyfobs (the remote control buttons, like the button that opens your garage door or unlocks your car) that come with the system.

Vishwakarma used a funky but increasingly affordable setup known as an SDR, short for software defined radio, a reprogrammable transmitter and receiver system that can be adapted to work at a huge range of different frequencies and to emulate all sorts of different radio kit.

You’ll need a high-end SDR setup to handle very high-frequency devices such as Wi-Fi (at 5GHz and 2.4GHz), but a hardware dongle costing under $50 has enough performance to “listen in” on transmissions at 433MHz, the frequency band commonly used by remote control devices such as keyfobs.

In theory, a correctly configured SDR can reliably and easily record the exact radio signal emitted by a keyfob when it’s locking or unlocking your car, your garage or your home security system.

The same SDR could then play back an identical transmission later on.

In that sense, an SDR works like a wax block or a bar of soap would with an old-school key, whereby an attacker (or a private detective in a crime novel) could make an impression of your doorkey today, and then cast a copy of their own to use tomorrow.

A digital keyfob, however, has one significant advantage over a traditional key, namely that it can “shape-shift” between each button press.

By using a cryptographic algorithm to vary the actual data it transmits each time, much like those ever-changing 2FA codes that mobile phone security apps produce, a well-designed keyfob should be resistant to what’s known as a replay attack.

That sort of dynamic code recalculation, typically based on a digital secret that’s securely shared between the keyfob and the control unit when the keyfob is configured, means that a radio code recorded today will be no use tomorrow, or even in two minutes’ time.

Sadly, as you’ve probably guessed already, Vishwakarma found that the Fortress S03 fobs didn’t produce one-time codes each time they were pressed, but simply used the same code over and over, a vulnerability now dubbed CVE-2021-39277.

In short: hover near someone’s home ==> capture keyfob “alarm off” transmission once ==> deactivate alarm later at will.

What to do?

According to Rapid7, Fortress chose not to respond to these bug reports, closing them back in May 2021, and didn’t object to Rapid7’s proposed disclosure of the flaws at the end of August 2021.

Thus it looks as though the company isn’t planning any sort of firmware update, whether for its control units or keyfobs, and therefore that these vulnerabilities will not be patched, at least in systems that have already been sold.

So, if you have one of these systems, or a similar-looking system under a different brand that you suspect may be derived from the same original equipment supplier, there are two workarounds you can use:

  • Use an email address that an attacker is unlikely already to know or to guess. Webmail services such as Outlook and Gmail, for example, allow you to have multiple email aliases for your main account, simply by adding text such as +ABCDEFG at the end of your regular email name. For instance, if you use nickname@example.com as your regular email address, then messages to nickname+random5XXG8@example.com ought to be delivered to the same mailbox, even though the two addresses don’t match. Note that this is an example of security by obscurity, so it’s not an ideal solution, but it does make things harder for an attacker or an ill-disposed friend or family member.
  • Avoid using the keyfob remote controls at all. This means you’ll always need to have your laptop or mobile phone handy, or to do everything directly from the control panel, but if you never set up your keyfobs to work with your own control unit, they can’t give away any secrets that an attacker could use in subsequent replay attacks.

Calling all coders

Once again, as we seem to say so often when we’re talking about IoT security: if you’re a programmer, don’t take shortcuts that you know will come back to haunt both you and your customers.

As we’ve mentioned many times, device identifiers such as MAC addresses, UUIDs and IMEIs are not suitable as cryptographic secrets or passwords, so don’t use them for that purpose.

And cryptographic material that you transmit or display unencrypted, whether that’s a 2FA code from an authenticator app, an initialisation vector for an encrypted file, or a radio burst from a keyfob, must never be re-used.

There’s even a jargon term in cryptography for this sort of data: it’s known as a nonce, short for number used once, and that word means exactly what it says.

