[02’01”] A scarily exploitable hole in Microsoft open source code. [10’00”] A simpler take on delivery scams. [19’26”] Memory lane: cool mobile devices from the pre-iPhone era. [23’24”] A Face ID bypass hack, patched for the initial release of iOS 15. [35’21”] Oh! No! When you can’t get into the server (room).
(If you’re a coder, why not check out Sophos Intelix, as mentioned in the podcast?)
With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.
LISTEN NOW
Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.
If you’ve already listened to this week’s Naked Security Podcast you’ll know that we had finally concluded that iOS 12, the version before the version before the latest-and-greatest iOS 15, which arrived this Monday…
…had been dumped forever by Apple.
Apple notoriously won’t tell you anything about the security situation in its products unless and until it has a patch out.
So when iOS 14 got updated in the last couple of patch cycles, but iOS 12 didn’t, we couldn’t tell whether it was still safe and didn’t need the patches, whether it needed the patches but they’d be a bit late, or whether it needed the patches but would never get them.
And with iOS 15 arriving as the new kid on the block this week, we assumed the worst, following the “one-in-one-out” principle.
We haven’t finished because we haven’t even started
We speculated in the podcast that iOS 12 didn’t get any patches not because Apple hadn’t finished creating them yet, but because Apple hadn’t even started on the updates, and never would.
We guessed that iOS 12, along with the older devices that it runs on, had run out of support, and thus that Apple wouldn’t be updating iOS 12 again. (The words we used were, “If only Apple would say, ‘iPhone 6 and earlier; iOS 12: curtains closed, we won’t be supporting you any more.’”)
Well, we were wrong.
We just received Apple’s latest email security notification – ironically, delivered directly to our iPhone 6 running iOS 12.5.4 – to tell us about the latest security update, iOS 12.5.5.
So there’s life in the old phone yet, and more importantly, two critical zero-day bugs that were fixed in iOS 14 last week have now been patched in iOS 12 as well.
Turns out that iOS 12 wasn’t dead after all, merely resting.
Update in progress. What a pity – this phone was in as-new condition until the bike accident two weeks ago. My wounds are healing well but the phone’s never will.
Better late than never
The first of the bugs is the infamous CVE-2021-30860 flaw, also dramatically dubbed FORCEDENTRY by Citizen Lab, the organisation that originally disclosed it to Apple so it could be patched.
According to Citizen Lab, the malware by means of which CVE-2021-30860 was investigated came from an activist’s iPhone, where it had allegedly been implanted via an exploit embedded in a booby-trapped iMessage communication.
The second iOS 12 bug fixed that had already been patched in iOS 14 was CVE-2021-30858, a mysterious WebKit zero-day vulnerability.
This bug was also seen in the wild, and is presumably just as dangerous as the Citizen Lab one, although it lacked any dramatic backstory, and didn’t have a research company behind it to talk it up to the media – it was credited simply to “an anonymous researcher.”
An injury to one is an injury to all
Those bugs, as we can now see, were apparently not introduced via new features in the iOS 14 code, but were inherited by iOS 14 from iOS 13 (now officially superseded by iOS 14, and therefore not supported as a version on its own any more), which got them in turn from iOS 12.
Just as importantly, the iOS 12.5.5 update also fixes a THIRD zero-day hole, this time in XNU, the open source heart of Apple’s operating system kernel.
We don’t have any details about that bug, dubbed CVE-2021-30869, other than that it was patched and that Apple has said “an exploit for this issue exists in the wild.”
By the way, the CVE-2021-30869 bug exists in Catalina, the previous but still-supported version of the macOS operating system, which therefore gets an update, too.
What to do?
Get iOS 12.5.5 for older iPhones and iPads. Until iOS 13 came out and split into two separate strains called iOS and iPadOS, the same update was used for Apple’s phones and tablets.
Get Security Update 2021-006 for Macs running macOS Catalina. This sort of update doesn’t bump up the Catalina version number, which remains at 10.15.7.
Use Settings > General > Software Update on your Apple phones and tablets, and use Apple menu > System Preferences > Software Update on laptop and desktop Macs.
Thanks to James Cope and Rajeev Kapur of Sophos IT for their help with this article.
Researchers at a cybersecurity startup called Guardicore just published a report about an experiment they conducted over the past four months…
…in which they claim to have collected hundreds of thousands of Exchange and Windows passwords that were inadvertently uploaded to their servers by unsuspecting Outlook users from a wide range of company networks.
The problem, according to the researchers, is down to a Microsoft feature known as Autodiscover, which is used by various parts of Windows, notably Outlook, to simplify the setup of new accounts.
For example, if I want to hook up Outlook on my laptop to “the Exchange server” that’s run by IT, I don’t need to know and type in a whole pile of technical specifications correctly before I get as far as setting up a password and sending my first email.
Left. Opening form in setup process. Right. Next step in autodiscover setup.
If you’ve ever gone through the process, you’ve probably seen the two simple setup screens above, where you put in your email address, tell it you’re looking for an Exchange server, and Outlook goes out and autodiscovers the configuration details for you.
The Autodiscover process
Microsoft’s autodiscover process can include numerous different steps, as explained in its own Autodiscover documentation, and different apps may use slightly different variants on Microsoft’s central theme.
For email accounts, Autodiscover typically involves creating a short list of URLs where configuration file data can be expected, and then trying to access those URLs and fetch the setup data that’s stored there, until one of them succeeds (or all of them fail).
For an email address such as duck@naksec.test, as shown above, the documentation suggests that you’d look for the following configuration files:
Indeed, when we tried setting up Outlook 2016 on a network with no autodiscover files or servers present, and where we therefore expected Outlook to go through its entire repertoire of possible autodiscover file locations, we observed it looking for the following sequence of network names within our own domain:
(The last request above was a DNS lookup known as an SRV record, a common way of looking up server names for specific services, including autodiscover, in Microsoft domains. The data returned by that SRV record is, like the previous two items, under the control of the owner of the naksec.test domain, given that the DNS name is a subdomain of naksec.test.)
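Based on Microsoft’s documented process, and using our made-up naksec.test domain, the candidate list could be sketched like this in Python (the helper name and exact ordering are our own illustration, not Outlook’s actual code):

```python
# Sketch of how an Autodiscover client might build its list of candidate
# lookups for an email address, following Microsoft's documented process.
# The naksec.test domain is made up, and real clients vary the details.

def autodiscover_candidates(email: str) -> list[str]:
    domain = email.split("@", 1)[1]
    return [
        f"https://{domain}/autodiscover/autodiscover.xml",
        f"https://autodiscover.{domain}/autodiscover/autodiscover.xml",
        f"http://autodiscover.{domain}/autodiscover/autodiscover.xml",
        # Finally, a DNS SRV lookup may be tried to find a server name
        # published in DNS by the domain owner.
        f"SRV _autodiscover._tcp.{domain}",
    ]

for candidate in autodiscover_candidates("duck@naksec.test"):
    print(candidate)
```

Each candidate is tried in turn until one yields usable configuration data, which is why the ownership of every name on the list matters.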
According to Guardicore, however, in their tests – perhaps conducted with an older version of Windows and Outlook, but we’re not sure – there was an extra step in the process, namely that if both of these sites failed…
autodiscover.naksec.test <--if this failed
naksec.test <--and this failed
…then the autodiscover code would go up one more step in the domain hierarchy, and would also try:
autodiscover.test <--then Guardicore reported that this site was tried as well
External domains considered harmful
That looks dangerous, because the owner of the domain naksec.test gets to control the usage of the servername autodiscover.naksec.test, but the domain autodiscover.test could belong to someone else entirely.
And that third-party owner could have registered it maliciously, specifically intending to keep an eye out for accidental “autodiscover request spillage” from inside other people’s networks.
We’re guessing that if the email address had actually been duck@naksec.co.test rather than merely duck@naksec.test, as it might be in a country with a strictly two-level commercial domain system such as New Zealand (.co.nz) or South Africa (.co.za), then Guardicore’s sequence might have been…
autodiscover.co.test <--go back up one domain level
autodiscover.test <--and then go back up one more as well
That could, in theory, expose you first to a sneakily registered third-party domain called autodiscover.co.test, followed (if that failed) by the same, uncertain autodiscover.test domain referred to above. (Some two-level domain countries sell both second-level domains, e.g. .co.uk, and top-level domains e.g. .uk.)
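The walk-up behaviour that Guardicore reported can be sketched as follows (walkup_hosts is a hypothetical helper of our own, and the domains are made up):

```python
# Illustrative sketch of the domain hierarchy walk-up that Guardicore
# reported: if the in-domain autodiscover hosts fail, strip one label
# at a time and try again, right up to the top-level domain.

def walkup_hosts(email_domain: str) -> list[str]:
    labels = email_domain.split(".")
    hosts = []
    while labels:  # keep going all the way up to the bare TLD
        hosts.append("autodiscover." + ".".join(labels))
        labels = labels[1:]  # climb one level up the hierarchy
    return hosts

print(walkup_hosts("naksec.co.test"))
# Only the first host is under the email domain owner's control;
# the later ones may belong to someone else entirely.
```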
Guardicore therefore went out and registered a whole raft of “autodiscover” domains in various two-level and one-level domain hierarchies, and set up listening web servers on all of them, including:
The researchers claim that over the next four months, they collected more than 1,000,000 unsolicited and unexpected autodiscover requests, of which a significant minority included authentication tokens or plaintext passwords that could, in theory, give access to the leaked accounts.
Worse still, they say, their fake autodiscover servers, when faced with logon information such as NTLM credentials from which the original password could not be recovered, were frequently able to reply to the sender with a “please downgrade” response that caused the client software at the other end (presumably Outlook) to try again using HTTP Basic Authentication.
In Basic Authentication, the password isn’t salted or hashed in any way to protect it from being reversed and recovered.
Instead, the password is merely encoded using the base64 algorithm, so that the original data can be extracted as needed.
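To see just how weak Basic Authentication is, here’s a short Python illustration (the username and password are invented for the example):

```python
# Why Basic Authentication leaks passwords: base64 is an encoding,
# not encryption, so anyone who sees the header can reverse it trivially.
import base64

header_value = base64.b64encode(b"duck:s3cret-passw0rd").decode()
print("Authorization: Basic " + header_value)

# Anyone holding the header recovers the credentials with one call:
user, password = base64.b64decode(header_value).decode().split(":", 1)
print(user, password)
```

No cracking, no rainbow tables, no brute force: one function call and the plaintext password is back.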
How bad is this?
Clearly, for most companies with Outlook clients trying to autodiscover Exchange servers on the corporate network, this sort of data leakage can be considered unlikely.
All the internal locations where the autodiscover data would usually be found would need to fail first, leaving only the under-the-control-of-someone-else domains left to receive the follow-up requests.
Additionally, the appropriate autodiscover domain would need to be registered, and active, and in the hands of an owner whose intention was to abuse it to scoop up password and authentication data that it was not supposed to receive at all.
Nevertheless, Guardicore’s own researchers claim to have seen and collected a significant amount of traffic over a four-month period, plus tens or hundreds of thousands of unique passwords.
So the risk is worth thinking about, especially if your network is usually immune (because it has its own autodiscover servers that will usually answer first), but might “fail open” unexpectedly (if there’s an internal network outage that would suddenly cause clients to go looking for autodiscover servers externally).
What to do?
Consider blocking external domains that start with the word autodiscover, using your web filtering firewall. That will stop any app inside your network from connecting inadvertently to external autodiscover servers in the first place. Note that you may need to add some legitimate cloud sites to your allowlist, e.g. autodiscover.outlook.com, but we can’t ourselves remember ever visiting a regular website with a name that started with the word “Autodiscover”.
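As a rough illustration of that filtering rule (the function and allowlist here are our own sketch, not a real firewall’s configuration):

```python
# Hypothetical sketch of the suggested firewall rule: block any outbound
# request whose hostname starts with "autodiscover.", unless it appears
# on an explicit allowlist of known-legitimate cloud services.

ALLOWLIST = {"autodiscover.outlook.com"}  # example entry from the article

def should_block(hostname: str) -> bool:
    hostname = hostname.lower().rstrip(".")
    return hostname.startswith("autodiscover.") and hostname not in ALLOWLIST

print(should_block("autodiscover.test"))         # blocked
print(should_block("autodiscover.outlook.com"))  # allowlisted
print(should_block("news.example.com"))          # unrelated site, allowed
```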
Consider activating Outlook’s Disable Autodiscover protection using Group Policy. In the GPEDIT policy editor or from the Group Policy Management Console, go to User Configuration > Administrative Templates > Microsoft Outlook 2016 [amend number by version] > Account Settings > Exchange. Click on Disable Autodiscover, choose [Enable] and turn on Exclude the query for the AutoDiscover domain. According to Microsoft, this means that “Outlook [will] not use the following URL: https://autodiscover.[DOMAIN]/autodiscover/autodiscover.xml”.
Group Policy setting that supposedly prevents “autodiscover” domain names being used.
If you don’t have the template files installed, or don’t want to use GPEDIT or Group Policy for this process, you can turn on the setting in the registry yourself:
Registry Key: HKCU\software\policies\microsoft\office\[VERSIONNUMBER]\outlook\autodiscover
Create Value: excludehttpsautodiscoverdomain
Value type: DWORD
Set value to: 1
Registry entry that supposedly prevents “autodiscover” domain names being used.
What we observed
As simple as the Group Policy workaround might sound, and as much as Microsoft’s own help file for Office group policy settings seems to reassure you that the setting we’ve listed will suppress the use of “autodiscover” domain names…
…we have to say that this wasn’t how things worked out in our own (necessarily brief) tests.
The bad news is that, even after setting the excludehttpsautodiscoverdomain option, we nevertheless observed Outlook 2016 trying to locate autodiscover.naksec.test in our experiments. (We also tried with realistic external TLD and 2LD domains, e.g. .fr and .co.za.)
The good news is that we were unable to provoke Outlook to visit any domains that would have been outside our own network.
In other words, using an email domain of naksec.test, we were unable to get Outlook to try autodiscover.test, even after autodiscover.naksec.test had failed. (Once again, we also tested this behaviour with realistic external TLD and 2LD domains.)
So although we couldn’t get our own workaround (based on Microsoft’s documentation) to work…
… we simply couldn’t get the “Autodiscover Great Leak” hack to work in the first place either (based on Guardicore’s paper).
Whether that means you’re safe as long as you are using Office 2016, and Guardicore is wrong, we can’t be sure.
We can only tell you that it’s what we observed on a standalone Windows 10 Enterprise computer when we tried to connect to a non-existent Exchange server and watched Outlook run through its autodiscover list – our result was different from the behaviour described by Guardicore.
If you have earlier versions of Outlook, or other email clients that you can try on your own network while monitoring the network requests from the relevant app, we’d love you to share your results below!
VMware’s latest security update includes patches for 19 different CVE-numbered vulnerabilities affecting the company’s vCenter Server and Cloud Foundation products.
All of the bugs can be considered serious – they wouldn’t be enumerated in an official security advisory if they weren’t – but VMware has identified one of them, dubbed CVE-2021-22005, as more critical than the rest.
Indeed, VMware’s official FAQ for Security Advisory VMSA-2021-0020 urges that:
The ramifications of this vulnerability are serious and it is a matter of time – likely minutes after the disclosure – before working exploits are publicly available.
In particular, the company explains:
The most urgent [patch] addresses CVE-2021-22005, a file upload vulnerability that can be used to execute commands and software on the vCenter Server Appliance. This vulnerability can be used by anyone who can reach vCenter Server over the network to gain access, regardless of the configuration settings of vCenter Server.
VMware unabashedly says that “this needs your immediate attention”, and we think it’s a good thing to see a software vendor talking about cybersecurity response in plain English instead of mincing its words.
Upload vulns explained
Generally speaking, file upload vulnerabilities happen when an untrusted user is allowed to upload files of their own choosing…
…but those untrusted files end up saved in a location where the server will subsequently treat them as trusted files instead, perhaps executing them as scripts or programs, or using them to reconfigure security settings on the server.
A classic example of this sort of hack is what’s known as a webshell, as abused in the infamous HAFNIUM attack against Microsoft Exchange servers in early 2021:
In that attack, cybercriminals uploaded innocent-looking text files that were actually server-side scripts.
Usually, when you visit a web server URL that directly corresponds to a file on that server, you simply get the contents of that file sent back to you.
But if server-side scripting is enabled, and the file is a script, then the server runs the script locally, and uses the output of the script as the web content to send back, thus turning the uploaded file into a vehicle for carrying out a remote code execution attack.
Obviously, being able to upload files that shouldn’t be there is dangerous enough on its own, but when untrusted files can be uploaded by unauthenticated users, and the server will then execute those files, it’s as though you just granted administrator access to anyone who wants it, with no password required.
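As an illustration of the underlying mistake (our own sketch, not VMware’s or Microsoft’s code), here is a minimal upload filename check of the kind a safer handler would apply; the extension list and helper name are assumptions for the example:

```python
# Illustrative sketch of the core mistake behind file upload holes:
# accepting names that escape the quarantine directory, or that carry
# extensions the server will later execute as scripts (webshells).

from pathlib import PurePosixPath

EXECUTABLE_EXTENSIONS = {".aspx", ".asp", ".jsp", ".php", ".py", ".sh"}

def is_safe_upload(filename: str) -> bool:
    p = PurePosixPath(filename)
    if p.is_absolute() or ".." in p.parts:
        return False  # path traversal attempt
    return p.suffix.lower() not in EXECUTABLE_EXTENSIONS

print(is_safe_upload("report.pdf"))       # harmless document
print(is_safe_upload("shell.aspx"))       # would run as a webshell
print(is_safe_upload("../../web/x.txt"))  # escapes the upload directory
```

Of course, filename checks are only one layer: the real fix is never to serve or execute uploaded content from a trusted location in the first place.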
What to do?
As VMware goes out of its way to explain: patch early, patch often!
If you can’t or won’t patch just yet, VMware has provided a temporary workaround that turns off the vulnerable code on affected VMware vCenter systems.
To do this, you need to modify the configuration file /etc/vmware-analytics/ph-web.xml, and comment out parts of it, in order to stop various vulnerable server plugins from running.
Then you need to restart the vmware-analytics service.
VMware has published various Python scripts that will make these changes for you, as well as giving full instructions for editing the file by hand.
Even though a reliable workaround is available, we nevertheless echo VMware’s own advice to patch now, and to use the workaround only as a last resort.
The VMSA-2021-0020 security update includes 18 other security fixes, such as privilege escalations and security bypass bugs.
Privilege escalation usually means that a user at a low access level can sneakily boost themselves to gain administrator powers, and security bypass bugs typically allow data that ought to be secret to be winkled out of the system by unauthorised users.
Apple’s iOS 15 is now out – the very latest software version for iPhones, just in time for the official launch of the new iPhone 13 later in the week.
(Yes, you can buy an iPhone 13 today, but only by placing what modern sales and marketing jargon refers to as a pre-order, which is known simply as an order in plain English: pay now, and collect it or get it delivered in the near future when it’s ready.)
Most of the articles you’ll read about iOS 15 understandably focus on new features in the operating system and the built-in apps, such as improvements to Notifications, Messages, Safari and even how your local weather gets displayed.
What you might not realise, however, is that even though iOS 15 is brand new to most of the world, it’s been baking in the oven in its pre-release form for many months…
…which means that the official initial release nevertheless comes with its own security advisory, detailing 22 vulnerabilities that have been patched, including ten bugs that could lead to what’s known as remote code execution, or RCE.
RCE vulnerabilities mean that, in theory, simply viewing, opening or otherwise using a booby-trapped file of some sort could allow an attacker to secretly launch malware such as spyware on your device.
Two of the code execution flaws could even give crooks the access they need to implant rogue code in the kernel itself, essentially allowing them to take over the whole device, for example to “jailbreak” it in order to escape from Apple’s walled garden of security controls altogether.
Here’s lookin’ at you
But the iOS security notification that most caught our eye was CVE-2021-30863, fixing a bug in Face ID that’s excitingly described as:
A 3D model constructed to look like the enrolled user may be able to authenticate via Face ID.
Fake heads! (Cue dystopian scifi movie music.)
Bypass attacks against Face ID have been announced before, notably by a Vietnamese researcher who claimed in 2017 to be able to get past Face ID using a mask, and by Chinese researchers from cybersecurity company Tencent in 2019, who were able to get around Face ID’s “are you awake?” detection and unlock the device of someone who was asleep.
Neither of those attacks was very practical, if indeed they would ever have worked in real life.
As far as we know, the Vietnamese result was never successfully replicated, and the Tencent researchers relied on the unlikely scenario of the victim “looking” at the camera without noticing that they were wearing spectacles (eyeglasses) deliberately doctored with black tape and a reflective spot, so they couldn’t see the phone at all.
Sadly, there aren’t any details of how this new Face ID hack was carried out, so we don’t know how effective and reliable it was.
Perhaps more importantly, we can’t estimate how likely it is that the technique could be adapted to get past Apple’s latest security update, which states merely that “this issue was addressed by improving Face ID anti-spoofing models.”
Multiple updates for multiple platforms
Along with updates for the otherwise brand-new iOS 15, iPadOS 15, tvOS 15 and watchOS 8 (for some reason, watchOS version numbers have not been aligned with the rest of Apple’s mobile range), the latest security announcements also cover iTunes, macOS, Safari and Apple’s Xcode developer tools, as well as iOS 14.8 and iPadOS 14.8.
Confusingly, the iOS 14.8 and iPadOS 14.8 updates were actually released more than a week ago, apparently as emergency patches to close off a hole that was allegedly being exploited in the wild for government surveillance against an activist.
(The 14.8 update actually fixed two in-the-wild security exploits, but the better-known of the two was the one allegedly found on the activist’s phone, dubbed FORCEDENTRY by the organisation that investigated the attack and reported it to Apple.)
It turns out, however, that the iOS 14.8 update fixed a number of other security holes as well, including many that were also fixed in iOS 15.
We’re guessing these are bugs that iOS 15 inherited from its predecessor.
There’s also an update called Safari 15, available for macOS 10.15 (Catalina) and 11.6 (Big Sur).
This is additional to the security updates for Big Sur and Catalina that came out last week to patch the FORCEDENTRY hole.
Although there are no new patches specifically for macOS itself, we’re assuming that the Safari 15 update brings the macOS version of the Safari browser in line with its mobile counterpart on iOS and iPadOS.
The security advisories for which we received notifications are as follows:
Once again, poor old iOS 12 is a victim of Apple’s doctrine that prohibits the company from telling you what’s going on from a security perspective “until an investigation has occurred and patches or releases are available.”
Unfortunately, the fact that you won’t be told about the patch situation unless and until there is a patch means that if there isn’t going to be a patch, you’ll never know.
In other words, the logical conjunction investigate() AND dopatch() means that “no patch available” could be the consequence of any of these scenarios:
No investigation conducted. A patch might be desirable or even vital, but no one can say.
Investigation still going on, still not sure what remains to be done.
Investigation completed, no patch needed.
Investigation completed, patch needed but not yet ready.
Investigation completed, patch won’t be done no matter what.
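Those five scenarios can’t be told apart from the outside. A minimal Python sketch (our own illustration, not Apple’s actual policy logic) makes the point:

```python
# Why Apple's silence is uninformative: every scenario below produces
# the same external observation, so "no patch, no advisory" tells you
# nothing about which state you are actually in.

scenarios = [
    "no investigation conducted",
    "investigation still going on",
    "investigation completed, no patch needed",
    "investigation completed, patch needed but not yet ready",
    "investigation completed, patch will never be made",
]

# Apple only speaks once an investigation AND a patch both exist, so:
observations = {s: "silence (no advisory, no patch)" for s in scenarios}

assert len(set(observations.values())) == 1  # all five look identical
print("all", len(scenarios), "scenarios look the same from outside")
```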
Our suspicion is that iOS 12 has finally reached the end of the line.
With iOS 15 out, and iOS 14 still officially supported (given that it just received its own security update and advisory), we’re assuming that iOS 12, now the pre-previous version, is no longer going to get any more updates, in the same way that Apple is calling out just Big Sur (current) and Catalina (previous) by name for macOS updates.
So it’s Catch-22 for iOS 12: if there aren’t going to be any more security patches, then we won’t find out until there’s a security advisory to say so; but there won’t be a security advisory to say so until there’s another security patch.
Note. Sophos mobile security products already support iOS 15, and will therefore continue working seamlessly after you update from iOS 14.