Apple’s Privacy Protection feature – watch out if you have a Watch!

Tommy Mysk and Talal Haj Bakry describe themselves as “two iOS developers and occasional security researchers on two continents.”

In other words, although cybersecurity isn’t their core business, they’re doing what we wish all programmers would do: not taking application or operating system security features for granted, but keeping their own eyes on how those features work in real life, in order to avoid tripping over other people’s mistakes and assumptions.

We’ve written about their findings before, such as when they presented a well-made argument that persuaded TikTok to embrace HTTPS for everything, and now we’re writing about what you might call a nano-article…

…a security finding that Tommy Mysk compressed elegantly into a single tweet:

This is an interesting reminder of how difficult it can be to ensure that general-purpose security features really do work as intended across the board, or at least that they work as any reasonable user might infer.

Tracking your email usage

To explain.

Apple’s iOS 15 introduced a neat anti-tracking feature for your email, dubbed Mail Privacy Protection:

The idea is simple: to shield you from annoying marketing tricks such as tracking pixels, you can ask Apple to fetch your remote email content first, and then relay it to you indirectly, thus using Apple as a proxy for images and links in your messages.

This acts as a sort of pseudo-VPN (virtual private network) that shows up at the other end of the connection as “some server at Apple came calling”, rather than “a specific user on home network X paid us a visit”, thus providing you with a modest privacy boost.

In an ideal world

In an ideal world, this wouldn’t be necessary, because everyone who sent you emails would package images such as logos into the message itself, or just send messages in plain text, without any images at all.

But many marketing departments like to link to uniquely-named images in each individual email in a campaign, often using images that don’t actually serve any visual purpose (e.g. that are 1×1 pixel in size), as well as using uniquely identifiable clickable links in messages.

This means that when your email client fetches the image, or if you visit any links in it, the web server at the other end can create a log entry that records your IP number against the unique URL used, thus tracking you, possibly quite accurately, by the time and the place that you read the email.
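To illustrate the mechanism, here's a toy Python sketch (all domains and names are hypothetical, not taken from any real analytics service) of how a uniquely-named pixel URL lets the server at the other end log exactly who fetched it, when, and from which IP number:

```python
# Toy model of an email "tracking pixel": the image URL carries a unique
# token per recipient, so a single HTTP fetch reveals who read the message,
# when, and from which IP address. All names here are hypothetical.
import uuid
from datetime import datetime, timezone

def make_tracking_url(recipient_email, campaign_id, registry):
    """Mint a unique 1x1-pixel image URL and remember whom it belongs to."""
    token = uuid.uuid4().hex
    registry[token] = (recipient_email, campaign_id)
    return f"https://track.example.com/px/{token}.gif"

def log_fetch(url, client_ip, registry):
    """What the tracking server learns from a single image request."""
    token = url.rsplit("/", 1)[-1].removesuffix(".gif")
    recipient, campaign = registry[token]
    return {
        "recipient": recipient,
        "campaign": campaign,
        "ip": client_ip,
        "read_at": datetime.now(timezone.utc).isoformat(),
    }

registry = {}
url = make_tracking_url("alice@example.org", "nov-sale", registry)
entry = log_fetch(url, "203.0.113.7", registry)
print(entry["recipient"], entry["ip"])  # the correlation a proxy is meant to break
```

The whole point of a mail proxy is that the `ip` field above ends up recording the proxy's address, not yours.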

Of course, marketing departments generally don’t host those images and tracking links themselves – they typically rely on a third-party tracking and analytics company, and that’s where the tracking database ends up.

As minor and as inoffensive as this sort of tracking data might sound, considered one email at a time, it all adds up over time, especially if several different online services happen to use the same analytics company, which then gets a chance to track you across multiple services and websites if it wants to.

As a result, modern browsers and email clients generally offer built-in anti-tracking features to help limit the precision of online tracking and therefore to improve your privacy somewhat.

These features reduce the casual but considerable collection of this sort of information as you browse or read your emails.

More anonymity

Apple’s Mail Privacy Protection is another mild level of anonymisation that helps to reduce your trackability, even when you genuinely want to see the external images in an email (you might actually be interested in the product being advertised), or are willing to click the embedded links for further information.

Everyone who views the images of the latest and greatest products gets to see what they look like, which means that the advertising process works as intended.

But all those potential customers show up as generic visitors from “somewhere in Apple’s server empire”, rather than as “the family at 72 Acacia Avenue, next to the post office, just before you get to Church Lane,” so the tracking process that is sneaked in along with the ads no longer works as intended.

Not everyone

Well, not everyone, it turns out, and not all potential customers.

The Tommy Mysk/Talal Haj Bakry cyberduo noticed that this IP anonymisation doesn’t work on the Apple Watch.

Ironically, the device that you’d think would most benefit from having remote content pre-fetched by a proxy server, and perhaps scaled down or otherwise minimised or simplified to improve its appearance, if nothing else…

…doesn’t seem to honour the setting of the Protect Mail Activity option.

So tracking pixels embedded in emails you view on your iPhone will be shielded by this feature, but will give away your real IP number if the same email is viewed via your Watch.

We don’t know why this discrepancy exists, but our best guess is that Apple’s watchOS doesn’t have what you might call “feature parity” with iOS 15.

After all, iOS 12 for iPhones and iPads is still (as far as we know) supported by Apple, but there’s no Protect Mail Activity option available there.

So, even though you set up your Apple Watch by pairing it with your iPhone, and then configure it via the iOS 15 menus, it’s not actually running iOS 15 itself.

Indeed, the latest version of watchOS at the time of writing is numbered 8.1, compared to iOS and iPadOS, which are both at 15.1.

What to do?

For those with Apple Watches who would like to have at least some of the privacy shielding offered by the Mail Privacy Protection feature, we asked Tommy Mysk if there was a workaround.

He replied to say that you can explicitly set the following options on the Settings > Mail > Mail Privacy Protection page:

This blocks remote content, including tracking images, by default on both your phone and your watch, thus preventing you from giving away by mistake the “when and where” history of your email reading habits. (Apparently, the Hide IP Address option, which is part of a feature called iCloud Private Relay, is not yet available to all users.)

But you still need to remember not to tap on Load All Images when you’re reading emails on your Watch, because if you authorise those images to be fetched, your IP number won’t be hidden as you might expect.

Tommy also notes that this IP non-shielding problem applies to the Messages app, where tapping links in instant messages or text messages (SMSes) on your Watch takes you directly to the server in the URL, straight from your Watch’s IP number, even if Hide IP Address is turned on.

Is this a bug, an oversight, or merely an expected side-effect of the fact that watchOS simply isn’t iOS, even if you think of your Watch as a sort of “paired extension” of your iPhone?

We don’t know.

And we doubt that Apple will issue any sort of notification to explain the situation, given its restrictive attitude to security bulletins…

…so until watchOS and iOS reach “feature parity”, and someone such as Tommy or Talal notices and points that out, you’ll need to steer your own way around this issue if email tracking protection is important to you.


The self-driving smart suitcase… that the person behind you can hijack!

The Internet of Things (IoT) has become infamous for providing us, in a worrying number of cases, with three outcomes:

  • Connected products that we didn’t know we needed.
  • Connected products that we purchased anyway.
  • Connected products that ended up disconnected in a cupboard.

To be fair, not all IoT products fall into all, some or even any of these categories, but there are many that have made it into at least one.

There was the home video camera with a “unique identifier” that wasn’t unique, which left an Australian couple who thought they were both viewing their own living room suddenly discovering that each of them was inadvertently spying on a different third party.

There was the surveillance system that showed an unwitting homeowner in England the outside of an unknown pub, which he eventually tracked down with the help of search engines and visited to enjoy a fortifying pint of ale.

At the pub, he took a selfie on his own phone of himself enjoying his drink… using the pub’s camera. (He showed the pic to the landlord, who shared both his amusement and his concern.)

And there was the $99 smart bike padlock – no more combinations to remember! no more fussing with keys in cold hands! – that allowed you to open your own lock with the official app (or with your fingerprint) in 0.8 seconds, or to open anyone’s lock with an unofficial app in just 2 seconds.

No hacksaw required

The padlock hackers (no literal hacking or hacksaws required) in the why-did-they-even-bother-to-call-it-a-lock story above were from well-known UK penetration testing outfit PTP, short for Pen Test Partners.

And when researchers at PTP come across a connected product that they didn’t know they needed…

…they immediately know they need it!

So when they spotted a digital suitcase called the Airwheel SR5, they simply had to get one, because who can resist a Bluetooth-enabled, self-driving robot suitcase? (We’re not making this up.)

Why drag your carry-on luggage behind you when you can simply strap on a Bluetooth wristband and let the luggage follow you through the airport, steering its way around obstacles (and, one hopes, other passengers, with or without their own self-driving luggage), thus saving you the hassle of dragging round all the extra weight that the suitcase needs, in the form of batteries and motors, to drag itself around for you?

Well, PTP quickly found out one reason why they might not trust the SR5 in a busy airport, namely that it wasn’t very accurate.

While it made vaguely confident progress, it didn’t hold its course very well, weaving off line and bumping into things in the fashion of a traveller who has spent far too long at the airside bar.

But it was a design flaw that worried PTP the most, namely that the SR5 allows itself to be paired with two different devices at the same time – an unusual and actually pretty cool Bluetooth achievement, as the researchers admitted – with inadequate security controls over the pairing process.

Once you’ve paired your SR5 with its supplied wristband so it will follow you around autonomously, you don’t really need (and might never bother) to use its other feature: letting you drive it around the airport concourse like an RC car, in a worryingly zippy fashion, using an app on your phone.

But if you don’t get around to installing the app and pairing it with your own suitcase…

….then anyone else can pair with it instead, even if you’ve instructed it to follow behind you.

By following your suitcase as it follows you, a suitcasejacker could pair their phone with your luggage and simply drive it off, without ever laying a hand on it, thanks to a hardwired pairing code.

See if you can guess the “secret” PIN.

Did you figure it out?

Yes, that’s right, it’s: 11111111.

(We guessed at 78482273, on the grounds that it spells SUITCASE, but 1 on a phone keypad doesn’t correspond to any letters at all.)
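If you want to check our arithmetic, here's a quick Python sketch of the standard phone keypad letter mapping, which also shows why no word can ever spell out a run of 1s:

```python
# Standard phone-keypad letter groups; note that 1 (and 0) carry no
# letters at all, which is why SUITCASE can be spelled with digits
# but a PIN of all 1s can never spell anything.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def word_to_digits(word):
    """Convert a word to the digits you'd press on a phone keypad."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.upper())

print(word_to_digits("SUITCASE"))  # → 78482273
```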

PTP also discovered that the suitcase firmware doesn’t seem to be digitally signed, which could allow rogue firmware updates (tracking beacons, anyone?), and that the company hasn’t yet managed to get its app into Google’s Play Store, forcing you to sideload it instead.

What to do?

  • If you can’t resist this self-driving suitcase, make sure you pair it with your own phone as well as with your wristband, so that fellow airport travellers can’t trivially hijack it. (You can assume, at least for now, that chaperoning a vaguely autonomous digital suitcase round a modern airport is certain to draw attention to the suitcase, if not to you.)
  • If you’re a programmer, don’t use hardwired passwords. In fact, don’t enable remote pairing by default, either, to prevent unauthorised surprises. As PTP points out, picking a random password and putting a printout inside the suitcase before delivery would be a simple place to start. Home router vendors do this with their wireless access points these days, and it has largely eliminated the problem of default Wi-Fi credentials.
  • If you’re relying on an official Android app, do your best to get it into the Play Store first. Google Play is far from perfect at keeping malware out, but being unable to make the grade in the first place is not a good look for your product, and won’t encourage your customers to install it. Ironically, in this case (see what we did there?), you can’t secure your luggage against rogue pairing attempts without installing the unvetted app first.
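To make the random-password suggestion in the advice above concrete, here's a minimal Python sketch (our own illustration, not anything Airwheel or PTP actually ships) of generating a per-device PIN at manufacture time with a cryptographically secure generator:

```python
# A sketch of the per-device pairing code idea: generate a random PIN
# with a CSPRNG at manufacture and print it on a slip inside the case,
# instead of shipping every unit with the same hardwired 11111111.
import secrets

def make_pairing_pin(digits=8):
    """Return a cryptographically random numeric PIN of the given length."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

pin = make_pairing_pin()
print(f"Print on the slip inside the lid: pairing PIN {pin}")
```

Note the use of the `secrets` module rather than `random`, because pairing codes are security material, not simulation data.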

We couldn’t resist embedding the PTP video, showing the self-driving, remotely commandeerable suitcase in its surprisingly brisk drive-me-around mode:

[embedded content]


Emotet malware: “The report of my death was an exaggeration”

You’ve probably seen the breathless media headlines everywhere: “Emotet’s back!”

One cybersecurity article we saw – and we knew what it was about right away – didn’t even give a name, announcing simply, “Guess who’s back?”

As you almost certainly know, and may sadly have experienced first hand, Emotet is a blanket term that typically refers both to a family of “command-and-control” malware and the gang who are its commanders-and-controllers.

The idea is simple: instead of building a single-purpose malware program for each attack, and unleashing it on its own, why not spearhead the attack with a general purpose malware agent that calls home to report its arrival, and awaits further instructions?

In popular terminology, that sort of malware is often referred to as a zombie or bot, short for software robot, and a collection of bots with the same command-and-control servers (known as C&C or C2 servers in the jargon), under the same botmasters, is known as a botnet.

Emotet, however, was not just a bot – to many sysadmins and threat responders, it was the bot, run by a notoriously resilient and determined criminal gang who operated their botnet as a disturbingly effective content delivery network for cybercrime.

An attack chain of attack chains

A common Emotet attack chain typically ran in multiple stages, something like this:

  1. Emotet first, to form a beachhead inside your network;
  2. Followed by Trickbot or some other network-snooping malware to learn, plunder, hack, tweak, reconfigure and manipulate your computer estate until the crooks behind the stealing and surveillance had learned as much as they felt they needed to know (or made as much money as they thought they could, or both);
  3. Followed by a final, apocalyptic, flaming-skulls-on-your-wallpaper-type blast of ransomware and an associated, possibly breathtakingly expensive, blackmail demand.

As we wrote in February 2021:

The [Emotet crew] typically use the zombies under their control as a sort of content delivery network for other cybercriminals, offering what amounts to a pay-to-play service for malware distribution.

The Emotet gang does the tricky work of building booby-trapped documents or web links, picking enticing email themes based on hot topics of the day, and tricking victims into infecting themselves…

…and then sells on access to infected computers to other cybercriminals so that those crooks don’t have to do any of the initial legwork themselves.

That quote, notably, comes from an article entitled Emotet takedown – Europol attacks “world’s most dangerous malware”.

All quiet on the Emotet front

Since then, the Emotet ecosystem, if we may use that word to describe it, has been essentially off the radar, silent, and invisible.

But as we mentioned in February 2021, the same gang went quiet in February 2020, only to reappear suddenly in July of that year.

And, according to current reports, something similar has happened again, with researchers around the world noting a return of “Emotet-like” activity, and announcing, as Mark Twain famously did after reading in the newspapers that he had passed away, that the report of its death was an exaggeration.

What to do?

We’ve always been happy to report on malware takedowns, cybercrime busts and other disruptions that have removed or reduced cybercriminality, but we’ve also always advised against relaxing too much when that sort of report appears.

Here’s our advice, whether this Emotet “revival” is the same criminals who’ve returned from takedown to active duty or new recruits; whether it’s the old malware code or a re-written variant; whether the new botnet has the same goals or yet more aggressive ones:

  • Old malware rarely actually dies. Sometimes, as happened with floppy disk boot sector viruses, malware families get killed off by technological changes. But the truth is that once a technique is out there, and is known to work, even modestly well, someone new is likely to copy it, re-use it, or revive it. So we live with the sum of the threats of the past as well as all the genuinely new tools, techniques and procedures that come along.
  • Don’t focus on individual malware families or malware types when planning your protection. Emotet may be well-known, and rightly feared, but its method of operation (MO) is widely copied in many, perhaps most, malware attacks these days, and this MO has been in use since malware first became a money-making game. In some senses, an initial infection by malware like Emotet is the end of one attack chain, because it doesn’t itself contain specific malware tools such as password stealers, keyloggers, cryptominers or ransomware scramblers. But it is also very much the start of a whole new attack chain, ready to receive and deploy “updates” or “plugins” – new malware samples that may vary over time, by region, by victim’s computer type, or simply at the whim of the criminals in command-and-control.
  • Consider managed threat response (MTR). If you don’t have the time or expertise to keep track of criminality on or against your network on your own, an MTR service can help you ensure that you chase back any attacks that you do detect to their root cause. Sometimes, this might be a weak password or an unpatched server, but often it’s down to “beachhead” malware like Emotet. If you find and remove only the end of the attack chain, but leave the entry point in place, then the command-and-control crooks behind that beachhead malware will simply sell you out to the next cybergang that’s willing to pay the asking price.

Not enough time or staff? Learn more about Sophos Managed Threat Response:
Sophos MTR – Expert Led Response  ▶
24/7 threat hunting, detection, and response  ▶


DHS warning about hackers in your network? Don’t panic!

Well-known email tracking organisation Spamhaus, which maintains lists of known senders of spams and scams, is warning of a fraudulent “FBI/Homeland Security” alert that has apparently been widely circulated to network administrators and other IT staff in North America.

Indeed, some of our own colleagues have reported receiving messages like this:

Urgent: Threat actor in systems Our intelligence monitoring indicates exfiltration of several of your virtualized clusters in a sophisticated chain attack. We tried to blackhole the transit nodes used by this advanced persistent threat actor, however there is a huge chance he will modify his attack with fastflux technologies, which he proxies trough multiple global accelerators. We identified the threat actor to be [REDACTED], whom is believed to be affiliated with the extortion gang TheDarkOverlord, We highly recommend you to check your systems and IDS monitoring. Beware this threat actor is currently working under inspection of the NCCIC, as we are dependent on some of his intelligence research we can not interfere physically within 4 hours, which could be enough time to cause severe damage to your infrastructure.

Spamhaus suggests that at least some of the recipients’ email addresses have been scraped from already public sources such as databases published by ARIN, the [North] American Registry for Internet Numbers.

Note that this doesn’t imply that ARIN has suffered any sort of breach.

It is merely evidence that the crooks behind this disinformation campaign have focused primarily on email addresses that seem to be associated with network administration, in the same way that contact email addresses picked deliberately from podcast feeds would probably go to people who record or produce podcasts.

Call to distraction

Intriguingly, the fake messages don’t include any attachments, phone numbers or web links, making it unlikely that your email filter would consider them risky because of any so-called calls to action they contain.
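Here's a deliberately simplified Python sketch (our own toy heuristic, not any real email filter's logic) of why a call-to-action rule of that sort would let this hoax sail through:

```python
# A deliberately simplified filter heuristic: flag messages that contain
# the usual "calls to action" (web links, phone numbers, attachments).
# The hoax email has none of these, so a rule like this passes it.
import re

URL_RE = re.compile(r"https?://|www\.", re.IGNORECASE)
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # crude phone-number pattern

def has_call_to_action(body, attachments=()):
    """True if the message asks the reader to click, call, or open something."""
    return bool(attachments) or bool(URL_RE.search(body)) or bool(PHONE_RE.search(body))

hoax = ("Urgent: Threat actor in systems. Our intelligence monitoring "
        "indicates exfiltration of several of your virtualized clusters.")
phish = "Verify your account now at https://example.com/login"

print(has_call_to_action(hoax))   # → False
print(has_call_to_action(phish))  # → True
```

Real filters weigh many more signals than this, of course, but the point stands: a message with no links, phone numbers, or attachments gives a link-based heuristic nothing to bite on.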

But the text in the email consists of a bunch of technobabble that looks scary at first sight, including sentences like this:

Urgent: Threat actors in systems.

Our intelligence monitoring indicates exfiltration of several of your virtualized clusters in a sophisticated chain attack.

We recommend you check your systems and IDS monitoring.

As you can see in the screenshot above, the email also plausibly suggests that US law enforcement and security services can’t currently blocklist or take down the servers being used by the “attackers” for at least four hours, because they need to keep those servers online as part of an intelligence gathering operation.

In other words, you’ve been warned, but you’re on your own, so Do Something At Once.

The rogue messages, redacted above, also explicitly name a perpetrator, claiming that he belongs to the cybercrime clan known as Dark Overlord.

As you probably know, it’s most unlikely – both for operational and legal reasons – that the US authorities would name and shame an alleged perpetrator up front, while active surveillance was still in place, and no charges had been presented to or unsealed by a court.

The person named, as it happens, is a cybersecurity researcher who has published a book entitled Hunting Cyber Criminals, including Dark Overlord.

What to do?

  • Don’t panic. Whatever threat detection and response procedures you have in place, keep on doing them. Unless there is a clear, present, widespread and properly-documented new danger that you genuinely think you are unprepared for, avoid diverting your regular resources from what they are supposed to be doing anyway. Cybercriminals love to create distractions. Setting you off to search for an illusory attack that you are never going to find is a good way for them to trick you into leaving other parts of your infrastructure under-monitored and therefore at heightened risk of compromise.
  • Avoid contacting the FBI for further details. If this were a genuine warning, it would almost certainly be easy to find further details, including Indicators of Compromise (IoCs), without calling the FBI’s or any other US agency’s hotline. Either the government’s own well-known cybersecurity information portals, or cybersecurity community sites (including this one), would have further information by now. Leave those government cybersecurity hotlines open for people who really need them.
  • Ignore the accusations made in the email. If the individual named as the culprit really were in the sights of the Department of Justice (DOJ), and the DOJ were permitted by law to reveal his name as a suspect or a “person of interest”, you would almost certainly be able to read more about the matter on the DOJ’s own website. Creating “revenge havoc” against innocent individuals is known as Joe Jobbing, after an early spam campaign that made false accusations aimed at provoking an angry online reaction to Joe Doll, operator of a 1990s online hangout called Joe’s Cyberpost.

Occasionally, for example if you become aware of a looming ransomware attack in your own network, or if there’s a sudden global cybersecurity issue such as the Heartbleed bug, you may need to divert your cybersecurity experts in order to deal with the emergency.

But don’t let yourself get distracted by Joe Job messages of this sort – “fake news” like this is not only unfair to the people who are accused in it, but also potentially disruptive to your own cybersecurity protection.




Samba update patches plaintext password plundering problem

If you use the venerable Samba open source tool anywhere on your network, you’ll want to read up on the latest update, version 4.15.2.

Samba is the closest pronounceable word to SMB that Andrew Tridgell, who created the project back in the 1990s, could come up with.

SMB, short for Server Message Block, is (or, more precisely, used to be) the general name for Microsoft’s once-proprietary networking protocol, inherited from IBM.

Tridge, as Dr Andrew Tridgell OAM is better known, wanted a way for his Linux computers to be able to join Windows networks, without which the job of exchanging data between Windows and Unix networks required a bunch of messy workarounds.

(There weren’t even USB drives in those days to help with getting data across an airgap – and a typical floppy disk could hold just 1.44MB or even less. Plus, networks were supposed to connect computers, not to segregate them.)

SMB turned into CIFS

Microsoft eventually allowed SMB to become an open standard, which you may know as CIFS, short for Common Internet File System, but the name Samba stuck for the open source implementation.

As you can imagine, SMB, and therefore CIFS, and therefore Samba, have evolved enormously over the years, and some early aspects of SMB have been retired, mainly for security reasons.

More precisely, they’ve been junked by default by everyone, including Microsoft, for insecurity reasons, namely that they were designed and first coded long before we became as serious about cybersecurity as we are today, or at least before cybersecurity became something we are rightly expected to take seriously whether we want to or not.

Microsoft itself notably published an article back in 2019 with the unequivocal title of Stop using SMB1, the first version of the file sharing protocol.

The SMB2 and SMB3 flavours of the protocol are not only much faster and more scalable, but also get rid of a bunch of insecure operating “features” permitted by the ancient SMB1.

In fact, right back in 2017, Microsoft stopped installing SMB1 support by default in Windows 10 v1709 and Windows Server v1709.

If you desperately need SMB1 for legacy reasons (and if you do, why not use this article as the impetus to figure out how to get rid of it at last?), you can add it as a Windows component later on, but by default, it’s not installed and you therefore cannot turn it on, whether by accident or design.

Beware downgrade attacks

One significant reason for making sure you don’t have SMB1 is that it’s vulnerable to manipulator-in-the-middle (MiTM) and downgrade attacks.

That’s where someone monitors the SMB1 traffic on your network, and replies to new users on your network to say, “Oh, really sorry, we’re very old fashioned here. Please don’t send encrypted passwords to log in, use plaintext passwords instead.”

Even if your clients and your servers don’t normally support SMB1, a rogue reply of this sort can trick an otherwise secure client (one that hasn’t been instructed never to comply with requests of this sort) into communicating insecurely…

…and thus allow the attackers to sniff out the plaintext password for later.
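Here's a toy Python model of the downgrade logic (not real SMB packet handling, just the negotiation idea) showing why a client-side protocol floor defeats a forged “SMB1 only” reply:

```python
# A toy model of a downgrade attack: the server's (possibly forged) reply
# claims that only ancient dialects are available, and the client either
# complies or refuses, depending on its configured floor.
# Not real SMB wire format; dialect names are illustrative.
DIALECT_ORDER = ["NT1", "SMB2_02", "SMB3_11"]  # oldest to newest

def negotiate(server_offers, client_min="SMB2_02"):
    """Pick the newest offered dialect that meets the client's floor, or refuse."""
    floor = DIALECT_ORDER.index(client_min)
    acceptable = [d for d in server_offers if DIALECT_ORDER.index(d) >= floor]
    if not acceptable:
        return None  # secure client: abort rather than talk SMB1
    return max(acceptable, key=DIALECT_ORDER.index)

forged_reply = ["NT1"]  # attacker-in-the-middle: "we only speak SMB1 here"

print(negotiate(forged_reply, client_min="NT1"))      # → NT1 (downgraded: plaintext risk)
print(negotiate(forged_reply, client_min="SMB2_02"))  # → None (downgrade refused)
```

The fix for the bug below amounts to making sure the client honours its floor, and the configuration advice amounts to setting that floor sensibly in the first place.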

Of course, once the interlopers know your password, they no longer need to bother with SMB1 at all.

They can use the now-purloined password to log in themselves using SMB2, and thereby connect uncontroversially, without raising any anomalies in your security logs.

Well, one of the bugs fixed in Samba 4.15.2 is dubbed CVE-2016-2124, and it’s described as follows:

An attacker can downgrade a negotiated SMB1 client connection and its capabilities. […] The attacker is able to get the plaintext password sent over the wire even if Kerberos authentication was required.

Before you blame Samba

Before you blame Samba for having had this bug, however, stop to think that you shouldn’t still be using SMB1 at all, and that Samba, like Windows, doesn’t enable it by default.

So you would need a very backward-looking and unusual smb.conf file (Samba’s configuration file for clients and servers) for this bug to have been exploitable in the first place.

In particular, the Samba team note that you would need all of these Samba options set at the same time:

 client NTLMv2 auth = no
 client lanman auth = yes
 client plaintext auth = yes
 client min protocol = NT1   # or lower

The defaults (if you don’t have any entries with these names in your /etc/samba/smb.conf file) are all different, as follows:

 client NTLMv2 auth = yes
 client lanman auth = no
 client plaintext auth = no
 client min protocol = SMB2_02

Notably, plaintext authentication is suppressed by default, meaning that Samba clients won’t generate sniffable network packets containing plaintext passwords in the first place.
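If you want to audit your own configuration, here's a minimal Python sketch (not a full smb.conf parser – it ignores sections and include files, for instance) that applies those safe defaults and flags the dangerous combination:

```python
# A minimal check for the dangerous setting combination: parse
# smb.conf-style "key = value" lines, apply the safe defaults for
# anything unset, and report whether all four risky options line up.
# A sketch only, not a substitute for Samba's own testparm tool.
SAFE_DEFAULTS = {
    "client ntlmv2 auth": "yes",
    "client lanman auth": "no",
    "client plaintext auth": "no",
    "client min protocol": "SMB2_02",
}

RISKY = {
    "client ntlmv2 auth": "no",
    "client lanman auth": "yes",
    "client plaintext auth": "yes",
}

OLD_PROTOCOLS = ("NT1", "LANMAN1", "LANMAN2", "CORE")

def parse_conf(text):
    """Read key = value lines, stripping comments; unset keys get defaults."""
    settings = dict(SAFE_DEFAULTS)
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip().lower()] = value.strip()
    return settings

def is_vulnerable(text):
    s = parse_conf(text)
    old_protocol = s["client min protocol"].upper() in OLD_PROTOCOLS
    return old_protocol and all(s[k].lower() == v for k, v in RISKY.items())

bad = """client NTLMv2 auth = no
client lanman auth = yes
client plaintext auth = yes
client min protocol = NT1  # or lower"""

print(is_vulnerable(bad))  # → True
print(is_vulnerable(""))   # → False (the defaults are safe)
```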

What to do?

  • Stop using SMB1 anywhere. On Windows, uninstall the SMB1 component from Windows computers altogether. For Samba, consider adding an explicit client plaintext auth = no entry to your configuration file to make your intentions clear.
  • Upgrade to Samba 4.15.2. The patches fix a bunch of other CVE-numbered bugs as well. If you are running earlier but still-supported Samba versions, the exact version numbers you want are 4.14.10 or 4.13.14 or later.
  • Plan to review all your authentication, password hashing and protocol settings regularly. Whether it’s deprecated ciphers such as RC4, withdrawn digest algorithms like MD5, dangerous password hashing functions such as LANMAN, or unwanted protocols such as SMB1, don’t simply assume they’ve been removed from your ecosystem. Make a point of checking as a matter of routine.
