
US Department of Justice reignites the Battle to Break Encryption

The US Department of Justice (DOJ), together with government representatives from six other countries, has recently re-ignited the perennial Battle to Break Encryption.

Last weekend, the DOJ put out a press release co-signed by the governments of the UK, Australia, New Zealand, Canada, India and Japan, entitled International Statement: End-To-End Encryption and Public Safety.

You might not have seen the press release (it was put out on Sunday, an unusual day for news releases in the West), but you can almost certainly guess what it says.

Two things, mainly: “think of the children,” and “something needs to be done”.

If you’re a regular reader of Naked Security, you’ll be familiar with the long-running tension that exists in many countries over the use of encryption.

Very often, one part of the public service – the data protection regulator, for instance – will be tasked with encouraging companies to adopt strong encryption in order to protect their customers, guard our privacy, and make life harder for cybercriminals.

Indeed, without strong encryption, technologies that we have come to rely upon, such as e-commerce and teleconferencing, would be unsafe and unusable.

Criminals would be trivially able to hijack financial transactions, for example, and hostile countries would be able to eavesdrop on our business and run off with our trade secrets at will.

Even worse, without a cryptographic property known as “forward secrecy”, determined adversaries can intercept and store your communications today, even though they can’t crack them now, and realistically hope to decrypt them in the future.

Without forward secrecy, a later compromise of your master encryption key might grant the attackers instant retrospective access to their stash of scrambled documents, allowing them to rewind the clock and decrypt old communications at will.

So, modern encryption schemes don’t just encrypt network traffic with your long-term encryption keys, but also mix in what are known as ephemeral keys – one-time encryption secrets for each communication session that are discarded after use.

The theory is that if you didn’t decrypt the communication at the time it was sent, you won’t be able to go back and do so later on.
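To make the idea concrete, here’s a minimal sketch of an ephemeral key exchange in Python, using the third-party cryptography package (the handshake label and variable names are our own illustrative choices, not taken from any particular protocol):

    # Minimal sketch of forward secrecy via ephemeral keys (requires the
    # third-party "cryptography" package). Each side generates a throwaway
    # X25519 key pair, derives a shared session key, then discards the
    # ephemeral private keys so the session key can't be recomputed later.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each party makes a fresh, one-time key pair for this session only
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    # They swap public keys and each compute the same shared secret
    alice_shared = alice_eph.exchange(bob_eph.public_key())
    bob_shared = bob_eph.exchange(alice_eph.public_key())
    assert alice_shared == bob_shared

    # Stretch the raw secret into a one-time symmetric session key
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo session",
    ).derive(alice_shared)

    # Discard the ephemeral private keys; a later compromise of either
    # party's long-term keys won't let an attacker recompute session_key
    # from recorded traffic.
    del alice_eph, bob_eph

Because a fresh key pair is generated and thrown away for every session, stealing a long-term key later doesn’t unlock yesterday’s traffic.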

Unfortunately, forward secrecy still isn’t as widely supported by websites, or as widely enforced, as you might expect. Many servers still accept connections that reuse long-term encryption keys, presumably because a significant minority of their visitors are using old browsers that don’t support forward secrecy, or don’t ask to use it.

Similarly, we increasingly rely upon what is known as “end-to-end encryption”, where data is encrypted for the sole use of its final recipient and is only ever passed along its journey in a fully scrambled and tamper-proof form.

Even if the message is created by a proprietary app that sends it through a specific provider’s cloud service, the company that operates the service doesn’t get the decryption key for the message.

That means that the service provider can’t decrypt the message as it passes through their servers, or if it is stored there for later – not for their own reasons; not if they’re told to; and not even if you yourself beg them to recover it for you because you’ve lost the original copy.
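To illustrate the principle, here’s a minimal sketch of end-to-end encryption in Python using the third-party PyNaCl library (the message and variable names are purely illustrative):

    # Minimal sketch of end-to-end encryption (requires the third-party
    # PyNaCl library). Only the recipient's private key can decrypt; anyone
    # relaying the ciphertext, including a service provider, sees only
    # scrambled bytes.
    from nacl.public import PrivateKey, SealedBox

    recipient_private = PrivateKey.generate()         # stays on the recipient's device
    recipient_public = recipient_private.public_key   # shared with would-be senders

    # The sender encrypts directly to the recipient's public key
    ciphertext = SealedBox(recipient_public).encrypt(b"meet at noon")

    # Anything in between, the provider's servers included, only ever
    # handles 'ciphertext'

    # Only the holder of the private key can unseal it
    plaintext = SealedBox(recipient_private).decrypt(ciphertext)
    assert plaintext == b"meet at noon"

Note that the relaying service never holds recipient_private, which is exactly why it cannot comply with a request to decrypt, however politely (or forcefully) that request is made.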

Without end-to-end encryption, a determined adversary could eavesdrop on your messages by doing the digital equivalent of steaming them open along the way, copying the contents, and then resealing them in an identical-looking envelope before passing them along the line.

They’d still be encrypted when they got to you, but you wouldn’t be sure whether they’d been decrypted and re-encrypted along the way.

The other side of the coin

At the same time, another part of the government will be arguing that strong encryption plays into the hands of terrorists and criminals – especially child abusers – because, well, because strong encryption is too strong, and gets in the way even of reasonable, lawful, court-approved surveillance and evidence collection.

As a result, justice departments, law enforcement agencies and politicians often come out swinging, demanding that we switch to encryption systems that are weak enough that they can crack into the communications and the stored data of cybercriminals if they really need to.

After all, if crooks and terrorists can communicate and exchange data in a way that is essentially uncrackable, say law enforcers, how will we ever be able to get enough evidence to investigate criminals and convict them after something bad has taken place?

Even worse, we won’t be able to collect enough proactive evidence – intelligence, in the jargon – to stop criminals while they are still at the conspiracy stage, and therefore crimes will become easier and easier to plan, and harder and harder to prevent.

These are, of course, reasonable concerns, and can’t simply be dismissed out of hand.

As the DOJ press release puts it:

[T]here is increasing consensus across governments and international institutions that action must be taken: while encryption is vital and privacy and cyber security must be protected, that should not come at the expense of wholly precluding law enforcement, and the tech industry itself, from being able to act against the most serious illegal content and activity online.

After all, in countries such as the UK and the US, the criminal justice system is largely based on an adversarial process that starts with the presumption of a defendant’s innocence, and convictions depend not merely on evidence that is credible and highly likely to be correct, but on being sure “beyond reasonable doubt”.

But how can you come up with the required level of proof if criminals can routinely and easily hide the evidence in plain sight, and laugh in the face of court warrants that allow that evidence to be seized and searched?

How can you ever establish that X said Y to Z, or that A planned to meet B at C, if every popular messaging system implements end-to-end encryption, so that service providers simply cannot intercept or decode any messages, even if a court warrant issued in a scrupulously fair way requires them to do so?

Meet in the middle?

Impasse.

We can’t weaken our current encryption systems if we want to stay ahead of cybercriminals and nation-state enemies; in fact, we need to keep strengthening and improving the encryption we have, because (as cryptographers like to say), “attacks only ever get better.”

But we’re also told that we need to weaken our encryption systems if we want to be able to detect and prevent the criminals and nation-state enemies in our midst.

The dilemma here should be obvious: if we weaken our encryption systems on purpose to make it easier and easier to catch someone, we simultaneously make it easier and easier for anyone to prey successfully on everyone.

O, what a tangled web we weave!

There’s an additional issue here caused by the fact that “uncrackable” end-to-end encryption is now freely available to anyone who cares to use it – for example, in the form of globally available open source software. Therefore, compelling law-abiding citizens to use weakened encryption would make things even better for the crooks, who are not law-abiding citizens in the first place and are unlikely to comply with any “weak crypto” laws anyway.

What to do?

Governments typically propose a range of systems to “solve” the strong encryption problem, such as:

  • Master keys that will unlock any message. The master keys would be kept secret and their use guarded by a strict legal process of warrants. In an emergency, the judiciary could access a specific message and reveal only that message to investigators.
  • Sneakily engineered encryption flaws. If covertly designed in from the start, these would be known to the intelligence services but unlikely to be found or exploited from first principles by cryptographic researchers. In an emergency, this might give the state a fighting chance of cracking specific vital messages, while leaving the rest of us without enough computing power to make much headway against each other.
  • Message escrow with a trusted third party. Every message that’s end-to-end encrypted would effectively be sent twice: once to the intended recipient, and once to a trusted store where it would be kept for a defined period in case of a search warrant.
  • Interception triggers built into end-user apps. The apps at each end of an end-to-end encrypted message must, of necessity, have access to the unencrypted data, either to encrypt it in the first place or to decrypt it for display. By special command, the app could be forced to intercept individual messages and send them to an escrow system.

The problem with all these solutions is that they can all be considered variations on the “master key” theme.

Endpoint interception only when it’s needed is just a specialised, once-in-a-while case of general message escrow; message escrow is just a specialised case of a master key; and a deliberate cryptographic flaw is just a complicated sort of master key wrapped up in the algorithm itself.
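One way to see the equivalence: in an escrow-style design, every message ends up encrypted not only to its intended recipient but also to the escrow key, and whoever holds the escrow private key can read the lot. Here’s a minimal sketch in Python, again using the third-party PyNaCl library, with all names purely illustrative:

    # Minimal sketch of why message escrow behaves like a master key
    # (requires the third-party PyNaCl library; names are illustrative).
    from nacl.public import PrivateKey, SealedBox

    recipient = PrivateKey.generate()
    escrow = PrivateKey.generate()    # the "trusted third party"

    message = b"end-to-end encrypted... in theory"

    # The message is sealed twice: once for the intended recipient,
    # and once for the escrow key "just in case"
    for_recipient = SealedBox(recipient.public_key).encrypt(message)
    for_escrow = SealedBox(escrow.public_key).encrypt(message)

    # Whoever obtains the escrow private key, lawfully or otherwise,
    # can read every message in the store - which is exactly the
    # property of a master key.
    assert SealedBox(escrow).decrypt(for_escrow) == message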

They all open up a glaring threat, namely, “What happens when the Bad Guys uncover the secrets behind the message cracking system?”

Simply put: how on earth do you keep the master key safe, and how do you decide who gets to use it anyway?

The DOJ seems to think that it can find a Holy Grail for lawful interception, or at least expects the private sector to come up with one:

We challenge the assertion that public safety cannot be protected without compromising privacy or cyber security. We strongly believe that approaches protecting each of these important values are possible and strive to work with industry to collaborate on mutually agreeable solutions.

We’d love to think that this is possible, but – in case you were wondering – we’re sticking to what we call our #nobackdoors principles:

[At Sophos,] our ethos and development practices prohibit “backdoors” or any other means of compromising the strength of any of our products – network, endpoint or cloud security – for any purpose, and we vigorously oppose any law that would compel Sophos (or any other technology supplier) to intentionally weaken the security of its products.

Where do you stand in this perennial debate?

Have your say in the comments below. (If you omit your name, you will default to being “Anonymous”.)


Windows “Ping of Death” bug revealed – patch now!

Every time that critical patches come out for any operating system, device or app that we think you might be using, you can predict in advance what we’re going to say.

Patch early, patch often.

After all, why risk letting the crooks sneak in front of you when you could take a resolute stride ahead of them?

Well, this month, the Offensive Security team at SophosLabs (that’s offensive as in the opposite of defensive, by the way, not as in the opposite of polite; and it’s the security that’s offensive anyway, not the team) has come up with some even more compelling “patch now” advice.

It’s in the form of a short video, and it shows an unpatched Windows 10 computer being crashed at will across the network by a simple bug-tripping Python script:

[embedded content]

If the person running the script can aim a specially crafted IPv6 network packet at your computer – specifically, a booby-trapped ICMP packet – then they can bring you down without warning.

You see a Blue Screen of Death (BSoD), and any work you hadn’t saved is lost, probably forever.

ICMP is short for Internet Control Message Protocol, and it’s a low-level type of network packet that’s much simpler than setting up a regular TCP connection, and even simpler than UDP.

The best known sort of ICMP message is probably a ping packet, generated by the ping utility that exists on almost every operating system.

You ping a computer by its IP address and if it gets the packet, it sends a reply – a pong packet, if you like.

Pinging checks whether you can communicate with another device at all, as a basic but useful starting point for network diagnostics.

Loosely speaking, if someone can ping your unpatched Windows 10 or Windows Server 2019 computer from theirs, they can probably crash you with this bug.
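If you want to try pinging for yourself, here’s a minimal sketch that sends a few echo requests from Python by calling the system ping utility (the flags shown are for Linux and macOS – Windows uses -n for the count – and the target address is purely illustrative):

    # Minimal sketch: send three ICMP echo requests ("pings") by calling
    # the system ping utility. Flags are for Linux/macOS; Windows uses -n
    # instead of -c for the packet count. The target is illustrative.
    import subprocess

    target = "example.com"
    result = subprocess.run(["ping", "-c", "3", target])

    if result.returncode == 0:
        print(f"{target} answered - basic connectivity looks OK")
    else:
        print(f"no reply from {target} - down, unreachable, or blocking ICMP")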

We’re not going to go into any detail here – and even in the SophosLabs report our experts have avoided giving away enough for you to start exploiting this vulnerability at will – but what you need to know is that this bug is denoted CVE-2020-16898.

The bug was discovered in a Windows component called TCPIP.SYS, and as the filename suggests, this isn’t just any old program.

TCPIP.SYS is a kernel driver, meaning that if you trigger this bug, you are exploiting a vulnerability inside the kernel itself, which is the very core of any running Windows system.

That’s why the system crashes with a BSoD rather than just shutting down one application with an error while leaving everything else running.

After all, shutting down the kernel means that there is no “anything else” to keep running, given that it’s the kernel that controls everything else.

So, a kernel crash, also known as a panic in Unix jargon, forces a total shutdown, typically followed by an automatic reboot.

Interestingly, the bug you see triggered in the video above, which provokes the BSoD, is caused by a buffer overflow.

TCPIP.SYS doesn’t correctly check the size of one of the data fields that can optionally appear in IPv6 ICMP packets, so you can shove too much data at it and corrupt the system stack.

Bang! Down it goes.

Two decades ago, almost any stack-based buffer overflow on Windows could be used not only to crash a system, but also, with a bit of care and planning, to take over the processor’s flow of execution and divert it into a program fragment – known as shellcode – of your own choosing.

In other words, Windows stack overflows in networking software almost always used to lead to so-called remote code execution exploits, where attackers could trigger the bug from afar with specially crafted network traffic, run code of their own choosing, and thereby inject malware without you even being aware.

But numerous security improvements in Windows, from Windows XP SP3 onwards, have made stack overflows harder and harder to exploit, and these days they can often only be used to force crashes, not to take over completely.

Nevertheless, a malcontent on your network who could crash any computers at will, servers and laptops alike, could cause plenty of harm just through what’s known as a denial of service attack, especially because recovering from each crash requires a complete reboot.

In theory, of course, a determined crook might be able to figure out how to exploit CVE-2020-16898 to take over a remote computer, not merely to crash it, which is why Microsoft has classified this bug as critical, given it a severity rating of 9.8 (out of 10), and flagged it with an exploitability assessment of 1, short for “exploitation more likely”.

Slightly annoyingly, severity ratings get worse on a scale from 0 up to 10, while exploitability assessments get worse on a scale from 3 down to 0, where 0 means “is already being exploited, so you are already in direct danger” and 3 means “this bug will probably amount to very little”. A value of 1 means that even if the bug turns out to be very hard to exploit, you can expect attackers to try really hard at it, because previous bugs of this sort have been exploited successfully.

In other words, even though CVE-2020-16898 hasn’t been turned into a working attack yet, you should patch right now, because you can bet your boots that cybercriminals are working on it.

In the vaguely militaristic jargon of cybersecurity research, this means that someone, somewhere, is trying to weaponise this bug right now.

For an explanation of why modern versions of Windows aren’t easy to exploit using this flaw, and for a justification of why our own Offensive Security Team thinks it’s unlikely – but not impossible! – that anyone will succeed, please read the SophosLabs report.

What to do?

As we’ve said, you need to patch.

Although an exploit may never be found, it’s a fair bet that any working exploit that does turn up will be what’s called wormable, meaning that it could be used not only to break into your computer from someone else’s, but then also to break in to a third person’s computer automatically from yours.

That would allow the bug to be used to create a self-contained, self-replicating computer virus, or worm, that could spread both far and fast, entirely without human intervention.

If you genuinely can’t patch yet, there are two workarounds:

  • Turn off IPv6 in Windows. This is only an option if you have a pure IPv4 network.
  • Turn off the buggy ICMP feature in Windows, known as IPv6 ICMP RDNSS (short for Recursive DNS Server).

Instructions for turning ICMP RDNSS off (and back on after you have patched) can be found on Microsoft’s CVE-2020-16898 advisory page.
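For what it’s worth, the workaround Microsoft documented at the time was a one-line netsh command along these lines – treat this as a reminder of the general form only, check the advisory itself for the authoritative syntax, and substitute your own interface number:

    netsh int ipv6 set int *INTERFACENUMBER* rabaseddnsconfig=disable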


Creepy covert camera “feature” found in popular smartwatch for kids

If you nearly didn’t read this article because you thought the headline sounded unsurprising, like “Dinosaurs Still Extinct” or “Sun to Rise in East”…

…then be aware that we nearly didn’t write it for the same reason.

Bugs and vulnerabilities in built-down-to-a-price devices made for kids are, very sadly, not a new or even an unusual problem.

However, according to the Norwegian cybersecurity researchers who analysed the XPLORA 4 watch described below, the company that sells it claims to have close to half a million users, and annual revenues approaching $10,000,000.

So it seems that writing up smartwatch security blunders is still important, because these devices are steady sellers despite their sometimes worrying cybersecurity history.

You can see why kids’ smartwatches are popular.

Getting your first watch after learning to tell the time is still a delightful childhood rite of passage, at least in countries where watches are affordable.

And a smartwatch helps parents kill three birds with one stone.

Firstly, for the kids, it’s a watch! You can tell the time all by yourself! It’s a fashion statement! (And it means you no longer have an excuse to be late any more, but that’s a realisation that only dawns after you’ve adopted the watch into your own lifestyle and aren’t going to give it up lightly.)

Secondly, it’s not just an old-fashioned watch, because it’s cool and modern enough to slot right into our contemporary, connected world.

Thirdly, for the parents, it’s an emergency lifeline that their own parents never had but would probably have welcomed if they had.

Smartwatches can help you keep track of your kids, so the chance that you’ll be frantic with worry when your children don’t show up on time is greatly reduced.

The creepiness factor

Unfortunately, smartwatches that can track your kids have a creepiness factor that the wind-up wristwatches and simple battery-powered LCD timepieces of yesteryear simply didn’t have.

That’s because most smartwatches keep track of where you are via some sort of internet service that requires always-on (or almost-always-on) network access.

This means that even budget smartwatches usually include mobile phone connectivity; that they often run a full-blown, albeit stripped-down, mobile phone operating system such as Android; and that they regularly make network connections that permit two-way communications.

Those network connections can be used not only to upload and store tracking data to the vendor’s cloud servers, but also to download updates and commands.

So there’s a lot that can go wrong, even in a children’s smartwatch programmed with the most noble aims, that could put the privacy of your kids and your family at needless risk.

The irony of buying a watch to improve your child’s safety only to find that it simultaneously reduces their security is not lost on the researchers who wrote up the findings we’ll be covering here.

Harrison Sand and Erlend Leiknes of mnemonic, a Norwegian cyberthreat response company, worked with the Norwegian Consumer Council on cybersecurity in smartwatches back in 2017 for a report ultimately entitled #WatchOut – Analysis of smartwatches for children, so they’ve been there before.

This year, they decided to revisit the latest model of one of the smartwatch brands they looked at last time:

Since the [2017 report], [Norwegian smartwatch vendor] Xplora is emerging as one of the leaders in their geographical markets, and expanding into new territories. With this in mind, we thought it was a worthwhile endeavor to look at their updated model, the XPLORA 4. In our previous assessment, our scope and focus was limited to the communication between the watch and the local servers, and the parental application. This time around [we] decided to take a deeper look at the watch itself.

We’re not going to explain in detail how the researchers performed their task – for that we recommend you read their report yourself.

It’s very well-written – it’s non-technical enough that you don’t need to be an experienced reverse engineer or Android coder to follow what they did, but it has sufficient detail to act as an excellent guide if you would like to get started in Android cybersecurity spelunking yourself.

What they found

In brief, the researchers:

  • Rigged up a USB cable that would connect to the proprietary connector on the watch.
  • Used an already-available software tool to download the firmware from the device.
  • Modified the firmware to enable root-level (administrator) access over USB.
  • Uploaded the modified firmware back to the device.
  • Connected using ADB (short for Android Debug Bridge), the standard tool for USB access.
  • Got a root shell on the device.

Root, as you probably know, is pretty much to Linux and Android what Administrator and SYSTEM rolled into one would be for Windows.

With a root-level command prompt, the researchers were able to explore the operating system and Android apps on the watch, and quickly discovered an app called Persistent Connection Service.

This app seemed, amongst other things, to be some sort of debugging or system monitoring process that automatically kept track of which programs were running and what control messages they would accept.

Handily, the Persistent Connection Service wrote a debug trail that identified these processes and their supported control messages, so the researchers quickly noticed numerous fascinating lines in the debug log, which is helpfully sent over the USB connection via adb.
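If you’d like to poke around in a similar way on a rooted Android device of your own, the debug log can be followed over USB with the standard adb tooling – the grep filter below is simply an illustrative search for the package prefix that appears in the excerpt that follows, not a command taken from the report:

    # Confirm the device is visible over USB, then follow its debug log,
    # keeping only lines that mention the suspicious package names.
    adb devices
    adb logcat | grep -i qihoo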

Example debugging output shown in the paper identifies apps and control messages (known in Android as Intents) with highly suspicious names, such as:

 134=com.qihoo.kidwatch.action.COMMAND_LOG_UPLOAD
 165=com.qihoo.kidwatch.action.REMOTE_EXE_CMD
 126=com.qihoo.watch.action.WIRETAP_INCOMING
 317=com.qihoo.watch.action.WIRETAP_BY_CALL_BACK
 320=com.qihoo.watch.action.REMOTE_SNAPSHOT
 303=com.qihoo.kids.smartlocation.action.SEND_SMS_LOCATION

Further analysis revealed that this connection service would itself accept incoming messages via SMS, and use the content of those messages to trigger one of the abovementioned control messages.

Not just any incoming SMS would do, though – the researchers discovered that these “metamessages” (the jargon word we’re using here to describe control messages used to send other control messages) were encrypted with a secret encryption key shared between the manufacturer or vendor and the phone.

Apparently, the secret key is unique to each device, and is programmed into the NVRAM of the watch, either before it’s shipped or when it is made. (NVRAM refers to computer memory that is non-volatile, so it keeps its contents even when the battery goes flat.)

So, the researchers picked one of the more juicy-sounding secret commands shown above, REMOTE_SNAPSHOT; read the encryption key out of the watch’s NVRAM; constructed their own encrypted control message; and SMSed it to the phone number of the SIM card in the watch.

Bingo!

With no visible or audible feedback, the watch snapped a covert picture with its built-in camera and uploaded the image to the vendor’s cloud servers without waiting for any sort of approval.

The researchers didn’t investigate any further, presumably quite reasonably thinking that their point was already well proved, and assuming that the other control messages they discovered probably did exactly what their names suggested, too.

As they succinctly put it:

[I]n short – an encrypted SMS can be sent to the watch to trigger the surveillance functions.

What next?

Apparently, the researchers supplied the vendor, Xplora, with their findings and Xplora creditably came up with a security patch that was pushed out before the researchers went live with their report.

We couldn’t find any mention of the report or the patch on Xplora’s website or blog, however, so we are relying here on our friends over at Ars Technica, who quoted from a statement they received from Xplora:

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Xplora also pointed out that any attacker not affiliated with the manufacturer or the vendor of the watch would need physical access to it in order to read out the phone’s copy of the encryption key from NVRAM, plus the phone number from the SIM card, before they could target any specific device.

We haven’t seen the code that handles the encryption, but the mnemonic analysis describes it as using the RC4 cipher, an encryption algorithm with serious flaws that should no longer be used at all, with a 32-bit key, which is well short of current guidelines. So further analysis might have shown that only the watch’s phone number was truly needed for a well-researched attack and that the encryption key could be computed, guessed or inferred.
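To put a 32-bit key in perspective, here’s a back-of-envelope sketch in Python – the guessing rate is an assumption for illustration, not a measurement from the report:

    # Why a 32-bit key is inadequate: the whole keyspace can be searched.
    # The guesses-per-second figure is an illustrative assumption.
    keyspace = 2 ** 32                  # about 4.3 billion possible keys
    guesses_per_second = 10_000_000     # assumed rate for one modern machine

    minutes = keyspace / guesses_per_second / 60
    print(f"{keyspace:,} keys at {guesses_per_second:,}/s -> about {minutes:.0f} minutes")
    # prints: 4,294,967,296 keys at 10,000,000/s -> about 7 minutes

Compare that with the 128-bit or 256-bit keys used in modern ciphers, where trying every key simply isn’t feasible.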

What to do?

The question “What to do?” is much trickier than usual in a case like this.

Usually, we’d reply, “Patch early, patch often,” but in this case, some parents might not think that goes far enough.

After all, there are still two important and apparently unanswered questions:

  • Were the other remote commands removed too, such as the worryingly named REMOTE_EXE_CMD and WIRETAP_INCOMING?
  • How did those features come to be included in the manufacturer’s firmware in the first place?

Or, as mnemonic punningly asked, “This is [a set of Intents] that has been created with intent. What exactly is that intent?”

Unfortunately, if you own a smart device, especially one intended for children, that turns out to have such baked-in-from-the-start flaws that you’re no longer sure you want to trust it, you may have little choice but to stop using it altogether, no matter how much it cost in the first place.

Oh, and if you’re a programmer or a software designer, don’t add in undocumented security “features” just because you think they might be useful in the future.

And don’t use outdated and inadequate cryptography just because you think no one will notice.


Microsoft on the counterattack! Trickbot malware network takes a hit

Good news, for a while at least.

Microsoft went to US District Court for the greater good of all of us and came away with a court order permitting it to take over a whole raft of internet servers.

The company was authorised to take over a wide range of IP numbers, effectively ripping them out from under their existing users and repurposing them for use by Microsoft itself.

As you can imagine, the courts don’t take decisions like this lightly, especially if those IP numbers were allocated in good faith for another company to operate its business.

After all, the IP-ripper would be able to shut down some or all of the operations of the IP-rippee just like that, by “blackholing” all the servers so they appeared to have vanished from the internet.

Even more seriously, the ripper could do some sort of selective blackholing, pretending to keep the servers alive to see who came along, potentially reading other people’s emails, taking over login pages, replacing the rippee’s software downloads with competing products, and more.

But these weren’t IP numbers that were being used in good faith by a legitimate business.

These were servers that Microsoft had tied back to the operation of a large, long-lived and destructive zombie network known as Trickbot.

Trickbot in the spotlight

Sadly, we’ve had to write about Trickbot many times over the years, as the criminals behind the operation have spammed out wave after wave of deviously constructed emails under a wide variety of guises, all with the ultimate goal of infecting as many victims as possible with zombie malware.

In March 2020, for example, these crooks sent out a sea of messages tapping into early fears about the coronavirus pandemic, falsely telling recipients in Italy that “[b]ecause there are documented infections in your area […] we strongly recommend that you read the document attached to this message!”

In June 2020, the crooks hooked into the momentum of the Black Lives Matter movement, sending out an apparently innocent-looking but malware-infected email.

Sneakily, these crooks were smart enough not to pick a side, not to pile on any pressure, and not to play on emotions such as guilt or fear.

Instead they appealed to anyone and everyone by simply inviting you to have your say by completing a survey in the document and submitting it anonymously.

Except that it wasn’t a survey but a trick to run a program embedded in the document to implant the malware on your computer.

The crooks even went as far as to pretend to be helpful, urging you to download an Office update in the background while having your say, and politely warning you that you might incur internet charges if you downloaded the “update” on a metered connection.

Some Office update!

If you proceeded with the “download”, you’d end up co-opted into Trickbot’s zombie network, also known as a botnet (short for robot network, thus the name bot for the malware part), and you’d end up with malware running in the background on your computer.

This zombie malware would regularly “call home” to one or more of the Trickbot servers for instructions on what sort of cybercriminality to indulge in next.

It usually ended in ransomware

As if that weren’t bad enough already, one of the remote commands that computers infected with the Trickbot zombie could receive from their overlords was an instruction to download and launch yet another piece of malware.

For many victims, that command would be “infect yourself with this ransomware and prepare to have all your data scrambled on demand”.

So Trickbot infections often added a destructive insult to an already costly injury, typically ending up in a Ryuk ransomware attack.

Ryuk, named after a character in the manga series Death Note, has similarly been around for several years, and the malware crew behind it were early and enthusiastic adopters of what are now called “human-led” attacks that typically end up in extortion demands that run into millions of dollars.

By “human-led”, we mean that the ransomware isn’t just left to its own devices to spread and infect once the crooks have a foothold inside your network.

Instead, the criminals take oversight of both the network and the malware, typically ending up with a detailed understanding of your IT systems that they use to find and wipe out your backups, make sure your top-value servers are on the destruction list as well as your laptops, and pick the nastiest possible time to attack.

What will Microsoft’s takedown achieve?

At this point, you might be wondering what a network takedown could possibly do to rein in the operation of a combined zombie/ransomware cybercrime gang of this sort.

Well, as Microsoft puts it:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

Simply put, the idea is to do two things:

  • Prevent the download and deployment of Trickbot or any other malware in the first place, thus limiting the number of new infections.
  • Prevent the transfer of malicious commands to any computer that’s already infected, thus leaving it high and dry and unable to do any more damage.

The jargon term for the set of computers disrupted by Microsoft is a C2 network, or C&C network, where C&C is short for command-and-control.

In theory, then, this takedown will greatly reduce the ability of the crooks to get malware onto your computer to start with, and also limit their ability to take over any infected computers even if they are able to zombify them in the first place.

It’s a bit like squashing a gun-running operation by shutting down the supply lines by which the weapons were distributed, and reducing the impact of any guns already in circulation by leaving them with no ammunition to fire in any case.

Is that the end of Trickbot and Ryuk?

Sadly, disruptions of this sort don’t solve the problem at source, not least because the criminals themselves still haven’t been identified and arrested.

But even if the crooks aren’t finished yet, neither is Microsoft:

We fully anticipate Trickbot’s operators will make efforts to revive their operations, and we will work with our partners to monitor their activities and take additional legal and technical steps to stop them.

Last week, we said a big thanks to ransomware victims who, despite being in deep water themselves, nevertheless refused to pay up because they knew that doing so would directly fund future ransomware attacks.

Today, we’re saying thanks to Microsoft for all the effort behind a takedown of this size.


Naked Security Live – Cybersecurity tips for your own network

We do a show on Facebook every week in our Naked Security Live video series, where we discuss one of the big security concerns of the week.

We’d love you to join in if you can – just keep an eye on the @NakedSecurity Twitter feed or check our Facebook page on Fridays to find out the time. (Note that you don’t need a Facebook account to watch our live streams, although you will need to log in if you want to ask questions or post comments.)

It’s usually somewhere between 18:00 and 19:00 UK time, which is early afternoon/late morning on the East/West coast of North America.

For those of you who [a] don’t use Facebook, [b] had buffering problems while we were live, [c] would like subtitles, or [d] simply want to catch up later, we also upload the recorded videos to our YouTube channel.

Here’s the latest video – we give you 8 tips for better security on your work-from-home network:

[embedded content]

(Watch directly on YouTube if the video won’t play here.)

Thanks for watching… hope to see you online later this week!

