
Blast from the past! Windows XP source code allegedly leaked online

We saw it in a tweet. How about you?

If the reports are to be believed, someone has just leaked a mega-torrent (pun intended – allegedly some of the files have also been uploaded to Kiwi file-sharing service Mega) of Microsoft source code going all the way back to MS-DOS 6.

Zooming in on the image in the abovementioned tweet reveals some interesting artifacts:

 OS from filename            Alleged source size (bytes)
 --------------------------  ---------------------------
 MS-DOS 6                                     10,600,000
 NT 3.5                                      101,700,000
 NT 4                                        106,200,000
 Windows 2000                                122,300,000
 NT 5                                      2,360,000,000

Take these numbers with a pinch of salt, of course – not only are these stolen files, we can’t tell you how complete they are or even if they can be compared at all.

NT 5, for example, covers a whole range of different Windows flavours, officially listed by Microsoft as shown below, so it seems likely that the NT 5 archive includes everything already in the Windows 2000 archive shown above, and more:

 OS name                      Version number
 ---------------------------  --------------
 Windows 2000                 5.0
 Windows XP                   5.1
 Windows XP 64-Bit Edition    5.2
 Windows Server 2003          5.2
 Windows Server 2003 R2       5.2

In case you are wondering, Vista was 6.0, and Microsoft stuck with 6.x for Server 2008, Server 2012 and even, rather weirdly, for the versions outwardly known numerically as Windows 7 and 8.

There was no Windows 9, of course, though we were never exactly sure why, and for Windows 10 the version number took a logical leap forward to 10.0 to match up with the product name. (Server 2016 and Server 2019 are also considered 10.0 releases.)

Intriguingly, Microsoft has officially released old-school source code before, such as when the source of MS-DOS 1.25 (1982) and Word 1.1a (1984) was made public a few years back.

And in recent years, the company has officially and enthusiastically embraced open source for some of its projects, which is where you not only show people your code but also allow them to use it for themselves freely.

But a full and publicly accessible dump of Windows XP source code has never, to our knowledge, happened before.

What does this mean?

The rumour mill suggests that much if not most of this code has been quietly circulating in underground forums for years.

It certainly seems unlikely that a lone hacker suddenly acquired all these files at once, directly from a previously-unknown “mother lode” archive at Microsoft itself.

But as far as we know this is the first claimed Windows XP source code that’s appeared as an all-in-one, come-and-get-it-one-and-all mega-dump.

(We have seen rumours that if you try to compile the 64-bit version of XP you’ll end up with errors caused by missing files, so the archive may not be complete even now.)

So, is this leak a security disaster that will inevitably lead to a raft of new exploits using bugs in XP that are still there in today’s versions of Windows?

Or is it just a mostly harmless, if unlawful, opportunity to take a trip down memory lane, assuming that you have the time, inclination and bandwidth to torrent gigabytes of stolen stuff?

We suspect that it’s more of the latter.

All things being equal, serious security flaws are easier to find if you have commented source code in front of you as well as the compiled binaries.

That’s because you have more to go on if you are on the lookout for flaws such as buffer overflows, dangling pointers (where you free up memory but then carry on using it anyway even after it’s been handed over to someone else), integer arithmetic errors or cryptographic blunders.

Sometimes, the comments alone can help you zoom in on problematic code, especially if you come across snippets where a programmer has left behind reminders such as /* FIX ME!!! */ or // Asked original author but they can't remember if it ever worked properly.

However, experienced bug hunters can find vulnerabilities and exploits without needing source code, as the number of bugs patched each month in products such as Windows and the closed-source parts of macOS, iOS and Android remind us only too well.

Also, even though latent bugs in XP sometimes turn out to have been carried forward all the way to today’s versions of Windows, those bugs are less and less likely still to be exploitable thanks to huge security changes in the core of Windows.

For example, before Windows XP Service Pack 3, pretty much any stack buffer overflow vulnerability you could find in any widely-used Windows application was enough to produce an exploit.

There was no data execution prevention, no address space layout randomisation and almost no other mitigations in place that could soften the blow of a buffer overwrite.

If you could overwrite the return address of the current function, you wouldn’t just have a way to crash the running program – you could as good as guarantee to turn the bug into a working remote code execution exploit.

What to do?

If you’re interested in programming, cybersecurity or the history of technology – or even just curious to know what sort of comments Microsoft programmers were apt to write back in the more carefree coding days at the start of the 21st century…

…it’s tempting to take a look.

But unless you really need to look, we’d suggest that you simply let this one go, especially if your interest is just a passing, casual one.

After all, you can’t unsee the code after you’ve looked at it.

And if ever you are in a position where you need to show that you couldn’t possibly have copied someone else’s proprietary code, even if you’d wanted to, you’ll find that harder to do if the other side can show that you definitely did look at it at some point.

Oh, and if you’re a programmer, even if you only ever work on proprietary code that you expect to remain a trade secret for evermore, remember that your code is a little bit like a professional resume:

  • Don’t be sloppy.
  • Say what you mean.
  • Mean what you say.
  • You’re probably not quite as witty as you think.

Firstly, you’ll write better code if you keep to these rules.

Secondly, those comments that seemed so wry and amusingly edgy back in the day might not age as well as you thought.

Write your code as though your mum were going to see it, and test it as though she were going to depend upon it.

Because, for all you know, she might very well end up doing both.


SMS phishing scam pretends to be Apple “chatbot” – don’t fall for it!

Aren’t SMSes dead? Aren’t they just plain old text anyway? Surely they’re of no interest to cybercriminals any more?

Well, SMSes aren’t dead at all – they’re still widely used because of their simplicity and convenience.

Indeed, as a general-purpose short message service – which is literally what the letters SMS stand for – it’s hard to beat, because any phone can receive text messages, from the fanciest smartphone to the cheapest pre-paid mobile.

If all you need to transmit is a 6-digit logon code or a “pizza driver now 2 minutes away” notification, SMSes still make excellent business sense.

Sadly, however, what works for legitimate businesses almost always works for cybercriminals too, so there are plenty of crooks still using SMSes for phishing – an attack that’s wryly known as smishing.

You can see why SMSes work for crooks.

With just 160 characters per message, it’s easy for them to avoid the grammatical and stylistic blunders that they often make when they’re forced to produce longer-format email messages in a language they don’t speak well.

Better yet, business SMSes generally use URL shorteners to save space, giving the criminals an excuse to do the same.

URL shorteners convert lengthy but meaningful web addresses such as https://brandname.​example.com/​pizza-order.html?​lang=en-US into a compressed but cryptic format such as https://xx.test/ABXt that frees up characters for the rest of the SMS, but disguises where the link is going to end up.

Hovering over a shortened link doesn’t help, because the short link itself is the website you’ll actually visit first. The link-shortening site uses the characters after the website name (ABXt in our made-up example above) as an index to look up the real destination, and then sends back an HTTP 301 Moved Permanently reply to tell your browser where to go next. In other words, you need to click through to the shortening site before you find out where you are supposed to end up.
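If you’re curious where a shortened link actually leads, you can peek at that 301 reply without visiting the final destination. Here’s a minimal sketch using Python and the requests library; the short URL is the made-up placeholder from above, not a real shortening service:

 # Ask the shortening site where it would send us, without following
 # the redirect. (The URL is a made-up placeholder, as above.)
 import requests

 reply = requests.head("https://xx.test/ABXt", allow_redirects=False)

 print(reply.status_code)               # e.g. 301 (Moved Permanently)
 print(reply.headers.get("Location"))   # the real destination, if any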

The SMS system, of course, doesn’t know anything about URLs or even about the internet – but it doesn’t need to.

Your phone’s operating system will happily recognise when the text in an SMS looks like a URL and automatically make it clickable for you.

So, when the crooks use shortened URLs in their smishing scams, they don’t look unusual or out of place, even though the crooks are doing it specifically to be treacherous and not to save space.

As a result, text messages that contain one short, clipped sentence that wouldn’t look right in an email, and that contain deliberately disguised links that we might be suspicious of anywhere else…

…look surprisingly natural when they show up in an SMS.

Like this one we received earlier this week. (We’re not called Christopher and we don’t live in Derry, which is in Northern Ireland. The incomplete address given is a genuine suburban street, presumably plucked from a map to make it seem realistic.)

Dear Christopher, we have your packet in queue. Address: Londonderry, Ballynagard crescent
http COLON SLASH SLASH xxxxxxxx DOT com SLASH zzzzzzz

The message is meant to look as though it was sent to the wrong number, so the crooks are relying on you being intrigued enough to click through, whereupon they use some sneaky “reverse authentication” psychology to lure you in further.

The scam first shows you some cheery messages from a fake Apple chatbot to tell you why you – actually, to tell you why Christopher – had the good luck to be chosen to take part in an iPhone 12 trial, and then it invites you – actually, it invites Christopher – to join in:

Here, the link looks genuine, but the blue characters are simply the clickable text of the link, not the URL that is the destination of the link.

At this point, you’re no longer in the SMS messaging app but have clicked through into your browser, so you can see where the fake link leads if you hover your mouse over it. (On a phone, tap-and-hold on the link until the destination pops up.)

But if you aren’t cautious, you might wonder whether “Christopher” really was part of some Apple pre-release group.

What if you claim Christopher’s promo for yourself?

In fact, what’s stopping you from simply clicking through as if you were Christopher and finding out for yourself?

Well, one thing is stopping you, namely that you have to “prove” yourself by giving your full name and address – except, of course, that the crooks helpfully leaked that information to you in the original text, making the “test” easy to pass.

You can guess what happens next:

In case you’re wondering, the name-and-address answers above in part 3/5 don’t matter a jot. We tried clicking numerous different combinations and, unsurprisingly, the crooks let us through anyway. The questions are there just to provide a plausible connection back to the SMS that was meant for “Christopher” but that reached you instead. It’s as though the criminals are trying to “authenticate” themselves to you, rather than the other way around.

As you see above, if you do click through the questions then you end up on a scam site (there were several variations, all similar – we tried the smish repeatedly) where you find there’s a courier delivery charge for the “free” phone, typically between £1 and £2.

Then you end up on a credit card payment form that’s hosted on what looks like a “special offers” website with a believable enough name, and with an HTTPS security padlock if you take the time to look.

Of course, if you try to pay your modest delivery charge, you are simply handing over your personal data to the crooks, including your full card number and security code:

How bad is this?

Is this really a big deal, given that most of us would back ourselves to spot this as a scam right from the start?

Yes, it is.

Many of us have friends or family – perhaps even an at-risk relative who has been scammed before – who wouldn’t be so sure, and for whom the reverse authentication trick of asking for “Christopher’s” name and address might be convincing enough to draw them in further.

And friends don’t let friends get scammed, so if ever you get asked by someone who relies on you for cybersecurity help, “So what would happen if I clicked through?”…

…you can show them the short video above and let them see how these scams play out – without having to click through yourself.

What to do?

  • There is no free phone. And if there were a free phone, you wouldn’t have to hand over your credit card details and pay £1 for it. You’re not getting something for nothing – you’re handing over something for nothing, and the crooks will use it against you. If you’re in any doubt, don’t give it out.
  • Keep your eyes open for clues. The crooks have made numerous spelling and visual blunders in this scam. We’re not going to help them by listing them all like your English Language teacher would have done at school, but there are quite a few things that just don’t look right, even if you assume that there really is a free phone at the end of this. You might not always notice every clue, but always give yourself the time to look and therefore the best chance to catch out the crooks.
  • Look at the link before you click. If anything looks wrong, it IS wrong. Even if the crooks don’t make any spelling or grammatical mistakes they almost always need to lead you to a website that they control. Often, that means a bogus link that you ought to spot if you take your time. Never let yourself get rushed into clicking through, no matter how much the crooks play on your fear of missing out.
  • Consider a web filter. Network web filtering on your business network isn’t about surveillance, it’s about online safety. This helps you keep the bad stuff out, and helps your users keep the good stuff in, such as passwords and payment card numbers. Setting up a corporate VPN (virtual private network) means that users at home can browse securely back through the office network and enjoy the same protection that they’d have on the LAN at work.

Naked Security Live – “The Zerologon hole: are you at risk?”

We do a show on Facebook every week in our Naked Security Live video series, where we discuss one of the big security concerns of the week.

We’d love you to join in if you can – just keep an eye on the @NakedSecurity Twitter feed or check our Facebook page on Fridays to find out the time.

It’s usually about 18:00 UK time, which is early afternoon/late morning on the East/West coast of North America.

Note that you don’t need a Facebook account to watch our live streams, although you will need to log in if you want to ask questions or post comments.

For those of you who [a] don’t use Facebook, [b] had buffering problems while we were live, [c] would like subtitles, or [d] simply want to catch up later, we also upload the recorded videos to our YouTube channel.

Here’s last week’s video, where we dug into the “Zerologon” security hole in Windows:

[embedded content]

(Watch directly on YouTube if the video won’t play here.)

Thanks for watching… hope to see you online later this week!


A real-life Maze ransomware attack – “If at first you don’t succeed…”

You’ve probably heard terms like “spray-and-pray” and “fire-and-forget” applied to cybercriminality, especially if your involvement in cybersecurity goes back to the early days of spamming and scamming.

Those phrases recognise that sending unsolicited email is annoyingly cheap and easy for cybercrooks, who generally don’t bother running servers of their own – they often just rent email bandwidth from other crooks.

And those crooks, in turn, don’t bother running servers of their own – they just use bots, or zombie malware, implanted on the computers of unsuspecting users to send email for them.

Six years ago, when home networks were generally a lot slower than they are today, SophosLabs researchers measured a real-life bot sending more than 5 million emails a week from a single consumer ADSL connection, distributing 11 different malware campaigns as well as links to nearly 4000 different fake domains that redirected via 58 different hacked servers to peddle phoney pharmaceutical products. Best, or worst, of all – because outbound emails are mostly uploaded network packets – the bot barely affected the usability of the connection, making it unlikely that the legitimate user of the ADSL account would notice from traffic alone.

The theory was simple: the cost of failure was so low that the crooks could pretty much dial-a-yield by setting their spamming rates as high as needed to suit the campaign they were running.

So the “spray-and-pray” equation was simple: to get 100 people interested with a click-rate of one in a million, the crooks had to send 100 million emails.

And with a zombie network capable of doing more than 5 million emails per computer per week, you could spam out those 100 million emails in the course of a single hour with a 3000-strong botnet.

(Some notorious zombie networks have given their botmasters remote control over hundreds of thousands or millions of devices at the same time.)

What has all this got to do with contemporary targeted ransomware like Maze?

Well, it reminds us that cybercriminals can make off with vast amounts of money, even though by some metrics their success rate sounds terrible.

Simply put, online crooks are no strangers to the upbeat verse that tells us:

 'Tis a lesson you should heed,
    Try, try again.
 If at first you don't succeed,
    Try, try again.

Try, try again

Ransomware attacks are one especially destructive part of the cybercriminal underground where the crooks don’t mind failing, and where they are perfectly willing to try again.

In fact, it’s almost become part of their game plan: a sort of “third time lucky” approach.

The crooks are usually already inside your network by the time they unleash the ransomware part of their attack, and they usually spend the early part of their attack mapping out your network and acquiring similar (or perhaps even superior) access powers to your own sysadmins.

So they can afford to take the time to perform experiments, and if at first they don’t succeed, they’re more than ready to spend their time finding another way.

And with ransom demands getting into eight-figure territory these days – by which we mean extortion demands of $10,000,000 or more – you can see why.

For a fascinating insight into the minds of these money-grabbing blackmailers and their “try, try again” techniques, we recommend the latest SophosLabs report, entitled Maze attackers adopt Ragnar Locker virtual machine technique.

The report is the result of an investigation by indefatigable Sophos Managed Threat Response expert (and occasional Naked Security writer) Peter Mackenzie and his colleagues, who were called in to deal with a network attack by the infamous Maze ransomware gang.

After two failed attempts to launch their ransomware files directly, the crooks resorted to a technique that we first wrote about when the Ragnar Locker crooks used it: setting up a virtual machine (VM), and running the malware in that.

Intriguingly, this represents a complete turnaround in the attitude of cybercriminals to VM software such as VMWare, VirtualBox and Parallels.

VM software lets you run one OS inside another.
Here we have Windows 10 as a guest on a Slackware Linux host.

Some crooks go out of their way to avoid infecting virtual machines altogether, mainly to prevent their malware running inside a research lab or sandbox system, where VMs are usually used for scalability and convenience. (VMs are much quicker and easier to reset to a known clean condition than re-imaging a physical hard disk.)

But ransomware crooks have realised that introducing a VM of their own to run their file scrambling malware gives them a chance to run it in a software environment of their choice – the Ragnar Locker gang decided to use Windows XP, presumably because it’s compact and doesn’t do any pesky licensing checks.

In this latest Maze attack, the crooks delivered their own VM containing Windows 7 and all the operating system components needed to launch a full-blown virtual Windows desktop that they knew was compatible with their malware – a whopping 700MB disk image, all to run just 2.5MB of malware code.

Three tries and a double whammy

Like many ransomware gangs, the Maze crew don’t just scramble your files these days – they take the time to steal some or all of your critical data first before bringing your network to a halt, thus giving them a double reason to extort money from you.

A year ago, you might have expected a Maze attack to leave behind a pre-recorded threat like this:

Listen to the audio message that plays after a Maze attack

Note how the crooks focused on the decryption of your precious files as the reason to pay up.

Today, the threat is two-pronged:

These days, you’re paying hush money for the crooks to keep quiet about the data breach aspect of the attack, as well as paying to get your business running again.

What to do?

In case you’re wondering, the crooks demanded $15,000,000 this time, but the victim said, “No,” to which we say, “Good on you.”

Those who refuse to do deals with the criminals deserve our respect, even if we might also feel critical because the victim suffered a data breach in the first place.

It’s easy to say that you’d do the same and refuse to pay, because of the moral principles involved, but it’s a different matter when it’s your business and your staff looking straight into the barrel that the crooks have shoved in your faces.

 If you get hit by ransomware
 It means you've had a breach.
 The world might get judgmental
 And want to point and screech.
 But if, despite the negatives,
 You give the crooks a "No"
 Then we give you a big, loud cheer
 And say to you, "Chapeau!"


Zerologon – hacking Windows servers with a bunch of zeros

The big, bad bug of the week is called Zerologon.

As you can probably tell from the name, it involves Windows – everyone else talks about logging in, but on Windows you’ve always very definitely logged on – and it is an authentication bypass, because it lets you get away with using a zero-length password.

You’ll also see it referred to as CVE-2020-1472, and the good news is that it was patched in Microsoft’s August 2020 update.

In other words, if you practise proper patching, you don’t need to panic. (Yes, that’s an undisguised hint: if you haven’t patched your Windows servers yet from back in August 2020, please go and do so now, for everyone’s sake, not just your own.)

Nevertheless, Zerologon is a fascinating story that reminds us all of two very important lessons, namely that:

  1. Cryptography is hard to get right.
  2. Cryptographic blunders can take years to spot.

The gory details of the bug weren’t disclosed by Microsoft back in August 2020, but researchers at Dutch cybersecurity company Secura dug into the affected Windows component, Netlogon, and figured out a bunch of serious cryptographic holes in the unpatched version, and how to exploit them.

In this article, we aren’t going to construct an attack or show you how to create network packets to exploit the flaw, but we are going to look at the cryptographic problems that lay unnoticed in the Microsoft Netlogon Remote Protocol for many years.

After all, those who cannot remember history are condemned to repeat it.

Authenticating via Netlogon

Netlogon is a network protocol that, in its own words, “is a remote procedure call (RPC) interface that is used for user and machine authentication on domain-based networks.”

At 280 pages, the Netlogon Remote Protocol Specification – it’s an open specification these days, not proprietary to Microsoft – is a lot shorter than Bluetooth, but far longer than any programming team can take in over a period of months or years, let alone days or weeks.

Its length comes in part from the fact that there are often many different ways of doing the same thing, sometimes with multiple different fallback algorithms that have been kept on to ensure backwards compatibility with older devices.

Ironically, perhaps, Section 5, Security Considerations, has just two short parts: a one-page subsection entitled Security Considerations for Implementors, and a brief (though admittedly useful) table called Index of Security Parameters that links to various important sections in the specification.

Netlogon Protocol security parameters list.
The highlighted items are the ones we look at in this article.

Getting started with Netlogon

A client computer that wants to communicate with a Netlogon server such as a Windows domain controller starts by sending eight random bytes (what’s often called a nonce, short for number used once) to the server.

The server replies with 8 random bytes of its own, as explained in section 3.1.4.1, Session-Key Negotiation:


 REQUEST --- ClientChallenge (8 random bytes, e.g. 452fdbfd2e38b9e0) -->

 REPLY <-- ServerChallenge (8 random bytes, e.g. 7696398fe5417372) ---


As shown above, Microsoft refers to these nonces as ClientChallenge (CC) and ServerChallenge (SC) respectively, if you want to match up this discussion with the protocol documentation.

Both sides then scramble up the two random strings together with a shared secret to concoct a one-off encryption key, known as the SessionKey (SK).

On a Windows network, the secret component is the domain password of the computer you’re connecting from.

On the client, this is stored securely by Windows in the registry; on the domain controller, it’s stored in the Active Directory database.

This SessionKey scrambling is done using the keyed cryptographic hash called HMAC-SHA256.

The algorithm is specified in section 3.1.4.3.1, AES Session-Key.
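In rough Python form, the computation boils down to a keyed hash over the two challenges. The sketch below is our own, not the spec’s pseudocode: the protocol actually derives the HMAC key from a hash of the machine account’s password, a step we’ve compressed into a single secret_key input here:

 # Sketch of the SessionKey derivation described above. In the real
 # protocol the HMAC key is derived from a hash of the machine account's
 # password; here that derived value simply arrives as secret_key.
 import hashlib
 import hmac

 def compute_session_key(secret_key: bytes,
                         client_challenge: bytes,   # 8 random bytes from the client
                         server_challenge: bytes    # 8 random bytes from the server
                         ) -> bytes:
     digest = hmac.new(secret_key,
                       client_challenge + server_challenge,
                       hashlib.sha256).digest()
     return digest[:16]                             # 16 bytes = one AES-128 key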

Assuming that the client requesting access has the same password stored locally as the Netlogon server has on record centrally, and given that each side has already told the other its 8-byte random challenge, both sides should now have arrived at the same, one-off SessionKey value (SK) to secure the rest of their communication.

This session key setup avoids using the secret password directly in encrypting Netlogon traffic, and ensures a unique key for each session, into which both parties inject their own randomness. (This is a common approach: setting up a WPA-2 wireless connection using a pre-shared key follows a similar process.)

In theory, the server could blindly assume that the client knows the real password by simply accepting encrypted function calls immediately; if the client had cheated so far by using a made-up password, the requests wouldn’t decrypt properly and the ruse would fail.

Good practice, however, says that each end should verify the other first, for example by encrypting a known test string that the other end can validate, and that’s what happens next.

Obviously, the client can’t share the session key directly because that would let anyone else on the network sniff it out and hijack the session.

Instead, the client proves that it knows the session key by encrypting the ClientChallenge that it committed to at the start, using the SessionKey it just calculated.

Microsoft calls this the Netlogon Credential Computation, detailed in section 3.1.4.4.1.
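The computation itself is short: encrypt the 8-byte ClientChallenge with the freshly minted SessionKey, using AES-128 in CFB8 mode. Here’s a minimal sketch of our own using the pyca/cryptography package; how the initialisation vector (IV) gets chosen turns out to be the crux of the whole story, so we’ve left it as an explicit parameter for now:

 # Sketch of the Netlogon credential computation: AES-128-CFB8-encrypt
 # the 8-byte challenge under the SessionKey. The spec's choice of IV
 # is discussed later on.
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 def compute_netlogon_credential(challenge: bytes,    # 8 bytes
                                 session_key: bytes,  # 16 bytes (AES-128)
                                 iv: bytes            # 16 bytes
                                 ) -> bytes:
     encryptor = Cipher(algorithms.AES(session_key),
                        modes.CFB8(iv),
                        backend=default_backend()).encryptor()
     return encryptor.update(challenge) + encryptor.finalize()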

At the other end, the server does the same thing in reverse, and verifies that the original ClientChallenge comes out correctly when the ciphertext is decrypted with the session key.

At this point, it looks as though an imposter client is stuck.

Without the right secret password, which you can only get by already having administrator-level access to a trusted computer on the network, you won’t get the same session key as the server.

Without the right session key, you won’t produce an encrypted version of your original 8-byte random string that the server will accept to authenticate you.

A chink in the armour

At this point, if you’re interested in cryptography, you’re probably wondering, “What on earth is the encryption algorithm dubbed AES-128-CFB8 in the pseudocode above?”

Let’s investigate.

AES, short for Advanced Encryption Standard, sounds like a good choice because it’s currently accepted as a strong algorithm with no known security holes.

Also, a key size of 128 bits is currently regarded as satisfactory because it would take too long to try all possible 2^128 keys, even if you harnessed all of the world’s computing power for yourself.

For the record: AES doesn’t use any internal calculations that could be sped up with so-called quantum algorithms, so it is considered post-quantum secure. Even if a truly powerful quantum computer were built tomorrow, it wouldn’t be of any special use, so far as we know, in cracking AES faster than we can with regular computers today.

But algorithms like AES can be used in many different modes, and not all of them are suitable for all purposes.

The best-known mode, which you can think of as “straight encryption”, is called AES-128-ECB, and it scrambles 16 bytes of input at a time, directly producing 16 bytes of output.

Note that we’ve simplified these diagrams by pretending that AES-128 works on 4 bytes (32 bits) at a time instead of 16 bytes (128 bits), but the underlying principles are still perfectly clear:

ECB stands for Electronic Code Book, because the cipher in this mode does indeed work like an unimaginably large codebook.

The codebook moniker is entirely theoretical. In practice, you would need a different codebook for every one of the 2^128 different keys, with each book listing every one of the encryption values for each of the 2^128 different 16-byte input strings. And you would need a further 2^128 (that’s 300 million million million million million million) codebooks to list all the possible decryptions, too, if you ever had the space or time to unscramble what you had previously encrypted.

Unfortunately, the simplicity of codebook mode is also a weakness, because any time there is repeated text in one of the input blocks, you’ll know because you’ll get a repeat in the ciphertext, too:
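You can see the effect in a few lines of Python (a sketch using the pyca/cryptography package, with a key made up on the spot just for the demonstration):

 # ECB demo: two identical 16-byte plaintext blocks come out as two
 # identical 16-byte ciphertext blocks, so the repetition shows.
 import os
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 key = os.urandom(16)                      # a random AES-128 key
 plaintext = b"ATTACK AT DAWN!!" * 2       # the same 16-byte block, twice

 enc = Cipher(algorithms.AES(key), modes.ECB(),
              backend=default_backend()).encryptor()
 ciphertext = enc.update(plaintext) + enc.finalize()

 print(ciphertext[:16].hex())
 print(ciphertext[16:].hex())              # identical to the line above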

At best, ECB leaks whether there are any patterns in the input, something that an encryption algorithm should conceal.

At worst, it means that if ever you figure out what the plaintext was in one part of the input – a chapter heading, for example, or part of a Bitcoin address – you will automatically be able to decrypt that text everywhere else it appears.

Various solutions exist to use block-based encryption algorithms so they don’t reveal repeated patterns, and one of them is Cipher Feedback (CFB) mode, which works like this:

Instead of encrypting the plaintext blocks with AES each time, you encrypt the last block of ciphertext instead, and then XOR that “keystream” with the next block of plaintext.

That way, even if you get two identical plaintext blocks in a row, the ciphertext won’t repeat.
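For contrast with the ECB demonstration above, here’s the same repeated plaintext in CFB mode (again just a sketch; the random 16-byte starting block it needs is the initialisation vector discussed next):

 # CFB demo: the same 16-byte block twice no longer produces repeated
 # ciphertext, because each block is XORed with different keystream data.
 import os
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 key = os.urandom(16)
 iv = os.urandom(16)                       # random, never-reused starting block
 plaintext = b"ATTACK AT DAWN!!" * 2

 enc = Cipher(algorithms.AES(key), modes.CFB(iv),
              backend=default_backend()).encryptor()
 ciphertext = enc.update(plaintext) + enc.finalize()

 print(ciphertext[:16].hex())
 print(ciphertext[16:].hex())              # different from the line above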

Of course, there is no ciphertext block to use at the outset, so AES-128-CFB mode requires not only a key of 16 bytes for the encryption engine, but also an initialisation vector (IV) of 16 bytes as an up-front input to get the keystream started.

Note that the IV can be, and usually is, shared along with the ciphertext – the IV doesn’t need to be kept secret, because the secrecy of the encryption is provided by the key that controls the AES encryption engine.

Nevertheless, a CFB initialisation vector should be chosen randomly, and should never be re-used, especially with the same AES key.

CFB8 explained

One disadvantage that AES-ECB and AES-CFB have in common is that until you have a full 16-byte block of input, you can’t produce any output, because they can’t work on partial blocks – AES is designed to mix-and-swap-and-mince-and-munge chunks of 128 bits at a time.

That also means you are stuck if you have any leftover bytes at the end, for example if you have 67 bytes to encrypt, which is 4×16 + 3.

You need to figure out a way to pad out the last block to the right size, and then reliably work out whether there were any extra bytes added on that need to be stripped off when you decrypt the data.

One solution to this is AES-CFB8, a mode that we have never heard of anyone using in real life before, but that is designed to use a full 128-bit AES mixing cycle for every byte of input, so you can encrypt even just a single character without any padding.

Instead of encrypting the last full block of ciphertext to create the next full block of keystream data, you use just the first byte of the keystream each time and XOR it with one plaintext byte rather than a 16-byte plaintext block.

Then you chop off the keystream byte you just used and add the new ciphertext byte on at the end of the keystream, giving you a full block of data to encrypt to generate the next keystream byte:
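Here’s that byte-at-a-time process written out from scratch on top of plain AES-ECB, so you can see the sliding 16-byte register at work. It’s a sketch for illustration only; real code would use a library’s CFB8 mode directly:

 # CFB8 by hand: one full AES block encryption per byte of input, with a
 # 16-byte shift register that starts out as the IV.
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 def aes_block_encrypt(key: bytes, block: bytes) -> bytes:
     enc = Cipher(algorithms.AES(key), modes.ECB(),
                  backend=default_backend()).encryptor()
     return enc.update(block) + enc.finalize()

 def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
     register = iv                             # 16 bytes fed into AES each time
     ciphertext = bytearray()
     for p in plaintext:
         keystream = aes_block_encrypt(key, register)
         c = p ^ keystream[0]                  # only the first keystream byte is used
         ciphertext.append(c)
         register = register[1:] + bytes([c])  # slide left, append the new ciphertext byte
     return bytes(ciphertext)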

Netlogon CFB8 considered harmful

Sadly, the way that Netlogon uses AES-128-CFB8 is significantly less secure than it should be.

Secura researchers spotted the problem very quickly when perusing the Microsoft documentation, where the algorithm is not defined generically (as we listed it above), but given in a dangerously simplified form.

Section 3.1.4.4.1 specifies the AES Credential [Computation] process as follows:


If AES support is negotiated between the client and the server, the Netlogon 
credentials are computed using the AES-128 encryption algorithm in 8-bit CFB 
mode with a zero initialization vector. [Sk below is short for SessionKey]

 ComputeNetlogonCredential(Input, Sk, Output)
 SET IV = 0
 CALL AesEncrypt(Input, Sk, IV, Output)


You probably spotted the cryptographic blunder already: “the credentials are computed […] with a zero initialization vector.”

As we already mentioned, IVs are supposed to be randomly chosen, and used only once with any key – indeed, that’s why they are often referred to as nonces, for numbers used once.

But there’s an even bigger problem with an all-zero IV in CFB8 mode, as Secura discovered.

You can visualise the problem if you use an all-zero IV plus an all-zero block of plaintext bytes:

Because AES is a high-quality cipher with no known statistical biases, you can put in any input and encrypt it with any key, and the chance of each individual bit in the output being zero (or one) is 50%.

Every output bit’s value is like a digital coin toss.

So the chance of the first output byte being zero is the same as the chance that the first 8 output bits are all zero, which is 50% × 50% × 50% … eight times over (50% is just another way of writing 0.50, which is the same as 1/2).

50%^8 is 2^-8, or 1/256.

Remember that probability.

In the diagram above we’ve assumed that the first output byte did indeed come out as zero, and you can see that if that happens, the entire encryption process essentially gets “locked into” an all-zero state.

The keystream byte (black) comes out as 00, so when you XOR it with the first plaintext byte (pink) of 00 you get a ciphertext byte (red) of 00.

Then, when you chop the first 00 off the left hand end of the IV (white) and append the new ciphertext 00 at the end, you are right back where you started, with another all-zero IV and a remaining plaintext buffer of all zeros.

When you encrypt the “new” IV with the key, you get exactly the same result as before, because all your inputs are the same again, and out comes another keystream byte of 00, which XORs with the next plaintext 00 to produce another ciphertext byte of 00, which feeds back into the IV to make it all zero again.

How to trick Netlogon

Secura’s researchers quickly realised what would happen if they tried to authenticate to a Netlogon server over and over again using a ClientChallenge nonce consisting of 8 zeros.

Roughly once in every 256 times the server would randomly concoct a session key for which the correctly-encrypted version of their all-zero ClientChallenge

…would itself be all zeros.

We tried an all-zero IV with an all-zero ClientChallenge 2560 times.
One in 256 times the key chosen gave all-zero output too.
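If you’d like to repeat a similar experiment yourself, here’s a minimal sketch (our code, not Secura’s) that encrypts an all-zero challenge under thousands of random keys, always with an all-zero IV, and counts how often the output comes out all-zero too:

 # Roughly 1 in 256 random keys turn an all-zero challenge into an
 # all-zero "credential" when the IV is all zeros in AES-128-CFB8.
 import os
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 TRIALS = 25600
 hits = 0

 for _ in range(TRIALS):
     key = os.urandom(16)                         # a random SessionKey
     enc = Cipher(algorithms.AES(key),
                  modes.CFB8(bytes(16)),          # all-zero IV, as in Netlogon
                  backend=default_backend()).encryptor()
     if enc.update(bytes(8)) == bytes(8):         # all-zero ClientChallenge
         hits += 1

 print(f"{hits} all-zero outputs in {TRIALS} tries "
       f"(expected about {TRIALS // 256})")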

In other words, by submitting a ClientChallenge of 0000000000000000 and then blindly also submitting a Netlogon Credential Computation (see above) of 0000000000000000, they’d get the credential computation correct by chance 1/256 of the time, even though they had no idea what the right SessionKey value should be because they had no idea what secret password to use.

Simply put, 1/256 of the time, they ended up in a situation where they could always produce correctly-encrypted data to transmit to the server, without having a clue what the password or session key was, as long as they only ever needed to encrypt zeros!

Better yet, the server would automatically notify them when they hit the jackpot by accepting their credential submission.

Surely that’s not exploitable?

By now you are probably thinking, “What’s the chance that every time they needed to submit an encrypted authentication token or to supply encrypted password data, they’d only ever need to encrypt zeros?”

We wondered that too, but our intrepid researchers found a way.

One of the Netlogon password functions, NetrServerPasswordSet2 (section 3.4.5.2.5), can be called remotely from a Netlogon session that has already got past the Netlogon Credential Computation check.

This function, which does what its name suggests and changes the server password, requires the caller to correctly encrypt two chunks of data:

  • The original ClientChallenge, treated as a 64-bit number, with the current time (in what’s known as “Posix seconds” or Unix epoch form) added to it. This data is used as an authentication check to ensure it’s still the same client program trying to do the password change.
  • A buffer of 516 bytes that specifies the new password, formatted as (512-N) bytes of random data, followed by N bytes specifying the password, followed by the password length N expressed as a 4-byte number.

The ClientChallenge is all zeros, because that was needed to get the exploit started in the first place, but the current time in Posix seconds is something close to this:


$ date --utc +%s
1600300026


Posix time denotes the number of seconds since the start of the Unix epoch, which began, by definition, at 1970-01-01T00:00:00Z, a date now more than 50 years in the past.
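If you want to see what calendar date and time a Posix timestamp represents, one line of Python will do it. Here’s the value from the shell example above:

 # Convert the Posix timestamp shown above back into a calendar date.
 from datetime import datetime, timezone

 print(datetime.fromtimestamp(1600300026, tz=timezone.utc))
 # 2020-09-16 23:47:06+00:00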

The researchers found themselves on the horns of a dilemma: the ClientChallenge was zero, but the time was not, so the sum of those two numbers couldn’t be zero, and therefore wouldn’t encrypt to zero…

…and therefore the attackers would need the original session key after all, and to get the session key they would need to know a valid password for a suitable computer on the network.

What to do?

Well, the researchers just pretended it was 1970 all over again, used a timestamp of zero added to a ClientChallenge of zero…

…and the server didn’t mind – there was apparently no check to see if the timestamp was decades in the past.

Of course, the 516 all-zero bytes that the researchers now needed to supply in the encrypted password buffer forced them to specify a password length of zero, which you might think would be disallowed by the server.

But the researchers tried it anyway…

…and the server didn’t mind that either, setting its own Active Directory password to <no password at all>.

What next?

Happily – or perhaps slightly less unhappily – the password change that they were able to make didn’t reset the server’s actual login password, so the researchers couldn’t simply log in directly and take over the server via a conventional Windows desktop.

However, they did report that by changing the Active Directory password of the domain controller itself, they were able to:

extract all user hashes from the domain through the Domain Replication Service (DRS) protocol. This includes domain administrator hashes (including the ‘krbtgt’ key, which can be used to create golden tickets) that could then be used to login to the Domain Controller (using a standard pass-the-hash attack) and update the computer password stored in the Domain Controller’s local registry.

In other words, complete network compromise.

All because of an over-simplified cryptographic specification that involved the cardinal sin of an all-zero initialisation vector every time.

Of course, that flaw was compounded by several other programmatic oversights where stricter attention to security and correctness could have prevented this attack, including the following (see the sketch after this list):

  • Allowing an all-zero ClientChallenge in the first place. We’d assume that the most likely cause of an all-zero buffer at the start of the Netlogon process would be an incorrectly initialised or buggy client program, so we’d reject it as a precaution anyway.
  • Allowing a zero-length password. Given that Windows already has a secure mechanism for storing shared secrets, and relies on it heavily anyway, it seems unnecessary to allow blank passwords at all, even for accounts where no humans are ever expected to log on.
  • Allowing a date-based authentication field in which the timestamp could not possibly be correct. We’d be inclined to treat this as a warning of a buggy client or an attempt to pull off a security trick.
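To make those checks concrete, here’s a rough sketch of the sort of up-front validation we have in mind. It’s our illustration, not Microsoft’s code, and every name and limit in it is made up:

 # Hypothetical server-side sanity checks that would have rejected the
 # exploit's inputs before any cryptography even happened.
 import time

 MAX_CLOCK_SKEW_SECONDS = 24 * 60 * 60            # allow a generous day of drift

 def sanity_check_request(client_challenge: bytes,
                          timestamp: int,
                          new_password: bytes) -> None:
     # An all-zero challenge almost certainly means a buggy or malicious client.
     if client_challenge == bytes(len(client_challenge)):
         raise ValueError("all-zero client challenge rejected")

     # Blank passwords shouldn't be accepted, even for machine accounts.
     if len(new_password) == 0:
         raise ValueError("zero-length password rejected")

     # A timestamp decades in the past (or future) can't possibly be right.
     if abs(time.time() - timestamp) > MAX_CLOCK_SKEW_SECONDS:
         raise ValueError("implausible timestamp rejected")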

What to do?

This bug opens a serious security hole to anyone already inside your organisation, and perhaps even to outsiders, depending on the topology of your network.

If you haven’t applied the August 2020 patch yet, you need to do so – you aren’t just letting yourself down, you’re letting everyone else down too by making your network an easier target for crooks, and therefore making it more likely that you will be the source of security problems for other people.

In addition:

  • Don’t take cryptographic shortcuts such as choosing an encryption mode that’s convenient for your application, but then taking liberties with how you use it because that’s convenient for your programmers.
  • Program defensively whenever you are accepting untrusted data, especially if the data can easily be checked for obviously forged or incorrect values such as timestamps 50 years in the past.
  • Retire old parts of your products or specifications as soon as you can after better ones are available. Although the exploit in this case relied on updated parts of the Netlogon protocol, such as using AES instead of falling back to older algorithms, you can argue that this bug might have been found far sooner if the protocol specification were not encumbered with so many alternative ways of doing all sorts of security-related checks.

But the big thing to remember here is: patch early, patch often.

