Category Archives: News

More PrintNightmare: “We TOLD you not to turn the Print Spooler back on!”

“It never rains but it pours,” as the old weather adage goes.

That’s certainly how Microsoft must be seeing things right now, following the official announcement of yet another unpatched vulnerability in the Windows Print Spooler service.

Dubbed CVE-2021-34481, this one isn’t quite as bad as the previous PrintNightmare problems, because it’s an elevation of privilege bug (EoP), not a remote code execution hole (RCE).

As you will remember from last time, an EoP means that someone who is already logged onto your computer as a regular, unprivileged user can silently and unlawfully boost themselves to Admin or SYSTEM level.

If you’re logged in, say, as RegularUser, you can do yourself plenty of harm by deleting your own files, messing with your own applications, downloading inappropriate files, and so on.

But if you can wrangle access to the SYSTEM account, you will find yourself on a similar footing to Windows itself, and you can wreak much more havoc.

You can stop, start and even install new system services, mess with firewall settings, alter files in the Windows folder, change boot-time security settings, and generally do all the things that IT has spent ages trying to make sure you can’t do, whether deliberately or by mistake.

That’s not quite as bad as an RCE, which means that someone who isn’t logged onto your computer at all can get unauthorised access in the first place, giving them a beachhead for further cybercrime.

But an EoP on its own is bad enough, not least because an RCE exploit that only just gets a cybercriminal in, perhaps with no more power than a guest user, can often be combined with an EoP to achieve what a crook would consider “complete compromise”.


VULNERABILITY JARGON EXPLAINED – DEMYSTIFYING ‘EOP’, ‘RCE’ AND FRIENDS

Learn more about vulnerabilities, how they work, and how to defend against them.
Recorded in 2013, this podcast is still an excellent and jargon-free explainer of this vital topic.

You can also listen directly on Soundcloud.

The story so far

To recap rapidly on the PrintNightmare story so far [2021-07-16T15:00Z]:

  • Microsoft patched an EoP bug in Print Spooler. This patch was part of the June 2021 security update. The bug it fixed was dubbed CVE-2021-1675.
  • The bug was more serious than first thought and got upgraded to RCE-and-EoP status later in the month. The original patch, however, protected against both aspects of the bug.
  • Researchers with a Print Spooler RCE-and-EoP bug of their own decided to disclose it publicly. They naively assumed it was identical to the one that was already patched, so that releasing it wouldn’t reveal a new sort of attack.
  • They were wrong. Their bug was new, and the existing patch didn’t protect against it.
  • They quickly scrambled to delete their proof-of-concept exploit code. They hoped this would suppress the leak and prevent the new bug becoming a zero-day (an actively exploitable but as-yet-unpatched security hole).
  • Too late. The exploit code had already been widely copied and announced openly as a zero-day that would evade the June 2021 patch. The new bug was dubbed CVE-2021-34527.
  • We recommended turning off the Print Spooler entirely. This isn’t terribly convenient because it stops your printer working, but it’s the only sure-fire way we know of preventing any of these bugs being triggered, patches or no patches.
  • Microsoft scrambled out an emergency patch. This mitigated the new zero-day hole.
  • Researchers quickly found that the new patch didn’t fix the EoP part of the bug. The hope remained, however, that the more serious RCE part of the bug was blocked.
  • It wasn’t. To protect properly, it turned out that you’d need to apply various additional mitigations and registry modifications by some other means, and even then, no one was quite sure if those would work fully.
  • We recommended applying the patch anyway. It does prevent several known ways of exploiting the bug.
  • We recommended NOT turning your Print Spooler back on, if at all possible. Once again, this stops your printer working, but it does remove the Print Spooler from your attack surface completely.
  • Another EoP was found in the Print Spooler. This is a new-new bug, not covered by any previous patches or advisories. This is the bug mentioned at the top of this article, namely CVE-2021-34481.
  • Microsoft officially issued a temporary fix. The workaround for this vulnerability is stopping and disabling the Print Spooler service.

What to do?

Do what Microsoft said:

Turn off the Print Spooler and disable the service so it can’t start again, whether by accident or design.

If you already had the Print Spooler service shut down across your network, then you are ahead of the game – but you might as well make sure, just in case someone, somewhere, has turned theirs back on.

More advice as we have it!


HOW TO CONTROL AND CONFIGURE THE PRINT SPOOLER SERVICE

Here’s a quick summary of the tips and tricks for controlling the Print Spooler that you can find in our earlier articles:

=== From a Command Prompt (CMD.EXE):

> sc query Spooler                    <-- check Print Spooler status
> sc config Spooler start= disabled   <-- prevent Spooler starting, even after reboot
> sc stop Spooler                     <-- stop Spooler if it is running
> sc config Spooler start= demand     <-- don't start on reboot but allow manual on/off
> sc start Spooler                    <-- start it on demand, if not disabled

Note that reconfiguring Windows services can only be done from Administrator level, so you need to choose Run as Administrator when starting CMD.EXE.

=== From a PowerShell prompt or script:

> Get-Service Spooler                               <-- check Spooler status
> Set-Service -Name Spooler -StartupType Disabled   <-- prevent Spooler starting, even after reboot
> Set-Service -Name Spooler -StartupType Manual     <-- same as "start= demand" above
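
If you’d rather check programmatically, here’s a minimal C sketch of the same status check using the Win32 service API. (This is our own illustration, not part of Microsoft’s guidance; build it with a Windows SDK and link against Advapi32.lib.)

/* Minimal sketch: query the Spooler service state via the */
/* Win32 service API. Illustration only - test it yourself */
/* before relying on it.                                   */
#include <windows.h>
#include <stdio.h>

int main(void)
{
   SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
   if (scm == NULL) {
      printf("OpenSCManager failed: %lu\n", GetLastError());
      return 1;
   }

   SC_HANDLE svc = OpenService(scm, TEXT("Spooler"), SERVICE_QUERY_STATUS);
   if (svc != NULL) {
      SERVICE_STATUS st;
      if (QueryServiceStatus(svc, &st)) {
         printf("Spooler is %s\n",
                st.dwCurrentState == SERVICE_RUNNING ? "RUNNING"
                                                     : "not running");
      }
      CloseServiceHandle(svc);
   }
   CloseServiceHandle(scm);
   return 0;
}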

If you are a Sophos customer you can use the Sophos Live Discover feature to check the status of the Spooler service across your network with a simple query like this one:

SELECT name, display_name, start_type, path, status, user_account,
       CASE WHEN status = 'RUNNING'
            THEN 'Stop service to end exposure to unpatched vulnerabilities inc. PrintNightmare'
       END AS SpoolerCheck,
       CASE WHEN start_type != 'Disabled'
            THEN 'Set Spooler service to DISABLED to prevent it from starting'
       END AS ServiceCheck
  FROM services
 WHERE name = 'Spooler' AND (status = 'RUNNING' OR start_type != 'DISABLED')

Want to earn $10 million? Snitch on a cybercrook!

Just over a week ago, we wrote about the REvil ransomware gang’s latest braggadocio.

As you probably know, ransomware operators like REvil, Clop and others don’t generally work on the front line themselves by conducting the actual network intrusions that deliver the final ransomware warhead.

Instead, they recruit teams of “attack affiliates” – subcontractors, if you like – who are given their own variants of the ransomware code and let loose on the world.

The affiliates don’t bother, or even need to know how, to program the malware in the first place, or to get involved in the process of negotiating and collecting the final blackmail money from victims who decide to pay up.

The affiliates bring different skills to the operation, such as:

  • Breaking into networks and posing as sysadmins, sometimes for weeks or even months.
  • Mapping out the network, possibly even including assets the victims have lost track of.
  • Stealing what they can and exfiltrating data that might assist with subsequent attacks, or raise good money on the dark web, or be used for additional blackmail leverage after the ransomware has done its dirty work.
  • Opening backdoors and creating bogus accounts that let them walk straight back in if they get locked out on the way.
  • Finding out how the company does its backups, and trashing them in advance of the cryptographic denouement…

…in return for a big chunk of the ransomware payment, often as much as 70%.

(We have to guess that the core crooks originally set their share at 30% because that’s the number that seems to have worked out well for companies like Apple and Google when licensing products such as music and apps.)

Join up and aim big!

The affiliates get well-rewarded for each individual attack, which motivates them to make their attacks as network-wide and as disruptive as they possibly can.

The core crooks keep away from involvement in the actual network intrusions while nevertheless scooping up 30% of everything.

But in one of REvil’s most high-profile incidents to date, one of the gang’s affiliates pulled off an attack that was even broader and deeper than usual.

By exploiting bugs in code from network management company Kaseya, they were able to penetrate more than 50 MSPs in one go, and from there, apparently, to attack more than 1000 customers.

We’ll probably never know for sure whether the core REvil crew were delighted or dismayed at how the attack went down.

Sometimes, cybercriminals can “succeed” so surprisingly (as happened in the infamous 20-year-old Code Red virus outbreak that we reminisced about yesterday!) that everyone takes notice, and our worldwide cybersecurity vigour improves, at least for a while.

What we do know, however, is that the REvillers disdainfully made what they pitched as a global “offer of salvation” after the Kaseya incident:

If anyone want to negotiate about universal decryptor – our price is [$70 million in Bitcoin] and we will publish publicly decryptor that decrypts all files of all victims, so everyone will be able to recover from attack in less that an hour.

Stirring the pot

We can only assume that the crooks didn’t seriously expect to get paid out, but instead hoped to stir things up a bit, and perhaps to provoke infighting amongst the cybersecurity community about what to do.

Or maybe the criminals were being truly sarcastic, as though they were saying, “We don’t really expect you to be able to agree on what to do, so we’ve asked for a ludicrous amount just to rattle your collective cages. Also, who cares about the money from this one? We’re rich already. And anyway, to paraphrase a famous actor, ‘We’ll be back’.”

One reaction – and various legislatures seem to be giving this serious thought – might be to criminalise ransomware payments entirely, thus forcing any and all ransomware victims to “go it alone” if the time comes for recovery.

Of course, if your business has ground to a total halt and is almost certain to fold if you don’t pay up, the knock-on effects of a blanket payment ban might affect hundreds or thousands of employees who could suddenly lose their jobs.

Therefore this sort of regulatory payment-based intervention is not popular with everyone.

What to do?

After the Kaseya incident, which happened over the 2021 Independence Day weekend in the US, we asked you, our readers, what you thought.

Unsurprisingly, some of the more earnest replies weren’t entirely suitable for a family-friendly, community-oriented website, but we did get an idea of how many of you felt:

• A better solution would be to offer up Wanted – Dead or Alive ransoms at that same price point for the criminals. Let’s put a stop to this extortion with actual policy that may stop it.

• I think WE should BLOCK from the Internet countries who do not cooperate with OUR government in punishing the guilty party of such crimes.

• PAY THE RANSOM TO A REVENGE COMPANY TO ELIMINATE COMPLETELY THE CRIMINALS BY BEING INVESTIGATOR, JUDGE, JURY AND ELIMINATOR.

• Compulsory life sentence for any such crooks who break into the internet with a crime of that size and happen to get caught.

• We are finding all these criminals but just not punishing them severely enough.

What’s been done

No jurisdiction that we know of has yet activated any of the proposed solutions listed above…

…but the US Department of State has gone some of the way towards tipping the balance against cash-rich cybercriminals with funds to spare for their next attack.

The US is now officially offering a reward of up to $10 million for help in finding and acting against serious cybercriminals:

The U.S. Department of State’s Rewards for Justice (RFJ) program, which is administered by the Diplomatic Security Service, is offering a reward of up to $10 million for information leading to the identification or location of any person who, while acting at the direction or under the control of a foreign government, participates in malicious cyber activities against U.S. critical infrastructure in violation of the Computer Fraud and Abuse Act (CFAA).

As you can see, this isn’t $10 million for turning over just anyone involved in ransomware attacks.

We’re talking here about so-called “state sponsored actors”, and we’re talking about attacks that specifically touch on “critical infrastructure”, which doesn’t cover every big attack, even if it were to cause the collapse of a huge company with thousands of employees.

On the other hand, it doesn’t apply only to ransomware attacks, but to cybercriminality in general.

That’s a good thing, because even though ransomware may hog the headlines, it is only one of many seriously disruptive and economically damaging side-effects that criminal hackers, malware peddlers and network intruders can cause.

What next?

The RFJ program doesn’t pay out terribly often, it seems, but it pays out big when it does.

The Department of State says that the scheme has been operating for nearly 40 years, notably in search of information about terrorists and terrorism, and has paid out “in excess of $200 million to more than 100 people across the globe” over that period.

While that averages out at fewer than three payments a year, informants seem to have trousered an average of about $2 million each time, so the rewards do indeed sound large enough to be tempting.

What do you think?

Will this help, or will the bulk of cybercriminality simply continue unhindered by this sort of reward?


The Code Red worm 20 years on – what have we learned?

There’s a famous and very catchy song that starts, “It was 20 years ago today…”

In the song, of course, Sergeant Pepper was busily teaching his band to play – a band, as the song assures us, that was guaranteed to raise a smile.

But can you remember where you were and what you were doing 20 years ago, if you’re old enough to have well-formed memories of that period?

If you were in IT or cybersecurity 20 years ago this week, the answer to that question is almost certainly a big fat “Yes.”

That’s because July 2001 is when the infamous Code Red computer worm showed up, spread fast, and all but consumed the internet for several days.

I can certainly remember where I was, because I had just – only just! – relocated from Sophos UK to Sophos Australia, where we were in the process of opening up a new threat research lab.

I’d had just enough time to figure out how to use Sydney’s buses and trains (there were no Opal cards back then – bus travel was based on magnetic strips and unintuitive “zone numbers” that turned out to be old British miles in disguise).

On Code Red day, I got to the office nice and early, ready for the sort of problems often left behind on purpose – sorry, handed over carefully – by those of our colleagues in North America who had been the last to leave their office.

I was the second person into work that day, and I was greeted by an early-bird colleague who was wearing one of those conflicted excitement/fear/resignation expressions that, in the late 1990s and early 2000s world of IT, could only mean one thing.

WILDCAT VIRUS OUTBREAK!

He very calmly advised me to attend to all my bodily needs right away: get coffee and any needed snacks; take a toilet break (the bathrooms were out in the fire escape in that office, for some reason); adjust desk and chair for long-term comfort; find most ergonomic alignment of keyboard; optimise colour settings for screen (computer displays really weren’t that good 20 years ago).

“I think we might be here for a while,” he said. (We were. I can’t remember if the pub next door had already closed by the time I went home, but it might as well have, and I am pretty sure I was the only passenger on the bus.)

Code Red wasn’t a new idea.

Indeed, it used one of the tricks of the notorious Internet Worm from way back in 1988: a stack buffer overflow in server software that was almost certainly open to the internet on purpose.

In Code Red’s case, the vulnerable service was IIS, Microsoft’s unfunkily but informatively named web server, Internet Information Services.

By sending in an innocent-ish HTTP GET request, which is the unexceptionable way that almost every browser asks for almost every page on almost every website, the Code Red malware was able to distribute itself…

…all in a single HTTP request.

[Image: a visit to Naked Security’s main page, showing a plethora of GET requests.]

The malicious malware “probe” that drove the Code Red infection process consisted of:

  • Making a TCP connection to port 80 on a randomly chosen computer. Few websites used HTTPS, which runs on port 443, back then.
  • Sending a GET request that asked for an unusual URL. The URL looked something like this: GET /default.ida?NNNN[...]NNN%u9090%u6858%ucbd3%u7801%u9090%[...].
  • Following the request with an HTTP body that included further malware code. Code disguised as data is known as shellcode in the jargon.

The malware didn’t need to bother about whether the connection failed, or whether the computer really was a web server.

It didn’t need to download or install any additional files, make any registry changes, wait for a user to open or click anything (or even for any user to be logged in at all), write anything to disk, or take any notice of what the server sent back, if indeed it replied at all.

In modern jargon, this was a truly fileless 0-click remote code execution exploit in action.
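
If you’re wondering what spotting one of these probes might look like in code, here’s a hedged C sketch – our own illustration, not a real intrusion detection signature – that flags request lines matching the pattern described above:

/* Illustrative check (not a production IDS rule) for the telltale */
/* Code Red request pattern: GET /default.ida? followed by a long  */
/* run of filler characters (N in the original, X in a variant).   */
#include <stdbool.h>
#include <string.h>

bool looks_like_code_red(const char *request_line)
{
   const char *p = strstr(request_line, "GET /default.ida?");
   if (p == NULL) {
      return false;
   }
   p += strlen("GET /default.ida?");

   size_t filler = strspn(p, "NX");   /* count leading N/X padding   */
   return filler > 100;               /* long run = overflow attempt */
}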

In case you’re wondering, the long string NNNN[...]NNNN in the GET request was sized up to be just too big for the buffer awaiting it in the IIS server.

The resulting buffer overflow lined up the hexadecimal-encoded data in the fake URL with the return address on the CPU stack to which IIS would jump back after receiving the request.

A return address overwrite of this sort misdirects the flow of execution at the end of the current code subroutine (or function, in C jargon) to a memory location of the attacker’s choice.
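
To make the mechanism concrete, here’s a minimal C sketch of the bug class involved. (This is our own illustration of an unchecked stack buffer, not IIS’s actual code.)

/* A minimal sketch of the bug class Code Red exploited:       */
/* a fixed-size stack buffer filled with no length check.      */
#include <string.h>

void handle_request(const char *url)
{
   char buf[240];       /* lives on the stack, just below the  */
                        /* saved return address                */
   strcpy(buf, url);    /* no bounds check: a long enough URL  */
                        /* spills past buf and can rewrite the */
                        /* return address itself               */
}                       /* RET now jumps wherever the overflow
                           bytes point */

int main(void)
{
   char evil[400];
   memset(evil, 'N', sizeof(evil) - 1);  /* oversized filler,  */
   evil[sizeof(evil) - 1] = 0;           /* as in Code Red     */
   handle_request(evil);                 /* likely crashes, or */
   return 0;                             /* trips a canary on  */
}                                        /* a modern build     */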

The important hex digits above are %ucbd3%u7801, two 16-bit numbers that decode in memory to the 32-bit number 0x7801CBD3.
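
You can check that decoding for yourself: the %uXXXX escapes turn into 16-bit values that are stored little-endian on x86, so the low-order word comes first in memory. A quick sketch:

/* Why %ucbd3%u7801 decodes to 0x7801CBD3: x86 stores multi-byte */
/* values little-endian, so the low-order word comes first.      */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
   uint16_t words[2] = { 0xcbd3, 0x7801 };  /* as laid out in memory */
   uint32_t addr;

   memcpy(&addr, words, sizeof(addr));      /* reinterpret the bytes */
   printf("0x%08X\n", (unsigned)addr);      /* prints 0x7801CBD3 on  */
   return 0;                                /* little-endian CPUs    */
}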

Back in the day

Back in 2001, Windows didn’t support ASLR, short for Address Space Layout Randomisation, so any DLLs loaded on your computer would show up at the same address as they did on mine, and indeed on everyone else’s.

This predictability in memory arrangement made remote code exploits based on stack buffer overflows almost trivial to figure out in those days.

You just had to pick a return address for your stack overwrite that corresponded to a useful sequence of instructions anywhere in any Windows DLL that you would expect to be in memory already.

In this case, the attackers knew (or, more precisely, could assume with high probability) that memory address 0x7801CBD3 would be somewhere safely inside MSVCRT.DLL, the Microsoft Visual C Runtime library.

This DLL is used by many Windows programs, including IIS, so the virus writers could predict that the binary data at the given address would be the machine instruction CALL [EBX].
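
Hunting for that kind of instruction is simple pattern matching: on x86, CALL [EBX] encodes as the two bytes 0xFF 0x13, so the “gadget hunt” attackers did against MSVCRT.DLL in 2001 boils down to a byte search. Here’s a hedged sketch of the idea (our own illustration):

/* Illustrative only: scan a block of bytes for the x86 opcode */
/* sequence FF 13, which encodes CALL [EBX].                   */
#include <stdio.h>
#include <stddef.h>

const unsigned char *find_call_ebx(const unsigned char *p, size_t len)
{
   for (size_t i = 0; i + 1 < len; i++) {
      if (p[i] == 0xFF && p[i+1] == 0x13) {
         return p + i;    /* first CALL [EBX] found */
      }
   }
   return NULL;
}

int main(void)
{
   /* A made-up code blob containing one CALL [EBX] */
   static const unsigned char blob[] = { 0x90, 0x90, 0xFF, 0x13, 0xC3 };
   const unsigned char *hit = find_call_ebx(blob, sizeof(blob));
   if (hit) {
      printf("CALL [EBX] at offset %ld\n", (long)(hit - blob));
   }
   return 0;
}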

The crooks also knew, at that point, that the EBX register just happened to hold the address of another memory location on the stack that had also been modified in the buffer overflow, so that the CPU diverted execution directly back into the bytes submitted in the GET request.

Those bytes contained code that located the address in memory of the HTTP request body, which contained yet more attacker-supplied shellcode, and jumped there.

Back in 2001, Windows didn’t support DEP, short for Data Execution Prevention, so that any code shoved onto the stack could blindly be executed, even though the stack is intended to store data, not code.

This lack of execution prevention on the stack meant that remote code exploits didn’t need to find sneaky places in memory to locate themselves, and were therefore trivial to figure out.

What, no canary?

Also, few if any programs in those days bothered to use what’s known as a Stack Canary, or a stack-smashing protector, which acts as an early warning that a stack overflow has just occurred.

A stack canary is a randomly chosen value placed onto the stack when a subroutine gets called, so it occupies the memory bytes just in front of the return address.

This canary is checked by the called subroutine in the last line of code before it executes the RET instruction to return to the caller.

This helps to spot buffer overflows, because an attacker has to overwrite all the bytes leading up to the return address (including the canary) in order to overflow far enough to reach it.

The attacker can’t easily guess what value to put in the buffer overflow data at the point where the canary lives, so the canary will probably get overwritten by incorrect data and the canary check will therefore fail.

At this point, with very little additional code added to each protected function, the operating system can instantly terminate the offending program before the return address hijack takes place.
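
Here’s a hand-rolled C sketch of the canary idea. (Real compilers place the canary and emit the check for you, for example via clang or gcc’s -fstack-protector; the exact stack layout is up to the compiler, so treat this as conceptual, not as working protection.)

/* Conceptual sketch of a stack canary. This just shows the logic; */
/* in real life the compiler inserts equivalent code for you.      */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static unsigned long canary_secret;   /* randomised at startup */

void protected_copy(const char *input)
{
   unsigned long canary = canary_secret;  /* guard value near the */
   char buf[64];                          /* return address       */

   strcpy(buf, input);                    /* the risky copy       */

   if (canary != canary_secret) {         /* overflow reached it? */
      fprintf(stderr, "*** stack smashing detected ***\n");
      abort();                            /* die before RET       */
   }
}

int main(void)
{
   srand((unsigned)time(NULL));    /* demo-grade randomness only */
   canary_secret = (unsigned long)rand();
   protected_copy("short and safe");
   return 0;
}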

Guessing made easy

All this predictability made stack overflow exploits really easy to figure out 20 years ago:

  • The layout of common system functions in system DLLs could trivially be predicted. Attackers could easily and reliably find good return address overwrite candidates.
  • Shellcode could simply be placed directly on the stack as part of the buffer overflow. Stack shellcode was almost certain not to be blocked from running during an attack.
  • The stack layout could trivially be predicted, with no protective “canary” values. Attackers could attempt stack smashing attacks without worrying about getting noticed.

As we said in this week’s podcast:

In the Code Red days, […] if you could find a stack buffer overflow, it was often very, very little work, maybe half an afternoon’s work, to weaponise it, to use the paramilitary terminology that cybersecurity seems to like, and turn it into a workable exploit that could basically break in on any similar Windows system.

The silver lining, if there was one, is that Code Red wasn’t programmed to do much damage to the computers it infected.

The direct damage was limited to an attempt to deface your website with a “Hacked by Chinese” message, although there was no evidence that the malware itself came from China.

But Code Red didn’t go after anything else of yours, such as searching for your passwords or your trophy data, or scrambling all your files and demanding money to unlock them again, as you might expect today’s cybercrooks to do.

Once running, Code Red dedicated 99 parallel threads of execution to generating a list of new victim computers and spewing out HTTP requests to all of them.

That network-based aggression alone was how it came to cause as much disruption as it did.

What can we learn from Code Red?

Amusingly (well, it’s worth a smile with hindsight), Microsoft had patched against the Code Red buffer overflow exploit about a month before the attack.

But with many companies struggling to get patches done within a month even in 2021, you can imagine how few people had installed the vital patch in time back in 2001.

And anyway, even if you were patched, or weren’t running IIS at all, sooner or later you’d end up facing a barrage of infection attempts from servers that hadn’t been patched and were now actively infected (or, of course, re-infected).

That’s an important reminder that, in cybersecurity, an injury to one is an injury to all.

To mix a metaphor, one rotten apple really can spoil the barrel.

You don’t just need to patch for yourself, you need to do it for everyone else, as well:

If nothing else, just in terms of network traffic, [Code Red] was quite disastrous. […] A huge percentage of network traffic in the world was this jolly thing trying to spread from anyone who’d got infected to thousands, millions, of other computers. […] Even if you’d patched, you’d still get these packets battering on your web server, like a built in DDoS.

Another important lesson to remember is that uncontrolled cybersecurity “experiments” and irresponsible disclosure are something that we can all do without.

Again, from the podcast:

Q. So, when you’re analysing this, when it’s breaking out, what’s your mindset? Are you [thinking], “This is unique”? […] Are you impressed by it?

A. You have to have a little bit of grudging respect. “My gosh, they packed so much into so little.” At the same time, when you look at the code, you think, “You knew! When you said, ‘Let’s have 99 threads,’ when you said, ‘Let’s try and spread everywhere, all the time,’ you’d made your point already.”

To go and make it an extra 20 times is excessive. To make it another 1000 times is really going over the top. To go and do it another 1,000,000,000 times? Well, it’s hard to have much respect for that. And that’s all I’m saying.


What to do?

  • Patch early, patch often. Do it for the rest of us, even if you don’t feel you need to do it for yourself.
  • Be circumspect when you want to prove a cybersecurity point. Responsible disclosure will earn you not only hacker kudos (and often a financial reward as well), but also respect from the greater IT community for not putting them in harm’s way in the name of research.
  • Embrace cybersecurity improvements in your coding. The defences mentioned above (DEP, ASLR and stack protection) aren’t perfect, but they are examples of technologies whose adoption was accelerated by malware like Code Red, because they make exploits much harder for the crooks to figure out.

By the way, despite their simplicity and effectiveness, it took years for the protections mentioned above to become ubiquitous in Windows software products.

Don’t be slow to seek out improvements that push back even further against the crooks, even if it means revisiting legacy code you thought was “feature complete” by now.


LISTEN NOW: CODE RED IN THE NAKED SECURITY PODCAST

Code Red section starts at 12’50”.

You can also listen directly on Soundcloud.


Home delivery scams get smarter – don’t get caught out

We’ve written several times before about home delivery scams, where cybercriminals take advantage of our ever-increasing (and, in coronavirus times, often unavoidable) use of online ordering combined with to-the-doorstep delivery.

Over the past year or so, we’ve noticed what we must grudgingly admit is a gradual improvement in believability on the part of the scammers, with the criminals apparently improving their visual material, their spelling, their grammar and what you might call the general tenor of their fake websites.

The smarter crooks seem to have learned to cut out anything that might smell of drama or urgency, which tends to put potential victims on their guard, and to follow the KISS principle: keep it simple and straightforward.

Ironically, the more precisely that the criminals plagiarise legitimate content, and the fewer modifications they make to the workflow involved, the less effort they have to put in themselves to design and create the material they need for their fake websites…

…and the better those fake websites look and feel.

It’s almost as though the less work of their own they put in, the better and more believable their fraudulent schemes become.

Here’s an example sent in yesterday by a Naked Security reader [who has asked to remain anonymous], in the hope it might serve as a helpful “real world threat story” that you can use to educate and advise your own friends and family.

We hope that you’d spot this one easily, as our community-spirited reader did, because of three tell-tale signs that the crooks can’t easily avoid:

  • The URL you’re invited to click doesn’t look right, despite using HTTPS and taking you to a regular-looking dot-COM domain.
  • The workflow (data entry sequence) isn’t quite right, given that the crooks need to get you to follow a made-up process for re-delivery.
  • The personal data requested isn’t quite right, given that the crooks are trying to squeeze you for personal information that the courier company almost certainly would not need just to rearrange delivery.

Nevertheless, we’ll let the scam sequence speak for itself below, and we think you’ll agree that this one has far fewer mistakes and obvious telltale signs than many of the delivery scams we’ve described before.

DPD, for readers in North America, is a widely-known courier company in Europe and the UK, with a name and logo that is regularly seen on the streets.

Note that the crooks regularly rotate the courier brands that they rip off, including matching region-specific brands such as Canada Post and Royal Mail to the country they’re targeting in each specific scamming campaign.

Remember that when scammers send their phishing messages via SMS (a technique that is often referred to as smishing), they automatically know from the phone number prefix which country you’re in. Phone numbers generally provide a much better guide to your location and local language than email addresses, which often end with suffixes such as outlook.com or gmail.com no matter where you live.

The scam in words and pictures

The smishing (phishing-via-SMS) lure arrives on your phone, and looks innocent and self-explanatory enough.

The URL ought to be a warning, because it doesn’t look as though it has any connection with the courier company concerned, but it is at least a believable-looking .COM domain with a realistic-looking HTTPS address:

The landing page of the scam is believable enough, too, if you’re already inclined to trust the server name in the address bar.

There are none of the grammar or spelling mistakes that often give away less careful scammers:

The crooks have even copied a genuine-looking list of tracking details that opens up if you click the Where has my parcel been dropdown:

Here’s where the criminals need to introduce an unusual step in the re-delivery process in order to justify asking you for payment-related data later on.

Note that although you shouldn’t need to pay for re-delivery in cases like this, courier companies are sometimes required to ask you to pay additional fees such as import duties or taxes, so “pay before we deliver” is not unheard of.

(For what it’s worth, whenever we’ve received notes from delivery companies that additional fees need to be paid before they are allowed to release the item, there’s always been an obvious way for us to find our own way to the company’s payment portal, or to pay and collect at the depot in person.)

But the convenience of simply paying online, and the modest amount requested, could easily persuade you to let your guard down:

Once you’ve decided to attempt re-delivery, the scammers want you to confirm your location.

This is another warning sign, given that they should already know your address and phone number to have attempted delivery once and then messaged you about the delivery failure, but it’s easy to assume that this is a precaution to avoid a repeated mis-delivery:

These criminals handily offer “payment” by debit or credit card, PayPal or a PrePay account.

We went for the payment card option:

Then comes the sting for your full card details, including CVV (the secret three-digit code on the back used in online transactions):

Next, the crooks make yet another play for personal information, neatly simulating the Visa Secure dialog window (also known as Mastercard Identity Check, ClickSafe and other names) that most merchants in the UK use these days to allow your bank to do additional security validation.

Note that the crooks check for a genuine-looking credit card number in the webform you just filled in on the fake pay page, so they can use the first few digits (known as the BIN, short for bank identification number) to pop up a realistic-looking financial provider’s name in the window:

Scammers of this sort often struggle to find a good way to finish off a fake payment card transaction, given that they aren’t actually after the £1 or £3 they’re claiming to “charge” you.

The crooks don’t want to risk triggering a fraud warning right away by actually trying to complete the low value transaction themselves at the same time as you’re handing over the data.

Sometimes they produce a fake error message, which helps explain why no £1 or £3 charge ever goes through on your account, but leaves you with an unresolved “home delivery” issue that draws attention to the scam.

We’ve also seen cybercriminals redirect you, at the end of the scam, to a genuine page on the website of the company they’re pretending to be, in order to allay suspicion. (In cases like this, they typically wipe out your browsing history so you can’t easily go back and check what happened so far.)

The crooks in this scam, however, have taken the soft-and-gentle approach of simply pretending everything worked out fine, giving them a full day to evade suspicion until you wonder what happened to the delivery and take steps to find out.

They even advise you that the “payment” won’t be deducted from your account until delivery is complete, as an excuse to explain why no £1 or £3 transfer will appear on your account:

What to do?

  • Check all URLs carefully. Learn what server names to expect from the companies you do business with, and stick to those. Bookmark them for yourself in advance, based on trustworthy information such as URLs on printed statements or account signup forms.
  • Steer clear of links in messages or emails if you can. Legitimate companies often provide quick-to-click links to help you jump directly to useful web pages for online accounts such as utility bills. These links save you a few seconds because you don’t need to find and type in your own tracking code or account number by hand. But you’ll never get caught out by fake links if you never use in-message links at all! (See point 1 above.) Those few seconds are a small price to pay for not paying the large price of handing over your personal data to cybercriminals.
  • Report compromised cards or online accounts immediately. If you get as far as entering any banking data into a fake pay page and then realise it’s a scam, call your bank’s fraud reporting number at once. Look on the back of your actual card so you get the right phone number. (Remember that you don’t have to click [OK] or [Continue] for a web form to capture any partial data you have already entered.)
  • Check your bank and card statements. Don’t just look for payments that shouldn’t be there, but also keep an eye out for expected payments that don’t go through. Be alert for incoming funds you weren’t expecting, too, given that you can be called to account for any income that passes through your hands, even if you neither asked for it nor expected it.

And, of course, when it comes to personal data of any sort: if in doubt, don’t give it out.


Don’t get tricked by this crashtastic iPhone Wi-Fi hack!

About a month ago, a security researcher revealed what turned out to be a zero-day bug in Apple’s Wi-Fi software, apparently without meaning to:

Carl Schou, founder of an informal hacker collective known as Secret Club, “created originally as a gag between friends who are passionate about technical subjects”, seems to have been doing what bug-hunters do…

…and trying out a range of potentially risky values in the Wi-Fi settings on his iPhone.

Schou set up a Wi-Fi access point with a network name (ESSID) of %p%s%s%s%s%n, and then deliberately connected his iPhone to it in order to check for what are known as format string vulnerabilities.

This sort of vulnerability is considered somewhat old-school these days, but as we have had good reason to say many times on Naked Security, “never assume anything” in the world of cybersecurity, and it seems that Schou followed this advice, and unexpectedly unearthed a genuine bug.

What type is that data?

The name format string vulnerability comes from a standard, widely-used system function known as printf(), short for print formatted data, which you will find in almost every operating system.

In practice, programmers may not actually call the printf() function directly, but frequently write code that ends up calling either printf() itself or a related system function that works in the same way.

As the Linux/Unix documentation puts it:

NAME
       printf - format and print data

SYNOPSIS
       int printf(const char *format, ...);
       int fprintf(FILE *file, const char *format, ...);
       int dprintf(int fd, const char *format, ...);
       int sprintf(char *str, const char *format, ...);
       int snprintf(char *str, size_t size, const char *format, ...);

DESCRIPTION
       The functions in the printf() family produce output according to
       a format parameter. [...] All of these functions write the output
       under the control of a format string that specifies how subsequent
       arguments are converted for output.

The idea is simple: you use a so-called format string to specify how you want the output to look, and you then hand the function all the values that you want to print out.

In the format string, a percent (%) character acts as a placeholder for each of the values you want to print, typically followed by a letter to denote how to do the formatting.

For example, %s means print as a text string, %c says to print a single character, %d denotes print as a decimal integer, and %p (intended for debugging output) means print as a formatted memory address, also known as a pointer.

The format string is, by convention, the first argument to the printf() function, followed by the values you want to display, like this (the character 10 is a line feed, which moves the output to the next line):

---begin eg1.c---
extern int printf(const char *format, ...);
int main(void)
{
   printf("Text: %s Number: %d%c","hello",42,10);
   return 0;
}
---end eg1.c---

$ clang -Wall -o eg1 eg1.c
$ ./eg1
Text: hello Number: 42
$

You can clearly see how the second, third and fourth parameters passed to the printf() function are worked into the format specified by the first parameter before being displayed.

As you can probably imagine, you need to be careful not to mix up the parameters, because at a system level, strings (stored as pointers, i.e. raw addresses in memory) and integers (stored directly as a binary representation of the number) mustn’t be mixed up.

If you swap over the parameters “hello” and 42, then you will be asking the printf() function to treat the memory address where the string “hello” is stored as if it were a number, which is unlikely to make any sense.

Memory addresses are allocated either by the compiler or the operating system, so they don’t mean anything from an arithmetical point of view.

Worse still, you will be telling printf() to use the number 42 as a memory address to be accessed, which is unlikely either to be valid or safe.

A good compiler will do its best to protect you from this sort of mixup:

---begin eg2.c---
extern int printf(const char *format, ...);
int main(void)
{  /* We've swapped parameters 2 and 3 around */
   printf("Text: %s Number: %d%c",42,"hello",10);
   return 0;
}
---end eg2.c---

$ clang -Wall -o eg2 eg2.c
eg2.c line 4: warning: format is 'char *' but argument is 'int'
   printf("Text: %s Number: %d%c",42,"hello",10);
                 ~~               ^~
                 %d
eg2.c line 4: warning: format is 'int' but argument is 'char *'
   printf("Text: %s Number: %d%c",42,"hello",10);
                            ~~       ^~~~~~~
                            %s
2 warnings generated.
$

Where you really can come unstuck, of course, is when the compiler can’t help you because the format string isn’t known when the program is compiled, but instead gets generated at runtime and sent to printf(), which then has to trust that it is correctly constructed.

Even worse, and something that programmers should go out of their way to avoid, is when you have code that constructs the format string from untrusted input that an attacker could have chosen.

In that case, someone else gets to decide how printf() consumes its arguments, so a malicious user (or a determined researcher) could go out of their way to mix up the format string on purpose.

In this miniature demonstration, we simply allow the person running the program to put the format string on the command line, making it easy to see what happens when we mix things up intentionally:

---begin eg3.c---
extern int printf(const char *format, ...);
int main(int argc, char **argv)
{
   printf("Using unverified format string: %s\n",argv[1]);

   /* Let the user choose the format string at runtime. */
   /* NEVER DO THIS IN REAL LIFE. IT RARELY ENDS WELL.  */
   printf(argv[1],"hello",42,10);
   return 0;
}
---end eg3.c---

$ clang -Wall -o eg3 eg3.c
$ ./eg3 "Text: %s Number: %d%c"
Using unverified format string: Text: %s Number: %d%c
Text: hello Number: 42
$

So far, so good, because we have scrupulously copied the format string from the original code and respected the order of the parameters supplied.

But watch what happens if we swap round the %s and the %d in order to trick the code into using 42 as a memory address, not merely as a number:

$ ./eg3 "%d%s%c"
Using unverified format string: %d%s%c
Segmentation fault
$

A segmentation fault is archaic Unix jargon for a memory access violation, meaning that we crashed the program by deliberately mixing up pointers and integers.

Stress-testing the Wi-Fi dialog

With this in mind, you can imagine why Schou took the trouble to create a Wi-Fi access point with the unusual and unlikely name %p%s%s%s%s%n.

He wanted to see whether Apple’s networking software would choke on it when it tried to process the name.

Unfortunately, not only did something crash, it seems to have kept trying again, and again, and again…

…even after a reboot, because the system kept retrying the poisoned network over and over, preventing Schou from connecting to his usual access point instead:

As well-known researcher CodeColorist subsequently discovered by decompiling the offending code in iOS, the bug does indeed arise due to the untrusted ESSID name being mixed into the format string of a subsequent system call that relies on a printf()-like functionality.

A %s inserted at the wrong place in a network name could trigger what is essentially a network Denial of Service (DoS) attack against an innocent user’s iPhone.

(The ESSID doesn’t have to be an obviously weird string such as %p%s%s%s%s%n, but could be written to look as though any unwanted %s strings were simply typos or misplaced apostrophes, such as “Hacker%s Delight” or “Your %superstore”.)

One quick fix

One potential fix, quickly circulated, is to use Settings > Reset > Reset Network Settings to return to a “disconnected from everything” state, and then to reconnect manually to a proper access point that doesn’t trigger this bug.

Apparently, however, this fix doesn’t always work, depending on what other ESSIDs you have connected to before.

Amusingly, though it probably wasn’t funny to Schou himself, the first person to reveal this bug to the world found out just how much more serious it was than he first thought when he locked himself out of his own network, ultimately begging for help on Twitter:

Schou, who reported the bug to Apple once its more serious side was revealed, was able to get control back over his network settings without doing a total firmware reset, but not without considerable hassle.

Hack your own backup

Schou, it seems, had to back up his phone to his laptop, find the wireless network settings configuration file (com.apple.wifi.known-networks.plist) amongst the thousands of backed-up items, edit out all reference to hackily-named wireless networks, and then restore the modified data to his phone.

Sadly, Apple iPhone backups aren’t stored as regular copies of the files and directories on your device, so files can’t be located by name.

Instead, each file is stored using a name that’s a cryptographic hash of its content, and a large SQLite database called Manifest.db is used to keep track of where all the now-uselessly-named backup files came from on the original device.
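
If you ever need to do the same sort of digging, the lookup itself is just a SQLite query against Manifest.db. Here’s a hedged C sketch; the fileID/domain/relativePath column names follow the widely documented layout of the backup’s Files table, but check them against your own backup before relying on this:

/* Sketch: find a file's hashed on-disk name in an iPhone backup's */
/* Manifest.db. Build with: cc find_plist.c -lsqlite3              */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
   sqlite3 *db;
   sqlite3_stmt *stmt;

   if (sqlite3_open("Manifest.db", &db) != SQLITE_OK) {
      return 1;
   }

   const char *sql =
      "SELECT fileID, domain, relativePath FROM Files "
      "WHERE relativePath LIKE '%known-networks.plist%'";

   if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
      while (sqlite3_step(stmt) == SQLITE_ROW) {
         /* fileID is the hash used as the backup file's on-disk name */
         printf("%s  %s/%s\n",
                (const char *)sqlite3_column_text(stmt, 0),
                (const char *)sqlite3_column_text(stmt, 1),
                (const char *)sqlite3_column_text(stmt, 2));
      }
      sqlite3_finalize(stmt);
   }
   sqlite3_close(db);
   return 0;
}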

If you end up in Schou’s position (admittedly, he went looking for trouble on purpose, but you might end up there by mistake), take heart from the fact that researcher Alex Skalozub, also known as @pieceofsummer, has come up with a Python script that will do the recovery work for you.

His script will pore through a local iPhone backup (you can back up an iPhone to Mac or Windows computers), attempt to find the troublesome configuration file, and repair it automatically, although, as he wryly notes: “Not tested, use at your own risk.”

What to do?

  • Avoid Wi-Fi networks with percent signs in their names. They’ve probably been set up recently by hackers who think it’s funny to foist annoying and troublesome bugs onto other people.
  • Don’t put percent signs in your network name. In the unlikely event that you have such a network name, consider changing it as a favour to everyone else. Don’t set up poisoned access points as a joke. You’d just be showing off, and you won’t be helping anyone, not even yourself.
  • Learn how to make an iPhone backup. You don’t need to use iCloud – you can store it locally – and you can (indeed, you should) encrypt it. You never know when you might need it.
  • Sanitise your inputs. If you’re a programmer, always be aware of risky characters in any data you consume. Never use untrusted data directly to create format strings, filenames, HTML output, commands you intend to run, and so on.
  • Never assume. Just like the Peloton Bike hackers who decided to start with the most obvious and unlikely way in, even experienced programmers sometimes make old-school mistakes.

As a cybersecurity aside, we recommend using the Settings > Reset > Reset Network Settings feature from time to time anyway, simply to reduce the number of known networks that your device will connect to automatically.

You might be surprised how many known networks your phone or laptop has in its list, even ones that you last connected to months or years ago and that might be under new ownership or have adopted new terms and conditions of use since you last used them.

