Category Archives: News

“Gucci Master” business email scammer Hushpuppi gets 11 years

He was sentenced under his real-life name of Ramon, but back in his boastful days of pretending to be a seriously successful real estate agent based in Dubai, you may have seen and heard of him as Ray, or, to give him his full nickname, Ray Hushpuppi.

To be clear, Ramon Olorunwa Abbas wasn’t pretending to have lots of money, but he was pretending to have acquired his money by legitimate means.

His now-shuttered Instagram account was awash with show-off photos parading the extent of his wealth, including fancy cars (see featured image at top of article), luxury travel by private jet, and high-ticket shopping trips:

Unfortunately for Abbas, who allegedly referred to himself on Snapchat as The Billionaire Gucci Master!!!, and fortunately for the numerous victims of his criminality, the photos above were featured in a US Department of Justice charge sheet signed in June 2020 by FBI Special Agent Andrew Innocenti and approved by US Magistrate Judge Rozella Oliver:

Grabbed and nabbed

Abbas was charged with the crime of Conspiracy to Engage in Money Laundering, quickly arrested by the Dubai police, and extradited to the US where he has been behind bars ever since.

As we wrote back in 2020:

Maximum prison sentences are rarely handed out. But if Abbas gets convicted of conspiracy to engage in money laundering, and if he happens to be the unlucky exception to this general rule, he’ll be looking at a maximum sentence of 20 years in federal prison.

Well, more than two years later, Hushpuppi has pleaded guilty to the charge and been sentenced, and although he didn’t get the maximum prison term, United States District Judge Otis Wright gave him 135 months, which is just over 11 years. (We assume this will include the time that Hushpuppi has already spent in custody.)

He’s also required to pay back more than $1.7m in restitution to two specific victims whom Abbas admitted to defrauding as part of his plea agreement: $922,857 to a law firm in New York, and $809,983 to a businessperson in Qatar.

The original charge sheet setting out that Abbas indeed had a case to answer, and should therefore be arrested and brought to the US, makes fascinating reading.

It includes extracts from Hushpuppi’s correspondence with various co-conspirators, including a money launderer from Canada called Ghaleb Alaumary, who was sentenced to 140 months (11 years 8 months) in a US prison last year, and ordered to repay a whopping $30m.

Crooks versus the banks

The conversations recorded by the investigating officer give an intriguing insight into how so-called Business Email Compromise (BEC) criminals try to sneak past the fraud prevention measures that the banks have put in place.

Here, you can see them talking to each other about transfer problems, and offering advice on those banks or countries that should be avoided because the transfers will trigger warnings:

I sent 1.1m pound to acc they said open ben in uk money landed and now they asking questions

An open ben, or “open beneficiary”, is explained by the investigator as “an account where a different business account name can be substituted to help in deceiving the victim into sending funds.”

Bro I can’t keep collecting houses n not give them a feed back n keep asking for more. This things cost a lot of money now to open.

A house in this context is BEC slang for “a bank account used to receive proceeds of a fraudulent scheme”, because it provides a temporary home for funds.

Presumably, the money launderer’s contacts – other cogs in the cybercrime gearbox who send out so-called money mules to open accounts that are later used for fraud – were pushing back against the “cost” of going through face-to-face KYC (know your customer) checks to open accounts that ended up getting linked to criminality right away.

Brother I can’t send from uk to Mexico they keep finding out, but uk 2 uk these guy keep paying

Here, the money launderer is suggesting that fraudulent transfers kept inside the UK are likely to go through, whereas trying to get money out of the country is likely to provoke more detailed checks and trigger a block.

BEC explained

As you probably know, BEC is an umbrella term used to describe email-driven cybercrime where electronic messages (which often look perfectly genuine because they’re sent from a compromised account inside your own company) are used to persuade someone in the finance department to change the recipient’s account details just before a major payment is due.

BEC criminals can target the compromised company directly, by tricking someone in your own Accounts Payable department into thinking that a supplier just swapped banks and is requesting their forthcoming payments to be made to a new account.

Worse still, BEC crooks can target your customers, by tricking their Accounts Payable staff, under cover of fraudulent emails that really do originate from your company, into believing that your company has switched banks and requires future debtor payments to go to a new account.

As you can imagine, customers defrauded in this way might not realise that their “successful” payments have been going astray (assuming that the transfers to the fraudulent “house” don’t get spotted by the bank)…

…until your own accounts department notices they’re apparently behind on payments and sets the debt collection team onto them.

That sort of confrontation is almost certain to lead to a doubly-angry customer, and the resulting data breach publicity really is something you could do without, alongside the likely need to make good your customer’s loss if the bank can’t claw back the funds.

What to do?

We know that banks are able to head off significant amounts of BEC-style fraud, but that plenty of the stolen money nevertheless ends up in the hands of scammers, because the DOJ remarks that:

“By his own admission, during just an 18-month period defendant conspired to launder over $300 million,” prosecutors wrote in a sentencing memorandum. “While much of this intended loss did not ultimately materialize, [Abbas’s] willingness and ability to participate in large-scale money laundering highlights the seriousness of his criminal conduct.”

Here are some tips you can follow to reduce the risk of getting scammed by the Hushpuppis of the world:

  • Turn on two-factor authentication (2FA) so that a password alone is not enough to access your accounts, especially email. Remember that your email account is probably the key to resetting passwords on many of your other accounts, including ones you use at work and at home.
  • Look for features in your service providers’ products that can warn you when anomalies occur. XDR (extended detection and response) tools help you to search for logins that come from unusual places, or to track down network and file activity that doesn’t fit your usual pattern. This can help you flush out crooks who have wriggled into your network or your email account. Talk to your bank about how they can add another layer of scam detection, too.
  • Enforce a two-step (or more) process for making significant changes to accounts or services, especially changes in details for outgoing payments. Don’t just rely on simple “manager approval” click-throughs – implement independent checks by different teams, working in separate departments, looking for different indicators of scamminess.
  • If you see anything that doesn’t look right in an email demanding your attention, assume you are being scammed. Crooks who try to impersonate your CEO or CFO might not make any mistakes, but often they do. Don’t let the crooks get away with slip-ups such as spelling mistakes or unlikely errors that ought to give them away – one Naked Security commenter reported catching a scammer red-handed simply because the crook used an emoji where they felt certain that the true owner of the email account would have spelled out the meaning in full. As carpenters like to say, “Measure twice, cut once.”
  • If you want to check details with another company based on an email, never rely on contact data provided in the email, especially when money is involved. Find your own way to get hold of the other party using a different form of communication, for example using a phone number on printed documents that you already have.
  • Consider using internal training tools to teach your staff about scams. Tools such as Sophos Phish Threat can test staff behaviour safely so that they can make their mistakes when it doesn’t actually matter, rather than when the crooks come calling.


Dangerous SIM-swap lockscreen bypass – update Android now!

A bug bounty hunter called David Schütz has just published a detailed report describing how he crossed swords with Google for several months over what he considered a dangerous Android security hole.

According to Schütz, he stumbled on a total Android lockscreen bypass bug entirely by accident in June 2022, under real-life conditions that could easily have happened to anyone.

In other words, it was reasonable to assume that other people might find out about the flaw without deliberately setting out to look for bugs, making its discovery and public disclosure (or private abuse) as a zero-day hole much more likely than usual.

Unfortunately, it didn’t get patched until November 2022, which is why he’s only disclosed it now.

A serendipitous battery outage

Simply put, he found the bug because he forgot to turn off or to charge his phone before setting off on a lengthy journey, leaving the device to run low on juice unnoticed while he was on the road.

According to Schütz, he was rushing to send some messages after getting home (we’re guessing he’d been on a plane) with the tiny amount of power still left in the battery…

…when the phone died.

We’ve all been there, scrabbling for a charger or a backup battery pack to get the phone rebooted to let people know we have arrived safely, are waiting at baggage reclaim, have reached the train station, expect to get home in 45 minutes, could stop at the shops if anyone urgently needs anything, or whatever we’ve got to say.

And we’ve all struggled with passwords and PINs when we’re in a rush, especially if they’re codes that we rarely use and never developed “muscle memory” for typing in.

In Schütz’s case, it was the humble PIN on his SIM card that stumped him, and because SIM PINs can be as short as four digits, they’re protected by a hardware lockout that limits you to three guesses at most. (We’ve been there, done that, locked ourselves out.)

After that, you need to enter a 10-digit “master PIN” known as the PUK, short for personal unblocking key, which is usually printed inside the packaging in which the SIM gets sold, making it largely tamper-proof.

And to protect against PUK guessing attacks, the SIM automatically fries itself after 10 wrong attempts, and needs to be replaced, which typically means fronting up to a mobile phone shop with identification.
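
By way of illustration, here is a minimal Python sketch of the lockout rules described above (our own toy model, not real SIM firmware): three PIN guesses, then the PUK takes over, and ten wrong PUK guesses block the card for good.

    # Toy model of the SIM lockout rules described above -- not real SIM firmware.
    class SimCard:
        def __init__(self, pin: str, puk: str):
            self._pin, self._puk = pin, puk
            self.pin_tries_left, self.puk_tries_left = 3, 10
            self.blocked = False          # True once the PUK counter is exhausted

        def try_pin(self, guess: str) -> bool:
            if self.blocked or self.pin_tries_left == 0:
                return False              # the PUK (or a new SIM) is needed now
            if guess == self._pin:
                self.pin_tries_left = 3   # correct guess resets the counter
                return True
            self.pin_tries_left -= 1
            return False

        def try_puk(self, guess: str, new_pin: str) -> bool:
            if self.blocked:
                return False              # too late: the SIM must be replaced
            if guess == self._puk:
                self._pin, self.pin_tries_left, self.puk_tries_left = new_pin, 3, 10
                return True
            self.puk_tries_left -= 1
            if self.puk_tries_left == 0:
                self.blocked = True       # the SIM "fries itself"
            return False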

What did I do with that packaging?

Fortunately, because he wouldn’t have found the bug without it, Schütz located the original SIM packaging stashed somewhere in a cupboard, scratched off the protective strip that obscures the PUK, and typed it in.

At this point, given that he was in the process of starting up the phone after it ran out of power, he should have seen the phone’s lockscreen demanding that he type in the phone’s unlock code…

…but, instead, he realised he was at the wrong sort of lockscreen, because it was offering him a chance to unlock the device using only his fingerprint.

That’s only supposed to happen if your phone locks while in regular use, and isn’t supposed to happen after a power-off-and-reboot, when a full passcode reauthentication (or one of those swipe-to-unlock “pattern codes”) should be enforced.

Is there really a “lock” in your lockscreen?

As you probably know from the many times we’ve written about lockscreen bugs over the years on Naked Security, the problem with the word “lock” in lockscreen is that it’s simply not a good metaphor to represent just how complex the code is that manages the process of “locking” and “unlocking” modern phones.

A modern mobile lockscreen is a bit like a house front door that has a decent quality deadbolt lock fitted…

…but also has a letterbox (mail slot), glass panels to let in light, a cat flap, a loidable spring lock that you’ve learned to rely on because the deadbolt is a bit of a hassle, and an external wireless doorbell/security camera that’s easy to steal even though it contains your Wi-Fi password in plaintext and the last 60 minutes of video footage it recorded.

Oh, and, in some cases, even a secure-looking front door will have the keys “hidden” under the doormat anyway, which is pretty much the situation that Schütz found himself in on his Android phone.

A map of twisty passageways

Modern phone lockscreens aren’t so much about locking your phone as restricting your apps to limited modes of operation.

This typically leaves you, and your apps, with lockscreen access to a plentiful array of “special case” features, such as activating the camera without unlocking, or popping up a curated set of notification messages or email subject lines where anyone could see them without the passcode.

What Schütz had come across, in a perfectly unexceptionable sequence of operations, was a fault in what’s known in the jargon as the lockscreen state machine.

A state machine is a sort of graph, or map, of the conditions that a program can be in, along with the legal ways that the program can move from one state to another, such as a network connection switching from “listening” to “connected”, and then from “connected” to “verified”, or a phone screen switching from “locked” either to “unlockable with fingerprint” or to “unlockable but only with a passcode”.
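
If that sounds abstract, here is a toy lockscreen state machine in Python (the state names are invented for illustration, and are not Android’s real ones): the map of legal moves is the whole story, and anything not on the map is supposed to be refused.

    # A toy lockscreen state machine -- invented states, purely for illustration.
    LEGAL_TRANSITIONS = {
        "LOCKED":             {"FINGERPRINT_PROMPT", "PASSCODE_PROMPT"},
        "FINGERPRINT_PROMPT": {"UNLOCKED", "PASSCODE_PROMPT"},
        "PASSCODE_PROMPT":    {"UNLOCKED", "LOCKED"},
        "UNLOCKED":           {"LOCKED"},
    }

    def transition(current: str, requested: str) -> str:
        # Any move not explicitly on the map is refused; the whole point of a
        # state machine is that there are no secret passageways.
        if requested not in LEGAL_TRANSITIONS[current]:
            raise ValueError(f"illegal transition: {current} -> {requested}")
        return requested

    state = "LOCKED"
    state = transition(state, "PASSCODE_PROMPT")   # allowed
    state = transition(state, "UNLOCKED")          # allowed: passcode accepted
    # transition("LOCKED", "UNLOCKED") would raise: no direct shortcut exists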

As you can imagine, state machines for complex tasks quickly get complicated themselves, and the map of different legal paths from one state to another can end up full of twists, and turns…

…and, sometimes, exotic secret passageways that no one noticed during testing.

Indeed, Schütz was able to parlay his inadvertent PUK discovery into a generic lockscreen bypass by which anyone who picked up (or stole, or otherwise had brief access to) a locked Android device could trick it into the unlocked state armed with nothing more than a new SIM card of their own and a paper clip.

In case you’re wondering, the paper clip is to eject the SIM already in the phone so that you can insert the new SIM and trick the phone into the “I need to request the PIN for this new SIM for security reasons” state. Schütz admits that when he went to Google’s offices to demonstrate the hack, no one had a proper SIM ejector, so they first tried a needle, with which Schütz managed to stab himself, before succeeding with a borrowed earring. We suspect that poking the needle in point first didn’t work (it’s hard to hit the ejector pin with a tiny point) so he decided to risk using it point outwards while “being really careful”, thus turning a hacking attempt into a literal hack. (We’ve been there, done that, pronged ourselves in the fingertip.)

Gaming the system with a new SIM

Given that the attacker knows both the PIN and the PUK of the new SIM, they can deliberately get the PIN wrong three times and then immediately get the PUK right, thus deliberately forcing the lockscreen state machine into the insecure condition that Schütz discovered accidentally.

With the right timing, Schütz found that he could not only land on the fingerprint unlock page when it wasn’t supposed to appear, but also trick the phone into accepting the successful PUK unlock as a signal to dismiss the fingerprint screen and “validate” the entire unlock process as if he’d typed in the phone’s full lock code.

Unlock bypass!

Unfortunately, much of Schütz’s article describes the length of time that Google took to react to and to fix this vulnerability, even after the company’s own engineers had decided that the bug was indeed repeatable and exploitable.

As Schütz himself put it:

This was the most impactful vulnerability that I have found yet, and it crossed a line for me where I really started to worry about the fix timeline and even just about keeping it as a “secret” myself. I might be overreacting, but I mean not so long ago the FBI was fighting with Apple for almost the same thing.

Disclosure delays

Given Google’s attitude to bug disclosures, with its own Project Zero team notoriously firm about the need to set strict disclosure times and stick to them, you might have expected the company to stick to its 90-days-plus-14-extra-in-special-cases rules.

But, according to Schütz, Google couldn’t manage it in this case.

Apparently, he’d agreed a date in October 2022 by which he planned to disclose the bug publicly, as he’s now done, which seems like plenty of time for a bug he discovered back in June 2022.

But Google missed that October deadline.

The patch for the flaw, designated bug number CVE-2022-20465, finally appeared in Android’s November 2022 security patches, dated 2022-11-05, with Google describing the fix as: “Do not dismiss keyguard after SIM PUK unlock.”

In technical terms, the bug was what’s known as a race condition, where the part of the operating system that was watching the PUK entry process to keep track of the “is it safe to unlock the SIM now?” state ended up producing a success signal that trumped the code that was simultaneously keeping track of “is it safe to unlock the entire device?”
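
To illustrate the general idea (a deliberately simplified toy in Python, not Android’s actual code), imagine a stack of security screens and a delayed success callback that dismisses whatever happens to be on top by the time it finally runs:

    import threading, time

    # Toy illustration only -- a delayed "success" callback dismisses whatever
    # security screen happens to be on top when it finally runs.
    screen_stack = ["device_keyguard"]      # protects the whole phone
    lock = threading.Lock()

    def on_sim_puk_success():
        time.sleep(0.1)                     # deliberately lose the race
        with lock:
            print("dismissed:", screen_stack.pop())   # pops the keyguard, not the PUK dialog

    with lock:
        screen_stack.append("sim_puk_dialog")

    worker = threading.Thread(target=on_sim_puk_success)
    worker.start()

    with lock:
        screen_stack.pop()                  # the PUK dialog closes normally first

    worker.join()
    print("still protected by:", screen_stack or "nothing!")

Run it and the callback fired by the SIM-unlock code ends up popping the device keyguard instead of the PUK dialog it thought it was dismissing, leaving nothing protecting the phone at all.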

Still, Schütz is now significantly richer thanks to Google’s bug bounty payout (his report suggests that he was hoping for $100,000, but he had to settle for $70,000 in the end).

And he did hold off on disclosing the bug after the 15 October 2022 deadline, accepting that discretion is sometimes the better part of valour, saying:

I [was] too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway. I decided to wait for the fix.

What to do?

Check that your Android is up to date: Settings > Security > Security update > Check for update.

Note that when we visited the Security update screen, having not used our Pixel phone for a while, Android boldly proclaimed Your system is up to date, showing that it had checked automatically a minute or so earlier, but still telling us we were on the October 5, 2022 security update.

We forced a new update check manually and were immediately told Preparing system update…, followed by a short download, a lengthy preparatory stage, and then a reboot request.

After rebooting we had reached the November 5, 2022 patch level.

We then went back and did one more Check for update to confirm that there were no fixes still outstanding.


(The screenshots in the original article showed the process step by step: using Settings > Security > Security update to reach the force-a-download page; forcing a Check for update even though the date reported seemed wrong; the update that was indeed waiting to install; choosing Resume to proceed at once instead of waiting; the lengthy update process itself; and one final Check for update to confirm we were there.)


S3 Ep108: You hid THREE BILLION dollars in a popcorn tin?

THREE BILLION DOLLARS IN A POPCORN TIN?

Radio waves so mysterious they’re known only as X-Rays. Were there six 0-days or only four? The cops who found $3 billion in a popcorn tin. Blue badge confusion. When URL scanning goes wrong. Tracking down every last unpatched file. Why even unlikely exploits can earn “high” severity levels.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Twitter scams, Patch Tuesday, and criminals hacking criminals.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I’m Doug.

He is Paul Ducklin.

Paul, how do you do today?


DUCK.  Very well, Doug.

We didn’t have the lunar eclipse here in England, but I did get a brief glimpse of the *full* full moon through a tiny gap in the clouds that emerged as the only hole in the whole cloud layer the moment I went outside to have a look!

But we didn’t have that orange moon like you guys did in Massachusetts.


DOUG.  Let us begin the show with This Week in Tech History… this goes way back.

This week, on 08 November 1895, German physics professor Wilhelm Röntgen stumbled upon an as-yet-undiscovered form of radiation, which prompted him to refer to said radiation simply as “X”.

As in X-ray.

How about that… the accidental discovery of X-rays?


DUCK.  Quite amazing.

I remember my mum telling me: in the 1950s (must have been the same in the States), apparently, in shoe shops…


DOUG.  [KNOWS WHAT’S COMING] Yes! [LAUGHS]


DUCK.  People would take their kids in… you’d stand in this machine, put on the shoes and instead of just saying, “Walk around, are they tight? Do they pinch?”, you stood in an X-ray machine, which just basically bathed you in X-ray radiation and took a live photo and said, “Oh yes, they’re the right size.”


DOUG.  Yes, simpler times. A little dangerous, but…


DUCK.  A LITTLE DANGEROUS?

Can you imagine the people who worked in the shoe shops?

They must have been bathing in X-rays all the time.


DOUG.  Absolutely… well, we’re a little safer today.

And on the subject of safety, the first Tuesday of the month is Microsoft’s Patch Tuesday.

So what did we learn this Patch Tuesday here in November 2022?

Exchange 0-days fixed (at last) – plus 4 brand new Patch Tuesday 0-days!


DUCK.  Well, the super-exciting thing, Doug, is that technically Patch Tuesday fixed not one, not two, not three… but *four* zero-days.

But actually the patches you could get for Microsoft products on Tuesday fixed *six* zero-days.

Remember those Exchange zero-days that were notoriously not patched last Patch Tuesday: CVE-2022-41040 and CVE-2022-41082, what became known as ProxyNotShell?

S3 Ep102.5: “ProxyNotShell” Exchange bugs – an expert speaks [Audio + Text]

Well, those did get fixed, but in essentially a separate “sideline” to Patch Tuesday: the Exchange November 2022 SU, or Security Update, that just says:

The November 2022 Exchange Software Updates contain fixes for the zero-day vulnerabilities reported publicly on 29 September 2022.

All you have to do is upgrade Exchange.

Gee, thanks Microsoft… I think we knew that that’s what we were going to have to do when the patches finally came out!

So, they *are* out and there are two zero-days fixed, but they’re not new ones, and they’re not technically in the “Patch Tuesday” part.

There, we have four other zero-days fixed.

And if you believe in prioritising patches, then obviously those are the ones you want to deal with first, because somebody already knows how to do bad things with them.

Those range from a security bypass, to two elevations-of-privilege, and one remote code execution.

But there are more than 60 patches in total, and if you look at the overall list of products and Windows components affected, there’s an enormous list, as usual, that takes in every Windows component/product you’ve heard of, and many you probably haven’t.

Microsoft patches 62 vulnerabilities, including Kerberos, and Mark of the Web, and Exchange…sort of

So, as always: Don’t delay/Do it today, Douglas!


DOUG.  Very good.

Let us now talk about quite a delay…

You have a very interesting story about the Silk Road drug market, and a reminder that criminals stealing from criminals is still a crime, even if it’s some ten years later that you actually get caught for it.

Silk Road drugs market hacker pleads guilty, faces 20 years inside


DUCK.  Yes, even people who are quite new to cybersecurity or to going online will probably have heard of “Silk Road”, perhaps the first well-known, bigtime, widespread, widely-used dark web marketplace where basically anything goes.

So, that all went up in flames in 2013.

Because the founder, originally known only as Dread Pirate Roberts, but ultimately revealed to be Ross Ulbricht… his poor operational security was enough to tie the activities to him.

Silk Road founder Ross Ulbricht gets life without parole

Not only was his operational security not very good, it seems that in late 2012, they had (can you believe it, Doug?) a cryptocurrency payment processing blunder…


DOUG.  [GASPS IN MOCK HORROR]


DUCK.  …of the type that we have seen repeated many times since, that revolved around not quite doing proper double-entry accounting, where for each debit there’s a corresponding credit, and vice versa.

And this attacker discovered that, if you put some money into your account and then very quickly paid it out to other accounts, you could actually pay out the same bitcoins five times (or even more) before the system realised that the first debit had gone through.

So you could basically put in some money and then just withdraw it over and over and over again, and get a bigger stash…

…and then you could go back into what you might call a “cryptocurrency milking loop”.

And it’s estimated… the investigators weren’t sure, that he started off with between 200 and 2000 bitcoins of his own (whether he bought them or mined them, we don’t know), and he very, very quickly turned them into, wait for it, Doug: 50,000 bitcoins!


DOUG.  Wow!


DUCK.  More than 50,000 bitcoins, just like that.

And then, obviously figuring that someone was going to notice, he cut and ran while he was ahead with 50,000 bitcoins…

…each worth an amazing $12, up from fractions of a cent just a few years before. [LAUGHS]

So he made off with $600,000, just like that, Doug.

[DRAMATIC PAUSE]

Nine years later…

[LAUGHTER]

…almost *exactly* nine years later, when he was busted and his home was raided under a warrant, the cops went searching and found a pile of blankets in his closet, under which was hidden a popcorn tin.

Strange place to keep your popcorn.

Inside which was a sort-of computerised cold wallet.

Inside which were a large proportion of said bitcoins!

At the time he was busted, bitcoins were something north of $65,535 (or 2^16 - 1) each.

They’d gone up well over a thousand fold in the interim.

So, at the time, it was the biggest cryptocoin bust ever!

Nine years later, having apparently been unable to dispose of his ill-gotten gains, maybe afraid that even if he tried to shove them in a tumbler, all fingers would point back to him…

…he’s had all this $3 billion worth of bitcoins that have been sitting in a popcorn tin for nine years!


DOUG.  My goodness.


DUCK.  So, having sat on this scary treasure for all those years, wondering if he was going to get caught, now he’s left wondering, “How long will I go to prison for?”

And the maximum sentence for the charge that he faces?

20 years, Doug.


DOUG.  Another interesting story going on right now. If you’ve been on Twitter lately, you will know that there’s a lot of activity, to say it diplomatically…


DUCK.  [LOW-TO-MEDIUM QUALITY BOB DYLAN IMPERSONATION] Well, the times, they are a-changing.


DOUG.  …including at one point the idea of charging $20 for a verified blue check, which, of course, almost immediately prompted some scams.

Twitter Blue Badge email scams – Don’t fall for them!


DUCK.  It’s just a reminder, Doug, that whenever there’s something that has attracted a lot of interest, the crooks will surely follow.

And the premise of this was, “Hey, why not get in early? If you’ve already got a blue mark, guess what? You won’t have to pay the $19.99 a month if you preregister. We’ll let you keep it.”

We know that that wasn’t Elon Musk’s idea, as he stated it, but it’s the kind of thing that many businesses do, don’t they?

Lots of companies will give you some kind of benefit if you stay with the service.

So it’s not entirely unbelievable.

As you say… what did you give it?

B-minus, was it?


DOUG.  I give the initial email a B-minus… you could perhaps be tricked if you read it quickly, but there are some grammar issues; stuff doesn’t feel right.

And then once you click through, I’d give the landing pages C-minus.

That gets even dicier.


DUCK.  That’s somewhere between 5/10 and 6/10?


DOUG.  Yes, let’s say that.

And we do have some advice, so that even if it is an A-plus scam, it won’t matter because you’ll be able to thwart it anyway!

Starting with my personal favorite: Use a password manager.

A password manager solves a lot of problems when it comes to scams.


DUCK.  It does.

A password manager doesn’t have any human-like intelligence that can be misled by the fact that the pretty picture is right, or the logo is perfect, or the web form is in exactly the right position on the screen with exactly the same font, so you recognise it.

All it knows is: “Never heard of this site before.”


DOUG.  And of course, turn on 2FA if you can.

Always add a second factor of authentication, if possible.


DUCK.  Of course, that doesn’t necessarily protect you from yourself.

If you go to a fake site and you’ve decided, “Hey, it’s pixel-perfect, it must be the real deal”, and you are determined to log in, and you’ve already put in your username and your password, and then it asks you to go through the 2FA process…

…you’re very likely to do that.

However, it gives you that little bit of time to do the “Stop. Think. Connect.” thing, and say to yourself, “Hang on, what am I doing here?”

So, in a way, the little bit of delay that 2FA introduces can actually be not only very little hassle, but also a way of actually improving your cybersecurity workflow… by introducing just enough of a speed bump that you’re inclined to take cybersecurity that little bit more seriously.

So I don’t see what the downside is, really.


DOUG.  And of course, another strategy that’s tough for a lot of people to abide by, but is very effective, is to avoid login links and action buttons in email.

So if you get an email, don’t just click the button… go to the site itself and you’ll be able to tell pretty quickly whether that email was legit or not.


DUCK.  Basically, if you can’t totally trust the initial correspondence, then you can’t rely on any details in it, whether that’s the link you’re going to click, the phone number you’re going to call, the email address you’re going to contact them on, the Instagram account you’re going to send DMs to, whatever it is.

Don’t use what’s in the email… find your own way there, and you will short circuit a lot of scams of this sort.


DOUG.  And finally, last but not least… this should be common sense, but it’s not: Never ask the sender of an uncertain message if they’re legitimate.

Don’t reply and say, “Hey, are you really Twitter?”


DUCK.  Yes, you’re quite right.

Because my previous advice, “Don’t rely on the information in the email”, such as don’t phone their phone number… some people are tempted to go, “Well, I’ll call the phone number and see if it really is them. [IRONIC] Because, obviously, if the crooks answer, they’re going to give their real names.”


DOUG.  As we always say: If in doubt/Don’t give it out.

And this is a good cautionary tale, this next story: when security scans, which are legitimate security tools, reveal more than they should, what happens then?

Public URL scanning tools – when security leads to insecurity


DUCK.  This is a well-known researcher by the name of Fabian Bräunlein in Germany… we’ve featured him a couple of times before.

He’s back with a detailed report entitled urlscan.io‘s SOAR spot: chatty security tools leaking private data.

And in this case, it’s urlscan.io, a website that you can use for free (or as a paid service) where you can submit a URL, or a domain name, or an IP number, or whatever it is, and you can look up, “What does the community know about this?”

And it will reveal the full URL that other people asked about.

And this is not just things that people copy-and-paste of their own choice.

Sometimes, their email, for example, may be going through a third-party filtering tool that itself extracts URLs, calls home to urlscan.io, does the search, gets the result and uses that to decide whether to junk, spam-block, or pass through the message.

And that means that sometimes, if the URL included secret or semi-secret data, personally identifiable information, then other people who just happened to search for the right domain name within a short period afterwards would see all the URLs that were searched for, including things that may be in the URL.

You know, like blahblah?username=doug&passwordresetcode= followed by a long string of hexadecimal characters, and so on.

And Bräunlein came up with a fascinating list of the kind of URLs, particularly ones that may appear in emails, that may routinely get sent off to a third party for filtering and then get indexed for searching.

The kind of emails that he figured were definitely exploitable included, but were not limited to: account creation links; Amazon gift delivery links; API keys; DocuSign signing requests; Dropbox file transfers; package tracking; password resets; PayPal invoices; Google Drive document sharing; SharePoint invites; and newsletter unsubscribe links.

Not pointing fingers there at SharePoint, Google Drive, PayPal, etc.

Those were just examples of URLs that he came across which were potentially exploitable in this way.
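
(To make that concrete, here is our own minimal Python sketch, not code from Bräunlein’s report, of the sort of redaction that helps: strip the query string, which is where reset codes, API keys and unsubscribe tokens usually live, before a URL ever leaves your network for a public scanning service. The URL below is an invented example.)

    # Minimal sketch: drop the query string and fragment before submitting a
    # URL to any public scanning service, so one-time tokens stay private.
    from urllib.parse import urlsplit, urlunsplit

    def redact_for_scanning(url: str) -> str:
        parts = urlsplit(url)
        # Keep scheme, host and path; discard query and fragment entirely.
        return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

    print(redact_for_scanning(
        "https://example.com/reset?username=doug&passwordresetcode=deadbeef1234"
    ))
    # -> https://example.com/reset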


DOUG.  We’ve got some advice at the end of that article, which boils down to: read Bräunlein’s report; read urlscan.io’s blog post; do a code review of your own, if you have code that does online security lookups; learn what privacy features exist for online submissions; and, importantly, learn how to report rogue data to an online service if you see it.

I noticed there are three… sort-of limericks?

Very creative mini-poems at the end of this article…


DUCK.  [MOCK HORROR] No, they’re not limericks! Limericks have a very formal five-line structure…


DOUG.  [LAUGHING] I’m so sorry. That’s true!


DUCK.  …for both meter and rhyme.

Very structured, Doug!


DOUG.  I’m so sorry, so true. [LAUGHS]


DUCK.  This is just doggerel. [LAUGHTER]

Once again: If in doubt/Don’t give it out.

And if you’re collecting data: If it shouldn’t be in/Stick it straight in the bin.

And if you’re writing code that calls public APIs that could reveal customer data: Never make your users cry/By how you call the API.


DOUG.  [LAUGHS] That’s a new one for me, and I like that one very much!

And last, but certainly not least on our list here, we’ve been talking week after week about this OpenSSL security bug.

The big question now is, “How can you tell what needs fixing?”

The OpenSSL security update story – how can you tell what needs fixing?


DUCK.  Indeed, Doug, how do we know what version of OpenSSL we’ve got?

And obviously, on Linux, you just open a command prompt and type openssl version, and it tells you the version you’ve got.

But OpenSSL is a programming library, and there’s no rule that says that software can’t have its own version.

Your distro might use OpenSSL 3.0, and yet there’s an app that says, “Oh, no, we haven’t upgraded to the new version. We prefer OpenSSL 1.1.1, because that’s still supported, and in case you don’t have it, we’re bringing our own version.”

And so, unfortunately, just like in that infamous Log4Shell case, you had to go looking for the three? 12? 154? who-knows-how-many places on your network where you might have an outdated Log4J program.

Same for OpenSSL.

In theory, XDR or EDR tools might be able to tell you, but some won’t support this and many will discourage it: actually running the program to find out what version it is.

Because, after all, if it’s the buggy or the wrong one, and you actually have to run the program to get it to report its own version…

…that feels like putting the cart before the horse, doesn’t it?

So we published an article for those special cases where you actually want to load the DLL, or the shared library, and you actually want to call its own TellMeThyVersion() software code.

In other words, you trust the program enough that you’ll load it into memory, execute it, and run some component of it.

We show you how to do that so you can make absolutely certain that any outlying OpenSSL files that you have on your network are up to date.

Because although this was downgraded from CRITICAL to HIGH, it is still a bug that you need to and want to fix!
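
(For illustration only, here is our own minimal Python sketch of the sort of version check described above, with all the caveats just mentioned about actually loading and running the code you’re checking. The library path is an assumption; point it at the specific file you found. OpenSSL 1.1.0 and later export an OpenSSL_version() function, and passing 0 asks for the full human-readable version banner.)

    import ctypes

    # Our own illustrative sketch: point LIB at the specific file you want to check.
    LIB = "libcrypto.so.3"                  # assumption for the example

    libcrypto = ctypes.CDLL(LIB)

    # OpenSSL 1.1.0 and later export OpenSSL_version(); 0 means "full banner".
    libcrypto.OpenSSL_version.restype = ctypes.c_char_p
    libcrypto.OpenSSL_version.argtypes = [ctypes.c_int]

    print(libcrypto.OpenSSL_version(0).decode())   # e.g. "OpenSSL 3.0.7 1 Nov 2022"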


DOUG.  On the subject of the severity of this bug, we got an interesting question from Naked Security reader Svet, who writes, in part:

How is it that a bug that is enormously complex for exploitation, and can only be used for denial of service attacks, continues being classified as HIGH?


DUCK.  Yes, I think he said something about, “Oh, hasn’t the OpenSSL team heard of CVSS?”, which is a US government standard, if you like, for encoding the risk and complexity level of bugs in a way that can be automatically filtered by scripts.

So if it’s got a low CVSS score (which is the Common Vulnerability Scoring System), why are people getting excited about it?

Why should it be HIGH?

And so my answer was, “Why *shouldn’t* it be HIGH?”

It’s a bug in a cryptographic engine; it could crash a program, say, that’s trying to get an update… so it will crash over and over and over again, which is a little bit more than just a denial of service, because it’s actually preventing you from doing your security properly.

There is an element of security bypass.

And I think the other part of the answer is, when it comes to vulnerabilities being turned into exploits: “Never say never!”

When you have something like a stack buffer overflow, where you can manipulate other variables on the stack, possibly including memory addresses, there is always going to be the chance that somebody might figure out a workable exploit.

And the problem, Doug, is once they’ve figured it out, it doesn’t matter how complicated it was to figure out…

…once you know how to exploit it, *anybody* can do it, because you can sell them the code to do so.

I think you know what I’m going to say: “Not that I feel strongly about it.”

[LAUGHTER]

It is, once again, one of those “damned if they do, damned if they don’t” things.


DOUG.  Very good. Thank you very much, Svet, for writing that comment and sending it in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!


Emergency code execution patch from Apple – but not an 0-day

No sooner had we stopped to catch our breath after reviewing the latest 62 patches (or 64, depending on how you count) dropped by Microsoft on Patch Tuesday…

…than Apple’s latest security bulletins landed in our inbox.

This time there were just two reported fixes: for mobile devices running the latest iOS or iPadOS, and for Macs running the latest macOS incarnation, version 13, better known as Ventura.

To summarise what are already super-short security reports:

  • HT213504: Ventura gets updated from 13.0 to 13.0.1.
  • HT213505: iOS and iPadOS get updated from 16.1 to 16.1.1.

The two security bulletins list exactly the same two flaws, found by Google’s Project Zero team, in a library called libxml2, and officially designated CVE-2022-40303 and CVE-2022-40304.

Both bugs were written up with notes that “a remote user may be able to cause unexpected app termination or arbitrary code execution”.

Neither bug is reported with Apple’s typical zero-day wording along the lines that the company “is aware of a report that this issue may have been actively exploited”, so there’s no suggestion that these bugs are zero-days, at least inside Apple’s ecosystem.

But with just two bugs fixed, just two weeks after Apple’s last tranche of patches, perhaps Apple thought these holes were ripe for exploitation and thus pushed out what is essentially a one-bug patch, given that these holes showed up in the same software component?

Also, given that parsing XML data is a function performed widely both in the operating system itself and in numerous apps; given that XML data often arrives from untrusted external sources such as websites; and given the bugs are officially designated as ripe for remote code execution, typically used for implanting malware or spyware remotely…

…perhaps Apple felt that these bugs were too broadly dangerous to leave unpatched for long?

More dramatically, perhaps Apple concluded that the way Google found these bugs was sufficiently obvious that someone else might easily stumble upon them, perhaps without even really meaning to, and begin using them for bad?

Or perhaps the bugs were uncovered by Google because someone from outside the company suggested where to start looking, thus implying that the vulnerabilities were already known to potential attackers even though they hadn’t yet figured out how to exploit them?

(Technically, a not-yet-exploited vulnerability that you discover due to bug-hunting hints plucked from the cybersecurity grapevine isn’t actually a zero-day if no one has figured out how to abuse the hole yet.)

What to do?

Whatever Apple’s reason for rushing out this mini-update so quickly after its last patches, why wait?

We already forced an update on our iPhone; the download was small and the update went through quickly and apparently smoothly.

Use Settings > General > Software Update on iPhones and iPads, and Apple menu > About this Mac > Software Update… on Macs.

If Apple follows up these patches with related updates to any of its other products, we’ll let you know.


Exchange 0-days fixed (at last) – plus 4 brand new Patch Tuesday 0-days!

Remember those Exchange zero-days that emerged in a blaze of publicity back in September 2022?

Those flaws, and attacks based on them, were wittily but misleadingly dubbed ProxyNotShell because the vulnerabilities involved were reminiscent of the ProxyShell security flaw in Exchange that hit the news in August 2021.

Fortunately, unlike ProxyShell, the new bugs weren’t directly exploitable by anyone with an internet connection and a misguided sense of cybersecurity adventure.

This time, you needed an authenticated connection, typically meaning that you first had to acquire or correctly guess an existing user’s email password, and then to make a deliberate attempt to log in where you knew you weren’t supposed to be, before you could perform any “research” to “help” the server’s sysadmins with their work.

As an aside, we suspect that many of the thousands of self-styled “cybersecurity researchers” who were happy to probe other people’s servers “for fun” when the Log4Shell and ProxyShell bugs were all the rage did so knowing that they could fall back on the presumption of innocence if caught and criticised. But we suspect that they thought twice before getting caught actually pretending to be users they knew they weren’t, trying to access servers under cover of accounts they knew were supposed to be off-limits, and then falling back on the “we were only trying to help” excuse.

So, although we hoped that Microsoft would come up with a quick, out-of-band fix, we didn’t expect one…

…and we therefore assumed, probably in common with most Naked Security readers, that the patches would arrive calmly and unhurriedly as part of the October 2022 Patch Tuesday, still more than two weeks away.

After all, rushing out cybersecurity fixes is a little bit like running with scissors or using the top step of a stepladder: there are ways to do it safely if you really must, but it’s better to avoid doing so altogether if you can.

However, the patches didn’t appear on Patch Tuesday either, admittedly to our mild surprise, although we felt as good as certain that the fixes would turn up in the November 2022 Patch Tuesday at the latest:

Patch Tuesday in brief – one 0-day fixed, but no patches for Exchange!

Intriguingly, we were wrong again (strictly speaking, at least): the ProxyNotShell patches didn’t make it into November’s Patch Tuesday, but they did get patched on Patch Tuesday, arriving instead in a series of Exchange Security Updates (SUs) released on the same day:

The November 2022 [Exchange] SUs are available for [Exchange 2013, 2016 and 2019].

Because we are aware of active exploits of related vulnerabilities (limited targeted attacks), our recommendation is to install these updates immediately to be protected against these attacks.

The November 2022 SUs contain fixes for the zero-day vulnerabilities reported publicly on September 29, 2022 (CVE-2022-41040 and CVE-2022-41082).

These vulnerabilities affect Exchange Server. Exchange Online customers are already protected from the vulnerabilities addressed in these SUs and do not need to take any action other than updating any Exchange servers in their environment.

We’re guessing that these fixes weren’t part of the regular Patch Tuesday mechanism because they aren’t what Microsoft refer to as CUs, short for cumulative updates.

This means that you first need to ensure that your current Exchange installation is up-to-date enough to accept the new patches, and the preparatory process is slightly different depending on which Exchange version you have.

62 more holes, 4 new zero-days

Those old Exchange bugs weren’t the only zero-days patched on Patch Tuesday.

The regular Windows Patch Tuesday updates deal with a further 62 security holes, four of which are bugs that unknown attackers found first, and are already exploiting for undisclosed purposes, or zero-days for short.

(Zero because there were zero days on which you could have applied the patches ahead of the crooks, no matter how fast you deploy updates.)

We’ll summarise those four zero-day bugs quickly here; for more detailed coverage of all 62 vulnerabilities, along with statistics about the distribution of the bugs in general, please consult the SophosLabs report on our sister site Sophos News:

Microsoft patches 62 vulnerabilities, including Kerberos, and Mark of the Web, and Exchange…sort of

Zero-days fixed in this month’s Patch Tuesday fixes:

  • CVE-2022-41128: Windows Scripting Languages Remote Code Execution Vulnerability. The title says it all: booby-trapped scripts from a remote site could escape from the sandbox that is supposed to render them harmless, and run code of an attacker’s choice. Typically, this means that even a well-informed user who merely looked at a web page on a booby-trapped server could end up with malware sneakily implanted on their computer, without clicking any download links, seeing any popups, or clicking through any security warnings. Apparently, this bug exists in Microsoft’s old Jscript9 JavaScript engine, no longer used in Edge (which now uses Google’s V8 JavaScript system), but still used by other Microsoft apps, including the legacy Internet Explorer browser.
  • CVE-2022-41073: Windows Print Spooler Elevation of Privilege Vulnerability. Print spoolers exist to capture printer output from many different programs and users, and even from remote computers, and then to deliver it in an orderly fashion to the desired device, even if it was out of paper when you tried printing, or was already busy printing out a lengthy job for someone else. This typically means that spoolers are programmatically complex, and require system-level privileges so they can act as “negotiators” between unprivileged users and the printer hardware. The Windows Print Spooler uses the locally all-powerful SYSTEM account, and as Microsoft’s bulletin notes: “An attacker who successfully exploited this vulnerability could gain SYSTEM privileges.”
  • CVE-2022-41125: Windows CNG Key Isolation Service Elevation of Privilege Vulnerability. As in the Print Spooler bug above, attackers who want to exploit this hole need a foothold on your system first. But even if they are logged in as a regular user or a guest to start with, they could end up with sysadmin-like powers by wriggling through this security hole. Ironically, this bug exists in a specially-protected process run as part of what’s called the Windows LSA (Local Security Authority) that’s supposed to make it hard for attackers to extract cached passwords and cryptographic keys out of system memory. We’re guessing that after exploiting this bug, the attackers would be able to bypass the very security that the Key Isolation Service itself is supposed to provide, along with bypassing most other security settings on the computer.
  • CVE-2022-41091: Windows Mark of the Web Security Feature Bypass Vulnerability. Microsoft’s MoTW (mark of the web) is the company’s cute name for what used to be known simply as Internet Zones: a “data label” saved along with a downloaded file that keeps a record of where that file originally came from. Windows then automatically varies its security settings accordingly whenever you subsequently use the file. Notably, Office files saved from email attachments or fetched from outside the company will automatically open up in so-called Protected View by default, thus blocking macros and other potentially dangerous content. Simply put, this exploit means that an attacker can trick Windows into saving untrusted files without correctly recording where they came from, thus exposing you or your colleagues to danger when you later open or share those files.
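
If you are wondering what the MoTW “data label” actually looks like, on NTFS it is stored in an alternate data stream called Zone.Identifier alongside the downloaded file, and a ZoneId of 3 means “Internet”. Here is a minimal, illustrative Python peek at it (the file path is an invented example); a file with no such stream carries no mark of the web at all:

    # Illustrative only: on Windows/NTFS, the mark of the web lives in an
    # alternate data stream named Zone.Identifier next to the downloaded file.
    path = r"C:\Users\doug\Downloads\invoice.docx"      # hypothetical example path

    try:
        with open(path + ":Zone.Identifier") as ads:
            print(ads.read())       # e.g. [ZoneTransfer] / ZoneId=3 / HostUrl=...
    except OSError:
        print("No Zone.Identifier stream: this file carries no mark of the web")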

What to do?

  • Patch Early/Patch Often. Because you can.
  • If you have any on-premises Exchange servers, don’t forget to patch them too, because the Exchange 0-day patches described above won’t show up as part of the regular Patch Tuesday update process.
  • Read the Sophos News article for further information on the other 58 Patch Tuesday fixes not covered explicitly here.
  • Don’t delay/Do it today. Because four of the bugs fixed are newly-uncovered zero-days already being abused by active attackers.
