
Chrome fixes 8th zero-day of 2022 – check your version now

Google has just patched Chrome’s eighth zero-day hole of the year so far.

Zero-days are bugs for which there were zero days you could have updated proactively…

…because cybercriminals not only found the bug first, but also figured out how to exploit it for nefarious purposes before a patch was prepared and published.

So, the quick version of this article is: go to Chrome’s Three-dot menu (⋮), choose Help > About Chrome, and check that you have version 107.0.5304.121 or later.

Uncovering zero-days

Two decades ago, zero-days often became widely known very quickly, typically for one (or both) of two reasons:

  • A self-spreading virus or worm was released to exploit the bug. This tended not only to draw attention to the security hole and how it was being abused, but also to ensure that self-contained, working copies of the malicious code were blasted far and wide for researchers to analyse.
  • A bug-hunter not motivated by making money released sample code and bragged about it. Paradoxically, perhaps, this simultaneously harmed security by handing a “free gift” to cybercriminals to use in attacks right away, and helped security by attracting researchers and vendors to fix it, or come up with a workaround, quickly.

These days, the zero-day game is rather different, because contemporary defences tend to make software vulnerabilities harder to exploit.

Today’s defensive layers include: additional protections built into operating systems themselves; safer software development tools; more secure programming languages and coding styles; and more powerful cyberthreat prevention tools.

In the early 2000s, for instance – the era of super-fast-spreading viruses such as Code Red and SQL Slammer – almost any stack buffer overflow, and many if not most heap buffer overflows, could be turned from theoretical vulnerabilities into practicable exploits in quick order.

In other words, finding exploits and “dropping” 0-days was sometimes almost as simple as finding the underlying bug in the first place.

And with many users running with Administrator privileges all the time, both at work and at home, attackers rarely needed to find ways to chain exploits together to take over an infected computer completely.

But in the 2020s, workable remote code execution exploits – bugs (or chains of bugs) that an attacker can reliably use to implant malware on your computer merely by luring you to view a single page on a booby-trapped website, for example – are generally much harder to find, and worth a lot more money in the cyberunderground as a result.

Simply put, those who get hold of zero-day exploits these days tend not to brag about them any more.

They also tend not to use them in attacks that would make the “how and why” of the intrusion obvious, or that would lead to working samples of the exploit code becoming readily available for analysis and research.

As a result, zero-days often get noticed these days only after a threat response team is called in to investigate an attack that’s already succeeded, but where common intrusion methods (e.g. phished passwords, missing patches, or forgotten servers) don’t seem to have been the cause.

Buffer overflow exposed

In this case, now officially designated CVE-2022-4135, the bug was reported by Google’s own Threat Analysis Group, but wasn’t found proactively, given that Google admits that it is “aware that an exploit […] exists in the wild.”

The vulnerability has been given a High severity, and is described simply as: Heap buffer overflow in GPU.

Buffer overflows generally mean that code from one part of a program writes outside the memory blocks officially allocated to it, and tramples on data that will later be relied upon (and will therefore implicitly be trusted) by some other part of the program.

As you can imagine, there’s a lot that can go wrong if a buffer overflow can be triggered in a devious way that avoids an immediate program crash.

The overflow could be used, for example, to poison a filename that some other part of the program is about to use, causing it to write data where it shouldn’t; or to alter the destination of a network connection; or even to change the location in memory from which the program will execute code next.
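Python is memory-safe, so a real heap overflow can’t happen in it, but the “trampling” effect described above can be sketched with a toy model: a single bytearray standing in for two adjacent heap blocks, where an unchecked copy into the first block spills into the second (here, a filename that another part of the program will later trust). Everything in this sketch, including the filenames, is purely illustrative.

```python
# Toy model of two adjacent heap blocks: bytes 0-15 act as a 16-byte input
# buffer, and bytes 16-31 hold a filename that other code trusts implicitly.
heap = bytearray(32)
heap[16:32] = b"/tmp/log.txt".ljust(16, b"\x00")

def careless_copy(data: bytes) -> None:
    # No bounds check: input longer than 16 bytes spills into the next block.
    heap[0:len(data)] = data

# 28 bytes written into a 16-byte block: the overflow poisons the filename.
careless_copy(b"A" * 16 + b"/etc/passwd\x00")

filename = bytes(heap[16:32]).split(b"\x00")[0]
print(filename)  # the "trusted" filename is now b'/etc/passwd'
```

In a real exploit the attacker has far less freedom than this toy suggests, which is why working exploits are so much harder to build than the bugs are to find.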

Google doesn’t explicitly say how this bug could be (or has been) exploited, but it’s wise to assume that some sort of remote code execution, which is largely synonymous with “surreptitious implantation of malware”, is possible, given that the bug involves mismanagement of memory.

What to do?

Chrome and Chromium have been updated to 107.0.5304.121 on Mac and Linux, and to 107.0.5304.121 or 107.0.5304.122 on Windows (no, we don’t know why there are two different versions), so be sure to check that you have version numbers equal to or more recent than those.
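Note that “equal to or more recent” means comparing each dot-separated field as a number, not comparing the version strings as text (as text, “107.0.5304.13” would wrongly sort after “107.0.5304.121”). A minimal sketch of a correct comparison, with the fixed version number taken from Google’s release notes:

```python
# Compare Chrome-style version strings field by field as integers,
# never as plain strings (lexical comparison gets "13" vs "121" wrong).

def version_tuple(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, fixed: str = "107.0.5304.121") -> bool:
    """True if the installed version is the fixed version or later."""
    return version_tuple(installed) >= version_tuple(fixed)

print(is_patched("107.0.5304.122"))  # True  (the second Windows build)
print(is_patched("107.0.5304.87"))   # False (still vulnerable)
```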

To check your Chrome version, and force an update if you’re behind, go to the Three-dot menu (⋮) and choose Help > About Chrome.

Microsoft Edge, as you probably know, is based on the Chromium code (the open-source core of Chrome), but hasn’t had an official update since the day before Google’s threat researchers logged this bug (and hasn’t had an update that explicitly lists any security fixes since 2022-11-10).

So, we can’t tell you whether Edge is affected, or whether you should expect an update for this bug, but we recommend keeping an eye on Microsoft’s official release notes just in case.


Voice-scamming site “iSpoof” seized, 100s arrested in massive crackdown

These days, most of us have telephones that display the number that’s calling before we answer.

This “feature” actually goes right back to the 1960s, and it’s known in North American English as Caller ID, although it doesn’t actually identify the caller, just the caller’s number.

Elsewhere in the English-speaking world, you’ll see the name CLI used instead, short for Calling Line Identification, which seems at first glance to be a better, more precise term.

But here’s the thing: whether you call it Caller ID or CLI, it’s no more reliable at identifying the caller’s actual phone number than the From: header in an email is at identifying the sender of that email.
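The email analogy is easy to demonstrate for yourself: the From: header is simply text that the sender fills in, and nothing in the message itself verifies it. A short sketch using Python’s standard email library (the addresses below are invented for illustration):

```python
from email.message import EmailMessage

# The From: header is whatever the sender chooses to put there: it is a
# claim, not a credential (the addresses here are made up).
msg = EmailMessage()
msg["From"] = "ceo@yourbank.example"   # claimed sender: pure self-assertion
msg["To"] = "victim@example.com"
msg["Subject"] = "Urgent: verify your account"
msg.set_content("Please call us back on the number below...")

print(msg["From"])  # prints the claimed, unverified address
```

Caller ID works the same way: the number displayed is asserted by the calling side, not proved by it.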

Show what you like

Loosely speaking, a scammer who knows what they’re doing can trick your phone into displaying almost any number they like as the source of their calls.

Let’s think through what that means.

If you get an incoming call from a number you don’t recognise, it almost certainly hasn’t been made from a phone that belongs to anyone you know well enough to have in your contact list.

Therefore, as a cybersecurity measure aimed at avoiding calls from people you don’t wish to hear from, or who could be scammers, you could use the jargon phrase low false positive rate to describe the effectiveness of CLI.

A false positive in this context represents a call from someone you do know, calling from a number it would be safe to trust, being misdetected and wrongly blocked because it’s a number you don’t recognise.

That sort of error is unlikely, because neither friends nor scammers are likely to pretend to be someone you don’t know.

But that usefulness only works in one direction.

As a cybersecurity measure to help you identify callers you do trust, CLI has an extreme false negative problem, meaning that if a call pops up from Dad, or Auntie Gladys, or perhaps more significantly, from Your Bank…

…then there’s a significant risk that it’s a scam call that’s deliberately been manipulated to get past your “do I know the caller?” test.

No proof of anything

Simply put: the numbers that show up on your phone before you answer a call only ever suggest who’s calling, and should never be used as “proof” of the caller’s identity.

Indeed, until earlier this week, there was an online crimeware-as-a-service system available via the unapologetically named website ispoof.cc, where would-be vishing (voice phishing) criminals could buy over-the-internet phone services with number spoofing included.

In other words, for a modest initial outlay, scammers who weren’t themselves technical enough to set up their own fraudulent internet telephony servers, but who had the sort of social engineering skills that helped them to charm, or mislead, or intimidate victims over the phone…

…could nevertheless show up on your phone as the tax office, as your bank, as your insurance company, as your ISP, or even as the very telephone company you were buying your own service from.

We wrote “until earlier this week” above because the iSpoof site has now been seized, thanks to a global anti-cybercrime operation involving law enforcement teams in at least ten different countries (Australia, Canada, France, Germany, Ireland, Lithuania, Netherlands, Ukraine, the UK and the USA):

Megabust conducted

Seizing a clearweb domain and taking its offerings offline often isn’t enough on its own, not least because the criminals, if they remain at large, will often still be able to operate on the dark web, where takedowns are much harder due to the difficulty of tracking down where the servers actually are.

Or the crooks will simply pop up again with a new domain, perhaps under a new “brand name”, serviced by an even less scrupulous hosting company.

But in this case, the domain seizure was shortly preceded by a large number of arrests – 142, in fact, according to Europol:

Judicial and law enforcement authorities in Europe, Australia, the United States, Ukraine, and Canada have taken down a website that allowed fraudsters to impersonate trusted corporations or contacts to access sensitive information from victims, a type of cybercrime known as ‘spoofing’. The website is believed to have caused an estimated worldwide loss in excess of £100 million (€115 million).

In a coordinated action led by the United Kingdom and supported by Europol and Eurojust, 142 suspects have been arrested, including the main administrator of the website.

More than 100 of those arrests were in the UK alone, according to London’s Metropolitan Police, with up to 200,000 UK victims getting ripped off for many millions of pounds:

iSpoof allowed users, who paid for the service in Bitcoin, to disguise their phone number so it appeared they were calling from a trusted source. This process is known as ‘spoofing’.

Criminals attempt to trick people into handing over money or providing sensitive information such as one-time passcodes to bank accounts.

The average loss from those who reported being targeted is believed to be £10,000.

In the 12 months until August 2022 around 10 million fraudulent calls were made globally via iSpoof, with around 3.5 million of those made in the UK.

Of those, 350,000 calls lasted more than one minute and were made to 200,000 individuals.

According to the BBC, the alleged ringleader was a 34-year-old by the name of Teejai Fletcher, who has been remanded in custody pending a court appearance in Southwark, London, on 2022-12-06.

What to do?

  • TIP 1. Treat caller ID as nothing more than a hint.

The most important thing to remember (and to explain to any friends and family you think might be vulnerable to this sort of scam) is this: THE CALLER’S NUMBER THAT SHOWS UP ON YOUR PHONE BEFORE YOU ANSWER PROVES NOTHING.

Those caller ID numbers are nothing better than a vague hint of the person or the company that seems to be calling you.

When your phone rings and names the call with the words Your Bank's Name Here, remember that the words that pop up come from your own contact list, meaning no more than that the number provided by the caller matches an entry you added to your contacts yourself.

Put another way, the number associated with an incoming call provides no more “proof of identity” than the text in the Subject: line of an email, which contains whatever the sender chose to type in.


  • TIP 2. Always initiate official calls yourself, using a number you can trust.

If you genuinely need to contact an organisation such as your bank by phone, make sure that you initiate the call, and use a number that you worked out for yourself.

For example, look at a recent official bank statement, check the back of your bank card, or even visit a branch and ask a staff member face-to-face for the official number that you should call in future emergencies.


  • TIP 3. Don’t let coincidence convince you a call is genuine.

Never use coincidence as “evidence” that the call must be genuine, such as assuming that the call “must surely” be from the bank simply because you had some annoying trouble with internet banking this very morning, or paid a new supplier for the first time just this afternoon.

Remember that the iSpoof scammers made at least 3,500,000 calls in the UK alone (and 6,500,000 calls elsewhere) over a 12-month period, with scammers placing an average of one call every three seconds at the most likely times of day, so coincidences like this aren’t merely possible, they’re as good as inevitable.

These scammers aren’t aiming to scam 3,500,000 people out of £10 each… in fact, it’s much less work for them to scam £10,000 each out of a few thousand people, by getting lucky and making contact with those few thousand people at the very moment when they are at their most vulnerable.


  • TIP 4. Be there for vulnerable friends and family.

Make sure that friends and family whom you think could be vulnerable to being sweet-talked (or browbeaten, confused and intimidated) by scammers, no matter how they’re first contacted, know that they can and should turn to you for advice before agreeing to anything over the phone.

And if anyone asks them to do something that’s clearly an intrusion of their personal digital space, such as installing TeamViewer to let them onto the computer, reading out a secret access code off the screen, or telling them a personal identification number or password…

…make sure they know it’s OK simply to hang up without saying a single word further, and to get in touch with you to check the facts first.


Oh, one more thing: the London cops have said that in the course of this investigation, they acquired a database file (we’re guessing it’s from some sort of call logging system) containing 70,000,000 rows, and that they’ve identified a whopping 59,000 suspects, of whom somewhere north of 100 have already been arrested.

Clearly, those suspects aren’t as anonymous as they might have thought, so the cops are focusing first on “those who have spent at least £100 of Bitcoin to use the site.”

Scammers lower down the pecking order may not be getting a knock on the door just yet, but it might just be a matter of time…


LEARN MORE ABOUT THE DIVERSIFICATION OF CYBERCRIME, AND HOW TO FIGHT BACK EFFECTIVELY, IN OUR THREAT REPORT PODCAST

You can listen directly on Soundcloud.

Full transcript for those who prefer reading to listening.

With Paul Ducklin and John Shier.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


S3 Ep110: Spotlight on cyberthreats – an expert speaks [Audio + Text]

SPOTLIGHT ON CYBERTHREATS

Security specialist John Shier tells you the “news you can really use” – how to boost your cybersecurity based on real-world advice from the 2023 Sophos Threat Report.

You can listen directly on Soundcloud.

With Paul Ducklin and John Shier. Intro and outro music by Edith Mudge.



READ THE TRANSCRIPT

[MUSICAL MODEM]

DUCK.  Hello, everybody – welcome to the Naked Security Podcast.

As you can hear, I am Duck, not Doug.

Doug is on vacation for… I was going to say “Black Friday”, but technically, actually, for US Thanksgiving.

I’m joined by my Toronto friend and colleague, John Shier, and it just so happens that the timing is perfect because we just published the Sophos 2023 Threat Report:

John, you’ve read it with the aim of going out into the world (I believe at the moment you’re in Rome) to talk to people about what we ought to, should, and in many ways *need* to do these days for cybersecurity.

So… tell us what the threat report has to say!


JOHN.  Hi, Duck… thanks.

Yes, it’s been quite the week-and-a-bit travelling around Europe, getting to see a lot of our partners and customers, and our colleagues from around the world, and talking to them about this year’s threat report and some of the things that we’ve found.

This year’s threat report is really interesting because it has, perhaps, a bit more technical depth than some of our previous years.

It also has a lot of information that I really think is actionable.

Out of that, we can basically turn around and go, “OK, based on that, what do we do to protect ourselves?”


DUCK.  So that’s what your friend and mine Chester likes to call “News You Can Use”?


JOHN.  Exactly… “News you can use”!

Information that is actionable is always more valuable, in my opinion, especially in the context of cybersecurity.

Because I could tell you all about all the bad things that are happening out there, and if they’re theoretical, so what?

Also, if I’m telling you stuff that is not applicable to you, there’s nothing for you to do.

But as soon as I give you a piece of information where just acting on that information makes you more secure, then I think we *all win collectively*, because now there’s one less avenue for a cybercriminal to attack you… and that makes us all collectively more secure.


DUCK.  Absolutely.

There is an element of what you might call “self-serving altruism” in cybersecurity, isn’t there?

It really matters whether you’re secure or not in terms of protecting everyone else… *and* you do it for yourself.

Because if you don’t go probing, if you don’t try hard to do the right thing, the crooks will go probing for you.

And they’re very likely, these days, to find a way in.


JOHN.  They will, and they do!

The fact remains that we’ve long said that *everybody’s* a target, *everybody’s* a potential victim.

And when it comes to breaching a network, one of the things that you would do as a cybercriminal is not only ascertain what kind of company you’re in, what kind of network you’re in, where all the valuable assets are…

…but also what else you have access to, what other potential connections exist, what B2B [business-to-business] connections exist between the victim that you’re currently breaching and other potential victims out there.

At the end of the day, this is a monetisation game, and if I can get two victims for the price of one, then I win.

A lot of these more skilled attackers do have quite deep penetration into a lot of these networks.

I mean, most of them end up on Active Directory servers as DomainAdmin.

They can gather a lot of information that can be used for other crimes down the road…


DUCK.  But it’s not just about depth, it’s also about breadth, isn’t it?

If you’re the victim of a ransomware attack where pretty much all the useful data files, on all your computers including your servers, on your entire network, have been encrypted…

…that means the crooks already had read-and-write access to all of those files.

So therefore they could, and probably did, steal all those files first.


JOHN.  You’re right – the ransomware is the final phase of the attack.

This is the point of the attack where they *want* you to know that they were there.

They’ll put up the flaming skulls on your desktops, and on your servers, and wherever else they decide to encrypt, because they need you to know that something bad has happened… and they need to tell you how you can pay.

But the fact remains that ransomware, as I said, is the last phase.

There are a lot of things that have gone wrong before that last phase has happened.


DUCK.  So, John, let me just ask you quickly…

In the event of a ransomware attack, is it true to say that it is the exception rather than the rule that the crooks will [SPEAKING VERY RAPIDLY] come and scramble the files/ask for the money/and that’s it… in minutes or hours?

That’s not usually how it works, is it?


JOHN.  Right!

In the Active Adversary report from earlier this year, we identified (this is the study of all the incident response investigations from the Rapid Response Group at Sophos for the year of 2021)…

We identified that the median dwell time (that’s the time between when the attackers first breached the network and then launched the ransomware, or some sort of goal at the end where the attack was detected… it doesn’t have to be ransomware, it could be that we detect a cryptominer and then we’ve done the investigation) was 15 days:

Know your enemy! Learn how cybercrime adversaries get in…

Now, that’s the median for all attacks; for non-ransomware style attacks, it was 34 days, and for ransomware specifically, it was eleven days, so they move a little bit quicker than the overall median.

So, there’s a lot of time there.

And when I looked at some of the outliers, one of the victims had somebody in their network for 496 days, and this is likely due to initial access broker, or IAB, activity.

You’ve got somebody that came in through a vulnerability, implanted a webshell, sat on it for a while, and then eventually that either got resold…

…or independently, another cybercriminal found the same vulnerability because it wasn’t addressed, and was able to walk through the front door and do their activity.

There’s a lot that can go on, so there’s a lot of opportunities for defensive teams to be able to detect activity on the network that is anomalous – activity that is a signal to a potentially greater problem down the road, such as ransomware.


DUCK.  John, that reminds me that I need to ask you about something in the threat report that we perhaps rather cheekily have dubbed the Naughty Nine, which is a way of reminding people that individual cybercriminals, or even gangs of cybercriminals who work together these days, don’t need to know everything:

They’ve taken a divide-and-conquer approach, where different groups focus on, and then sell on, what they’re able to do in all sorts of different “business categories”.

Is that right?


JOHN.  Yes, it’s a development of the cybercrime ecosystem that seems to be somewhat cyclical.

If we roll back the clock a little bit, and we start thinking about the malware of yesteryear… you had generally viruses and worms.

They were stand-alone operations: there were people that were just going out there, doing their own thing, and infecting a bunch of computers.

And then eventually we got botnets that started to proliferate, and the criminals thought, “Hey, I can rent those botnets out to do spam.”

So now you had a couple different entities that were involved in cybercrime…

…and we keep fast forwarding to the days of the exploit kit merchants, where they would use the services of exploit kit brokers, and traffic direction services, and all sorts of other players in the market.

Every time we go through the cycle it seems like it gets bigger and more “professionalised” than before, and now we’re in an era where we’re calling it the “as-a-service” era for good reasons, because not only have legitimate companies gone to this model, but the cybercriminals have adopted it as well.

So you’ve got all sorts of services now that can be bought, and most of them are on the dark web in criminal forums, but you can find them on the clear web as well.


DUCK.  You mentioned, a moment ago, IABs: initial access brokers, crooks who aren’t actually interested in deploying ransomware or collecting bitcoins; they’ll leave that to someone else.

Their goal is to find a way in, and then offer that to lease or sale.

And that’s just *one* of the Naughty Nine “X-as-a-service” aspects, isn’t it?

With the Naughty Nine, with so many subdivisions, I guess the problem is, sadly, that [A] there’s plenty of room and attractiveness for everybody, and [B] the more the parts fragment, I imagine, the more complex it becomes for law enforcement.

Not necessarily to track down what’s going on, but to actually accumulate enough evidence to be able to identify, arrest and hopefully ultimately to convict the perpetrators?


JOHN.  Yes, it makes the investigative process a lot tougher, because now you do have that many more moving parts and individuals specifically involved in the attack… or at least aiding and abetting in the attack, we’ll say; maybe they’re not *directly* involved, but they’re definitely aiding and abetting.

In the good old days of the single operators doing ransomware, and doing everything from the initial breach to the end phase of ransomware, you might be able to get your criminal, the person that was behind it…

…but in this case, now you’re having to arrest 20 people!

These investigators are good at what they do; they know where to look; they work tirelessly to try to discover these people. Unfortunately, in many of the indictments I’ve read, it usually comes down to poor OpSec (poor operational security) that unmasks one of the individuals involved in the crime.

And with that little bit of luck, then the investigator is able to pull on those strings and get the rest of the story.

If everybody’s got their story straight and their OpSec is tight, it can be a lot more difficult.


DUCK.  On the basis of what we’ve just said – the fact that there’s more cybercrime, involving more cybercriminals, with a wider range of stratified or compartmentalised skills…

…with all that in mind, what are the new techniques on the block that we can use to hit back against the apparently ever-increasing breadth and depth of the reach of the crooks?


JOHN.  Well, the first one I’ll start with isn’t necessarily new – I think we’ve been talking about this for a while; you’ve been writing about this on Naked Security for quite some time.

That’s the hardening of identity, specifically using multi-factor authentication wherever possible.

The unfortunate reality is that as I’ve gone through the last couple of years, reading a lot of the victim reports in the Active Adversary report, there’s a fundamental lack of multi-factor authentication that is allowing criminals to penetrate into networks quite easily… very simply, walking through the front door with a valid set of credentials.

And so while it’s not new, I think, because it’s not sufficiently adopted, we need to get to that point.


DUCK.  Even to consider SMS-based 2FA, if at the moment you just go, “It’s too hard, so I’ll just pick a really long password; no one will ever guess it.”

But of course, they don’t have to guess it, do they?

The initial access broker has 20 different ways of stealing it, and putting in a little database for sale later.

And if you have no 2FA at all, that’s a direct route in for anybody later on…


JOHN.  Some other crook has already asked nicely for your password, and they’ve got it somewhere.

Now this is just the second phase of the attack, where somebody else is using it.

Beyond this, I think we need to get to the point now where we’re actually investigating as many suspicious signals on the network as possible.

So, for many companies this might be impossible, if not very difficult… because it *is* difficult!

Having the competencies and the expertise to do this is not going to be within every company’s capability.


DUCK.  Now, what you’re talking about here, John, is, I think, what Chester likes to call, “Not sitting around waiting for alerts to pop into your dashboard, to tell you bad things that it now knows has happened, but actually *going out looking for things* that are indicators that an attack is on the way.”

In other words, to go back to what you said earlier, taking advantage of those first 14 days before the 15th “median day” on which the crooks get to the point that they’re ready to unleash the real bad stuff.


JOHN.  Yes, I can give you some examples… one that’s supported by the data in the Active Adversary report, which actually to me supports the major trends that we’re seeing in the threat report.

And that’s exfiltration [the illegal extraction of data from the network].

There’s a time between when exfiltration happens to when ransomware gets released on the network.

Very often, these days, there will be some exfiltration that will precede the ransomware itself, so there will be some data that’s stolen.

And in our findings we saw that there was a median of 1.85 days – so you had, again, almost two days there before the ransomware hit, where you could have seen a suspicious signal happening on a server that doesn’t normally see a lot of outbound data.

All of a sudden, “Sending data to mega.io” [an online file storage service]… that could have been an indicator that something was happening on your network.

So that’s an example of where we’ve got signals on the network: they don’t mean “Immediately hit the panic button”, but it is the precursor to that particular event.
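[The kind of signal John describes here, a server that suddenly pushes far more data outbound than its usual baseline, can be sketched as a simple threshold check. This is a toy illustration only; real detection tools use much richer statistics and context, and all the numbers below are invented.]

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float,
                 sigmas: float = 3.0) -> bool:
    """Flag today's outbound volume if it sits far above the host's baseline."""
    baseline, spread = mean(history_mb), stdev(history_mb)
    return today_mb > baseline + sigmas * spread

# A server that normally sends about 50 MB/day suddenly pushes 900 MB out.
usual = [48.0, 52.0, 47.5, 51.0, 49.5, 50.5, 50.0]
print(is_anomalous(usual, 900.0))  # True: worth investigating right away
print(is_anomalous(usual, 53.0))   # False: within normal variation
```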


DUCK.  So these aren’t companies that were incompetent at looking for that kind of thing, or that didn’t understand what data exfiltration meant to their business, or didn’t know that it wasn’t supposed to happen.

It was really just that, in amongst all the other things that they need to do to keep IT running smoothly in the company, they didn’t really have the time to think, “What does that tell us? Let’s dig that little bit further.”


JOHN.  No one was looking.

It’s not that they were negligent… it’s that either they didn’t know to look, or they didn’t know what to look for.

And so those kinds of events – and we see these time and again… there are definite signposts within ransomware attacks that are high-fidelity signals that say, “Something bad is happening in your network.”

And that’s just one side of things; that’s where we actually have signals.

But to your point, there are other areas where we could use the capabilities of an XDR tool, for example.


DUCK.  That’s extended detection and response?


JOHN.  That’s correct.


DUCK.  So that’s not, “Oh, look, that’s malware; that’s a file being encrypted; let’s block it.”

XDR is where you actively tell the system, “Go out and tell me what versions of OpenSSL I’ve got installed”?


JOHN.  Exactly.


DUCK.  “Tell me whether I’ve still got an Exchange server that I forgot about”… that kind of thing?


JOHN.  Yes.

We saw a lot of ProxyShell activity last year, when the PoC [proof-of-concept] was released in mid-August… and as you wrote about on Naked Security, even applying the patch to the system wasn’t going to necessarily save you, *if the crooks had gotten in before you and implanted a webshell*.

Serious Security: Webshells explained in the aftermath of HAFNIUM attacks

So now, by investigating after the fact – now that we know that ProxyShell exists, because we’ve seen the bulletins – we can go and look for: [1] the existence of those patches on the servers that we know about; [2] find any servers that we don’t know about; and [3] (if we have applied the patch) look for signs of those webshells.

All of that activity will ultimately make you safer, and potentially let you discover that there’s a problem on the network that you need to then call in your incident response team; call in Sophos Rapid Response; call in whomever is there to help you remediate these things.

Because in all these acronyms that we have, the “D”, the detection bit, that’s the technology.

The “R”, the response bit, that’s the humans… they’re the ones that are actually going out there and doing a lot of this response.

There are automated tools that can do this, but frankly the humans are much better at doing it in a more complete way than the machines can.

The humans know the environment; the humans can see the nuance of things better than computers can.

And so we need both the human and the machine working together in order to solve these problems.


DUCK.  So, XDR isn’t just about traditional, old-school threat detection and prevention, as important as that remains.

You could say it’s as much about finding the good stuff that is supposed to be there, but is not…

…as it is about finding the bad stuff that is not supposed to be there, but is.


JOHN.  It can be used another way as well: if you're querying your estate – your network, all the devices that are reporting telemetry back to you – and you don't get an answer from some of them…

Maybe they’re turned off?

Maybe not – maybe the criminals have turned off the protection of those systems, and you need to investigate further.

You want to reduce the amount of noise in the system so that you can spot the signal a little bit better, and that’s what prevention will do.

It will get rid of all that low-hanging, high-volume garbage malware that comes at us, at all of us, every single day.

If we can get rid of that, and get a more stable signal, then I think it not only helps the system overall, because there are fewer alerts to process, but it also helps the humans find problems faster.


DUCK.  John, I’m conscious of time, so I’d like to ask you the third and final thing that people might not be doing (or they think they might need to do but they haven’t quite got round to it yet)… the thing that, in your opinion, gives the best bang for their cybersecurity buck, in order to increase their anti-cybercrime resilience as quickly as they can.


JOHN.  Something that I’ve been talking to a lot of our customers and partners about is: we’re in this world now where the threats have gotten more complex, the volume has gone up…

…so don’t be afraid to ask for help.

To me, that’s advice that we all should take to heart, because we can’t all do it all.

You gave an example before we started recording about calling in a plumber, right?

Not everybody is capable of doing their own plumbing… some people are, but at the end of the day, asking for help shouldn’t be seen as a negative, or as a failure.

It should be seen as you doing everything you can to put yourself on a good security footing.


DUCK.  Yes, because that plumber has fixed hundreds of leaky pipes before… and cybersecurity is very much like that, isn’t it?

Which is why companies like Sophos are offering Managed Detection and Response [MDR], where you can say, “Come and help me.”

If nothing else, it frees you up to do all the other IT things that you need to do anyway… including day to day cybersecurity stuff, and regulatory compliance, and all of those things.


JOHN.  Expertise is gained through experience, and I really don’t want all of our customers, and everybody else out there, to have to experience hundreds of attacks daily in order to figure out how best to remediate them; how best to respond.

Whereas the aggregate of all the attacks that we see daily, and the experts that we have sitting in those chairs looking at that data… they know what to do when an attack hits; they know what to do *before* an attack hits.

They can spot those signals.

We’re going to be able to help you with the technical aspect of remediation.

We might give you some advice as well on how to prepare your network against future attacks, but at the same time, we can also take some of the emotion out of the response.

I’ve spoken to people who’ve gone through these attacks and it is harrowing, it’s emotionally taxing, and if you’ve got somebody there that is experienced, with a cool head, who’s unemotional, who can help guide you through this response…

…the outcome is going to be better than if you’re running around with your hair on fire.

Even if you have a response plan – which every company should, and it should be tested! – you might want to have somebody else along who can walk you through it, and go through that process together, so that at the end you are in a place where you’re confident your business is secure, and that you are also able to mitigate any future attack.


DUCK.  After your twelfth ransomware attack, I reckon you’ll probably be as good as our experts are at running the “network time machine”, going back, finding out all the changes that were made, and fixing everything.

But you don’t want to have to suffer the eleven ransomware attacks first to get to that level of expertise, do you?


JOHN.  Exactly.


DUCK.  John, thank you so much for your time and your passion… not just for knowing about cybersecurity, but helping other people to do it well.

And not just to do it well, but to do *the right stuff* well, so we’re not wasting time on doing things that won’t help.

So let’s finish up, John, by you telling everybody where to get the threat report, because it’s a fascinating read!


JOHN.  Yes, Duck… thank you very much for having me on; I think it was a good conversation, and it’s nice to be on the podcast with you again.

And if anybody wants to get their very own copy of the freshly minted threat report, you can go to:

https://sophos.com/threatreport


DUCK.  [LAUGHS] Well, that’s nice and easy!

It’s great reading… don’t have too many sleepless nights (there’s some scary stuff in there), but it will help you do your job better.

So thank you once again, John, for stepping up at short notice.

Thanks to everybody for listening, and until next time…


BOTH.  Stay secure!

[MUSICAL MODEM]


CryptoRom “pig butchering” scam sites seized, suspects arrested in US

Over the past year, we’ve had the unfortunate need to warn our readers not once, but twice, about a scam we’ve dubbed CryptoRom, a portmanteau word formed from the terms “Cryptocurrency” and “Romance scam”.

Simply put, these scammers use a variety of techniques, notably including prowling on dating sites, to meet people online, form a friendship…

…not with the intention of drawing their victims into a “we’ve fallen in love, now send money” romance scam, but instead to earn their trust and lure them into bogus investments “managed” via fraudulent mobile phone apps.

Intriguingly, the crooks even target iPhone users, despite the fact that ripoff financial apps are difficult to sneak into Apple’s App Store, and Apple doesn’t allow its users to download apps from anywhere else.

Sadly, and ironically, the CryptoRom gangs have turned Apple’s strictness into a sort of sales schpiel: if anyone and everyone could download their “investment” apps, that would spoil the exclusivity, so the apps are only available by invitation, directly from the “investment” group.

SophosLabs has tracked these criminals using Apple’s business and developer toolkits to bypass the App Store, using systems such as Apple’s Enterprise Provisioning system, which allows phones directly managed by a business to install proprietary apps:

The crooks have also used Apple’s development tool TestFlight, where unreleased apps can be provided for a limited time to invited, consenting participants:

As an aside that we can’t bring ourselves not to mention: the Sophos researchers who wrote the two papers referenced above won the prestigious 2022 Péter Szőr Award, presented at the annual Virus Bulletin conference for the best technical research of the year.

Winning your trust

Obviously, this means buying into a scammer’s instructions not merely to install an app you’ve never heard of, but to do so by essentially committing your entire device to their control, either via Enterprise Provisioning or by enrolling in a development process that would normally only be recommended for devices dedicated to coding and testing.

That’s why the scammers win your trust first, for example by befriending you via a dating site, so that you’re willing to accept what sounds like an obvious technical risk.

The crooks parlay the curious installation process into what sounds like an online privilege: the unusual way of acquiring the app is pitched as a way to join an exciting online investment vehicle that isn’t available via Apple precisely because it’s financial dynamite that’s not available to just anyone!

The “romance” in a CryptoRom scam isn’t tugging at your heart strings, but at your wallet strings.

You can probably imagine how the scam plays out from here.

A carefully concocted pack of lies

The app looks and behaves like a legitimate investment product, hooked up directly to an online web backend that processes deposits, calculates growth, allows withdrawals, displays real-time graphs…

…all presented with branding that is typically dolled up to look like an official, well-regulated service or stock exchange.

But the app, the “exchange” that backs it, the logos, the branding, and the enticingly upward direction of your account balance are all completely bogus.

In five words, the entire thing is a carefully concocted pack of lies.

Your initial investment shows up right away; the crooks may even offer to “boost” your account with a loan or a staking bonus, which might sound too good to be true but will nevertheless show up in your “account” as promised.

The crooks may even allow you to make withdrawals at first, to build trust and confidence.

This is a common ploy in so-called Ponzi or pyramid schemes – in truth, of course, the scammers are merely giving you some of your own money back.

But they then quickly show your account surging, inviting you to imagine how much more you could be making if only you’d re-deposit your recent withdrawal, and perhaps whack some more on top of that as well.

Heck, why not borrow from your friends and family (but don’t let them in on the whole story or they’ll all want to join in, eh?) and double, triple, quadruple all that money as well?

And that’s not all…

Sadly, that’s not all, because there’s a sting in the tail, too.

When you try to withdraw your “funds”, there’s suddenly a government withholding tax, usually of 20%, on the funds you want to access – something that’s admittedly not unusual in countries with investment charges such as Capital Gains Tax.

Except that it’s not a withholding tax at all, as you might at first expect (that’s where the government’s cut is simply deducted, or withheld, from the amount you want to withdraw, and the rest comes to you).

The crooks tell you that the funds are frozen for regulatory reasons, so they can’t be used to offset the amount you “owe”.

You have to pay the amount first, in a transaction of its own, in order to unfreeze the funds before they can be withdrawn in a second transaction.

The crooks will typically pile on the pressure here, warning that you risk losing everything in your “account”, both your own money that you’ve paid in already, and the “capital gains” you think you’ve accumulated.

As the SophosLabs researchers explain, if the crooks think that they genuinely can’t squeeze you for the entire 20%, because they’ve almost bled you dry already, they’ll even pretend to “help” by rallying together their “friends” to lend you some of the money you need to get your “investment” out, until they really have drained you for every drop:

Screen photo of “tax” exchange from victim’s phone.

The theory, of course, is that after you’ve paid the 20% “tax”, you will get access to 100% of the “balance” in your account, leaving plenty of funds on hand not only to pay off the loans that made it all possible, but also to cash out to your own considerable advantage.

Here’s a made-up (but sadly realistic) example of how scams like this typically unfold:

Action                                        "Balance"   Amount at stake      "Cashout" deductions
--------------------------------------------  ---------   ------------------   --------------------
$10,000 paid in + $30,000 "loan"          ->  $ 40,000    YOUR STAKE $10,000   DEDUCT $30,000

Your graph shows you are doing well!
Synthetic 2x boost in value               ->  $ 80,000    YOUR STAKE $10,000   DEDUCT $30,000

What if it's all phoney?
Withdraw $5000 as "test of truth"         ->  $ 75,000    YOUR STAKE $ 5,000   DEDUCT $30,000

Big growth event coming; crooks go on a
charm offensive and tell you to invest
more! Pay the $5000 withdrawal back in,
add $10,000 on top, plus another
$20,000 "loan"                            ->  $111,000    YOUR STAKE $20,000   DEDUCT $50,000

Synthetic 3x boost in value               ->  $333,000    YOUR STAKE $20,000   DEDUCT $50,000

Woo-hoo! Time to cash out! The 20%
"unfreezing" tax comes to $66,600. The
crooks realise you genuinely can't come
up with that much, but figure you can
squeeze $20,000 out of friends and
family if they "offer" to find the
other $46,600.
You pay $20,000 + $46,600 "loan"          ->  $333,000    YOUR STAKE $40,000   DEDUCT $96,600

After withdrawing and "paying back" the
$96,600, you would still be left with
$236,400, a "profit" of $196,400 after
deducting your outgoings of $40,000!
Withdraw $333,000 less "loans"            ->  GAME OVER. INSERT MORE COINS TO RESUME GAME.
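The arithmetic behind that made-up example can be checked in a few lines. Here is a rough Python sketch; all the figures are the illustrative ones from the table above, not real data:

```python
# Track the victim's real outlay ("stake") and the crooks' "loans"
# step by step through the made-up example above.

stake = 10_000             # victim's first deposit
loans = 30_000             # crooks' first "loan"

stake -= 5_000             # $5000 "test of truth" withdrawal (own money back)
stake += 5_000 + 10_000    # withdrawal re-deposited, plus $10,000 on top
loans += 20_000            # second "loan"

shown = 111_000            # balance the fake app now displays
shown *= 3                 # synthetic 3x "growth"
tax = int(shown * 0.20)    # the bogus 20% "unfreezing tax"

stake += 20_000            # victim scrapes together $20,000 towards the "tax"

print(f"app shows ${shown:,}; 'tax' ${tax:,}; victim is out ${stake:,}")
# -> app shows $333,000; 'tax' $66,600; victim is out $40,000
```

The point of the exercise: however large the on-screen "balance" gets, the only real money in the system is the victim's own $40,000, all of which is gone.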

The sting in the tail of the tail

Even worse, there’s even a sting in the tail of the tail.

Once you realise you’ve been scammed, you may miraculously be contacted by someone who sympathises with your plight (perhaps it recently happened to them?) and knows just the service for you…

…cryptocurrency recovery!

We all know that cryptocoins, by design, are largely unregulated, pseudo-anonymous, and anywhere from hard to almost impossible to trace and recover.

Yet we also know that cryptocoin recoveries do sometimes happen, occasionally in astonishing amounts and after lengthy periods, like the funds recovered from wannabe rap star Crocodile Of Wall Street and her husband, or from Silk Road cryptorobber James Zhong, who hid $3 billion in bitcoins in a popcorn tin for almost a decade:

Sadly, if you go down the “recovery service” rabbit hole, you will just be pouring yet more good money after bad, and your overall losses will be even more catastrophic.

Hot on the trail

Here’s some good news to follow the bad: the US Department of Justice (DOJ) is taking on at least one group of CryptoRom scammers.

The DOJ refers to this sort of scam as “pig butchering”, which is a metaphor apparently chosen by the scammers themselves to mock their victims: in Chinese, the technique is known as 杀猪盘 (sha zhu pan), something we’d probably refer to as a “chopping block” in English, but that literally translates as “pork butchering plate”.

In a report this week, the DOJ describes a takedown of seven CryptoRom-related web domains that it alleges were used over a period of at least four months (May to August 2022) to rip off at least five victims in the US alone. (We assume there were numerous victims from other countries, but the DOJ report relates to victims in its jurisdiction.)

The domains were rigged up to look like web pages of an official Singapore financial exchange, and allegedly helped in conning victims out of over $10,000,000.

This follows a DOJ action last month in which 11 people were arrested in connection with these “chopping block” attacks and charged with ripping off more than 200 people in the US for close to $18,000,000.

The 11 defendants were also charged with acting as money laundering “mules”, who illegally passed more than $52,000,000 through bank accounts opened up using forged or stolen identity documents, receiving a percentage of the amount laundered in payment.

As we’ve mentioned before, money laundering services of this sort are widely used by cybercriminals to exfiltrate illicit deposits out of the banking system before the fraud gets spotted and the bogus transactions get frozen or reversed.

Business Email Compromise (BEC) scammers, for instance, operate by tricking companies into paying invoices (they typically focus on high-value sums, sometimes into the millions of pounds or dollars) into the wrong bank account.

From there, they use the assistance of “money mules” to get those misdirected funds withdrawn from the banking system before the deception can be prevented:

What to do?

  • Take your time when online talk turns from romance, love, or even plain friendship, to money. Don’t be swayed by the fact that your new “friend” happens to have a lot in common with you, and don’t let yourself be mesmerised by their “investment advice”. It’s easy for scammers to pitch themselves as kindred spirits if they’ve studied your social networking or dating site profiles in advance.
  • Never give administrative control over your phone to someone with no genuine reason to have it. Never click [Trust] on a dialog that asks you to enrol in remote management unless it’s from someone you already have an employment contract with, the conditions have been clearly explained to you in advance, and you understand and accept the business reasons for enrolling your phone.
  • Don’t be deceived by messaging inside the app itself. Don’t let icons, graphs, names and text messages inside an app trick you into assuming it has the credibility it claims. (If I show you a picture of a pot of gold, that doesn’t mean I own a pot of gold.)
  • Don’t be fooled because a scam website looks well-branded and professional. Setting up a website with live graphs, investment pages and “account” management tools is easier than you think. Crooks can readily copy official logos, taglines, branding and even JavaScript code from the real site, and modify it to suit their malicious purposes.
  • Listen openly to your friends and family if they try to warn you. Online scammers think nothing of deliberately setting you against your family as part of their scams. They may even “counsel” you not to let your friends and family in on your “secret”, pitching their investment proposal as something exclusive: a good fit for you, but not open to just anyone. Don’t let the scammers drive a wedge between you and your family as well as between you and your money.

LEARN MORE ABOUT RELATIONSHIP SCAMS:


How to hack an unpatched Exchange server with rogue PowerShell code

Just under two months ago, some worrying bug news broke: a pair of zero-day vulnerabilities were announced in Microsoft Exchange.

As we advised at the time, these vulnerabilities, officially designated CVE-2022-41040 and CVE-2022-41082:

[were] two zero-days that [could] be chained together, with the first bug used remotely to open enough of a hole to trigger the second bug, which potentially allows remote code execution (RCE) on the Exchange server itself.

The first vulnerability was reminiscent of the troublesome and widely-abused ProxyShell security hole from back in August 2021, because it relied on dangerous behaviour in Exchange’s Autodiscover feature, described by Microsoft as a protocol that is “used by Outlook and EAS [Exchange ActiveSync] clients to find and connect to mailboxes in Exchange”.

Fortunately, the Autodiscover misfeature that could be exploited in the ProxyShell attack by any remote user, whether logged-in or not, was patched more than a year ago.

Unfortunately, the ProxyShell patches didn’t do enough to close off the exploit to authenticated users, leading to the new CVE-2022-41040 zero-day, which was soon laconically, if misleadingly, dubbed ProxyNotShell.

Not as dangerous, but dangerous nevertheless

Clearly, ProxyNotShell was nowhere near as dangerous as the original ProxyShell, given that it required what’s known as authenticated access, so it wasn’t open to abuse by just anybody from anywhere.

But it quickly transpired that on many Exchange servers, knowing any user’s logon name and password would be enough to pass as authenticated and mount this attack, even if that user would themselves need to use two-factor authentication (2FA) to log on properly to access their email.

As Sophos expert Chester Wisniewski put it at the time:

It’s a “mid-authentication vulnerability”, if you want to call it that. That is a mixed blessing. It does mean that an automated Python script can’t just scan the whole internet and potentially exploit every Exchange server in the world in a matter of minutes or hours, as we saw happen with ProxyLogon and ProxyShell in 2021. […]

You need a password, but finding one email address and password combination valid at any given Exchange server is probably not too difficult, unfortunately. And you might not have gotten exploited to date, because to successfully log into Outlook Web Access [OWA] requires their FIDO token, or their authenticator, or whatever second factor you might be using.

But this attack doesn’t require that second factor. […] Just acquiring a username and password combination is a pretty low barrier.

As you probably remember, many of us assumed (or at least hoped) that Microsoft would rush to get a fix out for the ProxyNotShell holes, given that there were still two weeks until October’s Patch Tuesday.

But we were disappointed to find that a reliable fix was apparently more complex than expected, and October came and went with ProxyNotShell addressed only by workarounds, not by proper patches.

Even November’s Patch Tuesday didn’t directly provide the needed fixes, though the patches nevertheless came out on the same day as part of an Exchange-specific security update that could be fetched and installed separately:

Proof-of-concept revealed

Now that the dust has settled and everyone has had time to patch their Exchange servers (the ones they haven’t forgotten about, at least), researchers at Zero Day Initiative (ZDI), to which these vulnerabilities were originally responsibly disclosed for submission to Microsoft, have explained how the bugs can be exploited.

The bad news, depending on your opinion of overt exploit disclosures, is that the ZDI team has now effectively provided a proof-of-concept (PoC) explaining how to attack Exchange servers.

The good news, of course, is that:

  • We can now study and understand the bugs ourselves. This not only helps us all to ensure that the overall precautions we have taken (not merely limited to patching) are likely to provide the protection we expect, but also informs us of programming practices that we will want to avoid in future, so we don’t get trapped into opening up bugs of this sort in our own server-side code.
  • We now have no excuses left for not applying the patches. If we’ve dragged our feet about updating, ZDI’s explanation of why the attack works makes it clear that the cure is definitely preferable to the disease.

How it works

ZDI’s explanation of this vulnerability makes for a fascinating tale of how complex it can be to chain together all the parts you need to turn a vulnerability into a viable exploit.

It’s also worth reading to help you understand why digging into an existing exploit can help to reveal other ways that a vulnerability could be misused, potentially prompting additional patches, urging configuration changes, and promoting new programming practices that might not have been obvious just from fixing the original hole.

The explanation is, of necessity, complicated and quite technical, and leads you forwards through a lengthy series of steps to achieve remote code execution (RCE) at the end.

In the hope of helping you follow the high-level details more easily if you decide to read the ZDI report, here’s a hopefully-not-too-simplified summary with the steps listed in reverse…

…so you will know in advance why the story takes the directions it does:

  • STEP 4. Remotely trick Exchange into instantiating a .NET object of your choice, with an initialisation parameter of your choice.

In modern coding, an instantiated object is the jargon word for an allocated chunk of memory, automatically initialised with the data and resources it will need while it’s in use, and tied to a specific set of functions that can operate on it. (Instantiate is just a fancy word for create.)

Objects may be managed and controlled by the operating system itself, to help avoid the sort of memory mismanagement errors common in a language such as C, where you typically need to allocate memory yourself, fill up the relevant data fields by hand, and remember to release the memory and resources you’re using, such as network sockets or disk files, when you’re done.

Objects generally have a programmatic function associated with them called a constructor, which is automatically executed when a new object is created in order to allocate the right amount of memory and the correct set of system resources.

Usually, you need to pass one or more parameters as arguments to the constructor, to denote how you want the object to be configured when it starts out.

Simply put, if you instantiate, say, a TextString object (we’re making these names up, but you get the idea) using a parameter that is itself a text string such as example.com:8888

…you will probably end up with a memory buffer allocated to hold your text, initialised so it holds the same value you passed in, namely the raw text example.com:8888.

In that context, the text string passed in as data to the object constructor doesn’t immediately pose any obvious cybersecurity threat when you trigger the constructor remotely, other than a possible denial of service (DoS) by repeatedly asking for bigger and bigger strings to try to exhaust memory.

But if you were to instantiate, say, a ConnectedTCPClient object using the very same text string parameter of example.com:8888, you might end up with a memory buffer ready to hold temporary data, along with a network socket allocated by the operating system that’s ready to exchange data with the server example.com over TCP port 8888.

You can see the remote code execution risk there, even if you never get to send any data to the open socket, given that you’ve tricked the server into calling home to a location that you control.

You might even find an object called, say, RunCmdAndReadOutput, where the text string you send as a parameter is, quite literally, a command you want to run automatically as soon as the object is created, so you can collect its output later.

Even if you never get to recover the output of the command, just instantiating such an object would nevertheless let you choose a command to run, thus giving you generic remote code execution and presenting a risk limited only by the access rights of the server process itself.
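The danger described above can be sketched in a few lines of Python. The class names are the hypothetical ones from the text, and `eval()` stands in for "run a command"; the point is simply that a constructor executes as a side effect of creating the object:

```python
# Why instantiation alone can mean code execution: the constructor runs
# automatically when the object is created. Class names are the made-up
# examples from the text; eval() stands in for running a system command.

class TextString:
    def __init__(self, param: str):
        self.value = param            # harmless: just stores the text

class RunCmdAndReadOutput:
    def __init__(self, param: str):
        # Dangerous: the constructor *executes* its parameter.
        self.output = eval(param)     # stands in for "run this command"

# A remote caller who merely gets to choose (class_name, param)...
classes = {"TextString": TextString, "RunCmdAndReadOutput": RunCmdAndReadOutput}

def instantiate(class_name: str, param: str):
    return classes[class_name](param)

safe = instantiate("TextString", "example.com:8888")
print(safe.value)                                   # example.com:8888

evil = instantiate("RunCmdAndReadOutput", "6 * 7")  # code runs on creation!
print(evil.output)                                  # 42
```

Note that the attacker never calls any method on the object afterwards: merely getting it created is enough.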

Of course, the attack is only this easy once you get to the last stage, which you’re not supposed to be able to do, because Exchange has a strict allowlist that prevents you from choosing any old object to instantiate.

In theory, only safe or low-risk objects can be created remotely via PowerShell, so that instantiating our imaginary TextString above, or a SimpleIntegerValue, might be considered acceptable, while a ConnectedTCPClient or a RunCmdAndReadOutput would definitely not be.

But the ZDI researchers noticed that, before triggering the last step, they could do this:

  • STEP 3. Remotely trick Exchange into thinking that a low-risk object that’s passed the safety test is, in fact, some other object of your choice.
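One hypothetical shape such a class-confusion can take (a loose Python sketch, not the actual Exchange logic; the field names are invented): the safety check reads one part of the request, while the factory that actually creates the object reads another:

```python
# Invented sketch of a class-confusion bug: the validator and the
# factory disagree about which field names the type to instantiate.

ALLOWLIST = {"TextString", "SimpleIntegerValue"}

def is_safe(request: dict) -> bool:
    # The validator trusts the *declared* type name...
    return request["declared_type"] in ALLOWLIST

def build_object(request: dict) -> str:
    # ...but the factory instantiates whatever the payload actually names.
    return "instantiated " + request["payload_type"]

request = {
    "declared_type": "TextString",          # passes the safety test
    "payload_type": "RunCmdAndReadOutput",  # what actually gets created
}

if is_safe(request):
    print(build_object(request))   # instantiated RunCmdAndReadOutput
```

Any mismatch of this general kind lets a "low-risk" label smuggle a high-risk object past the allowlist.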

Even so, you might expect Exchange to prevent the remote creation even of low-risk objects, to minimise the threat even further.

But the researchers found that they could:

  • STEP 2. Remotely trick Exchange into using its PowerShell Remoting feature to create an object based on initialisation parameters controlled externally.

And that was possible because of the ProxyShell-like hole that was only semi-patched:

  • STEP 1. Remotely trick Exchange into accepting and processing a web request with attack code inside it, by packing a valid username:password field into the request as well.

Even if the user named in the request wasn’t actually logged in, and would need to go through some sort of 2FA process to access their own mailbox, an attacker who knew their username:password combination would have enough authentication information to trick Exchange into accepting a web connection that could be used to kick off the attack chain described in steps 2 to 4 above.

Loosely speaking, any valid username:password combination would do, given that the “authentication” was needed simply to prevent Exchange from rejecting the HTTP request up front.
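Basic Authentication headers of the sort involved here are trivial to construct, because the username:password pair is merely base64-encoded, not encrypted. A quick Python sketch (the credentials are made up):

```python
import base64

# Basic Authentication packs username:password into an HTTP header
# almost as-is: base64 is an encoding, not encryption.
creds = "alice@example.com:S3cretPassw0rd"
header = "Authorization: Basic " + base64.b64encode(creds.encode()).decode()
print(header)

# Anyone holding the header can recover the credentials instantly:
decoded = base64.b64decode(header.split()[-1]).decode()
print(decoded)   # alice@example.com:S3cretPassw0rd
```

That reversibility is one reason the "What to do?" advice below recommends blocking legacy authentication outright.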

What to do?

Note that this attack only works:

  • If you have on-premises Exchange servers. Microsoft claims to have locked down its own cloud services quickly, so Exchange Online is not affected. Make sure you know where your Exchange servers are. Even if you now use Exchange Online, you may still have on-premises servers running, perhaps left over by mistake from your migration process.
  • If your servers are unpatched. Make sure you have applied the Exchange Software Update of 2022-11-08 to close off the vulnerabilities that the exploit requires.
  • If your servers still accept Basic Authentication, also known as legacy authentication. Make sure you have blocked all aspects of legacy authentication so your servers won’t accept the username:password headers mentioned above, and won’t accept risky Autodiscover protocol requests in the first place. This stops attackers tricking a server into accepting their booby-trapped object instantiation tricks, even if that server isn’t patched.

You can keep track of our official prevention, remediation and response advice, and Sophos customers can keep track of the threat detection names used by our products, via the Sophos X-Ops Twitter feed (@SophosXOps).


LEARN MORE ABOUT EXCHANGE AUTHENTICATION AND OAUTH2


With Paul Ducklin and Chester Wisniewski
Intro and outro music by Edith Mudge.

