“ScamClub” gang outed for exploiting iPhone browser bug to spew ads

Digital ad company Confiant, which claims to “improve the digital marketing experience” for online advertisers by knowing about and getting rid of malicious and unwanted ads, has just published an analysis of a malvertising group it calls ScamClub.

According to Confiant, this group is behind a massive number of those annoying and scammy popup campaigns you will almost certainly have seen, where you visit an apparently honest web page and then get pestered with online surveys.

We’ve warned our readers many times about the risks of online surveys – even ones that don’t obviously or explicitly lead to attempted malware infections.

At best, you will often end up giving away a surprising amount of personal data, typically in return for a minuscule chance of winning a free product (fancy phones, high-value gift cards and games consoles are typically used as lures).

Tranche of personal data grabbed by a typical “free gift” survey scam.
This step “wins” you the “right” to spend a bunch of money to qualify for the “prize”.

Or you may end up on the wrong end of a “survey-and-offer ladder”, where you have to “take advantage” of ever-more expensive offers to qualify for a prize, which means making numerous purchases along the way – and therefore giving out your credit card data over and over again.

In one example we analysed, by sharing personal data right now, you would “win” the “advantage” of making at least 10 more purchases within 20 days, in categories called Silver, Gold and Platinum, to qualify for a prize worth as little as £100.

We couldn’t even see what those Gold or Platinum purchases might be up front, but with eight Platinums to buy, and a typical Silver purchase (needed to “get on the ladder” in the first place) running at about £2.50, we’re guessing that we’d have spent a lot more than the value of the prize that we might eventually have qualified for but still not actually won…

…and if we bailed out at any point, or were subsequently found to have provided information at any stage that was deemed “inaccurate”, we’d have been disqualified anyway.


Scamming by exploit

According to Confiant, the ScamClub crew took things to an even more aggressive level by actively targeting a bug in Apple’s WebKit browser engine, the software core that every browser on your iPhone, including Safari, is required to use.

(Browsers not based on WebKit aren’t permitted in the App Store, even if those browsers are based on other web rendering engines on other platforms.)

The bug, designated CVE-2021-1801, was patched by Apple in recent updates to iOS and iPadOS (version 14.4) and macOS (Big Sur version 11.2 and Security Update 2021-001 for Catalina and Mojave).

Confiant says that the vulnerability, although nowhere near serious enough to allow remote code execution or any kind of major privilege escalation such as exfiltrating data belonging to other apps, nevertheless gave these rogue advertisers a chance to evade security restrictions that the WebKit sandbox was supposed to enforce.

The sandbox restrictions were supposed to prevent Apple users from being pestered by this ad group’s web redirects, but the vulnerability, it seems, allowed ScamClub to fetch and present dubious ad content from third-party servers that should have been blocked, and that you wouldn’t have approved if you’d explicitly been asked.

Deliberately exploiting a vulnerability to achieve a cybersecurity bypass that you jolly well knew wasn’t supposed to happen, even if you don’t use it to commit subsequent crimes such as implanting malware but simply for your own convenience, is against the law in many jurisdictions.

Google, for example, found that out nearly 10 years ago when it was hit with a multimillion dollar fine for using a security bypass trick against Apple users to set browser cookies that Safari would otherwise have blocked.

What to do?

  • Get those Apple updates if you haven’t done so already. If you’re an iOS 13 user, please note that the latest security update for iOS 13 is iOS 14.4. There is no separate, supported channel for iOS 13 updates as there is with iOS 12. (Don’t shout at us. We’re just the messengers.)
  • Watch out for online surveys, no matter how harmless they seem. You will often give away a pile of personal data only to find that you need to give away even more, including making online credit card purchases through websites you wouldn’t normally trust, to stay “on track” for your “prize”. Even if you bail out half way through, you can’t recall any of the data you have already handed over. Watch the video below for excellent advice.
  • Know your privacy limits, and stick to them. If you have friends or family who are in the habit of filling in surveys because they think they’re mostly harmless, show them this article.
  • Don’t use tricks or subterfuge in your own online marketing. It’s not just consumers and potential customers whom you may anger – the regulators in your part of the world may be losing their patience with overly aggressive marketing tricks, too.


Romance scams at all-time high: here’s what you need to know

The US Federal Trade Commission (FTC), America’s official consumer protection watchdog, recently warned that romance scammers are making more money than ever before.

Victims in the US were tricked out of more than $300 million in 2020, up from $200 million in 2019.

The FTC says that the median financial loss in a romance scam was $2500, more than ten times the median for other online scams.

Here’s what you need to know, whether you want to protect yourself or to advise your friends and family if you think they are being sucked into a scam of this sort:



What are romance scams?

Romance scams, if you’ve not heard of them before, are pretty much what the name suggests, with the fake romance conducted online, something like this:

  • A cybercrime gang finds you online, typically through a dating site or social media.
  • The gang researches your interests using public sources such as the dating site itself, your social media accounts, and information posted by your real-life friends.
  • One of the gang creates a fake online profile that aligns nicely with yours, and makes contact using an assumed personality that’s calculated to appeal to you, typically using someone else’s name and photo.
  • If you show an interest, the crook carefully cultivates a “friendship” by pretending to be exactly the sort of person you’re looking for, typically over a period of weeks or months.
  • You form what you think is a loving online relationship with the crook, who pretends to have fallen in love with you, too. The scammer will typically put in a lot of effort here in order to cultivate a sense of being truthful and reliable, so you may exchange hundreds of messages and voice calls with them. You can expect that they will reply quickly and apparently lovingly to all the messages you send, and that they won’t miss online “dates” they’ve promised to keep with you.
  • The crook then talks you into handing over money. Typically the scammer will claim to live far away and says they can’t easily meet up with you, even if neither of you are living under coronavirus lockdowns. They then talk you into handing over money, typically a small amount at first, often followed by more and more.

Before the coronavirus pandemic, the excuses given for needing money to meet up would often involve visas and air tickets, possibly with some added lawyer’s or agent’s fees thrown in for visa processing.

Typically, your “loved one” would then fail to show up at your local airport as expected, and come up with excuses such as getting arrested, having a serious accident in the taxi to the airport at their end, or one of a dizzying array of plausible, emotional, but always utterly dishonest reasons why you should send yet more funds.

As the FTC points out, the coronavirus pandemic has created a whole new set of excuses available to romance scammers for why they need more and more money, even after you’ve paid for their travel and they failed to show up the first time:

Scammers fabricate attractive online profiles to draw people in, often lifting pictures from the web and using made up names. Some go a step further and assume the identities of real people. Once they make online contact, they make up reasons not to meet in person. The pandemic has both made that easier and inspired new twists to their stories, with many people reporting that their so-called suitor claimed to be unable to travel because of the pandemic. Some scammers have reportedly even canceled first date plans due to a supposed positive COVID-19 test.

Money gone for good

These scammers typically ask you to pay using wire transfer, often using the excuse that banking facilities in the country where they are living at the moment are unreliable and corrupt.

Another common way they arrange to receive your money is to get you to buy online gift cards that they say they can redeem at their end – all you need to do, they say, is scratch the card to get the PIN, then email the PIN and the card number to them.

Sadly, both wire transfers and “spent” gift cards are equivalent to cash, so there is no way to get your money back afterwards when you realise you’ve been scammed.

Another common trick that romance scammers use, especially if they think you won’t be fooled if they insist that you send money to them, is to show their “appreciation” by sending money or a “love gift” such as jewellery to you.

Sure enough, you soon receive a courier company’s tracking notice for the specified gift, with your “loved one’s” fake name as the sender.

But the courier company and the fake website you are asked to use to accept the “gift” are all part of the scam, too.

The “delivery fees” you will now be asked to pay to receive your “gift” go straight to the scammers, and the item never shows up.

What to do?

  • Don’t blame yourself if you get reeled in. These people are confidence tricksters by profession, so they have loads of practice. Unlike many online scammers, they are willing to play a long game, investing weeks, months and even years into building up false friendships.
  • Consider reporting your scam to the police. You almost certainly won’t get any money back, but with your evidence and that of other victims, there is at least a possibility of identifying and stopping the criminals involved, and of warning potential future victims away from these scammers.
  • Look for a support group if you feel depressed after getting scammed like this. But beware of people “reaching out” to help you online after you’ve been scammed, lest you get drawn into a scam all over again. Ignore any offers of help to “recover your money” after a scam – it’s probably the very same scammers having another try. Ask your local police or health care professional for advice.
  • Listen openly to your friends and family if they try to warn you. These criminals think nothing of deliberately setting you against your family as part of their scam, using romantic excuses such as “love conquers all”. Don’t let the scammers drive a wedge between you and your family as well as between you and your money.
  • Get out as soon as you realise it’s a scam. Don’t warn the scammers you suspect them, or ask them if they really do love you after all, because they will only tell you what you want to hear. Cut off contact unilaterally, abruptly and completely, and get into a genuine support group if you need help.

By the way, you can rumble some of these scammers right at the start by following the FTC’s advice of doing a reverse image search of the person they claim to be.

Sometimes, the reverse image search will lead you to someone else’s profile, and it will be pretty obvious that the scammer has simply copied someone else’s persona for their fake identity.


How one man silently infiltrated dozens of high-tech networks

We know what you’re thinking: “I bet you this is what they call a supply chain attack.”

And you’d be right.

The “one man” in the headline is cybersecurity researcher Alex Birsan, and his paper Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies, which came out last week, will tell you how his “attack” worked.

Of course, Birsan didn’t literally do it alone and unaided (see the end of his paper for the section of shout-outs to others who helped directly or inspired him indirectly during his research), and he didn’t really attack anyone in the way that a criminal hacker or cracker would.

His work was done in accordance with bug bounty rules or pre-arranged penetration testing agreements, and Birsan actually includes bug bounty programs in his credits:

[A shout-out to] all of the companies who run public bug bounty programs, making it possible for us to spend time chasing ideas like this one. Thank you!

Malware-by-update

Loosely speaking, the corporate vulnerabilities that Birsan uncovered have the same cause as many malware-by-software-update stories we’ve written about before – a problem perhaps best described as a dependency disaster situation, although Birsan more graciously refers to it as dependency confusion.

Many programming languages these days come with an enormous treasure trove of community-contributed content that helps you to write even complex software very quickly, by giving you easy and automatic access to add-on libraries that solve programming problems that might take weeks, months or even years of work to code from scratch.

If you’ve ever programmed in C on Windows, for example, and you’ve wanted to add cryptographic capabilities to your software – to encrypt and decrypt data with AES, for example, or to validate file hashes, or to access high-quality random numbers…

…you’ll know that you don’t have to implement all that complex (and easy-to-get-wrong) stuff yourself.

You can just load and use the built-in system library BCrypt.dll (BCrypt is short for basic cryptography) and call the function BCryptGenRandom() in that library directly.

Your software is then said to be dependent on BCrypt.dll, inasmuch as your program won’t run if that DLL isn’t present (although on Windows it always is), and because your program automatically inherits all BCrypt’s strengths and weaknesses.
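As a hedged illustration of the same idea (sketched in Python rather than C, since the principle is identical in any language with a trusted standard library): your code leans on a ready-made cryptographic dependency instead of rolling its own, and inherits that dependency’s strengths and weaknesses along the way.

```python
# Analogous sketch in Python: rather than implementing a random
# generator yourself, you depend on the standard library's
# cryptographically strong "secrets" module, much as a C program
# on Windows would depend on BCrypt.dll.
import secrets

def session_token(nbytes: int = 16) -> str:
    # secrets.token_hex() draws from the operating system's CSPRNG,
    # so this code automatically inherits the strengths (and any
    # weaknesses) of that underlying dependency.
    return secrets.token_hex(nbytes)

print(len(session_token()))  # 16 random bytes -> 32 hex characters
```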

Wider, deeper and much, much bigger

When it comes to popular open source coding environments such as Node.js (basically JavaScript running outside your browser), Python and Ruby, these dependency trails can become much wider and much deeper, and therefore correspondingly much, much bigger and harder to control.

A few years ago, for instance, we wrote an article entitled NPM update changes critical Linux filesystem permissions, breaks everything.

To set the scene in that article, we asked you to imagine that you had been set the task of writing a JavaScript program to match two images of human faces.

To solve this problem from scratch on your own might take years, but thanks to a ready-made library called facenet, you can literally do it in a few lines of code of your own. (There’s a working code example in the facenet package that is just 16 lines long, including comments.)

But, as we described back in 2018, facenet itself depends on @types/ndarray, argparse, blessed, blessed-contrib, brolog, canvas, chinese-whispers and many other packages; chinese-whispers, in turn, needs jsnetworkx, knuth-shuffle and numjs; of these, jsnetworkx needs babel-runtime, lodash, through and tiny-sprintf; and babel-runtime in turn needs regenerator-runtime, and so it goes, on, and on, and on.

As British mathematician Augustus De Morgan famously wrote in his 1872 book A Budget of Paradoxes:

 Great fleas have little fleas upon their backs to bite 'em,
 And little fleas have lesser fleas, and so ad infinitum.
 And the great fleas themselves, in turn, have greater fleas to go on;
 While these again have greater still, and greater still, and so on.

In other words, even though a decision to use facenet in your program will reduce the complexity of your code enormously, it will greatly increase the complexity of the “hierarchy of fleas” on which your code depends.
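The “hierarchy of fleas” can be made concrete with a toy sketch. The package names below are taken from the facenet example above, but the graph is a heavily abridged slice of the real tree, and the walking code is purely illustrative:

```python
# A toy sketch of how transitive dependencies multiply.
# This graph is a small, abridged slice of the real facenet tree.
deps = {
    "facenet": ["argparse", "brolog", "canvas", "chinese-whispers"],
    "chinese-whispers": ["jsnetworkx", "knuth-shuffle", "numjs"],
    "jsnetworkx": ["babel-runtime", "lodash", "through", "tiny-sprintf"],
    "babel-runtime": ["regenerator-runtime"],
}

def transitive(pkg, seen=None):
    # Depth-first walk collecting every package your code ends up trusting.
    seen = set() if seen is None else seen
    for dep in deps.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive(dep, seen)
    return seen

print(len(transitive("facenet")))  # already 12 packages in this tiny slice
```

Even this deliberately truncated slice pulls in a dozen packages; the genuine tree runs to many times that.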

Automatically handling dependencies

For better or worse, modern package management tools, including pip (for Python, backed by the PyPI repository), gem (for Ruby, backed by RubyGems) and npm (for Node.js), can hide this dependency complexity from you by automatically identifying, fetching, downloading, configuring and installing the packages you need, plus the packages on which they depend, and so on.

As handy as this sounds, you’re probably thinking that there’s a lot that could go wrong here, and you’d be right.

A complex dependency tree means a complex package supply chain, and a complex supply chain means a greatly increased attack surface area for you, and thus indirectly for your customers.

After all, whenever one of the packages in your own sea of dependencies gets updated, your package manager can go out and fetch and install the update for you by itself – automatically distributing it to your whole network, and even onwards to your customers, if you aren’t careful.

So, any mis-step in the curation of any of the packages you rely upon, by any one of the hundreds or even thousands of coders in the community whose programming and packaging skills you have implicitly chosen to trust, could lead to a security disaster.

Worse still, updated packages that are fetched and installed by your dependency manager can introduce malware into the heart of your coding ecosystem even if the source code in the package itself remains exactly the same.

That’s because software packages of this sort typically include general-purpose installation scripts that are run just once, at install or update time, so a malicious installation script could sneakily mess with your network without visibly altering the directory trees full of source code that your developers rely on.

With a modified and booby-trapped package installation script, but unsullied and unmodified package source code, your developers won’t notice or experience any changes in the behaviour of the software that they’re working on, because the source code they’re using will remain unaltered.
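To make the “runs just once at install time” point concrete, here is a minimal and entirely hypothetical NPM package manifest (the package and script names are invented for illustration); the postinstall hook is an arbitrary command that the package manager executes during installation, quite separately from the source code your developers later read:

```json
{
  "name": "our-own-file-verifier",
  "version": "2.0.1",
  "main": "index.js",
  "scripts": {
    "postinstall": "node setup-helper.js"
  }
}
```

The code in index.js can be byte-for-byte identical to the legitimate package; it is setup-helper.js, run once by the package manager at install time and never looked at again, that an attacker would booby-trap.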

When inside and outside collide

In Birsan’s research, he found numerous cases where source code published by a variety of major vendors, including Apple, Microsoft, Tesla, Uber, Yelp and dozens of others, contained clearly documented dependencies on internal (company-created) packages written in a variety of different languages.

As you can imagine, these internal packages – ones that weren’t available in public repositories like PyPI, RubyGems and the NPM archives – had internal names, typically because the functions they performed would never be needed in other software and would therefore be no use to anyone else.

(In your own network, for example, your coders might have JavaScript packages with unique names such as our-own-file-verifier or our-own-modified-authentication-check. There’s nothing wrong with that, not least because it makes it easy to spot your own customised internal packages at a glance.)

So Birsan wondered:

  • Can I collect a list of unique package names from the big players? These package names don’t need to be secret, and if they’re used and delivered in pure source code form, for example into a browser, they won’t be secret anyway.
  • How many of these internal names don’t appear in any open source package repositories? Intuition suggests that packages with company-specific names in them will be globally unique because no one else would have a reason to choose them.
  • What if I create public packages with the same names as internal ones and then publish external versions that claim to be more recent? (You can see where this is going.)
  • Will any of these major vendors have set up their internal package managers to accept external packages that happen to have the right names, and blindly use them by mistake as updates for local packages?

As you can probably guess from the headline, the answers to these questions were: Yes; None; They get accepted; and Yes, dozens of them.
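The last of those questions hinges on naive version resolution: if a resolver merges candidates from internal and public indexes and simply prefers the highest version number, the attacker’s public package wins. Here is a minimal, hypothetical sketch of that logic (the versions and index contents are made up):

```python
# A sketch of the "highest version wins" logic at the heart of
# dependency confusion. Versions and index contents are hypothetical.
def pick_candidate(internal, public):
    # Naive resolvers merge the candidates from every configured index
    # and simply take the one with the highest version number.
    candidates = internal + public
    return max(candidates, key=lambda c: c["version"])

# The company's genuine internal package...
internal = [{"source": "internal", "version": (1, 3, 0)}]
# ...and an attacker's public package with the same name and a huge version:
public = [{"source": "public", "version": (99, 0, 0)}]

chosen = pick_candidate(internal, public)
print(chosen["source"])  # the attacker's public package wins
```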

In short, Birsan and his fellow researchers found a way to infiltrate updates into many corporate development environments in which the package source code they injected was unchanged, and thus would have gone unnoticed during code comparisons (diffs), code reviews and testing…

…but where the package update scripts, which get run just once during a remotely triggered update and then effectively ignored, were programs of their own choice.

Birsan didn’t actually install real malware – he just used a simple call-home script to confirm that his remotely injected “malware” had indeed been executed inside the “victim’s” development network, and from there had been able to connect outwards.

And there you have it – full-on remote code execution (RCE) holes that could be deployed at will, using popular public code repositories as unwitting malware carriers.

No passwords to hack; no 2FA codes to guess; no VPN vulnerabilities to unravel; no elevation of privilege exploits to acquire sysadmin rights; no malware or hacking tools to deploy; in fact, no access needed to the victim’s network at all.

What to do?

  • Separate your developers from live public repositories. Don’t let external package updates into your development network until they have been downloaded and vetted by your security team.
  • Be prepared to rewrite modules to keep dependencies under control. The bigger your dependency tree, the greater your attack surface. The more external package maintainers you rely upon, the more people whose innocent mistakes could lead to your own downfall.
  • Review all package update tools to stop them accessing public repositories unless they are supposed to. Ensure that any automated package update scripts inside your organisation are configured (and firewalled) to prevent them going outside your network by mistake.
  • Specify and verify dependencies and their allowed versions as strictly as you can. Birsan’s booby-trapped packages generally relied on company update scripts blindly accepting any package with the same name and almost any greater version number than the official internal version. Use strict package dependency lists so you can’t update “by mistake”. Use cryptographic hashes to create a strict package allowlist if you can, or use locked-down version numbers otherwise.
  • Don’t let code review become a simple checkbox. Don’t forget to review all parts of any updated package before you accept the update into your development or build ecosystem, even if that package originates inside your network. Be sure to review the scripts that run only once when the update is applied. It’s not enough to check just the final source code that ends up in your development or product directory tree.
  • Verify external package updates by watching for unexpected file system changes on a test system first. Don’t just look for modified files. Check for changes in access control lists and file permissions, too, and consider monitoring network traffic during the update process to look for connections you would not usually expect.
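As a sketch of the hash-based allowlisting suggested above (the file name and artifact bytes here are invented for illustration; in a real pipeline you would record the SHA-256 of each artifact your security team has vetted):

```python
# A minimal sketch of a cryptographic package allowlist: an update is
# accepted only if its SHA-256 matches the hash of a vetted artifact.
import hashlib

ALLOWLIST = {
    # package filename -> SHA-256 of the vetted artifact
    "ourpkg-1.3.0.tar.gz": hashlib.sha256(b"vetted package bytes").hexdigest(),
}

def is_approved(filename: str, data: bytes) -> bool:
    expected = ALLOWLIST.get(filename)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(is_approved("ourpkg-1.3.0.tar.gz", b"vetted package bytes"))  # True
print(is_approved("ourpkg-1.3.0.tar.gz", b"tampered bytes"))        # False
```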

Birsan himself additionally recommends reading a paper from Microsoft entitled Three ways to mitigate risk using private package feeds.

In the jargon, go for a zero trust approach: take nothing on trust, but verify everything instead.

As we’ve known since Homer’s time, there’s many a slip ‘twixt the cup and lip.


Egregor ransomware criminals allegedly busted in Ukraine

According to a report from radio station France Inter, numerous cybercriminals connected to the Egregor ransomware gang have recently been arrested.

It’s not yet clear whether there are suspects in custody both in France and in Ukraine, but France Inter says [our translation] that:

This was a massive Franco-Ukrainian operation. Since Tuesday [last week], police in the two countries have been working together in an effort to dismantle a cybercrime group suspected of initiating hundreds of ransomware attacks dating back to September 2020.[…] Police arrested a number of hackers suspected of working with the Egregor cybercrime gang, providing hacking, logistical, and financial support.

RaaS

Like many ransomware gangs these days, Egregor isn’t a small and self-contained hacking crew.

Egregor is an example of what’s become known as RaaS, short for ransomware-as-a-service, a name that’s ironically derived from industry terminology such as IaaS (infrastructure-as-a-service) and SaaS (software-as-a-service).

Ransomware-as-a-service typically means that the core technical operators – the criminals who code the ransomware and collect the money from victims – don’t need to deal directly with those victims.

Instead, the core criminals behind a RaaS operation provide a web portal through which “affiliates” can sign up to acquire malware samples, after which it’s up to the affiliates to carry out the “street work” of breaking into networks, spreading the ransomware and initiating the blackmail demands in which most ransomware attacks culminate.

The core criminals then collect the cryptocurrency paid in by victims and pay the affiliate behind each attack a percentage of the takings.

Each affiliate in a RaaS scheme typically gets 70% of the “revenue” from each attack they orchestrate, while the core of the gang keep 30% of the takings from every payment.

We can only guess that the crooks chose this cut because 30% is a long-established figure in the legitimate cloud world – one that users of services such as Apple Music or Google Play are already used to.

Cybercriminal double-play

Egregor, like many other contemporary ransomware strains, doesn’t rely only on scrambling your files and then blackmailing you into paying for the decryption key.

Affiliates are expected to steal a victim’s “trophy data”, for example by secretly uploading it to a cloud storage service, before unleashing the cryptographic coup de grâce of locking up data on the victim’s computers.

This stolen data is used as a second, perhaps even scarier, basis for extortion.

The victim is told not only that they will get the decryption key if they pay up, and therefore be able to get their business moving again, but also that their stolen data will be deleted and not shared with the world at large.

Given that the data stolen by ransomware attackers often includes company secrets and personal data about customers, the crooks are holding a data breach disclosure sword, as well as a cryptographic business blockade, over their victims’ heads.

Egregor, along with many other ransomware gangs, even runs its own publicity site on the dark web, where companies that refuse to pay up get named and shamed, and samples of potentially embarrassing files get dumped for all to see.

The bust

Based on the France Inter report, it doesn’t sound as though the core players in the Egregor operation have been busted, but rather that a bunch of affiliates and “hired hands” have been identified and arrested.

Nevertheless, a report from ZDNet claims that the Egregor infrastructure – the underworld web services that keep gang affiliates in business – has been offline since last Friday, including both the data disclosure “name-and-shame” pages and the servers that control the operation of the malware itself.

Even if the core of the group is still going and ready to drag itself out of the ashes, this bust and associated operational disruption is therefore welcome news.

And to anyone who is part of, or who’s been toying with the idea of becoming part of, the ransomware-as-a-service scene, assuming that it’s a sneaky way of joining in undetectably at what feels like the fringes of cybercrime…

…just remember that you’re not as invisible or anonymous as you might think, and that, if you do get caught up in a dragnet like this, you can expect little sympathy from magistrates and judges these days when your time comes to get sentenced.

What to do?

  • Watch this video. In this excellent and well-informed talk, we give you comprehensive plain-English advice on how most ransomware attacks unfold, and how to defend against them every step of the way. (This video isn’t just for the healthcare sector, although it was inspired by an FBI alert last year that warned of cybercrooks who perceive hospitals as tempting targets given their current focus on the coronavirus pandemic.)

    LEARN HOW TO STAY SAFE FROM RANSOMWARE

    [embedded content]

    Watch directly on YouTube if the video won’t play here.
    Click on the cog to speed up playback or to see subtitles.

  • Read our advice on how to stay protected from ransomware. Ransomware crooks use a range of techniques to get their first toehold inside your network, including spamming out phishing attacks, cracking or guessing passwords, and seeking insecure or forgotten remote access servers on your public network.
  • Don’t give up on user awareness. Treat your users with respect and help them learn how to be more vigilant, and you can turn them into extra eyes and ears for your core cybersecurity team.
  • Make it easy for users to report suspicious activity. Set up a central mailing list or contact number to act as a “cybersecurity 911”. Cybercriminals don’t phish one user and give up if they fail, so an early warning from someone can immediately help everyone.

Naked Security Live – When is a bug bounty not a bug bounty?

We discuss bug hunting – how to do it professionally, how NOT to do it, and how to react when bugs are reported to you:

[embedded content]

Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.

Related reading

For further information, please take a look at the following:

Why not join us live next time?

Don’t forget that these talks are streamed weekly on our Facebook page, where you can catch us live every Friday.

We’re normally on air some time between 18:00 and 19:00 in the UK (late morning/early afternoon in North America).

Just keep an eye on the @NakedSecurity Twitter feed or check our Facebook page on Fridays to find out the time we’ll be live.
