
Instagram scammers as busy as ever: passwords and 2FA codes at risk

We monitor a range of email addresses related to Naked Security, so we receive a regular (a word we are using here to mean “unrelenting”) supply of real-world spams and scams.

Some of our email addresses are obviously directly associated with various Sophos-related social media accounts; others are more general business-oriented addresses; and some are just regular, consumer-style emails.

As a result, we like to think that our personal scam supply is a reliably representative sample of what the crooks are up to…

…and, as you’ve probably noticed yourself, even though we see all the “old favourites” pretty much all the time, we often see bursts of one specific scam topping our personal prevalence charts.

At one point, sextortion scams were in the #1 spot (that odious sort of message turned into a real deluge in 2019 and 2020).

Then home delivery and parcel scams went wild for a while; then we had a flurry of Docusign ripoffs.

Right now, however, our scam feed is awash with a variety of frauds targeting Instagram, Instagram, and Instagram.

Instagram scams of many sorts

In the past few days, we’ve had bogus Instagram warnings, complete with Instagram branding, in each of these categories:

  • Fake warning: Community guidelines violation. Proposed solution: Contact us to find the content that needs to be removed to clear the block.
  • Fake warning: Copyright infringement. Proposed solution: Dispute the claim and cancel the strike against you by filling in the form.
  • Fake warning: Suspicious login alert. Proposed solution: If this wasn’t you, click through now to secure your account.

Although most of the examples we’ve received were old-style username-and-password phishes, one went on to request our 2FA code as well.

Even though 2FA codes are typically only valid for a few minutes, cybercriminals no longer simply collect phishing data to use later.

Many cybergangs use manual or automatic techniques that alert them as soon as victims visit their phishing sites, allowing the crooks to react in real time.

If they can trick you into handing over a 2FA code as well as your password, they will try that password-and-2FA code combination immediately, knowing that, if they’re quick enough, they’re likely to get their attempt in before the 2FA code expires.
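That race against the clock is simple arithmetic: TOTP codes (RFC 6238) are derived from a 30-second time step, so a phished code stops validating once the server’s time step moves past the one it was generated in (plus whatever small grace window the server allows). Here’s a minimal C sketch of just the time-step bookkeeping – the HMAC part is omitted, and the function names are ours, not any real library’s:

```c
#include <stdint.h>
#include <time.h>

/* RFC 6238 TOTP divides Unix time into 30-second steps;
   each step yields a different code. */
static uint64_t totp_step(time_t now) {
    return (uint64_t)now / 30;
}

/* A code issued at time `issued` is accepted at time `now` only if
   the server's step hasn't moved more than `grace_steps` past the
   step the code was generated in. This is why phished 2FA codes
   must be replayed within seconds to be of any use. */
static int code_still_valid(time_t issued, time_t now, uint64_t grace_steps) {
    uint64_t a = totp_step(issued), b = totp_step(now);
    return b >= a && (b - a) <= grace_steps;
}
```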

While this is not exactly exciting or unexpected news, it’s a reminder that these scams are almost certainly still delivering results for the cybercriminals – potentially giving them instant access to established, trusted social media accounts.

And although these scams usually aren’t too hard to spot…

…the crooks are getting better and better at making them easier to miss.

It’s easy to miss the warning signs and fall into the trap if you’re in a hurry, or if you’re distracted by other events (and who isn’t ATM?), or if you’re a delightful, trusting person who thinks, “Oh, there’s obviously been some mistake. Surely just the matter of a moment to sort it out, thanks to the handy and official-looking form provided.”

What to look for

Here’s what the fake warnings we’ve received have looked like; if you have friends or family whom you think might be tricked by this sort of message, please share this article with them so they know that they’re one of millions of people receiving the same fraudulent messages.

It’s often easier to convince people near and dear to you if it’s someone else behind the advice you’re offering – if nothing else, it sounds less “preachy” or judgmental if someone they don’t know is saying it.

And, sometimes, pictures are worth 1000 words, so here’s what they looked like.

1. Fake “Suspicious login alert” sample:

2. Fake “Community guidelines violation” sample:

3. Fake “Copyright infringement” sample:

What happens if you click through?

Here’s an example of the sort of follow-up pages that you’d see if you clicked through – this is the “suspicious login” sequence:

And here’s the fake “copyright appeal” – take note of the website name in these images, where what looks like an upper-case I (eye) is actually a lower-case L (ell):

Finally, here’s the fake “community violation”, complete with a phishing page that tries to grab your 2FA code (or one of your backup codes if you don’t have your phone handy) for the crooks to try to break into your account right away, in real time:

What to do?

  • Don’t click “helpful” links in emails or other messages. Learn in advance how to handle Instagram complaints or security warnings, so you know the procedure before you need to follow it. Do the same for the other social networks and content delivery sites you use. If you already know the right URL to use, you never need to rely on any links in any emails, whether those emails are real or fake.
  • Think before you click. The emails above are only vaguely likely, so we hope you wouldn’t believe them in the first place (see point 1), but if you do click through by mistake, don’t be in a hurry to go further. The fraudulent sites above had HTTPS certificates (padlocks) and server names that included text such as “lnstagram” (note: with an L, not an I!), but they clearly weren’t hosted on the genuine Instagram site. A few seconds to stop and double-check the site details is time well spent.
  • Use a password manager if you can. Password managers help to prevent you putting the right password into the wrong site, because they can’t suggest a password for a site they’ve never seen before.
  • Watch our video below for additional advice. Early in 2021, we presented a Facebook Live talk looking at the history and evolution of this type of scam. If you have any friends who rely on social media to generate income, and who might be worried about getting cut off from their accounts, show them the video to protect them from tricks like these.
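The “lnstagram” trick in point 2 works because a lower-case L (ell) and an upper-case I (eye) render almost identically in many sans-serif fonts. A hypothetical C helper illustrating the idea – fold the lookalike characters together before comparing, so a visually-identical-but-different hostname stands out (this is purely illustrative, not a production domain checker):

```c
#include <ctype.h>
#include <string.h>

/* Fold a hostname so that lower-case L (ell) compares equal to I (eye),
   since the two glyphs look alike in many fonts. */
static void fold_lookalikes(const char *s, char *out, size_t outlen) {
    size_t i = 0;
    for (; s[i] && i + 1 < outlen; i++) {
        char c = (char)tolower((unsigned char)s[i]);
        if (c == 'l') c = 'i';   /* treat ell as eye */
        out[i] = c;
    }
    out[i] = '\0';
}

/* Returns 1 if `host` *renders* like `brand` but is not literally it,
   e.g. "lnstagram" vs "Instagram". */
static int is_lookalike(const char *host, const char *brand) {
    char a[256], b[256];
    fold_lookalikes(host, a, sizeof a);
    fold_lookalikes(brand, b, sizeof b);
    return strcmp(a, b) == 0 && strcmp(host, brand) != 0;
}
```

Real-world confusable detection (see Unicode’s “confusables” data) covers far more character pairs than this, but the principle is the same.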

Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.



S3 Ep71: VMware escapes, PHP holes, WP plugin woes, and scary scams [Podcast + Transcript]

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. VMware holes, PHP flaws, WordPress bugs, and sextortion.

All that and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everyone.

I am Doug Aamoth; he is Paul Ducklin…


DUCK. That is I!


DOUG. We have a lot to cover today.

Paul, we love to start the show with a Fun Fact… and I don’t know if you at one point or another were a Nokia man?


DUCK. [SINGS THE NOKIA TUNE] Didda-loo-doo/Didda-loo-doo/Didda-loo-doo/DOOO.


DOUG. So good!


DUCK. You know that “dah-dah-dah dit-dit dah-dah-dah”. the tone for Nokia SMS, is just Morse code for “SMS”?


DOUG. Yes, Morse code for SMS – I did know that!

As I was researching these Fun Facts, because this fact will tie into our This Week in Tech History segment…

…this is a fun fact about the old-school Nokia phones, which had a reputation for excellent durability.

And, among those, the Nokia 3310 handset, circa September 2000, is believed to be the most durable of the bunch.

So if you had a Nokia handset around that time, odds are it was the 3310 and it was indestructible.

I myself was a Nokia 6110 man – that came out in 1998.

I remember buying it – I was working at a computer superstore called Circuit City; I was selling computers and I bought it on the employee discount.

There was a cellphone plan for $50 that gave you 70 daytime minutes and 200 nighttime and weekend minutes.

For $50 – and I thought that was a great deal.


DUCK. I had the…was it the one after? The 6210?


DOUG. That was a good one.


DUCK. And that went missing.

So, I thought, “I’m just going to get a cheap phone.”

And I got… I think it was the 8210, the little tiny one.


DOUG. Yes.


DUCK. Like the 3310, but even smaller.

And I must say, Doug, that phone had the best voice quality of any phone I’ve had before or since.


DOUG. Yes – isn’t that weird?

It’s gotten worse somehow.


DUCK. And it had the world’s weirdest camera.

I think it was 200 kilopixels, that camera.


DOUG. [LAUGHS]


DUCK. I took pictures of beaches I’d visit, and afterwards I was unable to recognise what the photograph even was.

I’d infer they were beaches because they’d be somewhat browny-coloured at the bottom and mostly blue at the top.

What a hopeless camera!

But then people didn’t buy phones as cameras, did they?


DOUG. No.


DUCK. They bought them to make telephone calls!

And at that, I must say, it was a superstar.

And how often did you have to charge your phone, Doug?


DOUG. Once a month maybe?

Let’s talk about this VMware story.

This is interesting because as I was reading it, I was like, “What’s the big deal?”

And then you explained what the big deal was – I was perfectly lured into the trap of it not being a big deal.


DUCK. Well, the problem with the main bug that we’re talking about here – or main bugs: CVE-2021-22040 and CVE-2021-22041 – is that although you need to be a local administrator (basically, to have root access already) in order to exploit this bug…

…that root access can be inside a guest virtual machine on the shared computer, not the host.

And of course, if you’ve got a virtual machine in the cloud, you might not know who is running the other VMs on that physical server.

Even if it’s a non-cloud virtual machine server and it’s in your company, you might have several different departments who are each expected to keep their data private from each other – for GDPR reasons, or just common sense reasons.

You need to assume that on any virtual machine server, say one that has 10 VMs running on it, there are going to be 11 different administrator accounts and passwords: one for the host, and one for each of the guests.

And the whole idea is, as the host administrator, you’re not supposed to have to worry about the guests.

If they’re untrustworthy, they’re untrustworthy *only inside their own VM*.

So, the problem with these bugs is they could lead to what are called “guest VM escapes”.

In other words, somebody who has root access inside one of the pseudo-servers could somehow escape from it and manipulate either: other guest computers which might not belong to them, or, worse, the host server, which almost certainly means that they could then reach in and fiddle with all the other guests as well.


DOUG. So, we’ve got a patch, and if you can’t patch, we have a temporary workaround.

So, two ways to sidestep this issue at the moment?


DUCK. Yes.

The patch is the right way to do it, because it’s not just these guest escape bugs that you’re patching.

There are a whole load of other bugs as well – they don’t seem to be as serious or severe, but why patch one thing when you can patch seven things at the same time?

But, like you said, if you can’t quite do that yet, for the guest escape bugs there is a workaround.

Unfortunately, as I understand it, it basically means you get no USB simulation or no USB access in your guest VMs.

So, if you have a guest VM where you expect to be able to emulate USB devices, for example, they’re not going to work.


DOUG. OK, that is: VMware fixes holes that could allow virtual machine escapes on nakedsecurity.sophos.com.

And next we’ve got a PHP flaw.


DUCK. Yes, it’s one of those things where you find yourself thinking, “OK, so somebody who is really naughty and who takes the proof-of-concept could crash my PHP process, and that could stop my web server responding until the relevant process gets restarted.”

Is that a big deal?

But the crash is actually caused by deliberately forcing memory mismanagement.

In particular, a memory “use-after-free”, which is where you basically go and poke a knitting needle into somebody else’s memory and potentially modify it in a completely untrusted way.


DOUG. When Mozilla issues patches, they are quick to point out that, when a bug shows evidence of memory corruption, you should “presume that with enough effort, some of these bugs could have been exploited to run arbitrary code.”


DUCK. Yes, that is quite right!

Because although that might be difficult to achieve, you can imagine that the payback, for a cyber crook who manages to figure it out, could be huge.

Once they know where to start looking, it’s a heck of a lot easier for the Bad Guys to reverse-engineer the exploit from the patch than it is for them to figure it out as a zero-day attack in the first place.

So, I always warm to Mozilla when they put that in basically every security update they do.

They could spend days or weeks on each one of them, to prove that it really is exploitable or not…

…instead, they just go, “You know, we’ve patched these and we’re assuming that, if somebody wanted to, and you didn’t patch, they could be exploited in the future.”

So, be warned.

And the irony, Doug, is that it was essentially incorrectly-managed input in a routine that was supposedly all about input validation.


BOTH. [LAUGHTER]


DUCK. We shouldn’t really laugh!


DOUG. No…

Fortunately, if you’re a PHP user, the fix is as simple as updating and patching.

We’ve also got some advice for programmers.

We do like to say, “Validate Thine Inputs”, just in case… but there’s other advice here as well.


DUCK. In the article, I’ve put two diffs (that means “code differences”) comparing the previous version and the fixed version.

In this case, the function deals with checking the validity of what are called “floating point” numbers or “decimals”, like 2.5, or 3.14159 (that’s pi), or whatever.

And another thing you can do is you can say, “Oh, and I want to make sure whether the number provided falls within a certain range.”

For example, where somebody is giving a scaling factor, you might want that scaling factor to run from -1 to +1.

And it turns out that, under some circumstances, if you send input that fails the check, then instead of the check just failing, what happens is this: the code frees up the memory that you’re using to store the number, and then it’s supposed to immediately reallocate new storage to store the validated number.

In one of the places, that happens correctly.

Basically, the programmer does what you might call, “Look for oncoming traffic. If clear, step into the road, and cross in one go.”

In one of the places that’s the order they do it.

In the other place, they’ve managed to get the three lines of code in the wrong order…


DOUG. [LAUGHS]


DUCK. …and they basically go, “Step into the road. Then check if there are any cars coming…


DOUG. [LAUGHS]


DUCK. …and if there aren’t/you’re still alive, complete the crossing.”

And it looks to me as though what happened is that in one part of the code, the range checking was added, and then someone said, “Oh, we should put that in this other, similar part of the code.”

And they copied the line that does the error checking, but they pasted it into the wrong place – *between* the “memory free” and the “memory reallocate”, instead of either before both of them or after both of them.

(Obviously it should be before, because the idea is, if it’s an error, you’re just going to bail out immediately.)

So the fix was: moving one line of code down, one line in the file.

So, the advice to programmers is this…

Particularly in C, it is easy to make allocation and deallocation errors, and they *all* matter, so when you’re doing a code review, you need to check them all.

And the second thing is – I think if I were to refactor or rewrite this code… because it’s a frequent idiom in this particular module, “free up the memory and allocate a new block”, why do that in two lines of code?

Why not create a function called something like “free_up_and_reallocate_in_one_go()”?

That way the programmer who comes after you can’t copy and paste a line of code in between two lines and break things.

Because there’s only one line, they can only paste ahead of it or after it.

And in this case, either of those would have worked out OK.
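The combined helper suggested above might look something like this in C – the name and signature are illustrative, not PHP’s actual internals:

```c
#include <stdlib.h>

/* Hypothetical combined helper: because freeing the old buffer and
   allocating the new one happen inside ONE call, a later copy-pasted
   error check can only land before or after it -- never in between,
   where the old pointer would dangle. */
static void *free_and_reallocate(void **slot, size_t new_size) {
    free(*slot);                 /* step into the road...  */
    *slot = malloc(new_size);    /* ...and cross in one go */
    return *slot;
}

/* The bug described above, in miniature:
 *
 *   free(buf);
 *   if (out_of_range(x)) return FAILURE;  // pasted in the wrong place:
 *   buf = malloc(len);                    // on failure, buf dangles
 *
 * The fix moved the range check above the free, so the bail-out
 * happens before any memory is touched.
 */
```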

So, the devil is in the details, as they say. Douglas.


DOUG. Very good.

All right, that is: Irony alert! PHP fixes security flaw in input validation code on nakedsecurity.sophos.com.

And now we have a WordPress plugin bug.

OK, it’s a bug – we can talk about the bug, but the way the company *handled* the bug was really impressive, Paul.


DUCK. It’s not the most dramatic bug in the world…

…but it could be problematic, as the company that created it explained.

And so I thought it was worth reminding people who are WordPress users, and who have these particular plugins – there’s a free version called Updraft, and the premium version, the paid version, called Updraft Plus… if you’re using those, they’re backup plugins that help you look after the content of your site.

So, if someone messes something up, you can restore it.

But the bug could have bad consequences.

The problem is this.

Anybody who has a login on your site (so it’s not an “unauthenticated bug”, but with many sites, you might have the administrators, and you might have dozens or even hundreds of contributors who are allowed to upload and put articles in there, and then somebody else has to approve them)…

Any user who can log into your site could, in theory, if they knew how this bug worked, just get the whole backup of your whole site in one lump.

Anyway, when I read the report from the Updraft team, I thought, “My goodness, although this is a somewhat modest bug, and it was quickly fixed, if only more security reports were like this one!”

Clear; written in plain English; no excuses; and a genuine and believable apology at the end.

Even if you don’t use this plugin, you might want to go and read this report, because I think it’s a good example of how you can do security reports well, and perhaps win back trust.

With a less considered response, you might have had exactly the opposite reaction, and actually made your customers feel worse off.


DOUG. All right, that is: WordPress backup plugin maker Updraft says you “should update”.

And it is time in the show for This Week in Tech History.

Well, we talked about Nokia earlier in the show, and this week, on 23 February 2005, we said hello to the first mobile phone virus.

It was a worm called Cabir that affected the Symbian operating system, which was popular on Nokia phones.

The worm spread via Bluetooth to nearby handsets, and didn’t actually do true damage, other than affecting battery life thanks to constant Bluetooth polling.

And it was believed to be released by its creators more as a proof-of-concept, or a warning that mobile malware was indeed possible.

So, Paul, where were you when Cabir broke out?


DUCK. Well, I was still a Nokia user!

This wasn’t followed by an absolute deluge of mobile phone malware, possibly to the collective relief of the cybersecurity industry and users.

But it was a reminder to all of us that, well, “Here’s another operating system that you have to know something about.”

And, boy, Symbian was kind of complicated, Doug!


DOUG. I remember, yes!


DUCK. Do you think Android has lots of variants?

Well, with Symbian it was the same sort of thing – it was a fascinating and complicated ecosystem, Symbian.

[QUIZZICAL] And then, for better or for worse, it just disappeared…


DOUG. I think I read somewhere that, at its height, it was on 70% of handsets…


DUCK. Which is why, when malware like Cabir came out, there was that sense of, “My golly, if the crooks really figure this out, and they figure out how to make money out of this, we’re all doomed, because everyone’s vulnerable!”

By good fortune, we got a few years to think about it before mobile malware did become the sort of problem it is today…

…which I think took more powerful phones.

Suddenly the crooks could go, “Hey, I don’t have to strip down my malware. I don’t have to write this super-miniature thing that doesn’t really do anything. I can just use the same techniques that I would when I’m writing a regular app. I’ll just be naughty about it.”


DOUG. All right.

If we stay in the “old school” for a little while here, we’ve got a new sextortion scam that uses an old-school tactic that I, for one, have not seen in quite some time. But it’s back!


DUCK. You mean, “Let’s send the entire text of the email not as an attachment, not as a link that has to be downloaded, but as a pre-rendered, decent-resolution inline image?”


DOUG. Exactly!


DUCK. It’s an old technique designed to present difficulties, particularly to mail scanning or content scanning software that relies on things like linguistic analysis.

You convert the text, in advance, into an image, so that anyone who wants to do any kind of text scanning or natural language processing on it – or, indeed, to look for any links that are in there – has to do some kind of text recognition first.

This is not only error-prone, it also adds a whole load of extra computational complexity to preprocess every image into text.

But the technique died out among the crooks, I think, because when you have an image, you can’t easily have a clickable link in it.

On the other hand, if what you want to do is scare the person, present what looks like an official document and say, “Read this, think about what we’re saying to you, and email us back and maybe you won’t be charged with these serious criminal offenses allegedly related to viewing of online porn”…

“Contact us: maybe you have a good explanation as to why this might have happened innocently. We’re all ears. Oh, by the way, you have 72 hours.”

So, it’s a nasty trick because, in this case, an image is absolutely fine.

You’re not looking for the person to click on something like “I reject your copyright infringement notice”, like the copyright scammers do.

The verbiage is: “It’s your choice. But there may be something you wish to say in your defence. Here’s the email address.”

In truth, you should spot this scam… because to those of us who have had any number of these before, they all look the same.

They’ve all got the same dramatic story in them.

They’ve all got crazy mistakes, if you know what to look for.

Like this one: the investigating officer you have to email isn’t just Sergeant So-and-so or Inspector So-and-so.

It is the Director General of the French Police!

And the email, amazingly, comes from a person called Jean-Luc Godard [LAUGHS].

He is in his 90s now, I believe, and he is a very famous neo-marxist French cinematographer…


DOUG. [LAUGHS]


DUCK. He’s very well known, and made some amazing films, so I was surprised to find that, in his dotage, he had gone from being a dramatic filmmaker to a serving police officer.

But there you go – I think the thing the crooks are looking out for here is this: they don’t want a million people to respond, do they, to scams like this?

They just want to send out a million copies of the message.

The pretext here in the message is that, “Obviously, we didn’t want to put the gory details of the evidence in this email message, but you may want to contact us to try and clear this up.”

And you imagine that the crooks… their goal is that they want to draw you in.

They want those people who are scared enough, or vulnerable enough, or uncertain enough that they will actually type in the lengthy email address and that they will reply.

Then, they’re looking for what you might call a “long game” scam, where they’ll be in contact with this person over and over and over.

So, the lack of a call-to-action link, the lack of a “click here; put in your password right now so we can drain your credit card”…

…that doesn’t matter.

The crooks are looking for people whom they can scam for a long period of time, in a human-led attack.

They don’t want a million people to respond!

They just want people who have self-selected as those who were terribly scared.

And, as you can imagine, because of the subject matter, those vulnerable people, those easily scared people…

…given the subject matter, they’re not likely to turn to friends and family to help, are they?

“Hey, we’ve got you for cyberporn offenses. This is serious.”

You might think, “Well, I’ve been to some websites that I don’t think were dodgy, but who knows what they’re connected with? Who knows what they’ve got? I’d better find out.”

You’re probably going to think, “Maybe I should reply and just see what’s going on,” rather than, “I should ask my granddaughter, or my uncle, or my parents, or my best friend.”

And that’s really what these crooks are banking on.

And the other thing with the image, of course: it lets them make it look like it’s a scanned-in, official, printed document.

Because there’s an “APPROVED” stamp on it; there’s a stamp with someone’s signature.


DOUG. So, we have some advice for the good people here.

We talked a little bit about it…

* How likely does the message really seem?

We talked about that.

* If in doubt, don’t give it out!

We say that a lot – that’s probably a good place to start.

* Don’t be afraid to check with a trusted source.

That’s good, because if someone were to come to me as a trusted source, I would do the next thing in your advice, which would be:

* Check online for similar messages reported by other people.

So, every time someone comes to me and says, “I was on Facebook and I saw that this is happening,” I go and Google it and I say, “No, that is a scam.”


DUCK. Yes, you’re quite right, Doug!

Because the reason why we write these stories up is precisely so that there is the kind of evidence that you mentioned there.

So if you think, “Well, I wonder if anyone else is getting these?”, and then you search…

…that won’t necessarily catch the crooks out, but sometimes it *will* catch them out.

Because usually it *does* show up that this was obviously a campaign where somebody is accusing [AMAZED TONE] 100 million people, at the same time, of exactly the same crime.

What’s the chance of that?

And so that’s a way that you can set your own mind at rest instead of allowing fear, uncertainty and doubt to eat away at you.


DOUG. All right, that is: French speakers blasted by sextortion scams with no text or links on nakedsecurity.sophos.com.

And, as the sun begins to set on our show for this week, we have our Oh! No! segment.

We have a reader question for you, Paul, in regards to our article Google announces zero-day in Chrome browser: update now.

Reader Diane MP asks a fair question:

“What’s the casual, mildly proficient user supposed to do? Checking my Chrome version gives me a number that does not resemble the required one on Google Play. I just get ‘already installed’. If experts can’t figure out the complexities of this threat and how to protect against it, well, maybe it’s time for people like me to just move on from the computer era. I’ve been wanting to start painting again anyway, and my phone works just fine independent of the Internet.”

Diane… I would say, Diane, don’t go!

Don’t let this kill your joy for being connected to everyone else in the world.

But, Paul, what do you say to that?

The frustration of constantly updating and knowing which version you’re supposed to be running?


DUCK. Yes, I was very sympathetic with that comment.

I can’t remember exactly how I responded – I think I just said, “Look, here’s the official way that you find out, and it works on your mobile phone and on a regular browser on your laptop.”

So I’m very sympathetic to Diane… I figure, Diane, if you’re listening, maybe take more time to paint!


DOUG. I was going to say, Diane, “Do paint, but don’t move on from the computer.”


DUCK. I wouldn’t throw your phone away!

But sadly, it is true that sometimes even companies that pride themselves on being able to find needles in haystacks and present them to us, to our amazement, like the “next video” Google recommends to you and you think, “How did they know?”…

Yet, when it comes to giving you a version number, it’s like extra-super-complicated!

And unfortunately, for every app that you update, and for every app you use, there tends to be a different way to find the one true version number; a different way to look up online what the real, true, current, patched version number is.

Sometimes you just have to run around the houses a bit, trying to work out what the right version is, just to see whether you’ve got it.

Or come to a site like nakedsecurity.sophos.com… to be honest, that’s one of the things we aim to do: where there are simple answers, we’ll try and give them to you – and then, if there are anomalies, or exceptions, or weird things around the edge, you can ask for our comments in the Naked Security community.

And we will do our best to answer for you if we can.


BOTH. [LAUGHTER]


DUCK. But the fact that sometimes it’s a hassle for us to find out…


DOUG. …yes, it’s not just you, Diane.

No, it is absolutely frustrating and confusing for everyone!


DUCK. Fortunately, you will find on Naked Security, when you ask questions like that, people will chip in.


DOUG. Hang in there, Diane!

Well, if you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles like Diane did; or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH. Stay secure!

[MUSICAL MODEM]


Microsoft blocks web installation of its own App Installer files

Late last year (November 2021), we reported on an unusual campaign of scammy emails warning recipients that they were in big trouble at work.

If you saw one of these, you’ll probably remember it: a customer had made a formal complaint and the company was scrambling to hold a meeting to investigate your alleged poor conduct…

…so you were expected to follow a link to download and read the complaint against you.

Here at Sophos, many of us were on the spamming list and received emails of this sort:


You’re fired!

Some of us subsequently received follow-up messages telling us that we were no longer technically in trouble, for the rather dramatic reason that we’d been fired; our “termination letters” were attached, once again as a document download link.

The downloads looked like PDF files:

But the download links weren’t conventional http:// or https:// downloads; instead, they relied on an unusual link starting ms-appinstaller://, which (on Windows, at least), triggers Microsoft’s App Installer system to orchestrate the download process.

This ms-appinstaller protocol not only takes you down a very different visual path than a traditional web download, but also mirrors the sort of experience that you would only ever have seen before, if you had seen it at all, when using the Microsoft Store.

Notably, the process insists on a digitally-signed application bundle (for bundle, think Android APK or Linux package), and therefore starts with a reassuring, if unfamiliar, popup assuring you, with a confident-looking green tick (check mark), that this is a Trusted App, apparently coded by a vendor you know:

Note that in the screenshot above, the publisher’s name (fraudulently given as “Adobe Inc.”) is just a text string in the app bundle itself; to “verify” the signer, you need to click on the blue text Trusted App.

Unfortunately, the signer’s name doesn’t tell you much at all, or at least it didn’t last year when we first saw this trick used for distributing “trusted” malware.

Rogue signing made easy

In the example above, the app signer claimed to be an accounting firm and was a registered UK business.

But if you had chased it further than that, you would have found a company that had never really done any business, was located at an unlikely address, and was about to be deregistered anyway.

Simply put, based on an online-and-apparently-never-used-for-real company registration that probably cost just a few pounds (and who knows whether the attackers actually paid anything for it, or simply acquired access to the company data via an earlier hack or data breach?), cybercriminals were able to:

  • Acquire a “trusted” signing key and use it to sign App Installer bundles.
  • Distribute an App Installer bundle that presented itself as a Trusted App, much like an app from the curated Microsoft Store.
  • Trigger the installation via the Appinstaller.exe process, rather than, more suspiciously, via a browser.
  • Install and activate numerous unsigned apps under the imprimatur of the signed, and allegedly trusted, top-level bundle file.

In the you’ve-been-fired email example we encountered here at Sophos, the purveyors of the “Trusted App” turned out to be the BazarLoader malware gang.

So, if the legitimate-looking, Microsoft Store-like installation process was enough to ensnare your trust, you’d have ended up with persistent backdoor malware – what’s often referred to as a bot or zombie.

The backdoor bot started off by leaking system configuration information to the crooks, and then waited for remote instructions on what new malware to download and run next.

Cybercriminals with generic remote code execution (RCE) access like this typically use your computer as a pawn in the underground economy, “renting” it out – possibly repeatedly – to other crooks to conduct further cybercrimes, either against your computer, or via your computer, or both.

Sometimes, sadly, this sort of zombification only gets detected by the victim when the bot operators “rent out” the infected computer (or use it themselves) one last time for a final round of malware that you can’t help but notice…

…typically, a ransomware attack.

More security implied than an HTTPS certificate

Note that the level of cybersecurity belief you are invited to adopt in the case of an ms-appinstaller:// download is significantly greater than the cybersecurity inference you’re expected to make from a regular https:// web certificate.

Web certificates, which use the TLS (transport layer security) protocol to encrypt and to integrity-check the data exchanged in an HTTP session between a client and the server, don’t say anything about how trustworthy the site at the other end actually is.

Indeed, browser makers have gone out of their way, over the past decade or so, to adjust the words, icons and colours used in the browser itself to describe an HTTPS-protected site.

After all, TLS, by design and by definition, provides transport-level security, thus putting the S (for secure) in HTTP (short for hypertext transfer protocol), but doesn’t aim or claim to perform any verification or trust assessment for the content that’s transmitted.

Firefox, for example, still uses a padlock icon to denote a “secure” site, but annotates the padlock simply with the words “Connection secure” and “You are securely connected”, without making any claims about the site itself:

The Edge browser does something similar when you click on a website’s padlock, mentioning the confidentiality of the connection, but not suggesting that you can therefore ultimately trust the contents of the site:

This site has a valid certificate, issued by a trusted authority. This means information (such as passwords or credit cards) will be securely sent to this site and cannot be intercepted. Always be sure you’re on the intended site before entering any information.​

In contrast, the App Installer popup that verifies the digital signature of the App Bundle you’re downloading explicitly identifies the software itself as a Trusted App, even though it allows the signer of the app to include entirely bogus vendor data in the app bundle, and then helpfully displays that fraudulent “identification” directly beneath the “Trusted App” designator.

This implies, in our minds, anyway, that a much higher cybersecurity bar has been reached: it acts as a type of content-level assertion that lives on after installation, rather than merely denoting some degree of transport-level security that protects the network part of the download only.

Recommended workarounds

We recommended, and still recommend, various security workarounds, including:

  • Use a web filter, if you have one, to block the download of likely App Installer bundles. File extensions to block include: .msix, .appx, .msixbundle and .appxbundle.
  • Use a web filter, if you have one, to prevent users from clicking on URLs that start with ms-appinstaller://. This is the special protocol (referred to in URL terminology as a scheme) used by Windows to fire up the App Installer to take over from your browser.
  • Use Microsoft Group Policy settings, if possible, to prevent non-admin users from installing App Bundles at all. If that’s a step too far, lock down users so they can install App Bundles from the Microsoft Store only.
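As a rough illustration of the first two workarounds, the checks a web filter would apply boil down to matching the URL scheme and the file extension. This is a sketch only, using the hypothetical domain example.test:

```python
from urllib.parse import urlparse

# File extensions and URL scheme associated with App Installer bundles,
# as listed in the recommendations above.
BLOCKED_EXTENSIONS = (".msix", ".appx", ".msixbundle", ".appxbundle")
BLOCKED_SCHEMES = {"ms-appinstaller"}

def should_block(url: str) -> bool:
    """Return True if a filter following the advice above would block this URL."""
    parts = urlparse(url)
    if parts.scheme.lower() in BLOCKED_SCHEMES:
        return True
    return parts.path.lower().endswith(BLOCKED_EXTENSIONS)

print(should_block("ms-appinstaller:?source=https://example.test/x.msixbundle"))  # True
print(should_block("https://example.test/downloads/setup.appxbundle"))            # True
print(should_block("https://example.test/docs/report.pdf"))                       # False
```

A real filter would of course also need to inspect the URLs carried inside any source= parameters, not just the outer link.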

For the Group Policy tweaks that help with this issue (which was given the vulnerability identifier CVE-2021-43890), you can consult Microsoft’s published guidelines on which settings to use.

A step too far?

Our middle recommendation above might seem rather drastic, either for your internal users if your company relies on vendors that ship their software via App Bundles, or for external customers if you have gone down the App Bundle path for software delivery.

After all, App Bundles are supposed to have several advantages, notably for vendors with endpoint products that support a range of different Windows versions running on various computer types (e.g. Intel, AMD, ARM):

  • One signed bundle to rule them all, digitally signed at the top level.
  • User doesn’t need to figure out which of numerous distinct builds to use on their computer, so can’t end up with the wrong version.
  • Web downloads via the App Installer save bandwidth by omitting the parts that aren’t required.

In Microsoft’s own words:

The ms-appinstaller protocol handler was introduced to enable users to seamlessly install an application by simply clicking a link on a website. What this protocol handler provides is a way for users to install an app without needing to download the entire MSIX package. This experience is popular, and we are thrilled that it has been adopted by so many people today.

What to do?

Despite the upbeat paragraph at the end of the previous section, Microsoft isn’t so thrilled that cybercriminals have adopted this “seamless” process that works “by simply clicking a link”, as we documented for the first time late last year.

For the time being, at any rate:

We are actively working to address this vulnerability. For now, we have disabled the ms-appinstaller scheme (protocol). This means that App Installer will not be able to install an app directly from a web server. Instead, users will need to first download the app to their device, and then install the package with App Installer. This may increase the download size for some packages.

In other words, Microsoft itself has given up entirely on supporting its own ms-appinstaller:// URL type via the web, because it thinks the process is still too easily abused.

Therefore:

  • If you use App Bundles to distribute your own software, you will need to change either your software packaging process, or your installation instructions, or both. Otherwise, potential customers may assume that your software is no longer compatible with their computer or their network, and shop elsewhere.
  • If you rely on vendors who distribute programs via App Bundles, you will need to change your deployment and updating procedures. Otherwise you might end up out of date, or with unhappy internal users.

Coronavirus SMS scam offers home PCR testing devices – don’t fall for it!

A Naked Security reader in the UK alerted us to a scam they received this afternoon in a text message.

The message claimed to come from the NHS, Britain’s National Health Service, which administers coronavirus vaccinations and provides free testing throughout the country:

As you probably know, PCR tests, which currently require processing in a laboratory, are considered more accurate than self-administered lateral flow tests.

Indeed, PCR tests are both advised and free in the UK if you already have coronavirus symptoms, or have been in contact with someone who’s infectious.

You can have a one-off test set sent through the mail, and post the completed test out to the lab for processing, but that adds time until you get the result – and if the test is positive but you don’t yet have any symptoms, that in turn adds time to your mandatory isolation period.

So, as you can imagine, for anyone who is self-employed but who needs to be out and about for their job – plumbers, electricians, care workers, painters and dozens of other professions – a home testing device that could reduce the time to receive a trustworthy result would be very useful.

We have no idea if such a consumer device could affordably be made, and if so whether the results could reliably and securely be validated online, but in a world in which retail companies can deliver esoteric products to your doorstep within hours and securely receive payment, in which telephones include high-resolution video cameras that can stream the images worldwide in real time, and in which private citizens can buy joyrides into space, we’re going to assume that there aren’t any insurmountable technological reasons that would make this a laughable idea.

Even better, for people who are self-employed and visit lots of other households to do their jobs, a home testing device might allow them to test so rapidly and reliably that they could turn up at their appointments with a fresh and verifiable “COVID test pass” performed that very morning.

So you can understand why people who received the message above might have considerable interest in checking it out.

What to do?

We hope you’d spot this for a scam right away, but you can see why it was worth the while of the crooks to try it out.

After all, the UK government is an enthusiastic user of text messages for numerous purposes, including 2FA, reminders, notifications and more, so SMSes from government departments are not a rarity.

Indeed, you can’t book a coronavirus test online without providing a mobile phone number in advance, ready to receive the test results by text.

So, if you’re tempted to click through just in case, ask yourself the following questions first:

  • Is the story likely? No. Test results may come by SMS, but offers of amazing new experimental medical equipment don’t!
  • Does the link look likely? No. NHS links usually end in NHS dot UK, whereas this one has a weird-looking dot COM address.
  • Do I need to click the link at all? No. Even if the link were genuine, you should be able to ignore the link and find your own way to the right place.
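If you ever script the “does the link look likely?” check yourself, beware the naive version: simply searching for nhs.uk anywhere in the URL is exactly the weakness that lookalike domains exploit. A hedged sketch (the scam URL is invented for illustration):

```python
from urllib.parse import urlparse

def belongs_to(url: str, trusted_domain: str) -> bool:
    """True only if the URL's hostname is the trusted domain or a subdomain
    of it. A plain substring check is not enough: nhs.uk.example.com
    contains "nhs.uk" but actually belongs to example.com."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    trusted = trusted_domain.lower()
    return host == trusted or host.endswith("." + trusted)

print(belongs_to("https://www.nhs.uk/get-tested", "nhs.uk"))         # True
print(belongs_to("https://nhs.uk.test-kit.example.com/", "nhs.uk"))  # False
```

The second URL would pass a substring test with flying colours, which is precisely why crooks build hostnames that way.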

We’d be happy to show you what this scam looked like if you did click through, but we’re happier still to tell you that the website currently isn’t working properly.

The domain is brand new, registered just this morning; the HTTPS web certificate was issued at 7am today; and the web server is active and accepting connections…

…but all we could coax out of it was a short list of filenames, and a page that said Error 600:


(In case you’re wondering, the web page that says Error 600 actually had a HTTP response code of 200. Error 600 is meaningless, because there aren’t any HTTP codes above 599.)
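If you want to sanity-check a status code yourself, the rule is easy to encode: the defined HTTP classes run from 1xx to 5xx, so anything outside 100–599 can’t be a real HTTP status. A tiny helper, not tied to any particular HTTP library:

```python
# HTTP status codes are three-digit integers in the classes 1xx-5xx,
# so the valid range is 100-599; "Error 600" is just text on a web page.
def is_valid_http_status(code: int) -> bool:
    return 100 <= code <= 599

print(is_valid_http_status(200))  # True  (what the server actually sent)
print(is_valid_http_status(600))  # False (what the page body claimed)
```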

Instagram copyright infringement scams – don’t get sucked in!

If you create any sort of online content at all – even if you’re just a once-in-a-while blogger or an occasional social media user – you almost certainly know how easy it is for other people to rip off your material and present it as their own.

We’re not talking about links, shares, retweets, and so on, which are legitimate ways for people to re-promote your work.

We’re referring to outright scraping, copying or republishing of your original content by someone else, as though they created the material themselves…

…without ever bothering to ask for permission.

At the same time, you’ll also know how easy it is to end up accused of copyright wrongdoing yourself, even if you’re always careful only to use third-party material in accordance with the original creator’s licensing guidelines.

So, given the frequent argy-bargy that surrounds online copyright issues, many social networks have established formal procedures for making complaints and appealing against takedowns.

Instagram’s procedures, for example, are listed in some detail on its official help page, which explains both how to complain if you think you’ve been ripped off, and how to respond if you’ve been falsely accused.

Enter cybercrime

As you can imagine, cybercriminals have learned how to use copyright infringement notices as bait in phishing scams.

By pretending to be a social network such as Instagram, they try to scare you into thinking that there’s an official copyright complaint against you…

…whilst at the same time giving you a quick and easy way of replying with a counter-claim of your own.

The criminals know that the complaint is totally bogus, and they know that you know it’s bogus.

But instead of leaving you to figure out that it’s bogus because there was no complaint in the first place, they trick you into thinking that the complaint was real, but that the bogus part was the accusation made by the complainer.

To do this, they don’t accuse you themselves, and they don’t threaten to sue; instead, they offer you an easy way to “prove” your “innocence” by providing a link to object to the “complaint”.

While we hope that you’d spot an email scam of this sort right away, we have to admit that some of the copyright phishes we’ve received in recent weeks are much more believable – and better spelled, and more grammatical – than many of the examples we’ve written about before.

Like this one:

Hello, @nakedsecurity

We recently received a complaint about a post on your Instagram. Your post has been reported as infringing copyright.

Your account will be removed if no objection is made to the copyrighted work. If you think this determination is incorrect, please fill out the objection form from the link below.

The [Appeal] button in this example uses a shortened link (this one comes from bit.ly), but whether you check the destination of the link in advance or click through anyway, the resulting website doesn’t look as bogus as you might expect.

To check a bit.ly link before visiting it, paste the link into your browser’s address bar and add a plus sign (+) at the end, which tells bit.ly to show you the original link without redirecting to it.
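The preview trick is pure string manipulation, as this small helper shows (the bit.ly token below is made up):

```python
def bitly_preview(short_url: str) -> str:
    """Turn a bit.ly short link into its preview URL, which shows the
    destination without redirecting you to it."""
    return short_url.rstrip("/") + "+"

print(bitly_preview("https://bit.ly/3AbCdEf"))  # https://bit.ly/3AbCdEf+
```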

Here, the crooks have registered the fake-but-not-too-far-off domain name fb-notify DOT com, and the link you’re given takes you to a personalised scam page that explicitly references your account:

In the screenshot above, the account statistics are correct, or they were at the time we received the email, and the image shown does indeed come from our Instagram page. (Amusingly, and ironically, that means the email itself infringes copyright.)

In other pages linked to by these scammers, the image ripped off by the crooks always seemed to be scraped from the second-to-last post on the victim’s Instagram page. That might have been a coincidence, or it could be a deliberate ploy by the crooks to pick an image recent enough that you’ll remember posting it, but not so recent that the copyright complaint might seem unrealistically quick.

The sting

Anyone who gets this far is almost certainly starting to believe the scam, which would make the next page seem unexceptionable enough, especially given the HTTPS padlock and the sort-of-OK-looking fb-notify domain name:

The website then pretends you made an error typing in your password and tells you to try again, presumably as a simple way for the crooks to discard login attempts where a user clearly just bashed out any old garbage on the keyboard to see what happened next:

Then there’s a believable enough message to tell you that your appeal was submitted successfully:

Finally, the criminals sneakily redirect you to the real Instagram copyright page that we listed above, presumably to add an air of legitimacy that leaves you on a genuine website:

What to do?

  • Don’t click “helpful” links in emails. Learn in advance how to handle Instagram copyright complaints, so you know the procedure before you need to follow it. Do the same for the other social networks and content delivery sites you use. Don’t wait until after a complaint arrives to find out the right way to respond. If you already know the right URL to use, you never need to rely on any link in any email, whether that email is real or fake.
  • Think before you click. Although the website name in this scam is somewhat believable, it’s clearly not instagram.com or facebook.com, which is almost certainly what you would expect. We hope you wouldn’t click through in the first place (see point 1), but if you do visit the site by mistake, don’t be in a hurry to go further. A few seconds to stop and double-check the site details would be time well spent.
  • Use a password manager and 2FA whenever you can. Password managers help to prevent you putting the right password into the wrong site, because they can’t suggest a password for a site they’ve never seen before. And 2FA (those one-time codes you use together with a password) make things harder for the crooks, because your password alone is no longer enough to give them access to your account.
  • Talk to a friend you know face-to-face who’s done it before. If you are active on social media or in the blogosphere, you might as well prepare in case you ever get a copyright infringement notice for real. (We’re assuming the accusation will be false, but the complaint itself will actually exist.) If you know someone who has already gone through the genuine process once, see if they’ll tell you how it went in real life. This will make it much easier to spot fake complaints in future.
  • Watch our video below for additional advice. Early in 2021, we presented a Facebook Live talk looking at the history and evolution of this type of scam. If you have any friends who rely on social media to generate income, and who might be worried about getting cut off from their accounts, show them the video to protect them from tricks like this one.
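Incidentally, the one-time codes mentioned in the 2FA tip are usually TOTP codes, computed from a shared secret and the clock as specified in RFC 6238. This minimal sketch, using only Python’s standard library, shows why a phished code goes stale so quickly: the counter, and therefore the code, changes every 30 seconds:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret: bytes, for_time: Optional[int] = None,
         step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 flavour)."""
    t = int(time.time()) if for_time is None else for_time
    counter = struct.pack(">Q", t // step)           # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T=59, 8 digits
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

That short lifetime is exactly why the crooks described above relay stolen codes in real time rather than saving them for later.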

Watch directly on YouTube if the video won’t play here.
Click the on-screen Settings cog to speed up playback or show subtitles.


