Category Archives: News

Phishing scam uses Sharepoint and One Note to go after passwords

Here’s a phishing email we received recently that ticks all the cybercriminal trick-to-click boxes.

From BEC, through cloud storage to an innocent-sounding One Note document, right into harm’s way.

Instead of simply spamming out a clickable link to as many people as possible, the crooks used more labyrinthine techniques, presumably in the hope of avoiding being just one more “unexpected email that goes directly to an unlikely login page” scam.

Ironically, while mainstream websites concentrate on what they call frictionlessness, aiming to get you from A to B as clicklessly as possible, some cybercrooks deliberately add extra complexity into their phishing campaigns.

The idea is to require a few extra steps, taking you on a more roundabout journey before you arrive at a website that demands your password, so that you don’t leap directly and suspiciously from an email link to a login page.

Here’s the phish unravelled so you can see how it works.

Stages of attack

First, we received an innocent looking email:


This one actually came from where it claimed – the proprietor of a perfectly legitimate UK engineering business, whose email account had evidently been hacked.

We didn’t know the sender personally, but we’re guessing he was a Naked Security reader and had corresponded with us in the past, so we appeared in his address book along with hundreds of other people.

We assume that many of the recipients corresponded with the sender regularly and would not only be inclined to trust his messages but also to expect attachments relating to business and projects they’d been discussing.

Taking over someone else’s email account for criminal purposes is often referred to as BEC, short for business email compromise, and it’s often associated with so-called CEO or CFO fraud, where the crooks deliberately target the CEO’s or the CFO’s account so they can issue fake payment instructions, apparently from the most senior level.

In this case, however, the crooks had clearly set out to use one compromised account as a starting point to compromise as many more as they could, presumably intending either to use the new passwords for their own next wave of BEC crimes, or to sell them on for someone else to abuse.

Opening the attachment takes you to a One Drive file that looks legitimate enough at first sight, especially for recipients who communicate regularly with the sender:

The Sharepoint link you’re expected to click to access the One Note file does look suspicious because there’s no clear connection between the sender’s company and the location of the One Note lure.

But the sender’s business relates to construction, and the domain name in the Sharepoint link apparently refers to a building company, so the link is plausible, at least.
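
There’s no automated substitute for that sort of judgement call, but a quick comparison of a link’s registrable domain with a domain you already trust can at least flag the mismatch for a human to review. Here is a minimal Python sketch: the domain names are made up for illustration, and the two-label shortcut ignores the Public Suffix List, so treat it as a rough check rather than a finished tool.

from urllib.parse import urlparse

def registrable_domain(host):
    # Rough shortcut: keep only the last two DNS labels. A real check would
    # consult the Public Suffix List so that names like example.co.uk work.
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def link_matches(link, trusted_domain):
    host = urlparse(link).hostname or ""
    return registrable_domain(host) == registrable_domain(trusted_domain)

# Hypothetical names for illustration only:
print(link_matches("https://buildingco.sharepoint.example/sites/docs",
                   "sendercompany.example"))
# Prints False: the link lives somewhere with no obvious connection to the sender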

The One Note file itself is very simple:

It’s only at this stage that the crooks present their call-to-action link – the click that they didn’t want to put directly into the original email, where it would have stood out more obviously as a phishing scam.

You’d be forgiven for assuming that the Review Document button here simply opens up or jumps to a part of the One Note file that you’ve already got open…

…but, of course, there is no New Project PDF file, and the “link” that’s apparently there for you to review the document just takes you to the bogus login page that the criminals have been luring you towards all along.

The fake login page is hidden away (or was – the site is offline now [2020-09-02T14:00Z]) on a hacked WordPress site belonging to an events company.

Fortunately, the crooks gave themselves away doubly at this point.

Firstly, they got the name of the sender’s company wrong in this part of the scam (that’s the text redacted just before the word “Ltd”, which is the UK abbreviation for a limited liability company).

The sender’s company name ends in the word Structural, given that he’s in the construction business, but the criminals blundered and typed in the word Surgical – a small but obvious red flag to anyone who does business with the sender.

Secondly, the hacked events company where the crooks hid their phishing pages is based in Kyiv, Ukraine, and has a domain name that is neither related to the construction industry nor located in the UK, where the original email came from. (We redacted the site name in the image below.)

If you do click through, despite the unexpected link and the unlikely domain name, then you’ll finally reach a login form, three steps removed from the original email, complete with animated imagery suggestive of Office 365:

The login is apparently necessary in order to access what is supposed to be an Excel file.

However, the unexplained switch to Excel jars with the previous page, where you were promised a PDF file, and you will notice that the criminals have written Microsoft, Excel and Small Business incorrectly.

You also ought to be suspicious at a Microsoft login page that offers you so many alternative authentication choices.

That’s something smaller websites do in order to capitalise on the fact that you probably already have accounts with the big players, but you wouldn’t expect Microsoft to use any of its competitors as an authentication service.

Of course, if you do put in a password, it goes straight to the crooks, who then present you with a fake error message, perhaps in the hope you might try another account and give them a second password.

What to do?

  • Don’t click login links that you reach from an email. That’s an extension to our usual advice never to click login links that appear directly in emails. Don’t let the crooks distract you by leading you away from your email client first to make their phishing page feel more believable. If you started from an email, stop if you hit a password demand. Find your own way to the site or service you’re supposed to use.
  • Keep your eyes open for obvious giveaways. As we’ve said many times before, the only thing worse than being scammed is being scammed and then realising that the signs were there all along. Crooks don’t always make obvious mistakes, but if they do, make sure you don’t miss them.
  • If you think you put in a password where you shouldn’t, change it as soon as you can. Find your own way to the official site of the service concerned, and log in directly. The sooner you fix your mistake, the less chance the crooks have of getting there first.
  • Use 2FA whenever you can. Accounts that are protected by two-factor authentication are harder for crooks to take over, because they can’t just harvest your password and use it on its own later. They need to trick you into revealing your 2FA code at the very moment that they’re phishing you.
  • Consider phishing simulators like Sophos Phish Threat. If you are part of the IT security team, Phish Threat gives you a safe way to expose your staff to phishing-like attacks, so they can learn their lessons when it’s you at the other end, not the crooks.


Fake Android notifications – first Google, then Microsoft affected

Thanks to Craig Jones, Director of Information Security at Sophos, and the Sophos Security Team for their behind-the-scenes work on this article.

If you’re a Google Android user, you may have been pestered over the past week by popup notifications that you didn’t expect and certainly didn’t want.

The first mainstream victim seems to have been Google’s own Hangouts app.

Users all over the world, and therefore at all times of day (many users complained of being woken up unnecessarily), received spammy looking messages like this:

The messages didn’t contain any suggested links or demand any action from the recipient, so there was no obvious cybercriminal intent.

The messages did indeed look like some sort of test – but by whom, and for what purpose?

The four exclamation points suggested someone of a hackerish persuasion – perhaps some sort of overcooked “proof of concept” (PoC) aimed at making a point, sent out by someone who lacked the social grace or the legalistic sensitivity of knowing when to stop.

Attention soon landed on an article published about two weeks ago by a cybersecurity researcher going by “Abss”, who claimed to have made more than $30,000 in bug bounties by identifying Android apps that had been careless with their popup notification keys.

(Just to be clear here: we’re not for a moment suggesting that Abss had anything to do with sending out this week’s rash of annoying messages.)

Abss noticed that many mainstream Android apps use a notification interface provided by Google known as FCM, short for Firebase Cloud Messaging, formerly Google Cloud Messaging, formerly Android Cloud to Device Messaging.

He wondered, as cybersecurity bounty hunters are wont to do, just how secure the authentication between each of these apps and the FCM backend might be.

After all, as Abss himself points out, a malicious attacker with the FCM authentication code for a particular service wouldn’t just be able to send rogue messages from an installed copy of the app.

Much worse, they might be able to send one rogue message to the FCM system and have it delivered as a phone popup notification to every single user of that app:

These notifications could contain anything the attacker wants including graphic/disturbing images(via the “image”: “url-to-image” attribute) accompanied with any demeaning or politically inclined message in the notification!

Abss discovered that it was possible to extract usable FCM authentication tokens from many mainstream apps by using debugging tools that monitor apps as they run and record the data that they use – such as arguments to function calls – at critical points in the code.

This gave him a starting point for getting rogue notifications into the system.

Topics of interest

Next, Abss found a way to deliver rogue messages by making a specific sort of HTTP request to the FCM service interface, based on what FCM calls topics.

As he explains in his article:

Topics are server side attributes that define a collection. For example, an application could define a topic called “news” and group users interested in the news category so as to send them similar notifications at once instead of sending notifications to every individual separately.

At first, he figured that an attacker would need to guess at the names of topics that the users of a particular app had signed up for, which would first mean figuring out the list of topics that each app offered.

Rather amusingly, however, he soon discovered that FCM allows a notification to be tagged for delivery to users who are interested in various combinations of topics, and this meant he could easily find a topic specifier that covered all users of any app.

His trick involves delivering messages to everyone who isn’t interested in a particular topic, given that it’s much easier to guess the name of something that the average person doesn’t care about than to figure out specific items in which they have a particular interest.

It’s only logical

FCM’s messaging interface allows users to combine topics of interest in a variety of different ways, using boolean logic.

For example, a food delivery service that wanted to send a notification relating to two topics, say “vegetarian” and “pizza”, wouldn’t need to trigger two separate notifications, which would result in people interested in both topics getting two messages.

Instead, they could combine two topics into one notification by specifying an expression denoting topic = vegetarian OR topic = pizza.

Of course, as any programmer will know, where there is OR there is also usually AND.

Indeed, the food company could choose to target only pizza lovers who were also vegetarians with an expression along the lines of topic = vegetarian AND topic = pizza, thus limiting rather than widening the number of recipients.

You can guess where this is going.

Imagine that the food delivery company wanted to notify its users of a brand new meat lovers’ pizza, but didn’t want to waste the time of any vegetarian subscribers, who would have no interest in such a product and might even be irritated to have it talked up to them.

For that sort of situation, FCM allows logical expressions that denote conditions such as topic = pizza AND (topic NOT vegetarian), thus first including all pizza lovers but then removing all those who are also vegetarians from the list.

With this sort of flexibility, Abss realised that he didn’t need to guess at a topic that existed for a specific app and that most users would be interested in.

All he had to do was guess at a topic that did not exist (pretty much any random string of text garbage would do – he didn’t even need a real word) and ask FCM to deliver a notification to everyone who was NOT subscribed to his non-existent topic.

(This works because NOT NO ONE is equivalent to EVERYONE, or in boolean logic terms, NOT FALSE is TRUE.)
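
To make the logic concrete, here is a hedged Python sketch that simply builds the JSON body for a topics-based notification rather than sending anything. The layout follows FCM’s legacy HTTP send API as we understand it; the topic names, the message text and the negated-topic condition are our own illustrations of the trick described above, not material recovered from the incident.

import json

def fcm_payload(condition, title, body):
    # "condition" is a boolean expression over topic subscriptions.
    return json.dumps({
        "condition": condition,
        "notification": {"title": title, "body": body},
    }, indent=3)

# Normal use: one notification reaches subscribers of either topic.
print(fcm_payload("'vegetarian' in topics || 'pizza' in topics",
                  "New menu", "Meat-free margherita now available"))

# The abuse described above: nobody is subscribed to the made-up topic,
# so "everyone NOT subscribed to it" means every user of the app.
print(fcm_payload("!('qjzxkvw-no-such-topic' in topics)",
                  "Test!!!!", "Rogue broadcast to all users"))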

A job worth doing

As we mentioned earlier, Abss had nothing at all to do with the annoying Hangouts messages described above – he just discovered the means by which an app could inadvertently leave itself exposed to misuse.

The best guess is that someone who had read his article, and who figured that a job worth doing was worth overdoing, decided to “prove” a point that didn’t need making.

Google seemed to get on top of the rogue messages fairly quickly, presumably by updating its app, changing its FCM authentication keys, or both.

Next it was Microsoft’s turn to get hammered, with Teams users on Android receiving messages with four exclamation points:

Microsoft quickly started investigating the problem, which only seemed to spur the troublemakers on to deliver yet another wave of bogus messages (note the grammatical and spelling mistakes):

Microsoft does now seem to have fixed the issue – and, no, the company didn’t send out yet another notification via Teams to announce the fix.

It did, however, take to Twitter for that purpose:

What to do?

If you make or support an app that uses FCM, you need to review who might have access to your authentication tokens.

If they’ve been exposed in any way, you will need to delete your old server keys and create new ones – in the same way that you change account passwords after a data breach or switch the door locks when you move into a new house.

Advice on how to do this can be found in Abss’s article.

By the way, this advice applies to any API keys for any applications or services you provide, as well as to access codes and passwords for any online portals that you use, all the way from RDP and SSH to your blogging site and your web content management system.

Be especially careful that you don’t accidentally upload authentication keys or codes along with your source code to online services such as GitHub.

This is especially important if you are responsible for any open source software.

You will regularly need to upload your latest source code, which is meant to be public, and it’s easy to include private data at the same time that just happens to be mixed in with the public files in your local repository.
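
One simple precaution is to scan anything you are about to publish for strings that look like credentials. The Python sketch below is illustrative rather than an official tool, and its patterns are deliberately crude: “AIza” is the well-known prefix of Google API keys, while the second pattern just looks for long quoted values assigned to suspicious-sounding names.

import pathlib, re, sys

PATTERNS = {
    "possible Google API key": re.compile(r"AIza[0-9A-Za-z_-]{20,}"),
    "possible hard-coded secret": re.compile(
        r"(?i)(api[_-]?key|server[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_file(path):
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: {label}: {match.group(0)[:16]}...")

# Usage: python scan.py <directory-you-are-about-to-publish>
for arg in sys.argv[1:]:
    for item in pathlib.Path(arg).rglob("*"):
        if item.is_file():
            scan_file(item)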

Remember: if in doubt, don’t give it out!


Russian cybercrime suspect arrested in $1m ransomware conspiracy

Here’s a cybercrime conspiracy story with a difference.

When we write about network-wide ransomware attacks where a whole company is blackmailed in one go, two burning questions immediately come up:

  • How much money did the crooks demand?
  • Did the victim pay up?

The answers vary, but as you have probably read here on Naked Security, modern ransomware criminals often use a two-pronged extortion technique in an attempt to maximise their asking price.

First, the crooks steal a trove of company files that they threaten to make public or to sell on to other crooks; then they scramble the data files on all the company’s computers in order to bring business to a halt.

Pay up the blackmail money, say the crooks, and they will not only “guarantee” that the stolen data will never be passed on to anyone else, but also provide a decryption program to reconstitute all the scrambled files so that business operations can resume.

Recent reports include an attack on fitness tracking company Garmin, which was allegedly blackmailed for $10m and did pay up, though apparently after wangling the amount down into the “multi-million” range; and on business travel company CWT, which faced a similar seven-figure demand and ended up handing over $4.5m to the criminals to get its business back on the rails.

In contrast, legal firm Grubman Shire Meiselas & Sacks faced a whopping $42m ransomware extortion demand but faced it down, likening the crooks to terrorists and refusing to pay a penny.

More recently, US liquor giant Brown-Forman took a similar stance, refusing to deal with criminals after its network was infiltrated.

The third question

Of course, there’s a third question, one that isn’t quite as dramatic as “How much?”, but that is way more important:

  • How did the crooks get in to start with?

There are lots of possible answers to that one, including: by using exploits against unpatched software bugs; by sending infected attachments in phishing emails; by luring employees to fake login pages to steal passwords; by using existing malware in the network to download and deploy the ransomware program; by finding unprotected remote access portals such as RDP or SSH

…or by getting insider help.

And that’s what happened – or so the US Department of Justice (DOJ) alleges – in a recent cybercrime misadventure in Reno, Nevada.

According to federal criminal charges filed this week, the DOJ claims that a certain Egor Igorevich Kriuchkov of Russia not only planned a malware attack against a US company, but also travelled in person to America to negotiate with an employee of the company to implant the malware and thus initiate the attack.

Old meets new

In a fascinating mix of old-school face-to-face techniques and new-wave cybercriminality, Kriuchkov, who is 27 years old, is alleged to have set up a meeting via WhatsApp, then travelled to San Francisco and driven on to Reno in Nevada to talk to an unnamed employee of his planned victim company to propose a “special project”.

Acting on behalf of unnamed co-conspirators, presumably safely back in Russia where (if they are Russian citizens) they have constitutional protection against extradition, Kriuchkov is supposed to have dangled a million-dollar carrot in front of the insider in return for them helping to perpetrate the crime.

The court filing claims that the insider would have been expected to provide information relevant in tailoring the attack to the victim’s network, and then to connect up and run the malware to infect the network.

In return, Kriuchkov promised the insider a cool $1,000,000.

No details are given in the affidavit about what network intelligence the insider was expected to come up with, but you can probably imagine lots of details that would be valuable to attackers, including: lists of computer and server names; network diagrams including internal IP numbering, firewall setup and VLAN configuration; any security software installed; usernames and working hours; IT staff and shift patterns; and much more.

Apparently, while the malware was being unleashed from inside the network, Kriuchkov – presumably back in Russia at this point – and his co-conspirators were planning to launch a “decoy” attack from outside, thus distracting the company’s IT team from the more serious problem unfolding internally.

The charge sheet doesn’t make any mention of file scrambling in the plans, claiming merely that:

The co-conspirators would engage in a Distributed Denial of Service Attack to divert attention from the malware.

The malware would allow the conspirators to extract data from Victim Company A’s network.

Once the data was extracted, the conspirators would extort Victim Company A for a substantial payment.

The conspiracy comes unstuck

Whatever Kriuchkov was after, things didn’t work out.

The insider contacted the authorities, and the authorities, it seems, tried to contact Kriuchkov.

According to the FBI, Kriuchkov then drove 800km from Reno to Los Angeles overnight, presumably in the hope of flying directly out of the USA before the net closed in.

But he didn’t make it, and was arrested in Los Angeles.

What to do?

We’re assuming – if these allegations turn out to be well-founded – that the crooks would have included a file-scrambling component in their extortion malware, just because they could, and because it would almost certainly have made a bad thing worse if it worked.

But it’s important to note that this conspiracy seems to have existed on the basis of being able to extort money from the victim through stolen data alone.

In other words, cyberextortion crimes involving ransomware no longer need to rely on what would be the very last part of a traditional attack.

Cybercriminals seem to be confident there are millions to be made even if they fail at (or don’t bother with) that final file-scrambling step.

So, as we’ve said many times before, prevention is way better than cure; and earlier prevention is better yet.

We’ve often advised you to set up a single point of cybersecurity contact for all your staff, whether by phone or email, with the aim of turning everyone in the company into the eyes and ears of your IT security team.

In this case, a timely warning not only headed off the attack but also led to the arrest of a suspect.


LEARN MORE: 3 TIPS TO STOP OUTSIDERS GETTING IN

Here are three tips to help your company be more proactive against outsiders trying to wangle their way in. (When you press play the video should play from the 11’15” mark, where the tips begin.)

[embedded content]

“Chrome considered harmful” – the Law of Unintended Consequences

A recent article on the APNIC blog, entitled Chromium’s impact on root DNS traffic, has set the Chromium browser project thinking about a feature in the browser code that’s known as the Intranet Redirect Detector.

To explain.

APNIC is the Asia Pacific Network Information Centre, headquartered in Brisbane, Australia, one of five internet number registries around the world.

These Regional Internet Registries (RIRs) look after global IP number allocations, maintain definitive internet domain name databases for their regions, and generally concern themselves with the health of the global internet.

As you can imagine, anything that upsets the balance of the internet – from spamming and cybercrime to misconfigured servers and badly-behaved network software – is of great concern to the RIRs.

The root DNS servers form the heart of the global Domain Name System, which automatically converts human-friendly server names such as nakedsecurity.sophos.com into network numbers that computers can use to send and receive traffic, such as 192.0.66.200 (that was our IP number when I looked it up today, as shown below).

As you can imagine, any unnecessary load on the root DNS servers could slow down internet access for all of us, by stretching out the time taken to convert names to numbers, something that our browsers need to do all the time as we click from link to link online.

Chromium, as you almost certainly know, is a Google open-source project that produces the software at the core of many contemporary browsers, notably Google’s own Chrome Browser, which accounts for the majority of web traffic these days on laptops and mobile phones alike.

Chromium is also used in many other browsers, including Vivaldi, Brave and – recently, at least – Microsoft Edge. (Of today’s mainstream browsers, only Safari and Firefox aren’t based on a Chromium core.)

As you can imagine, any problematic code in Chromium could have an enormous global effect due to the prevalence of Chromium-based browsers in modern internet usage.

And last, but not least, the Intranet Redirect Detector is a “feature” added to the Chromium browser that is supposed to detect, and work around, a deceptive practice known as NXDOMAIN redirection (or, pejoratively, as NXDOMAIN hijacking) that is still used by some ISPs in some countries.

Domain redirection explained

Very generally speaking, DNS redirection is where an ISP or network provider doesn’t tell you the unadorned truth about a server name you want to locate online.

Sometimes, this sort of purposeful inaccuracy can be a good thing, if you’re aware that it might happen and if it’s done sensitively.

An obvious example is for the purposes of security filtering, where a network security device or cloud service deliberately redirects known bad domains, such as malware repositories, thus heading off potentially malicious traffic right at the DNS level.

A more controversial example of DNS redirection involves censorship, where a country’s government demands that ISPs kill off traffic to all sorts of sites that the authorities regard as “dangerous”, not only if those sites might lead to malware or other unexceptionably nasty content, but also if those sites present facts and opinions that the state wants to hide from its citizens.

But NXDOMAIN redirection is another thing altogether.

Simply put, a DNS lookup for a server name that doesn’t exist at all, and therefore can’t be resolved, is supposed to come back with a DNS error 3, known as NXDOMAIN, short for non-existent domain.

Here is some Lua script code that we used to do two DNS lookups and retrieve the low-level results, using Google’s free and uncensored DNS server located at IP address 8.8.8.8.

First, we looked up a DNS record of type A (short for IP address) for nakedsecurity.sophos.com and received a chain of two answers:

> dns = require 'dns.resolver'
> lookup = dns.new('8.8.8.8')
> lookup:resolveRaw('nakedsecurity.sophos.com')

-- returns list of answers
-- if domain resolves

results = {
   questions = {
      1 = {
         type = "A"
         name = "nakedsecurity.sophos.com"
      }
   }
   answers = {
      1 = {
         type = "CNAME"
         name = "nakedsecurity.sophos.com"
         content = "news-sophos.go-vip.net"
         TTL = 60
      }
      2 = {
         type = "A"
         name = "news-sophos.go-vip.net"
         content = "192.0.66.200"
         TTL = 60
      }
   }
}

The first answer is a CNAME (short for canonical name), which says that nakedsecurity.sophos.com is actually an alias for the server where our content is stored, namely news-sophos.go-vip.net, so the IP number of the go-vip.net site is actually the one we want.

The second answer then chases down the A record of the relevant go-vip.net server, giving us the answer 192.0.66.200, which is where we need to connect to access Naked Security’s content.

TTL, by the way, is short for time to live, and denotes how long, in seconds, we can assume the reply is correct before we need to look it up again. A browser, for example, might need to fetch 10 or 20 different items from the news-sophos.go-vip.net server, such as stylesheets, JavaScript files, images and HTML files, just to display one web page. By telling our computer that it can rely on the original DNS reply for up to a minute before checking again, load on the DNS system is greatly reduced. But with a suitably small TTL value, stale DNS entries don’t last long and soon get corrected automatically. TTLs typically vary from 60 seconds for cloud services that need to adapt quickly to changes in availability, to several hours for static servers that handle little traffic.

Second, we asked about a made-up name that doesn’t exist:

> dns = require 'dns.resolver'
> lookup = dns.new('8.8.8.8')
> lookup:resolveRaw('iurwcnurewcnurc.example')

-- function returns nil,error,errno
-- if not resolved

results = nil, "NXDOMAIN", 3

As you can see, the DNS server quickly and definitely told us, “There’s no such site, so there’s no point in trying to access it.”

Censorship, of course, could be achieved by keeping a list of domains you don’t like and deliberately, if dishonestly, “downgrading” their true answers to NXDOMAIN errors, thus making it look as though the sites don’t exist at all.

But it’s hard to imagine why you might want to perform this trick the other way around and pretend that a non-existent domain really does exist.

However, many years ago a number of ISPs began to do just that, treating all NXDOMAIN replies as missed opportunities for advertising.

NXDOMAIN hijacking

If you mistyped a domain name, for example, an ISP could use a DNS hijack to send you to one of its own portals instead, and offer you a range of alternative webpages or products when you got there.

If the ISP figured that the domain name you were interested in might actually be worth some money, and therefore that you might be checking to see if it was in use, then it could pop up a page offering to sell you the domain right away.

Instead of “downgrading” a real site to non-existent status, as a censorship-happy government might do, NXDOMAIN hijacks go the other way, “upgrading” domains that don’t exist so that they appear to be there after all.

The problem, of course, is that this makes it impossible to tell if a domain really exists or not, because every server name magically works, even when it shouldn’t.

Sadly, there aren’t any globally-agreed internet regulations that the RIRs or ICANN – the Internet Corporation for Assigned Names and Numbers – can use to stamp out this practice, but for more than a decade, ICANN’s official opinion has been that:

[We] strongly [discourage] the use of DNS redirection, wildcards, synthesized responses and any other form of NXDOMAIN substitution.

Indeed, ICANN criticises NXDOMAIN hijacking as producing a poor user experience, because it means you can no longer easily tell whether a domain really exists or not, and as being a potential abuse of power, given that end users who want to resolve domain names to their own servers have to pay for each domain individually.

Problems for browsers

Interestingly, the creators of the original Chromium code faced a problem of their own with NXDOMAIN trickery, namely that they wanted their new browser’s address bar to be a search engine at the same time.

Instead of typing in a URL, you could just type in a word to search for, and Google – whose primary business, after all, is online search and advertising – would treat it as a search term.

However, many companies use unadorned domain names internally, so that staff who are in the office (or, in coronavirus times, are connected via VPN) – can use single-word names to reach internal corporate servers.

Chromium, therefore, checks to see if any “search term” you type in the address bar also works as a domain name, so that it won’t stop you accessing servers that do exist on your intranet.

But if you’re using an ISP with DNS servers that do NXDOMAIN hijacking, then every domain name you try will seem to exist, and Chromium won’t usefully be able to distinguish between search terms and domain names.

In other words, domain hijacking behaviour seen as a “feature” by ISPs turned out to get in the way of the autosearch “feature” that Google wanted to implement in the Chromium code.

So, as APNIC blogger Matthew Thomas explained in the article we reference at the start, the Chromium programmers created yet another “feature” to help them decide whether to enable their autosearch “feature”, and coded it up in a file called intranet_redirect_detector.cc.

What this code does is to try three randomly-generated domain names when the browser starts up, as our DNS logs show here:

[Chrome run 1]
DNS/query: type=A name=boyhfqrconuryg source=127.0.0.1
DNS/query: type=A name=bhephhtodjgghex source=127.0.0.1
DNS/query: type=A name=akomcbscjhqzt source=127.0.0.1
[Chrome run 2]
DNS/query: type=A name=sgfagnzojxpdou source=127.0.0.1
DNS/query: type=A name=mbwuvkac source=127.0.0.1
DNS/query: type=A name=xaswmuofxyda source=127.0.0.1
[Chrome run 3]
DNS/query: type=A name=bjgqwiedftoskk source=127.0.0.1
DNS/query: type=A name=wmrwezznxjxmma source=127.0.0.1
DNS/query: type=A name=skwqumuedmf source=127.0.0.1

If two (or all) of these unlikely domains resolve, and come back with the same IP number, Chromium infers that NXDOMAIN hijacking is taking place.

In other words, if the intranet_redirect_detector triggers, then the browser can’t rely on DNS lookups to tell the difference between words you probably meant to search for, and servers you probably wanted to browse to.
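
Here is a rough re-creation of that probe logic as a Python sketch. (Chromium’s real detector is C++ code inside the browser; this version simply asks the operating system’s resolver, so local search-domain settings could skew the result.)

import random, socket, string

def random_label():
    # Chromium probes with randomly generated single-label hostnames.
    return "".join(random.choice(string.ascii_lowercase)
                   for _ in range(random.randint(7, 15)))

def probe(count=3):
    answers = []
    for _ in range(count):
        try:
            answers.append(socket.gethostbyname(random_label()))  # "resolved" somehow
        except socket.gaierror:
            answers.append(None)                                  # honest lookup failure
    return answers

results = probe()
resolved = [ip for ip in results if ip is not None]
if len(resolved) >= 2 and len(set(resolved)) == 1:
    print("Random names all resolve to", resolved[0], "so NXDOMAIN hijacking is suspected")
else:
    print("Random names don't resolve consistently, resolver looks honest:", results)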

Needless lookups considered harmful

As cheeky as the Chromium code might seem, three random DNS lookups every time you open your browser doesn’t sound like a lot.

After all, many people don’t even restart their browser once a day, preferring to keep apps open as long as they can, and rarely logging off or shutting down their computers.

But as the APNIC blog points out, on a network where there is no NXDOMAIN hijacking, those random DNS lookups all have to be handled by the root DNS servers.

After all, the names don’t exist, and therefore won’t be in anyone’s DNS caches, and will always require your computer’s DNS resolver to check right back to the mother ship – one of the 13 root DNS servers – for an answer that serves no purpose except to help Chromium configure its autosearch “feature”.

And given that Chromium’s code is running on the majority of laptops and phones out there – that’s an absolute majority, in the sense of “running more than 50% of all laptops and phones in the world”, not merely “running on more devices than any other browser code” – then it turns out that those randomised test lookups are giving the root DNS servers a huge amount of extra work to do…

…none of which has anything to do with finding the location of any real servers, which is the main purpose of DNS.

Even worse, as NXDOMAIN hijacking becomes less common, thanks to pressure on ISPs not to do it, Chromium’s pressure on the root DNS servers will increase, because more and more of the random intranet_redirect_detector lookups will end up making it all the way to those 13 root servers, only to produce errors.

What to do?

The good news is that the Chromium development list seems to have accepted that this autosearch detection feature has triggered what you might call the Law of Unintended Consequences, and represents bad publicity for Chromium and Chrome alike.

Of course, just dropping the “feature” entirely from the Chromium code would probably be the quickest solution…

…but that itself would very likely trigger the Law of Unintended Consequences once more by introducing confusion amongst users who have been relying on the feature, probably without even realising it.

We therefore suspect that the solution will have more than one part, perhaps including:

  1. Make the NXDOMAIN detection process less demanding on the root servers, by finding a way to perform it less frequently.
  2. Make the autosearch detection feature optional but leave it on by default and provide a way for users who care about DNS traffic to turn it off.
  3. If problems persist, turn the feature off by default but provide a way for those who need it to turn it back on.

Avoiding the Law of Unintended Consequences in cybersecurity is very tricky…

…but having a community that is willing to listen and react to well-reasoned articles like the one from Matthew Thomas is an important part of its solution!


Outlook “mail issues” phishing – don’t fall for this scam!

Thanks to Michelle Farenci of the Sophos Security Team for her behind-the-scenes work on this article.

Here’s a phish that our own security team received themselves.

Apart from some slightly clumsy wording (but when was the last time you received an email about a technical matter that was plainly written in perfect English?) and a tiny error of grammar, we thought it was surprisingly believable and worth writing up on that account, to remind you how modern phishers are presenting themselves.

Out are the implied threats, the exclamation points (!!!) and the money ($$$) you might lose if you don’t act right now; in are the happy and unexceptionable “here’s a problem that you can fix all by yourself without waiting for IT to help you” messages of a sort that many companies are using these days to reduce support queuing times.

Yes, you ought to be suspicious of emails like this. No, you shouldn’t click through even out of interest. No, you should never enter your email password in circumstances like this.

But the low-key style of this particular scam caught our eye, making it the sort of message that even a well-informed user might fall for, especially at the end of a busy day, or at the very start of the day after.

Here’s how it arrives – note that in the sample we examined here, the crooks had rigged up the email content so that it seemed to be an automated message from the recipient’s own account, which fits with the theme of an automatic delivery error:

Incoming messages for [REDACTED] couldn’t be delivered.

This message was sent in response to multiple incoming messages being rejected consistently from 2:00 AM, Wednesday, August 19, 2020.

To fix, recover and prevent further rejection of emails by our server, connect to your Company-Assigned OWA portal securely below.

Only if you were to dig into the email headers would it be obvious that this message actually arrived from outside and was not generated automatically by your own email system at all.
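
If you save a suspect message to disk (most mail clients can export it as a .eml file), a few lines of Python are enough to pull out the headers that give the game away. The filename below is a placeholder.

from email import policy
from email.parser import BytesParser

# "suspect-message.eml" is a placeholder for whatever you saved the email as.
with open("suspect-message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:       ", msg["From"])
print("Return-Path:", msg["Return-Path"])

# Each relay adds its own Received: header at the top, so reading them in order
# walks back from your mailbox towards wherever the message really started.
for hop, header in enumerate(msg.get_all("Received") or [], start=1):
    print(f"Hop {hop}:", " ".join(str(header).split()))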

The clickable link is perfectly believable, because the part we’ve redacted above (between the text https://portal and the trailing /owa, short for Outlook Web App) will be your company’s domain name.

But even though the blue text of the link itself looks like a URL, it isn’t actually the URL that you will visit if you click it.

Remember that a link in a web page consists of two parts: first, the text that is highlighted, usually in blue, and that is clickable; second, the destination, or HREF (short for hypertext reference), where you actually go if you click the blue text.

A link is denoted in HTML by an ANCHOR tag that appears between the markers <A> and </A> while the destination web address is denoted by an HREF attribute inside the opening anchor tag delimiter.

Like this:

This is a <A HREF='https://example.com'>clickable link</A> going to EXAMPLE.COM

But the link <A HREF='https://example.com'>https://different.example</A> also goes to
EXAMPLE.COM, because the URL used is determined by the HREF setting, even if the text
of the link itself looks like a URL. The domain DIFFERENT.EXAMPLE here isn't actually
a web address, it's just text that looks like a web address.
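
You can automate that comparison with nothing more than a standard library. The Python sketch below walks the anchors in a chunk of HTML and complains whenever the visible text looks like a URL whose hostname doesn’t match the HREF it actually points to. (Real mail filters do this far more carefully; this is just to show the principle.)

from html.parser import HTMLParser
from urllib.parse import urlparse

class AnchorAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            shown = self.text.strip()
            if shown.lower().startswith(("http://", "https://")):
                if urlparse(shown).hostname != urlparse(self.href).hostname:
                    print(f"Mismatch: text shows {shown} but HREF goes to {self.href}")
            self.href = None

AnchorAudit().feed(
    "But the link <A HREF='https://example.com'>https://different.example</A> looks honest"
)
# Prints: Mismatch: text shows https://different.example but HREF goes to https://example.com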

Why not just block links that look like other links?

If you’re thinking that “links that deliberately look as though they go somewhere else” sound suspicious, you’d be right.

You might wonder why browsers, operating systems and cybersecurity products don’t automatically detect and block this kind of trick, where there’s an obvious and deliberate mismatch between the clickable text and the link it takes you to.

Unfortunately, even mainstream sites use this approach, making it effectively impossible to rely up front on what a link looks like, or even where it claims to go in your browser, in order to work out exactly where your network traffic will go next.

For instance, here’s a Google search for here's an example:

You can see that if you ① search for here's an example, you’ll receive an answer in which ② an explicit domain name (here, english.stackexchange.com) is used as the visible text of a clickable link.

You can also see that when you hover over the domain name link, you’ll see ③ a full URL that apparently confirms that clicking the link will take you to the named site.

However, if you use Firefox’s Copy Link Location option to recover the ultimate link, you’ll see – thanks to the magic of JavaScript – that your web request actually goes to a URL of this sort:

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=& cad=rja&uact=8&ved=[REDACTED]& url=https%3A%2F%2Fenglish.stackexchange.com%2Fquestions%2F225855%2Fheres-an-example[...]

Eventually, you will end up at the URL shown at position ③ in the screenshot above, but you’ll be redirected (quickly enough that you might not notice) via a Google track-and-redirect link first.

So you do end up where the browser told you, but not quite as explicitly and directly as you might have expected – you get there indirectly via Google’s own advertising network.
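
If you are curious, recovering the real destination from a link like that is straightforward, because the target is carried, percent-encoded, in the url query parameter. Here is a small Python sketch using the example URL above, with the redacted tracking parameters left out.

from urllib.parse import urlparse, parse_qs

redirect = ("https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web"
            "&url=https%3A%2F%2Fenglish.stackexchange.com%2Fquestions"
            "%2F225855%2Fheres-an-example")

# parse_qs splits the query string and undoes the percent-encoding for us.
params = parse_qs(urlparse(redirect).query)
print(params["url"][0])
# Prints: https://english.stackexchange.com/questions/225855/heres-an-example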

What happens next?

The good news is that in the case of this phish you will see the actual web page you’ll be taken to if you hover your cursor over the link-that-looks-like-a-different-link.

That’s because email clients and webmail systems generally don’t allow JavaScript to run, given that emails could have come from anywhere – even if they say they came from your own account, as this one does.

So you ought to spot this phish easily if you stop to check where the link-that-looks-internal really ends up.

In our case (note that the exact URL and server name may vary every time), the real link did not go to https://portal.[REDACTED]/owa, as suggested by the text of the link.

Instead, it went to a temporary Microsoft Azure cloud web storage URL, as shown below, which clearly isn’t the innocent-looking URL implied in the email:

[REDACTED].web.core.windows.net

A quick check of the domain name via the Sophos Intelix online threat detection service shows its true colours:

$ luax intelix-lookup.lua [REDACTED].web.core.windows.net
Authenticating to Sophos Intelix: OK.
Items to check: 1
{
   productivityCategory  "PROD_SPYWARE_AND_MALWARE"
   riskLevel             "HIGH"
   securityCategory      "SEC_MALWARE_REPOSITORY"
   ttl = 300
}

This server has nothing to do with your company’s email, and everything to do with putting you in harm’s way.

The phishing page

If you do click through, and your endpoint or firewall filter doesn’t block the request, you will see a phishing page that we must grudgingly admit is elegantly simple:

Your email address is embedded in the link in the email that you click on, so the phishing page can fill in the email field as you would probably expect.

When we tried this page, deliberately putting in fake data, we received an error message after the first attempt, as though we’d made a mistake typing in the password:

No matter what we did the second time, we achieved “success”, and moved onwards in the scam.

How it ends

One tricky problem for phishing crooks is what to do at the end, so you don’t belatedly realise it’s a scam and rush off to change your password (or cancel your credit card, or whatever it might be).

In theory, they could try using the credentials you just typed in to log in for you and then dump you into your real account, but there’s a lot that could go wrong.

The crooks almost certainly will test out your newly-phished password pretty soon, but probably not right away while you are paying attention and might spot any anomalies that their attempted login might cause.

They could just put up a “thanks, you may now continue normally” page, and often that’s exactly what they do as a simple way to sign off their scam.

Or they find a page that’s related to the account they were phishing for, and redirect you there.

This leaves you on a web page that really does have a genuine URL in the address bar – what’s often called a decoy page because it leads you out at the end of the scam with your innocence intact.

That’s what happened here – it’s not perhaps exactly the page you might expect, but it’s believable enough because it leaves you on a genuine Outlook-related web page with a genuine Microsoft URL:

What to do?

  • Always verify links in emails before you click them. You should check where you end up after clicking (see the next tip), but don’t click through casually and think, “I’ll wait to check further down the line to see if things look bad.” Check before you click as well. The earlier you spot a phishing scam, the less likely it is you’ll be sucked in and the earlier you’ll be able to report it.
  • Carefully check the URL of any login page. These days, most cybercriminals are using HTTPS websites, because everyone expects a padlock in the address bar. But the padlock doesn’t say you are on the correct site, merely that you are on a site with an HTTPS certificate. If you’re currently using your mobile phone, consider switching to your laptop if you can, and checking out the link from there. It’s worth the extra trouble because the address bar is easier to read and tells you more.
  • Avoid logging in at all via links you received in an email. If it’s a service you already know how to use – whether it’s your email, your banking site, your blog pages or a social media account – learn how to reach the login page directly, and how to access the account’s status pages after you’re in. If you always find your own way to your account login pages and ignore email login links even if you think they are genuine, you’ll never fall for fake links by mistake.
  • Turn on 2FA if you can. Two-factor authentication means that you need a one-time login code, usually texted to your phone or generated by a special app, that changes every time. 2FA doesn’t guarantee to keep the crooks out but it makes your password alone much less use to them. (See the sketch after this list for how those one-time codes are typically generated.)
  • Never turn off or change security settings because an email tells you to. Many phishing emails include instructions that claim to help you improve your security, but the changes they demand are there to make you less secure and help the crooks to get further. If in doubt, leave it out!
  • Change passwords at once if you think you just got phished. The sooner you change your current password after putting it into a site you subsequently suspect, the less time the crooks have to try it out. Similarly, if you get as far as a “pay page” where you enter payment card data and then realise it’s a scam, call your bank’s fraud reporting number at once. (Look on the back of your actual card so you get the right phone number.)
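
In case you are wondering what those one-time codes actually are under the hood, here is a minimal sketch of the time-based variant (TOTP, standardised in RFC 6238) that most authenticator apps use: the shared secret and the current 30-second time step go through HMAC-SHA1, and six digits of the result become the code. The secret in the example is just a placeholder.

import base64, hashlib, hmac, struct, time

def totp(secret_base32, step=30, digits=6):
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // step                     # current 30-second time slot
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation, as in RFC 4226
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# Placeholder secret: in real life this is provisioned when you scan the QR code.
print(totp("JBSWY3DPEHPK3PXP"))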

Two more suggestions…

If you’re a sysadmin looking to keep phishing attacks out, why not take a look at:

  • Sophos Phish Threat. This is a phishing simulator that lets you test out your staff in a sympathetic way, using realistic but artificial scams, so your users can make their mistakes when it’s you at the other end, rather than when it’s a cybercriminal.
  • Sophos Intelix. This is a live threat lookup service that you can use in your own system software and scripts to add high-speed threat detection for suspicious websites, URLs and files. A simple HTTPS-based web API that replies in JSON means you can use Sophos Intelix from just about any programming or scripting language you like. (Registration is free and you get a generous level of free submissions each month, after which you can pay-as-you-go if you want to do high volumes of queries.)

