Cloud Security: Don’t wait until your next bill to find out about an attack!

Google’s Cybersecurity Action Team just published the first ever edition of a bulletin entitled Cloud Threat Intelligence.

The primary warnings are hardly surprising (regular Naked Security visitors will have read about them here for years), and boil down to two main facts.

Firstly, crooks show up fast: occasionally, it takes them days to find newly-started, insecure cloud instances and break in, but Google wrote that discover-break-and-enter times were “as little as 30 minutes.”

In Sophos research conducted two years ago, where we set out specifically to measure how long before the first cybercriminals came visiting, our honeypots recorded first-knock times of 84 seconds over RDP, and 54 seconds over SSH.

Imagine if it took just one minute after you closed the contract on your new property before the first crooks came sneaking up your driveway to try all your doors and windows! (No pun intended.)

Attacked no matter what

Importantly, in our research, the cloud instances we used weren’t the sort of cloud server that a typical company would set up, given that they were never actually named via DNS, advertised, linked to, or used for any real-world purpose.

In other words, the first crooks found us in about a minute simply because we showed up on the internet at all: we were attacked no matter what we did to keep a minimal profile.

They didn’t need to wait until we’d publicised the servers ourselves, as you would if you were starting a new website, blog or download site.

Likewise, the criminals didn’t need to wait until we’d established the servers as standard network API targets (known in the jargon, slightly ambiguously, as endpoints) and started generating visible traffic ourselves that could be spotted using those online services.

In real life, therefore, the situation is probably even worse than in our research, given that you’re definitely a generic, automatic target for crooks who simply scan, re-scan and re-re-scan the internet looking for everyone; and you may also be a specific, interesting target for crooks who are on the lookout not just for anyone, but for someone.

Secondly, weak passwords are still the primary way in: Google confirmed that weak passwords are not only a thing used by cybercriminals in cloud intrusions, but the thing.

Technically, weak passwords (a category which, sadly, includes no password at all) did not have an absolute majority in Google’s “how did they get in?” list, but at 48% it was a close call.

Notably, password security blunders were a long way ahead of the next most likely break-and-enter technique, which was unpatched software.

You’d probably already guessed that patching would be a problem, given how often we write about this issue on Naked Security: vulnerable software let in 26% of the attackers.

Amusingly, if we’re allowed to give a wry smile at this point, 4% of Google’s intrusions were allegedly caused by users accidentally publishing their own passwords or security keys by uploading them by mistake while publishing open source material on sites such as GitHub.

Ironically, Naked Security’s most recent warning about the risks of what you might call “cybersecurity self-incrimination” came just last week.

We reported how investigators in the UK were able to track down more than 4400 GitHub projects in which the uploader’s own Firefox cookie files had somehow become entangled – a search that literally took seconds when we reproduced it.

And that’s just one type of file that could contain API secrets, from one specific application, on one particular cloud sharing service.

We’re not sure whether to be relieved that self-incrimination accounted for just 4% of the intrusions, or dismayed that this break-in technique (we’re not sure it’s sophisticated enough to be called “hacking”) was on the list at all.

What about ransomware?

We know what you’re thinking.

“Surely the intrusions were all about ransomware,” you might be saying, “because that’s the only cybersecurity issue worth worrying about right now.”

Unfortunately, if you’re viewing ransomware in isolation, putting it alone at the front of the queue and relegating everything else to the back burner, then you’re not thinking about cybersecurity broadly enough.

The thing about ransomware is that it’s almost always the end of the line for the criminals in your network, because the whole idea of ransomware is to draw maximum attention to itself.

As we know from the Sophos Rapid Response team, ransomware attackers leave their victims in no doubt at all that the criminals are all over their digital lives.

Today’s ransomware notifications no longer rely on simply putting up flaming skulls on everyone’s Windows desktop and demanding money that way.

We’ve seen crooks printing out ransom notes on every printer in the company (including point-of-sale terminals, so that even customers know what just happened), and threatening employees individually using highly personal stolen data such as social security numbers.

We’ve even heard them leaving chillingly laconic voicemail messages explaining in pitiless detail how they plan to finish off your business if you don’t play their game.

What really happened next?

Well, in Google’s report, all but one of the items on the “actions after compromise” list involved the cybercriminals using your cloud instance to harm someone else, including:

  • Probing for new victims from your account.
  • Attacking other servers from your account.
  • Delivering malware to other people using your servers.
  • Kicking off DDoSes, short for distributed denial of service attacks.
  • Sending spam so that you get blocklisted, not the crooks.

But top of the list, apparently in 86% of successful compromises, was cryptomining.

That’s where the crooks use your processing power, your disk space, and your allotted memory – simply put, they steal your money – to mine cryptocurrency that they keep for themselves.

Remember that ransomware doesn’t work out for the crooks if you have a newly-configured cloud server that you haven’t really put to full use yet, because there’s almost certainly nothing on the server that the criminals could use to blackmail you.

Underutilised servers are unusual in regular networks, because you can’t afford to let them sit idle after you’ve bought them.

But that’s not the way the cloud works: you can pay a modest sum to have server capacity made available to you for when you might need it, with no huge up-front capital costs before you get the service going.

You only start paying out serious money if you start using your allocated resources heavily: an idle server is a cheap server; only when your server gets busy do you really start to rack up the charges.

If you’ve done your economic calculations properly, you expect to come out ahead, given that an increase in server-side load ought to correspond to an increase in client-side business, so that your additional costs are automatically covered by additional income.

But there’s none of that economic balance if the crooks are hammering away entirely for their own financial benefit on servers that are supposed to be idle.

Instead of paying dollars a day to have server power waiting for when you need it, you could be paying thousands of dollars a day for server power that is earning you a big, fat zero.

What to do?

  • Pick proper passwords. Watch our video on how to choose a good one, and read our advice about password managers.
  • Use 2FA wherever and whenever you can. If you use a password manager, set up 2FA to help you keep your password database secure.
  • Patch early, patch often. Don’t zoom in only on so-called zero-days that the crooks already know about. Patches for security holes are routinely reverse-engineered to work out how to exploit them, often by security researchers who then make these exploits public, supposedly to educate everyone about the risks. Everyone, of course, includes the cyberunderworld.
  • Invest in proactive cloud security protection. Don’t wait until your next cloud bill arrives (or until your credit card company sends you an account balance warning!) before finding out that there are criminals racking up fees and kicking off attacks on your dime.
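As an aside, “pick proper passwords” doesn’t have to mean memorising gibberish. Here’s a minimal sketch of a random-passphrase generator using Python’s cryptographically secure `secrets` module; the tiny word list is a stand-in for a real one, such as the EFF’s diceware list of 7,776 words:

```python
import secrets

# Illustrative word list only: in practice, use a large published list
# such as the EFF diceware list (about 7,776 words).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "walrus",
         "maple", "quartz", "ember", "fjord", "lilac", "summit"]

def passphrase(n_words=5, sep="-"):
    # secrets.choice() draws from the OS's CSPRNG, unlike random.choice()
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))
```

Each word drawn from a list of N words adds log2(N) bits of entropy, so five words from a 7,776-word list gives roughly 64 bits, which is far better than most human-invented passwords.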

Think of it like this: sorting out your cloud security is the best sort of altruism.

You need to do it anyway, to protect yourself, but in doing so you protect everyone else who would otherwise get DDoSed, spammed, probed, hacked or infected from your account.


S3 Ep60: Exchange exploit, GoDaddy breach and cookies made public [Podcast]

[00’27”] Cybersecurity tips for the holiday season and beyond.
[02’20”] Fun fact: The longest-lived Windows version ever.
[03’40”] Exchange at risk from public exploit.
[10’34”] GoDaddy loses passwords for 1.2m users.
[18’25”] Tech history: What do you mean, “It uses a mouse?”
[20’25”] Don’t make your cookies public!
[27’51”] Oh! No! DDoS attack in progress – unfurl the umbrellas!

With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.

LISTEN NOW

You can listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


US government securities watchdog spoofed by investment scammers – don’t fall for it!

The US Securities and Exchange Commission (SEC) has issued numerous warnings over the years about fraudsters attempting to adopt the identity of SEC officials, including by phone call spoofing.

Call spoofing is where a scammer calls you up on your landline or mobile phone, claims to be from organisation X, and then reassures you by saying, “If you don’t believe me, check the number I’m calling from.”

Lo and behold, when you do, the Caller ID (as it’s known in North America) or Calling Line Identification (CLI, a term used elsewhere in the world) says that the call is coming from X’s official number.

Proof… except that it isn’t!

The problem here is that the jargon terms Caller ID and CLI are misnomers, because the technology identifies neither the caller themselves nor the phone line that the caller is using.

It’s a suggestion, not a fact

Identifying the actual caller is as good as impossible in the case of a regular landline or mobile call, because the phone (or the phone system) has no reliable way of identifying the person who dialled the call in the first place, or who is speaking into the microphone.

And even identifying the phone number of the calling line is troublesome, because the Caller ID data that’s decoded and displayed on your device is unauthenticated, and therefore unauthenticatable.

If it can’t be authenticated, then it’s not really any sort of identification at all.

In fact, anyone who knows the necessary techniques can inject pretty much any number they like into the call signalling process, and thus can cause almost any number they like to show up before you answer.

As it happens, altering the Caller ID to give a completely different number when you place a call is still legal, and considered useful, in many countries.

For example, you might want to call someone from a call centre (where they wouldn’t be able to return the call to the individual employee’s extension anyway), but to show up on their phone with a toll-free number or a central switchboard number for any return calls.

In short, you need to think of Caller ID or CLI as being no more reliable, and no more precise, than the return address on the back of a snail-mail letter, the choice of which is entirely up to the sender.

In other words, if Caller ID says the call isn’t from someone you expect, it’s OK to decide you are not going to trust it.

But that doesn’t work the other way around: just because it seems to come from someone you do expect, it’s not OK to trust it.

(You may want to read the last two sentences twice each.)

Now targeting cryptocurrency investors

Well, the SEC has recently reiterated its warning about spoofed phone calls, thanks to investment scammers using the SEC’s “phone identity” to trick people into believing that the caller actually represents the SEC.

As you’ve probably guessed, today’s scammers are often focusing on the hot topic of the day, cryptocurrencies, claiming to be SEC officials who are doing you the favour of warning you about “fraudulent” transactions:

We are aware that several individuals recently received phone calls or voicemail messages that appeared to be from an SEC phone number. The calls and messages raised purported concerns about unauthorized transactions or other suspicious activity in the recipients’ checking or cryptocurrency accounts.

[…]

SEC staff do not make unsolicited communications – including phone calls, voicemail messages, or emails – asking for payments related to enforcement actions, offering to confirm trades, or seeking detailed personal and financial information. Be skeptical if you are contacted by someone claiming to be from the SEC and asking about your shareholdings, account numbers, PIN numbers, passwords, or other information that may be used to access your financial accounts.

We’ve also had Naked Security readers report to us that they’ve had similar scam calls in the UK, where the calls came up with their own bank’s real number, and the crooks (unsurprisingly) opened the call by “identifying” themselves as working for the bank.

Unearned trust

Unfortunately, it’s easy, and very handy, to get in the habit of trusting, or at least relying on, the Caller ID number that shows up.

We know someone who recently had a coronavirus outbreak at home (one of the kids caught the virus at school, so all the family ended up infectious at the same time), and therefore got caught up in a mini-pingdemic all of their own.

Everyone in the household got Track-and-Trace calls triggered by everyone else in the household…

…so the fact that a “Track-and-Trace” Caller ID popped up before they answered each call turned out to be very useful, because they knew – or assumed that they knew – what to expect.

But they admitted, afterwards, that the effect of this was to “teach” them all (or perhaps “innocently misdirect them” is a better term) to trust those incoming caller numbers more than they had been inclined to do so before.

What to do?

Here’s a simple approach: treat Caller ID names or numbers like those unwanted weather icons that your phone insists on showing you, even when you’re already outside.

Often they’re right, or partly right; sometimes they’re wrong, and even badly wrong; but they are never definitive.

If you see an icon showing rainclouds, you might as well take your umbrella, on the grounds that if the sun comes out instead, you can at least use it as a parasol.

But never leave your umbrella behind merely because you see an icon of a shining sun: that icon is a suggestion; it’s not proof of anything.

Most importantly, if any caller ever invites you to look at the Caller ID number as evidence of their truthfulness…

you can be 100% certain, right away, that they are lying. (We recommend that you simply end the call at once, without a further word.)

If you need to contact an organisation by phone, find your own way there, for example by using a number:

  • From a trustworthy document such as the back of your credit card,
  • In the letter you got when you signed up for the service, or
  • As displayed inside one of the branches or offices of the company itself, if there is one near you.

(We snapped a photo of the various official helpline numbers of our bank from a sign in a nearby branch, after asking one of the uniformed staff inside the branch if the information was current.)

And, remember our overarching anti-scammer advice to protect your personal information: if in doubt, don’t give it out.


Check your patches – public exploit now out for critical Exchange bug

At the start of this month, CVE-2021-42321 was technically an Exchange zero-day flaw.

This bug could be exploited for unauthorised remote code execution (RCE) on Microsoft Exchange 2016 and 2019, and was patched in the November 2021 Patch Tuesday updates.

Microsoft officially listed the bug with the words “Exploitation Detected”, meaning that someone, somewhere, was already using it to mount cyberattacks.

The silver lining, if there is such a thing for any zero-day hole, is that the attacker first needs to be authenticated (logged on, if you like) to the Exchange server.

This means that anyone in the position to exploit the CVE-2021-42321 vulnerability would almost certainly already either be logged on to the network itself or signed in to a user’s email account, which at least rules out anonymous, remote attacks mounted by just about anyone from just about anywhere.

Nevertheless, a bug of this sort still represents a critical security issue, because regular users aren’t supposed to be able to upload and run arbitrary programs on any of your network servers, least of all your mail server.

Although cybercriminals who can read your email are already a serious concern, crooks who can infiltrate the email server itself, without needing to be a sysadmin to start with, are a very much greater threat.

With control over the entire mail server, rather than just a single user’s email account, attackers could potentially implant malware to spy on all corporate email, in and out; send bogus emails in anyone’s name right from inside the organisation; implant RAM-scraping malware to watch for business secrets held only temporarily in memory, or to retrieve temporary network passwords; snoop on network activity from a central location; and much more.

Check your patches

If you’re the sort of person who is conservative about patching, and likes to delay for a while to see if other people have problems first…

…we’re hoping that the “zero-day/already in the wild” tag on this bug encouraged you not to wait too long, and that you have already applied this month’s updates.

If you haven’t, don’t delay any longer.

For better or worse, a security researcher going by Janggggg (yes, with five Gs), also known as @testanull, has recently published a proof-of-concept (PoC) exploit for the CVE-2021-42321 hole.

By his own admission, his attack code (ironically published on Microsoft’s GitHub site) “just pop[s] mspaint.exe on the target”, meaning that the published exploit can’t directly be used to run arbitrary code.

But Janggggg has also provided a link to a “grey hat” tool that he says will help you to generate your own so-called shellcode (executable code masquerading as data) that can be embedded into the exploit in place of simply launching Microsoft Paint.

Bluntly, this means you can adapt Janggggg’s PoC so that instead of merely requesting it to do something, you can instruct it to do anything.

This is a good example of how Patch Tuesday is often followed by what is jocularly referred to as Weaponised Wednesday or Takeback Thursday, when security practitioners scramble to reverse engineer the patch itself in order to get insights into what was fixed, and how.

This sort of patch analysis isn’t trivial, but it does frequently help researchers and attackers alike to “rediscover” the bug, and also to get helpful insights into how it might actively be exploited.

As you can imagine, finding and exploiting a security hole in any software product is much easier and quicker if you know where to start looking, in the same way that you’re much more likely to win at blackjack if you know which cards have already been dealt from the pack.

Often, the details of how a bug was patched – for example, new error-checking code added to detect and reject invalid input data – can provide a handy shortcut to understanding not only how the bug works, but also how to construct booby-trapped input that allows the vulnerable program to be taken over completely, instead of simply crashed.

What to do?

Patch at once!

To verify that your Exchange servers are safe against this and other known security holes, you can use Microsoft’s official Exchange Server HealthChecker PowerShell script.

This extensive script reports on numerous aspects of your Exchange configuration, including advising you about missing security updates.

Note. Microsoft added Exchange 2013 to the list of vulnerable versions on 2021-11-16, only to change its mind on 2021-11-17 and report that it had “removed Exchange Server 2013 from the Security Updates table as it is not affected by this vulnerability.”


GoDaddy admits to password breach: check your Managed WordPress site!

The US Securities and Exchange Commission (SEC) has just published a “Security Incident” submitted last week by Web services behemoth GoDaddy.

GoDaddy says that on 17 November 2021 it realised that there were cybercriminals in its network, kicked them out, and then set about trying to figure out when the crooks got in, and what they’d managed to do while they were inside.

According to GoDaddy, the crooks – or the unauthorised third party, as the report refers to them:

  • Had been active since 06 September 2021, a ten-week window.
  • Acquired email addresses and customer numbers of 1,200,000 Managed WordPress (MWP) customers.
  • Got access to all active MWP usernames and passwords for sFTP (secure FTP) and WordPress databases.
  • Got access to SSL/TLS private keys belonging to some MWP users. (The report just says “a subset of active users”, rather than stating how many.)

Additionally, GoDaddy stated that default WordPress admin passwords, created when each account was opened, were accessed, too, though we’re hoping that few, if any, active users of the system had left this password unchanged after setting up their WordPress presence.

(Default starting passwords generally need to be sent to you somehow in cleartext, often via email, specifically so you can log in for the first time and set up a proper password that you choose yourself.)

GoDaddy’s wording states that “sFTP […] passwords were exposed”, which makes it sound as though those passwords had been stored in plaintext form.

We’re assuming, if the passwords had been salted-hashed-and-stretched, as you might expect, that GoDaddy would have reported the breach by saying so, given that properly-hashed passwords, once stolen, still need to be cracked by the attackers, and with well-chosen passwords and a decent hashing process, that process can take weeks, months or years.
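For reference, here’s what “salted-hashed-and-stretched” looks like in outline, sketched with Python’s built-in PBKDF2 support. The iteration count below is illustrative; current guidance favours very high counts, or memory-hard functions such as scrypt or Argon2:

```python
import hashlib
import hmac
import os

# Salted-hashed-and-stretched password storage, in outline. The iteration
# count is illustrative; pick it per current guidance (e.g. OWASP), or use
# a memory-hard function such as scrypt or Argon2 instead.
def store_password(password, iterations=600_000):
    salt = os.urandom(16)                       # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest             # store all three; never the password

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Stored this way, a stolen database gives attackers a cracking job, not a password list, and there is no way to display the original password back to the user.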

Indeed, researchers at WordFence, a company that focuses on WordPress security, say that they were able to read out their own sFTP password via the official MWP user interface, something that shouldn’t have been possible if the passwords were stored in a “non-reversible” hashed form.

What could have happened to affected websites?

GoDaddy has now reset all affected passwords, and says it’s in the process of replacing all potentially stolen web certificates with freshly generated ones.

GoDaddy is also in the process of contacting as many of the 1,200,000 affected users as it can. (Customers who can’t be contacted due to incorrect or outdated details may not actually receive GoDaddy’s alerts, but there’s not a lot GoDaddy can do about that.)

This is a useful response, and GoDaddy hasn’t dithered over getting it out, given that the breach was first spotted just five days ago.

(The company also issued an uncomplicated and unqualified apology, as well as saying that “we will learn from this incident and are already taking steps to strengthen our provisioning system with additional layers of protection”, which is a refreshing change from companies that start off by telling you how strong their protection was even before the incident.)

However, with ten weeks in hand before getting spotted, the criminals in this attack could have used the compromised sFTP passwords and web certificates to pull off further cybercrimes against MWP users.

In particular, crooks who know your sFTP password could, in theory, not only download the files that make up your site, thus stealing your core content, but also upload unauthorised additions to the site.

Those unauthorised website additions could include:

  • Backdoored WordPress plugins to let the crooks sneak back in again even after your passwords are changed.
  • Fake news that would embarrass your business if customers were to come across it.
  • Malware directly targeting your site, such as cryptomining or data stealing code designed to run right on the server.
  • Malware targeting visitors to your site, such as zombie malware to be served up as part of a phishing scam.

Also, crooks with a copy of your SSL/TLS private key could set up a fake site elsewhere, such as an investment scam or a phishing server, that not only claimed to be your site, but also actively “proved” that it was yours by using your very own web certificate.

What to do?

  • Watch out for contact from GoDaddy about the incident. You might as well check that your contact details are correct so that if the company needs to send you an email, you’ll definitely receive it.
  • Turn on 2FA if you haven’t already. In this case, the attackers apparently breached security using a vulnerability, but getting back into users’ accounts later using exfiltrated passwords is much harder if the password alone is not enough to complete the authentication process.
  • Review all the files on your site, especially those in WordPress plugin and theme directories. By uploading booby-trapped plugins, the attackers may be able to get back into your account later, even after all the original holes have been patched and stolen passwords changed.
  • Review all accounts on your site. Another popular trick with cybercriminals is to create one or more new accounts, often using usernames that are carefully chosen to fit in with the existing names on your site, as a way of sneaking back in later.
  • Be careful of anyone contacting you out of the blue and offering to “help” you to clean up. The attackers in this case made off with email addresses for all affected users, so those “offers” could be coming directly from them, or indeed from any other ambulance-chasing cybercrook out there who knows or guesses that you’re an MWP user.

By the way, we’re hoping, if GoDaddy was indeed storing sFTP passwords in plaintext, that it will stop doing so at once, and contact all its MWP customers to explain what it is now doing instead.

