IoT devices must “protect consumers from cyberharm”, says UK government

The UK legislature is currently interested in a law about what it calls PSTI, short for Product Security and Telecommunications Infrastructure.

If you’ve seen that abbreviation before, it’s almost certainly in the context of the PSTI Bill. (A Bill is proposed new legislation that has not yet been agreed upon; if ultimately enacted into law, it turns into an Act.)

Your first thought, on hearing of a proposed law about computer products and telecommunications, might be to wonder, “What sort of new surveillance, interception and encryption-cracking powers are they hunting around for now?”

Happily, for those who can remember the past and have learned that encryption backdoors generally favour the enemy and disadvantage the Good Guys, or for those who have already made the intellectually unimpeachable assumption that cybersecurity is unlikely to get stronger if you go out of your way to weaken it on purpose…

…that’s not what this is about.

It’s a much more modest regulatory proposal, and unlike those proposals that aim to disrupt security and cryptography “just in case we ever lock the keys in the car”, its goal is to demand a modest increase in security and basic cyber-reliability in products such as mobile phones, fitness trackers, internet webcams, cloud doorbells, and temperature sensors for your pet fish.

The IoT cybersecurity party – you’re invited

Very simply put, the UK government wants to set some basic, minimum standards for at least the following:

  • Default passwords. If Parliament gets its way, there won’t be any. You won’t be allowed to have pre-configured passwords in your devices, so that you can’t flood the market with products that every crook already knows how to get into.
  • Vulnerability disclosures. You’ll need a reliable way for security researchers who believe in responsible disclosure to contact you, and (we hope) some visible commitment to closing off security holes that you already know about before the crooks figure them out.
  • Update commitments. You’ll need to tell buyers in advance how long you are going to provide security fixes for the product they’re buying today.

Presumably, the third item in this list will be used hand-in-hand with the second one to stop you unilaterally disowning a tricky security problem by simply abandoning support as soon as it suits you, leaving your users – and the environment! – with a landfill device that became useless long before they might reasonably have expected.

We alluded to pet fish above because the Gov-dot-UK documents discussing this Bill include an example of how default passwords cause trouble: “In 2018, attackers were able to compromise a connected thermometer in a fish tank that had a default password. The fish tank was in the lobby of a US casino, and attackers used this vulnerability to enter the network and access sensitive details, such as bank details”. Beware the aquarium!
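If you're wondering what "no default passwords" looks like in practice, the usual alternative is to give every unit its own randomly generated credential at provisioning time, printed on that device's own label. Here's a minimal, illustrative sketch of the idea (the serial number and password format are assumptions of ours, not anything prescribed by the Bill):

    # Illustrative sketch only: one random credential per device, so that
    # learning one unit's password tells an attacker nothing about any other.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits

    def per_device_password(length=16):
        # secrets (not random) gives cryptographically strong choices
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    def provision(serial_number):
        # In a real factory flow this would be written to the device and
        # printed on its label; here we simply return it for illustration.
        return {"serial": serial_number, "initial_password": per_device_password()}

    print(provision("AQUA-TH-0001"))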

Too little, too late?

On one hand, you can easily criticise this entry-level regulation on the grounds that its demands could be considered a case of “too little, too late”, and that consumers would be better protected simply by urging experts to get more aggressive about naming and shaming devices that don’t meet reasonable standards, so consumers know to avoid them.

In other words, let the market force the issues.

On the other hand, you can equally well support basic rules like this on the grounds that they are likely to make even the most egregious offenders start doing at least something about cybersecurity in their product management and product development processes.

Those vendors who spurn the cybersecurity party altogether risk having their shoddy products simply swept off the shelves at a stroke, and returned for bulk refunds by unimpressed retailers.

Sometimes, say those who support cybersecurity rules of this low-level sort, the hardest part about cybersecurity inside a pile-’em-high-and-sell-’em-cheap electronics company is to get the topic onto the agenda at all, let alone to get it high up on the list.

Consumers are price conscious and often quite reasonably unaware of the issues involved, so you first need to get the government to force the market to force the issues.

What next?

As the government’s announcement puts it, in what we think is an entirely satisfactory example of cybersecurity discussed in plain English:

[C]ybersecurity continues to be an afterthought for many manufacturers of connectable products, and consumers often expect that a product is secure. In a 2020 report by the Internet of Things Security Foundation, only 1 in 5 manufacturers maintained systems for the disclosure of security vulnerabilities. This threatens citizens’ privacy, the security of a network, and adds to the growing risk of harms.

The document ends up with a final paragraph that we found rather less readable:

Since the government first published its Code of Practice in 2018, it has intentionally adopted a consultative and collaborative approach with industry, academia, subject-matter experts, and other key stakeholders. A primary aim of this approach has been to ensure that interventions in this space are maximally effective whilst minimising impact on organisations involved in the manufacture and distribution of consumer connectable products.

We’ve never warmed to jargon such as “interventions in this space”, which makes us think of tradespeople squeezing into cramped loft areas in an effort to fit modern insulation to poorly-designed older houses.

But we understand why Her Majesty’s Government has made this point, which we translate as “we intend to push through changes that unarguably give IoT vendors no choice about coming to the cybersecurity party”.

Manufacturers’ lobby groups understandably go out of their way to head off legislation that might increase their costs without persuading consumers to accept higher prices as a result.

Sidestepping that sort of lobbying altogether is perhaps best achieved by ensuring that no one in the process is faced with unexpected or unreasonable changes, thus effectively making the changes unexceptionable…

…while at the same time forcing even the most recalcitrant manufacturers to do at least something about some of the underlying cybersecurity problems that they themselves have tipped into the marketplace.

In proverbial words, “A journey of 1,609,344 metres starts with a single step.”

Perhaps some vendors who would otherwise have shirked that first step forever might eventually have no choice but to take it.


Controversial face matchers Clearview set to be fined over $20m

The UK data protection regulator has announced its intention to issue a fine of £17m (about $23m) to controversial facial recognition company Clearview AI.

Clearview AI, as you’ll know if you’ve read any of our numerous previous articles about the company, essentially pitches itself as a social network contact finding service with extraordinary reach, even though no one in its immense facial recognition database ever signed up to “belong” to the “service”.

Simply put, the company crawls the web looking for facial images from what it calls “public-only sources, including news media, mugshot websites, public social media, and other open sources.”

The company claims to have a database of more than 10 billion facial images, and pitches itself as a friend of law enforcement, able to search for matches against mug shots and scene-of-crime footage to help track down alleged offenders who might otherwise never be found.

That’s the theory, at any rate: find criminals who would otherwise evade both recognition and justice.

In practice, of course, any picture in which you appeared that was ever posted to a social media site such as Facebook could be used to “recognise” you as a suspect or other person of interest in a criminal investigation.

Importantly, this “identification” would take place not only without your consent but also without you knowing that the system had alleged some sort of connection between you and criminal activity.

Any expectations you may have had about how your likeness was going to be used and licensed when it was uploaded to the relevant service (if you even knew it had been uploaded in the first place) would thus be ignored entirely.

Understandably, this attitude provoked an enormous privacy backlash, including from giant social media brands such as Facebook, Twitter, YouTube and Google.

You can’t do that!

Early in 2020, those behemoths firmly told Clearview AI, “Stop leeching image data from our services.”

You don’t have to like any of those companies, or their own data-slurping terms-and-conditions of service, to sympathise with their position.

Uploaded images, no matter how publicly they may be displayed, don’t suddenly stop being personal information just because they’re published, and the terms and conditions applied to their ongoing use don’t magically evaporate as soon as they appear online.

Clearview, it seemed, was having none of this, with its self-confident and unapologetic founder Hoan Ton-That claiming that:

There is […] a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way.

The other side of that coin, as a commenter pointed out on the CBS video from which the above quote is taken, is the observation that:

You were so preoccupied with whether or not you could, you didn’t stop to think if you should.

Clearview AI has apparently continued scraping internet images heartily over the 22 months since that video aired, given that it claimed at that time to have processed 3 billion images, but now claims more than 10 billion images in its database.

That’s despite the obvious public opposition implied by lawsuits brought against it, including a class action suit in Illinois, which has some of the strictest biometric data processing regulations in the USA, and an action brought by the American Civil Liberties Union (ACLU) and four community organisations.

UK and Australia enter the fray

Claiming First Amendment protection is an intriguing ploy in the US, but is meaningless in other jurisdictions, including the UK and Australia, which have completely different constitutions (and, in the case of the UK, an entirely different constitutional apparatus) from the US.

Those two countries decided to pool their resources and conduct a joint investigation into Clearview, with both countries’ privacy regulators recently publishing reports on what they found, and interpreting the results in local terms.

The Office of the Australian Information Commissioner (OAIC) decided that Clearview “interfered with the privacy of Australian individuals” because the company:

  • Collected sensitive information without consent;
  • Collected information by unlawful or unfair means;
  • Did not notify individuals of data that was collected; and
  • Did not ensure that the information was accurate and up-to-date.

Their counterparts at the ICO (Information Commissioner’s Office) in the UK came to similar conclusions, including that Clearview:

  • Had no lawful reason for collecting the information in the first place;
  • Did not process information in a way that people were likely to expect;
  • Had no process to stop the data being retained indefinitely;
  • Did not meet the “higher data protection standards” required for biometric data;
  • Did not tell anyone what was happening to their data.

Loosely speaking, both the OAIC and the ICO clearly concluded that an individual’s right to privacy trumps any consideration of “fair use” or “free speech”, and both regulators explicitly decried Clearview’s data collection as unlawful.

The ICO has now decided what it actually plans to do, as well as what it thinks about Clearview’s business model.

The proposed intervention includes: the aforementioned £17m (about $23m) fine; a requirement not to touch UK residents’ data any more; and a notice to delete all data on British people that Clearview already holds.

The Aussies don’t seem to have proposed a financial penalty, but they did demand that Clearview must not scrape Australian data in future; must delete all data already collected from Australians; and must show in writing within 90 days that it has done both of those things.

What next?

According to reports, Clearview CEO Hoan Ton-That has reacted to these unequivocally adverse findings with an opening sentiment that would surely not be out of place in a tragic lovesong:

It breaks my heart that Clearview AI has been unable to assist when receiving urgent requests from UK law enforcement agencies seeking to use this technology to investigate cases of severe sexual abuse of children in the UK.

Clearview AI may, however, find its plentiful opponents replying with song lyrics of their own:

Cry me a river. (Don’t act like you don’t know it.)

What do you think?

Is Clearview AI providing a genuinely useful and acceptable service to law enforcement, or merely taking the proverbial? (Let us know in the comments. You may remain anonymous.)


Cloud Security: Don’t wait until your next bill to find out about an attack!

Google’s Cybersecurity Action Team just published the first ever edition of a bulletin entitled Cloud Threat Intelligence.

The primary warnings are hardly surprising (regular Naked Security visitors will have read about them here for years), and boil down to two main facts.

Firstly, crooks show up fast: occasionally, it takes them days to find newly-started, insecure cloud instances and break in, but Google wrote that discover-break-and-enter times were “as little as 30 minutes.”

In Sophos research conducted two years ago, where we set out specifically to measure how long before the first cybercriminals came visiting, our honeypots recorded first-knock times of 84 seconds over RDP, and 54 seconds over SSH.
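Measuring that “time to first knock” doesn’t take much machinery. Here’s a minimal sketch of the general idea, assuming a throwaway cloud instance and a single listening port (our simplification for illustration, not the actual honeypot code used in the research):

    # Minimal sketch: listen on a port and record how long it takes for the
    # first unsolicited connection to arrive on a brand-new cloud instance.
    import socket
    import time

    PORT = 2222   # illustrative; real-world probes hammer 22 (SSH) and 3389 (RDP)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", PORT))
    listener.listen(5)

    started = time.time()
    conn, peer = listener.accept()   # blocks until the first "knock" arrives
    print(f"First connection from {peer[0]} after {time.time() - started:.0f} seconds")
    conn.close()
    listener.close()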

Imagine if it took just one minute after you closed the contract on your new property for the first crooks to come sneaking up your driveway to try all your doors and windows! (No pun intended.)

Attacked no matter what

Importantly, in our research, the cloud instances we used weren’t the sort of cloud server that a typical company would set up, given that they were never actually named via DNS, advertised, linked to, or used for any real-world purpose.

In other words, the first crooks found us in about a minute simply because we showed up on the internet at all: we were attacked no matter what we did to keep a minimal profile.

They didn’t need to wait until we’d publicised the servers ourselves, as you would if you were starting a new website, blog or download site.

Likewise, the criminals didn’t need to wait until we’d established the servers as standard network API targets (known in the jargon, slightly ambiguously, as endpoints) and started generating visible traffic ourselves that could be spotted using those online services.

In real life, therefore, the situation is probably even worse than in our research, given that you’re definitely a generic, automatic target for crooks who simply scan, re-scan and re-re-scan the internet looking for everyone; and you may also be a specific, interesting target for crooks who are on the lookout not just for anyone, but for someone.

Secondly, weak passwords are still the primary way in: Google confirmed that weak passwords are not only a thing used by cybercriminals in cloud intrusions, but the thing.

Technically, weak passwords (a category which, sadly, includes no password at all) did not have an absolute majority in Google’s “how did they get in?” list, but at 48% they came very close.

Notably, password security blunders were a long way ahead of the next most likely break-and-enter technique, which was unpatched software.

You’d probably already guessed that patching would be a problem, given how often we write about this issue on Naked Security: vulnerable software let in 26% of the attackers.

Amusingly, if we’re allowed to give a wry smile at this point, 4% of Google’s intrusions were allegedly caused by users accidentally publishing their own passwords or security keys while uploading open source material to sites such as GitHub.

Ironically, Naked Security’s most recent warning about the risks of what you might call “cybersecurity self-incrimination” came just last week.

We reported how investigators in the UK were able to track down more than 4400 GitHub projects in which the uploader’s own Firefox cookie files had somehow become entangled – a search that literally took seconds when we reproduced it.

And that’s just one type of file that could contain API secrets, from one specific application, on one particular cloud sharing service.

We’re not sure whether to be relieved that self-incrimination accounted for just 4% of the intrusions, or dismayed that this break-in technique (we’re not sure it’s sophisticated enough to be called “hacking”) was on the list at all.
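If you publish your own code, a quick scan of the directory you’re about to upload costs almost nothing. Here’s a minimal sketch of such a check; the filenames and text patterns below are illustrative examples we picked ourselves, not an exhaustive list:

    # Illustrative pre-publish check: flag files that commonly hold
    # credentials or session data before you push them anywhere public.
    import os
    import re
    import sys

    SUSPECT_NAMES = {"cookies.sqlite", ".env", "id_rsa", "credentials.json"}  # examples only
    SUSPECT_TEXT = re.compile(rb"(api[_-]?key|secret|password)\s*[:=]", re.IGNORECASE)

    def scan(root="."):
        hits = []
        for folder, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(folder, name)
                if name in SUSPECT_NAMES:
                    hits.append((path, "suspicious filename"))
                    continue
                try:
                    with open(path, "rb") as f:
                        if SUSPECT_TEXT.search(f.read(1_000_000)):  # first 1 MB is enough for a hint
                            hits.append((path, "secret-looking text"))
                except OSError:
                    pass
        return hits

    if __name__ == "__main__":
        for path, reason in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
            print(f"CHECK BEFORE PUBLISHING: {path} ({reason})")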

What about ransomware?

We know what you’re thinking.

“Surely the intrusions were all about ransomware,” you might be saying, “because that’s the only cybersecurity issue worth worrying about right now.”

Unfortunately, if you’re viewing ransomware in isolation, putting it on its own at the front of the queue and relegating everything else to the back burner, then you’re not thinking about cybersecurity broadly enough.

The thing about ransomware is that it’s almost always the end of the line for the criminals in your network, because the whole idea of ransomware is to draw maximum attention to itself.

As we know from the Sophos Rapid Response team, ransomware attackers leave their victims in no doubt at all that the crooks are all over their digital life.

Today’s ransomware notifications no longer rely on simply putting up flaming skulls on everyone’s Windows desktop and demanding money that way.

We’ve seen crooks printing out ransom notes on every printer in the company (including point-of-sale terminals, so that even customers know what just happened), and threatening employees individually using highly personal stolen data such as social security numbers.

We’ve even heard them leaving chillingly laconic voicemail messages explaining in pitiless detail how they plan to finish off your business if you don’t play their game:

[Embedded audio: a voicemail message left by ransomware attackers for one of their victims]

What really happened next?

Well, in Google’s report, all but one of the items on the “actions after compromise” list involved the cybercriminals using your cloud instance to harm someone else, including:

  • Probing for new victims from your account.
  • Attacking other servers from your account.
  • Delivering malware to other people using your servers.
  • Kicking off DDoSes, short for distributed denial of service attacks.
  • Sending spam so that you get blocklisted, not the crooks.

But top of the list, apparently in 86% of successful compromises, was cryptomining.

That’s where the crooks use your processing power, your disk space, and your allotted memory – simply put, they steal your money – to mine cryptocurrency that they keep for themselves.

Remember that ransomware doesn’t work out for the crooks if you have a newly-configured cloud server that you haven’t really put to full use yet, because there’s almost certainly nothing on the server that the criminals could use to blackmail you.

Underutilised servers are unusual in regular networks, because you can’t afford to let them sit idle after you’ve bought them.

But that’s not the way the cloud works: you can pay a modest sum to have server capacity made available to you for when you might need it, with no huge up-front capital costs before you get the service going.

You only start paying out serious money if you start using your allocated resources heavily: an idle server is a cheap server; only when your server gets busy do you really start to rack up the charges.

If you’ve done your economic calculations properly, you expect to come out ahead, given that an increase in server-side load ought to correspond to an increase in client-side business, so that your additional costs are automatically covered by additional income.

But there’s none of that economic balance if the crooks are hammering away entirely for their own financial benefit on servers that are supposed to be idle.

Instead of paying dollars a day to have server power waiting for when you need it, you could be paying thousands of dollars a day for server power that is earning you a big, fat zero.

What to do?

  • Pick proper passwords. Watch our video on how to choose a good one, and read our advice about password managers.
  • Use 2FA wherever and whenever you can. If you use a password manager, set up 2FA to help you keep your password database secure. (There’s a minimal TOTP sketch after this list.)
  • Patch early, patch often. Don’t zoom in only on so-called zero-days that the crooks already know about. Patches for security holes are routinely reverse-engineered to work out how to exploit them, often by security researchers who then make these exploits public, supposedly to educate everyone about the risks. Everyone, of course, includes the cyberunderworld.
  • Invest in proactive cloud security protection. Don’t wait until your next cloud bill arrives (or until your credit card company sends you an account balance warning!) before finding out that there are criminals racking up fees and kicking off attacks on your dime.
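As promised above, here’s a minimal sketch of how app-based one-time codes (TOTP) work under the hood, using the third-party pyotp module; the secret shown is generated on the spot purely for illustration:

    # Minimal TOTP sketch using the pyotp library (pip install pyotp).
    # A shared secret is enrolled once; both sides then derive a fresh
    # six-digit code from that secret and the current time.
    import pyotp

    secret = pyotp.random_base32()        # enrolled once, e.g. via a QR code
    totp = pyotp.TOTP(secret)

    code = totp.now()                     # what the authenticator app displays
    print("Current code:", code)

    # The server recomputes the code from its copy of the secret and compares.
    print("Code accepted?", totp.verify(code))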

Think of it like this: sorting out your cloud security is the best sort of altruism.

You need to do it anyway, to protect yourself, but in doing so you protect everyone else who would otherwise get DDoSed, spammed, probed, hacked or infected from your account.


S3 Ep60: Exchange exploit, GoDaddy breach and cookies made public [Podcast]

[00’27”] Cybersecurity tips for the holiday season and beyond.
[02’20”] Fun fact: The longest-lived Windows version ever.
[03’40”] Exchange at risk from public exploit.
[10’34”] GoDaddy loses passwords for 1.2m users.
[18’25”] Tech history: What do you mean, “It uses a mouse?”
[20’25”] Don’t make your cookies public!
[27’51”] Oh! No! DDoS attack in progress – unfurl the umbrellas!

With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


US government securities watchdog spoofed by investment scammers – don’t fall for it!

The US Securities and Exchange Commission (SEC) has issued numerous warnings over the years about fraudsters attempting to adopt the identity of SEC officials, including by phone call spoofing.

Call spoofing is where a scammer calls you up on your landline or mobile phone, claims to be from organisation X, and then reassures you by saying, “If you don’t believe me, check the number I’m calling from.”

Lo and behold, when you do, the Caller ID (as it’s known in North America) or Calling Line Identification (CLI, a term used elsewhere in the world) says that the call is coming from X’s official number.

Proof… except that it isn’t!

The problem here is that the jargon terms Caller ID and CLI are misnomers, because the technology identifies neither the caller themselves nor the phone line that the caller is using.

It’s a suggestion, not a fact

Identifying the actual caller is as good as impossible in the case of a regular landline or mobile call, because the phone (or the phone system) has no reliable way of identifying the person who dialled the call in the first place, or who is speaking into the microphone.

And even identifying the phone number of the calling line is troublesome, because the Caller ID data that’s decoded and displayed on your device is unauthenticated, and therefore unauthenticatable.

If it can’t be authenticated, then it’s not really any sort of identification at all.

In fact, anyone who knows the necessary techniques can inject pretty much any number they like into the call signalling process, and thus can cause almost any number they like to show up before you answer.
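To see why, it helps to remember that in VoIP signalling such as SIP, the “calling number” is just a header composed by whoever originates the call; unless the carriers layer verification on top (for example the STIR/SHAKEN scheme used in North America), nothing in the header itself proves anything. Here’s a purely illustrative sketch (the numbers and domain are made up):

    # Illustrative only: in a plain SIP INVITE, the From header is text chosen
    # by the sender. That is why Caller ID is a suggestion, not evidence.
    def build_invite(claimed_number, target_number, domain="voip.example"):
        return "\r\n".join([
            f"INVITE sip:{target_number}@{domain} SIP/2.0",
            f"From: <sip:{claimed_number}@{domain}>",   # the sender picks this freely
            f"To: <sip:{target_number}@{domain}>",
            "Content-Length: 0",
            "",
        ])

    print(build_invite(claimed_number="+15555550100", target_number="+15555550123"))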

As it happens, altering the Caller ID to give a completely different number when you place a call is still legal, and considered useful, in many countries.

For example, you might want to call someone from a call centre (where they wouldn’t be able to return the call to the individual employee’s extension anyway), but to show up on their phone with a toll-free number or a central switchboard number for any return calls.

In short, you need to think of Caller ID or CLI as being no more reliable, and no more precise, than the return address on the back of a snail-mail letter, the choice of which is entirely up to the sender.

In other words, if Caller ID says the call isn’t from someone you expect, it’s OK to decide you are not going to trust it.

But that doesn’t work the other way around: just because it seems to come from someone you do expect, it’s not OK to trust it.

(You may want to read the last two sentences twice each.)

Now targeting cryptocurrency investors

Well, the SEC has recently re-iterated its warning about spoofed phone calls, thanks to investment scammers using the SEC’s “phone identity” to trick people into believing that the caller actually represents the SEC.

As you’ve probably guessed, today’s scammers are often focusing on the hot topic of the day, cryptocurrencies, claiming to be SEC officials who are doing you the favour of warning you about “fraudulent” transactions:

We are aware that several individuals recently received phone calls or voicemail messages that appeared to be from an SEC phone number. The calls and messages raised purported concerns about unauthorized transactions or other suspicious activity in the recipients’ checking or cryptocurrency accounts.

[…]

SEC staff do not make unsolicited communications – including phone calls, voicemail messages, or emails – asking for payments related to enforcement actions, offering to confirm trades, or seeking detailed personal and financial information. Be skeptical if you are contacted by someone claiming to be from the SEC and asking about your shareholdings, account numbers, PIN numbers, passwords, or other information that may be used to access your financial accounts.

We’ve also had Naked Security readers report to us that they’ve had similar scam calls in the UK, where the calls came up with their own bank’s real number, and the crooks (unsurprisingly) opened the call by “identifying” themselves as working for the bank.

Unearned trust

Unfortunately, it’s easy, and very handy, to get in the habit of trusting, or at least relying on, the Caller ID number that shows up.

We know someone who recently had a coronavirus outbreak at home (one of the kids caught the virus at school, so all the family ended up infectious at the same time), and therefore got caught up in a mini-pingdemic all of their own.

Everyone in the household got Track-and-Trace calls triggered by everyone else in the household…

…so the fact that a “Track-and-Trace” Caller ID popped up before they answered each call turned out to be very useful, because they knew – or assumed that they knew – what to expect.

But they admitted, afterwards, that the effect of this was to “teach” them all (or perhaps “innocently misdirect them” is a better term) to trust those incoming caller numbers more than they had been inclined to before.

What to do?

Here’s a simple approach: treat Caller ID names or numbers like those unwanted weather icons that your phone insists on showing you, even when you’re already outside.

Often they’re right, or partly right; sometimes they’re wrong, and even badly wrong; but they are never definitive.

If you see an icon showing rainclouds, you might as well take your umbrella, on the grounds that if the sun comes out instead, you can at least use it as a parasol.

But never leave your umbrella behind merely because you see an icon of a shining sun: that icon is a suggestion; it’s not proof of anything.

Most importantly, if any caller ever invites you to look at the Caller ID number as evidence of their truthfulness…

…you can be 100% certain, right away, that they are lying. (We recommend that you simply end the call at once, without a further word.)

If you need to contact an organisation by phone, find your own way there, for example by using a number:

  • From a trustworthy document such as the back of your credit card,
  • In the letter you got when you signed up for the service, or
  • As displayed inside one of the branches or offices of the company itself, if there is one near you.

(We snapped a photo of the various official helpline numbers of our bank from a sign in a nearby branch, after asking one of the uniformed staff inside the branch if the information was current.)

And, remember our overarching anti-scammer advice to protect your personal information: if in doubt, don’t give it out.
