Website operator fined for using Google Fonts “the cloudy way”

Typefaces can be a tricky business, both technically and legally.

Before word processors, laser printers and digital publishing, printed materials were quite literally “set in metal” (or wood), with typesetters laying out lines and pages by hand, using mirror-image letters cast on metal stalks (or carved into wooden blocks) that could be arranged to create a back-to-front image of the final page.

The laid-out page was effectively a giant stamp; when inked up and pressed against a paper sheet, a right-way-round image of the printing surface would be transferred to the page.

Ming Dynasty movable type set with wooden blocks
Note how the printed page is the mirror of the typesetter’s blocks.

For books printed in Roman script, typesetters kept multiple copies of each letter in separate pigeonholes in a handy tray, or printer’s case, making them easy to find at speed. The capital letters were kept in their own case, which was placed by convention above the case containing the small letters, presumably so that the more commonly-used small letters were closer to hand. Thus capital letters came from the upper case, and small letters from the lower case, with the result that the terms upper case and lower case became metaphorical phrases used to refer to the letters themselves – names that have outlived both printers’ cases and movable type.

Getting the right look

Designing a typeface (or “font”, as we somewhat inexactly refer to it today) that is both visually appealing and easy to read, and that retains a unique and attractive look across a range of different sizes, weights and styles, is an astonishingly complex task.

Indeed, although the digital age has made it easy to create new fonts from scratch, and cheap to ship them as computer files (another physical document metaphor that has survived into the computer era), designing a good typeface is harder than ever.

Users expect the font to look good not only when scaled up or down to any size, including fractions of a millimetre, but also when displayed or printed as a collection of separate pixels at a variety of resolutions using a range of different technologies.

As a result, good typefaces can be expensive, especially if you want to adopt a font collection as a company standard for your corporate identity, and you want to license it correctly for all possible uses, including on the web, in print, for editorial, in advertising, on posters, in films and videos, for redistribution embedded in presentations and documents, and more.

“Free” font collections abound online, but – as with videos, music, games and other artistic content – many of these downloads may leave you with dubiously licensed or even outright pirated fonts installed on your computer or used in your work.

Nevertheless, many distinguished font creators provide open source fonts available for personal and commercial use, and numerous free-and-properly-licensed font collections do exist, including the well-known Google Fonts.

In fact, the Google Fonts site not only allows you to download font files to use in your own documents or to copy onto your own web servers to embed into your web pages…

…but also allows you to link back to a Google Font server so you don’t even need to host the file yourself.

For boutique websites, that’s convenient because it means you get font updates automatically, and you don’t have to pay any bandwidth fees to your hosting provider for sending the font file to every visitor.

Local or cloudy?

On the Naked Security website, for example, our body text [2022-01-31] is set in a typeface called Flama, which isn’t open source.

So, we host the font file ourselves and serve it up as part of the web page, from the same domain as the rest of the site, using an @font-face style setting, in the fashion you see here:

Highlighted style code shows font file loaded from same source as this page.

This means that even though you are unlikely to have Flama installed yourself, our website should render with it in your browser just as it does in ours, using the WOFF (Web Open Font Format) version of the font file.
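By way of illustration, a minimal self-hosting setup of this sort might look something like the CSS below. (The file path and weight here are invented for the example, not copied from our actual stylesheet.)

```css
/* Declare a self-hosted web font; the browser fetches the WOFF file
   from the same origin as the page itself, not from a third party. */
@font-face {
  font-family: "Flama";
  src: url("/fonts/flama-regular.woff") format("woff");
  font-weight: 400;
  font-style: normal;
}

body {
  /* Fall back to a generic sans-serif if the download fails or is blocked. */
  font-family: "Flama", sans-serif;
}
```

Because the `src` URL is relative, every font request goes to the same server as the page itself, so no third party learns anything from the font download.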

The Flama WOFF font you see below is modestly sized at just 26KBytes, but it is our responsibility to serve up as needed:

Licensing and serving in one place

So, Google Fonts not only “solves” your licensing issues by offering open source fonts that you are allowed to use commercially, it can also solve your “how to serve it” hassles, too.

You simply link to a Google-hosted web stylesheet (CSS) page that sets up the necessary @font-face specifications for you, and fetches the desired font files from the Google Fonts service, like this:

<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=fontyouwant">

Of course, that means that Google’s servers get a visit from your browser, and thus Google unavoidably gets your IP number (or an IP number provided by your ISP or VPN provider, which loosely amounts to the same thing).

If you have some sort of tracking protection turned on, your browser might not fetch the requested CSS and font data, in which case you’ll see the text in the closest matching font your browser has available.

But if you haven’t set your browser to block these downloads, you’ll get the font and Google will get your IP number.

Is that private enough?

Apparently, not always.

A District Court in Munich, Germany, recently heard a legal complaint in which the plaintiff argued that a website that had linked across to Google Fonts, instead of downloading and hosting a copy of the free font on its own site, had violated their privacy.

The court agreed, demanded that the website operator start hosting fonts locally, and awarded the complainant damages of €100 (about $110).

The court’s argument doesn’t seem to suggest that any and all other third-party “widget linking” is now considered illegal in Germany (or, more particularly, in the region where this court holds sway), but only that websites are expected to host content locally if that’s easily possible:

Google Fonts kann durch die Beklagte auch genutzt werden, ohne dass beim Aufruf der Webseite eine Verbindung zu einem Google-Server hergestellt wird und eine Übertragung der IP-Adresse der Webseitennutzer an Google stattfindet.

(The defendant [i.e. the website operator] can make use of Google Fonts without establishing a connection to a Google server, and without the IP address of the website user being transmitted to Google.)

What next?

If you’ve ever had rogue adverts – what’s known as malvertising – thrust into your browser when you’ve visited an otherwise unexceptionable and trustworthy website, you might be thinking, “This is a great decision, because if everyone who monetised ads served them up from their own domains, it would be much easier to keep track of who was responsible for what, and ad filtering would become a whole lot simpler.”

But if you’ve ever visited boutique websites that have tried to do it all themselves and found yourself struggling with content such as JavaScript that could have been updated but hasn’t been, or server-side plugins that seem to contain bugs that you thought were fixed long ago, you might be thinking, “Sometimes, it’s worth having a web content supply chain that’s longer and more complex than is strictly necessary, if the content providers further up the chain have more knowledge and resources to keep things up to date.”

There’s also the problem that this judgement has penalised a website provider for linking to a Google service that has (or at least claims to have) a pretty liberal privacy and tracking policy:

The Google Fonts API is designed to limit the collection, storage, and use of end-user data to only what is needed to serve fonts efficiently.

Use of Google Fonts API is unauthenticated. No cookies are sent by website visitors to the Google Fonts API. Requests to the Google Fonts API are made to resource-specific domains, such as fonts.googleapis.com or fonts.gstatic.com. This means your font requests are separate from and don’t contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail.

Yet the judgement is of necessity silent about embedded links that track users as part of their service, such as web analytics tools, because those services are almost always cloud-based by design, and therefore cannot be hosted locally.

Are those to be made illegal in Bavaria, too? Or will the cloud-centric nature of web analytics effectively exempt analytics services from this sort of judgement simply because the expectation is that they’re rarely, if ever, hosted locally?

And what about so-called “live content” from other sites?

Twitter, for example, requires that if you want to show a complete tweet in your web page, you need to embed it directly, rather than locally hosting a screenshot and providing a link that a user can optionally click later on.

From a traffic point of view, that makes sense for Twitter, because “live” links not only display current tweet statistics, but also make it really easy for readers to engage frictionlessly with the tweet.

But it also makes sense from a legal and cybersecurity point of view, because Twitter itself can adapt data that’s embedded via links to its site (such as deleting offensive, illegal or misleading content as desired or required), instead of relying on every website that ever took a screenshot of a tweet to go back and update or remove the content if common sense or a court order demands it.

Have your say

Where do you stand on this?

Do you think this is an overreach by the court?

Do rulings like this suggest we’re heading towards the end of the era of third-party adverts (after all, adverts don’t have to be served via the cloud; they all could be served locally, even if most services don’t yet support that way of working, and even if it’s a lot less convenient)?

Will we be more secure if all website operators are required to self-host all content such as the stylesheets and JavaScript they rely upon, or would that inadvertently favour the crooks by leaving us with more out-of-date code than we would otherwise have?

Let us know below… you may remain anonymous if you like.


Coronavirus SMS scam offers home PCR testing devices – don’t fall for it!

A Naked Security reader in the UK alerted us to a scam they received this afternoon in a text message.

The message claimed to come from the NHS, Britain’s National Health Service, which administers coronavirus vaccinations and provides free testing throughout the country:

As you probably know, PCR tests, which currently require processing in a laboratory, are considered more accurate than self-administered lateral flow tests.

Indeed, PCR tests are both advised and free in the UK if you already have coronavirus symptoms, or have been in contact with someone who’s infectious.

You can have a one-off test set sent through the mail, and post the completed test out to the lab for processing, but that adds time until you get the result – and if the test is positive but you don’t yet have any symptoms, that in turn adds time to your mandatory isolation period.

So, as you can imagine, for anyone who is self-employed but who needs to be out and about for their job – plumbers, electricians, care workers, painters and dozens of other professions – a home testing device that could reduce the time to receive a trustworthy result would be very useful.

We have no idea if such a consumer device could affordably be made, and if so whether the results could reliably and securely be validated online, but in a world in which retail companies can deliver esoteric products to your doorstep within hours and securely receive payment, in which telephones include high-resolution video cameras that can stream the images worldwide in real time, and in which private citizens can buy joyrides into space, we’re going to assume that there aren’t any insurmountable technological reasons that would make this a laughable idea.

Even better, for people who are self-employed and visit lots of other households to do their jobs, is that a home testing device could allow workers to test so rapidly and reliably that they might even be able to turn up at their appointments with a fresh and verifiable “COVID test pass” performed that very morning.

So you can understand why people who received the message above might have considerable interest in checking it out.

What to do?

We hope you’d spot this for a scam right away, but you can see why it was worth the while of the crooks to try it out.

After all, the UK government is an enthusiastic user of text messages for numerous purposes, including 2FA, reminders, notifications and more, so SMSes from government departments are not a rarity.

Indeed, you can’t book a coronavirus test online without providing a mobile phone number in advance, ready to receive the test results by text.

So, if you’re tempted to click through just in case, ask yourself the following questions first:

  • Is the story likely? No. Test results may come by SMS, but offers of amazing new experimental medical equipment don’t!
  • Does the link look likely? No. NHS links usually end NHS dot UK, whereas this one has a weird-looking dot COM address.
  • Do I need to click the link at all? No. Even if the link were genuine, you should be able to ignore the link and find your own way to the right place.
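For the technically inclined, the “does the link look likely” check boils down to extracting the hostname from the URL and comparing it against the domain you expect. Here’s a rough sketch in Python (the lookalike URL is invented for this example, not taken from the actual scam):

```python
from urllib.parse import urlsplit

def looks_like_nhs_link(url):
    """Return True only if the link's hostname is nhs.uk or a subdomain of it."""
    hostname = urlsplit(url).hostname or ""
    return hostname == "nhs.uk" or hostname.endswith(".nhs.uk")

# A genuine NHS address passes...
print(looks_like_nhs_link("https://www.nhs.uk/conditions/coronavirus-covid-19/"))  # True

# ...but a lookalike dot COM domain (invented for illustration) does not.
print(looks_like_nhs_link("https://nhs-pcr-devices.example.com/order"))  # False
```

Note that the check insists on a trailing match of “.nhs.uk”, so lookalike names such as “fake-nhs.uk.example.com” don’t sneak through either.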

We’d be happy to show you what this scam looked like if you did click through, but we’re happier still to tell you that the website currently isn’t working properly.

The domain is brand new, registered just this morning; the HTTPS web certificate was issued at 7am today; and the web server is active and accepting connections…

…but all we could coax out of it was a short list of filenames, and a page that said Error 600:

The HTTPS certificate was issued this morning.
At the time of publication, that’s all she wrote.

(In case you’re wondering, the web page that says Error 600 actually had an HTTP response code of 200. Error 600 is meaningless, because there aren’t any HTTP codes above 599.)
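For the curious, official HTTP status codes are three-digit numbers from 100 to 599, and the first digit tells you the response class. A tiny Python sketch makes the point:

```python
def status_class(code):
    """Map an HTTP status code to its class; anything outside 100-599 is invalid."""
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    if not 100 <= code <= 599:
        return None  # Not a valid HTTP status code at all
    return classes[code // 100]

print(status_class(200))  # Successful -- what the "Error 600" page really returned
print(status_class(600))  # None -- "600" isn't an HTTP status code
```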

Happy Data Privacy Day – and we really do mean “happy” :-)

You’ve probably had 42 emails already this week to tell you this…

…but we’re going to say it anyway: “Happy Data Privacy Day!”

Don’t panic.

We’re not going to assail you with an academic argument about asserting your privacy, or provoke you with a polemic positing that privacy and a private life are human rights. (It is, and they are, so let’s save time and crack straight on.)

The problem is that although most of us care about our personal data, few of us want to exclude ourselves entirely from what the internet has to offer.

Indeed, many of us actively enjoy using online services – especially social networks – and making online friends.

Loosely speaking, we’re happy to trade information about our own lives in return for insights into, and engagement with, the lives of other people.

And why shouldn’t we?

So here are a few simple tips to help you indulge more safely…

GET TO KNOW YOUR PRIVACY CONTROLS

Take the time to learn what privacy controls are available in all the apps and online services you use.

Unfortunately, every app and every social network seems to do things differently, with privacy and security options often scattered liberally across numerous Settings pages.

Don’t be afraid to dig through all the options (you may be pleasantly surprised at some of the controls available), and don’t just rely on the default settings.

Try turning off as many data sharing options as you can, and only turn them back on if you decide you really want and need them.

DECIDE WHAT YOUR DATA IS REALLY WORTH

Sometimes, a service may demand that you share more than you are willing to hand over – your address, phone number or birthday, for example.

If an app or website asks for data that you just don’t think is relevant for what you are getting in return, ask yourself, “Do I really need to sign up for this, or should I find somewhere else that isn’t so nosy?”

BE FAIR TO YOURSELF AND TO OTHERS

Don’t let your friends talk you into airing and sharing more than you’re comfortable with – after all, it’s your digital life and your data, not theirs.

It’s easy to get swept into privacy-sapping online behaviour due to FOMO – the infamous Fear of Missing Out.

If FOMO is a problem for you, take heart: these days, JOMO is a respectable option too.

JOMO is short for the Joy of Missing Out, described in splendidly highbrow fashion by the BBC as “relief from the breathless and guilt-laden need to be perennially switched on.”

The flipside of this is to respect your friends when you have something that includes them – such as a photo – that you want to go public with, but that they would prefer to keep private.

For example, even though the law in your country may allow you to share selfies with your friends even if they ask you not to…

…honour their request, and let them have their JOMO moment.

DON’T LET SCAMMERS INTO YOUR LIFE

Meeting new people online can be fun, and there’s nothing wrong with doing it – just don’t be in too much of a hurry to believe what people tell you about themselves.

As the US public service likes to remind people when they’re making decisions online: Stop. Think. Connect.

Many scams and scammers are actually fairly obvious, as long as you take the time to look for the signs, so:

  • Be aware before you share. Every little bit you give away about yourself makes it easier for a scammer to charm you, threaten you, or entice you into an online relationship you didn’t ask for in the first place.
  • If in doubt, don’t give it out. If it feels like a scam, back yourself and assume that it is.
  • No reply is often a good reply. Never feel compelled to reply out of politeness or completeness. It’s easier to stay out of a wheedler’s clutches if you don’t open the door for a reply-to-your-reply that might entice you into an ongoing conversation.
  • Listen to friends and family. Don’t spurn the advice of people who already know you, especially when money is involved. Whether it’s a romance scammer who falsely claims to love you, or a newfound “business associate” who has fraudulently pitched you a “job” in their “company”, don’t let FOMO triumph over JOMO.

ADVICE YOU CAN SHARE WITH FRIENDS AND FAMILY

Are you in the tricky position of having friends or family who you think have been ensnared by scammers online, but who won’t give you the time of day because they think you’re deliberately trying to puncture their dreams?

Here’s a low-key video where someone with no connection to them says it for you:


S3 Ep67: Tax scams, carder busts and crypto capers [Podcast + Transcript]

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.

READ THE TRANSCRIPT

DOUG AAMOTH. Tech scams, bad guys arrested, and 2FA – what could possibly go wrong?

All that, and more, on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug; he be Paul…

…and Paul, I’m going to be the first to wish you happy tax scam season, my friend.


PAUL DUCKLIN. Oh, dear.

I guess it’s particularly relevant to the US just right now, isn’t it?


DOUG. Yes: we are girding our financial loins, collectively getting ready to file our taxes.


DUCK. Of course, any time of the year kind of works for a tax scam, doesn’t it?

If you’re in the UK, the tax year is April to March; in South Africa, it’s March to February; in Australia, it’s July to June.

So everywhere there’s *something* going on.

But in the US, it probably fits in quite well now – so do be on the alert!


DOUG. Yes: we will talk about our first of possibly many tax scam stories shortly.

But first, we like to begin the show with a Fun Fact, and I found this fact to be very fun.

The etymology of the word helicopter may not be what you think.

It is not a combination of heli- and -copter, but of helico-, the derivation of helix, in this case meaning spiral, and -pter, from the Greek “pteron”, meaning wings or feathers, commonly used to describe flying creatures such as the pteranodon and pterodactyl.

So it’s helico- plus -pter!

How do you like that?


DUCK. That’s great, Doug!

Like helicobacter. That’s the screw-shaped bacterium that two Aussies… whose names I forget; they got the Nobel Prize after being laughed at for many years when they discovered that ulcers are caused by bacteria.

Because nobody believed that bacteria could live in the gut: “Too acidic.”

And everyone laughed at them and said, “It’s not a bacterium. Forget it!”

And they found helicobacter pylori…


DOUG. Wow!


DUCK. …the “screw-shaped bacterium of the stomach”. And I’d never connected that back with helico…pter!


DOUG. A free and interesting bonus Fun Fact – it’s always welcome on the Naked Security Podcast.


DUCK. Love your work, Doug!


DOUG. Love your work… and let’s talk about someone else getting to work.

You got your first tax scam of the year, and it is an odd tax scam that doesn’t really ask for much….


DUCK. That’s correct, Doug.

I thought I would write about it just because, as you say, it’s that time of year for people in the US.

In previous years, when we’ve written about tax scams, they’ve always been either high pressure – “something bad will happen; if you don’t click this link, log in and fix this, you could get audited”, and who wants that? – or like the one that I got personally last year, apparently from the UK Tax Office, Her Majesty’s Revenue and Customs: “a tax rebate of £278.44 has been issued to you; click here”.

SMS tax scam unmasked: Bogus but believable – don’t fall for it!

We spoke about this on the podcast; it was a perfect facsimile of the HMRC login page, or an almost perfect facsimile.

Now this one, this year, this was obviously US based because it mentioned W-2. (In the UK, the equivalent form is the P60: that’s the thing you get from your employer that says, “This is how much we paid you, and this is how much tax we’ve already taken away and paid to the Revenue.”)

And it just says, “2021 new client fillings”… they mean *filings*, obviously… “I intend to change CPA.” (For people outside the US, CPA is a CA, a Chartered Accountant.)

“I intend to change CPA for 2021. Would like to know if you’re taking new clients. I’ve got all the documents. I just haven’t quite got my W-2 yet.”

In other words, I’m nearly there. Then it says, “Kindly advise on how to proceed, and if I can send forth all the available documents. And what are your fees for individual returns? Thank you.” And then the person claims to be a Managing Director.

So it’s basically fishing for a little bit of business friendship, I guess, Doug.


DOUG. It is odd, because I am reasonably sure you are not an accountant.

So this seems like a spray-and-pray, sent to who knows how many people in the hope that some of them are accountants.

And that some of those will respond, saying, “Oh yes, I can help you out. Let’s talk business.”


DUCK. I’m sure that another part of this, Doug, is that it just looks like somebody who basically emailed the wrong business/person.

So you can imagine people going, “Oh, you must have made a mistake. I’m not a CPA. You’ve got the wrong person.”

In other words, although it’s spray-and-pray, the pray is not, “If the person doesn’t click the link, then the scam isn’t going to work.”

It feels to me like a kind of romance scam – it’s an interesting way to start a conversation that gets people to identify their willingness to communicate.


DOUG. We’ve got some advice, the first of which is – you touched on this a little bit – “Be aware before you share.”


DUCK. Yes, because every little bit that you give away about yourself – it might not feel that it matters individually, but it does help somebody who has your worst interests at heart to build a backstory that gels with you and maintains your interest, in just the same way that romance scammers do.

If you come along and say, “I like the movies of XYZ director”, they don’t say, “Oh, no, I hate that person!”

The romance scammer just adapts their behavior, their backstory, their made up life, to match the things that will keep you on the hook.

Romance scammer who targeted 670 women gets 28 months in jail


DOUG. As we’ve said before many times, “If in doubt, don’t give it out.”


DUCK. Yes!

Simply put, if it feels like a scam, maybe just back yourself: assume that it is!


DOUG. And, “No reply is often a good reply.”


DUCK. Yes, I think a lot of people, perhaps older people more – although with younger people, there’s always that FOMO, isn’t there: Fear of Missing Out?

Perhaps, for older people, there’s a sense that the idea that you would just “show someone the hand” and just not reply… that’s seen as being a bit difficult or maybe a bit pretentious.

If that’s the way you feel in real life, then you’re probably a nice person to meet and know!

But online, it just means that you’re probably a bit too likely to give away stuff that you shouldn’t.


DOUG. I did learn this week that the opposite of FOMO is JOMO, the Joy of Missing Out, which is perfect for an introvert like me.

I do like missing out on things – so it’s the opposite of FOMO!


DUCK. I’m going to adopt that!

I think it could be very uplifting – thank you for that, Douglas!


DOUG. You’re welcome.

And finally, “Listen to friends and family.”


DUCK. If friends and family – we said this last week – are advising you that maybe you are in over your head; maybe you are talking to somebody who is out to fleece you… remember: JOMO!

If they’re right and you listen to them, you will be much, much happier!


DOUG. Okay, great tips.

Especially in light of this being Data Privacy Week, and Data Privacy Day on Friday.


DUCK. Yes. It’s what we always say with those days.

It’s like Quit Smoking Day: it’s the day you start not smoking anymore. It’s not just one day in the year where you give it a break, and then the rest of it you carry on as normal.

And I know you can get tired of all these special days, but data privacy is important, because once you’ve let it out, it’s kind of hard, and takes a lot of time, to recapture what you didn’t want to leak.

So, yes: forget the FOMO. Love the JOMO!


DOUG. Very good.

That is: Tax scam emails are alive and well as US tax season starts, on nakedsecurity.sophos.com.

Tax scam emails are alive and well as US tax season starts

And now, let us talk about this alleged carder gang mastermind, and three acolytes, under arrest in Russia.

What happened?

This is like cutting off a few heads of a Hydra and then they grow back, I’m guessing?


DUCK. Certainly seems so, Doug.

This is a gang known as the Infraud Organisation.

That was their name, and their motto was “In Fraud We Trust”, which I presume is a poor-taste joke on… what does it say on the $1 bill? “In God We Trust”, isn’t it?


DOUG. It is.


DUCK. And 36 people were alleged to belong to this gang by getting themselves listed in an indictment in the US back in 2018.

Unfortunately, they were only able to arrest 13 of those people, and they were spread across seven different countries.

As we’ve often said before, it’s as if “cybercrime abhors a vacuum”.

The rest of the gang, it seems, formed back up, as you say, like a Hydra growing back heads, and the whole thing carried on.

Anyway, one of the people mentioned in that indictment three years ago was a chap by the name of Andrey Novak.

UniCC was one of his handles; Faxxx – with three Xs – and Faxtrod were his other online handles.

Apparently, he has now been busted in Russia, along with three other people.

I don’t have their names handy, but they weren’t on the original charge sheet – sounds like either they weren’t known before, or they’re people who have come to fill the vacuum left by the departure of others.

So, it’s an interesting reminder, as you say, that cybercrime does have this Hydra-like property.

Often, you can chop off even quite a lot of heads, and they’ll sort-of grow back or reappear with other names, other faces, other places, and carry on.

And even back in 2018, the US DOJ [Department of Justice] was claiming that they had $500 million worth of fraud, an amount that they could essentially prove as what they call “actual losses”. Then they had another $2 billion that were referred to as “intended losses”.

‘In Fraud We Trust’ – Cybercrime org bust shows we’re fighting pros

So that gives you an idea of the scale of this operation.

It’s as big as, or bigger than, modern ransomware gangs that we hear about.

But still, three years ago, they were already apparently $500 million to the good. Thus, “In Fraud We Trust.”

Maybe that motto just got a little bit more tarnished with this bust in Russia…


DOUG. All right, that is: Alleged carder gang mastermind and three acolytes under arrest in Russia, on nakedsecurity.sophos.com.

Alleged carder gang mastermind and three acolytes under arrest in Russia

And it is time for This Week in Tech History.

This week, on 26 January 1983, Lotus 1-2-3 was released: the spreadsheet plus database plus graphical charting program – hence the “1-2-3” – is believed to have played a large role in the success of IBM PC compatible computers throughout the 1980s, quickly surpassing the Apple-centric VisiCalc in sales.

Lotus was slow to respond to the Windows 3.0 graphical user interface, and was effectively killed off by Microsoft Excel in the early 1990s.

And Paul, please tell me you have some stories about the glory days of Lotus 1-2-3…


DUCK. The only one I can think of off the top of my head – going back, I guess, to the 1990s – was a joke that my wife told me.

She was going through the newspaper… remember them?


DOUG. [LAUGHS]. Barely!


DUCK. And she got to the classified ads, where somebody was looking for help with their computers.

This person obviously had a deep misunderstanding of what they were after, because they were looking for someone who knew dBase, if you remember that…


DOUG. Mmmmmm.


DUCK. …but also they wanted someone who knew Lotus One, Lotus Two *and* Lotus Three.


DOUG. [LAUGHS]


DUCK. So I presume they figured, “I don’t know which version we’ve got. You’d better know all of them.”


BOTH. [LAUGHTER]


DUCK. That was one of our household jokes for quite some time.


DOUG. Lovely.

All right, let’s talk about Crypto.com.

So, this was a 2FA bypass – and I thought 2FA was supposed to be impenetrable.

Let’s talk about what happened, and then we’ll go through the myriad ways that 2FA can actually go wrong.

So, what happened in this theft?


DUCK. [IRONIC] Well, “Cryptocurrency company suffers unexpected behaviour of website”, Doug.


DOUG. Ummm…


DUCK. That doesn’t happen often, does it?


DOUG. [IRONIC] Uh-uh


DUCK. Anyway, this is a company, it’s actually, I believe, called Foris DAX MT Ltd, of Malta, but they’re better known as Crypto.com, which is the domain they own: they’re a cryptocurrency trading company.

And it seems that earlier in January 2022, 483 customers of theirs experienced what I guess you could call “phantom withdrawals”, or “ghost withdrawals”.

In other words, it wasn’t just one or two people: there was a sudden spate of withdrawals where people said, “No, I definitely didn’t do that.”

Of course, “That’s easy for you to say”, but, apparently, when they investigated, they realised that these withdrawals were very unusual indeed.

And ultimately, anyone who lost money in this way, Crypto.com is claiming they’ve been reimbursed, or they will be reimbursed.

But the important thing is that they put out a security breach report.

Good on them!

Sadly, in many cases, if it’s a cryptocurrency scam where people put in money and then there’s a breach and everyone disappears, the only report you get is everyone else saying, “Oh dear, they did a rug-pull; they took the money and ran off.”

So, in this case, they did come up with a security report that explained what I just said.

They said, “All accounts found to be affected were fully restored.” They also said transactions were being approved “without the 2FA authentication code being inputted by the user”.

And that was all they said – they didn’t say how or why.

So I found that data breach notification very underwhelming.

Go and read it – it’s a good example of what *not* to say, because it just raises 20 more questions.

Cryptocoin broker Crypto.com says 2FA bypass led to $35m theft

Importantly, what *did* go wrong with the 2FA in this case?

And that left me thinking: what kind of things could go wrong, if you’re someone reading this story and thinking, “Yay, I’ve got a 2FA solution; where should I be focusing my attention?”


DOUG. Well, let’s talk about the ways that 2FA could go wrong.

You have five ways here.

The first being: “A fundamental flaw in the underlying 2FA system.”


DUCK. That’s one way that it could go wrong: the system just doesn’t work.

And one way that it might not work is this: let’s say you’re using SMS-based 2FA, and the code that comes up is random.

But let’s say there’s actually a flaw in the code, and it’s possible – say from the time of day, or the country you’re in, or some other background circumstance – to make a jolly good guess at what the next random number coming up is going to be.

It’s well worth having a go at someone’s account.

You can only really fix this by going and patching the 2FA code itself, but that’s not consistent with “the 2FA didn’t require anybody to input a code.”

So that’s one way that it can go wrong: visibly it’s working; somebody’s entering a code; everything in the logs will look right… but it wasn’t the right person entering the code, because somebody was able to guess.
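
(To make that “guessable randomness” scenario concrete, here’s a minimal Python sketch – the minute-based seeding is invented for illustration, but it shows how a predictably seeded generator lets an attacker reproduce the “random” code exactly.)

```python
import random

def weak_sms_code(minute_of_epoch: int) -> str:
    # BAD: seeding the generator with a predictable value (the current
    # minute) makes the "random" code a pure function of the clock.
    rng = random.Random(minute_of_epoch)
    return f"{rng.randrange(1_000_000):06d}"

def attacker_guess(minute_of_epoch: int) -> str:
    # Anyone who knows (or guesses) the seeding scheme gets the same code.
    return f"{random.Random(minute_of_epoch).randrange(1_000_000):06d}"

# The logs show a valid code being entered... just not by the right person.
assert weak_sms_code(27_346_112) == attacker_guess(27_346_112)
```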


DOUG. Okay, then we’ve got: “A breach of the 2FA authentication database.”


DUCK. Yes, that’s another way that 2FA could go wrong.

Let’s say you’re not using SMS 2FA; you’re using one that’s based on one of those TOTP authenticator apps.

You seed them by scanning in a QR code, or typing in some weird Base32 combination of letters and numbers, when you set up an account.

That’s stored securely in your phone, or so you hope.

That sounds great, except that it means that, at the other end, it’s not like storing a conventional password.

We’ve spoken about this on the podcast; written about it on Naked Security many times – we’ve got a fantastic article from a few years ago about how to store passwords safely.

Serious Security: How to store your users’ passwords safely

When you’re dealing with someone typing in a password, you don’t need to store the real password: you can store a hash – a salted-and-stretched hash of the password.
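
(For the curious, the “salted-and-stretched” idea can be sketched in a few lines of Python using the standard library’s PBKDF2 – a simplified illustration, not a drop-in implementation.)

```python
import hashlib, hmac, os

ITERATIONS = 200_000  # the "stretching": many rounds to slow down guessing

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)  # the "salting": a fresh random value per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store these; the password itself is never kept

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```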

But with 2FA based on code sequences, both the client and the server need to have access to the plaintext “starting seed” – that QR code you scanned in at the beginning.

And so, if the server gets breached and someone gets hold of those starting seeds for a whole load of accounts, basically they can then set up their own phone to generate exactly the same sequence as somebody else’s.

And that would be a complete bypass of the 2FA.

But the 2FA would still be apparently doing its job in the logs.

*Somebody* would be inputting the code, and it would show up that *somebody* inputted the code; it just wouldn’t be the right person.
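
(That’s the crux of the difference: a TOTP code is computed from the stored seed, so whoever holds the seed holds the keys. A bare-bones RFC 6238-style generator looks something like this in Python – simplified, but enough to show that the seed alone determines the whole code sequence.)

```python
import base64, hashlib, hmac, struct

def totp(seed_base32: str, unix_time: int, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(seed_base32)              # the seed from the QR code
    counter = struct.pack(">Q", unix_time // step)   # 30-second time window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # "dynamic truncation"
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

SEED = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # RFC 6238's published test seed
# A thief with a copy of the seed computes exactly the same codes as you do:
assert totp(SEED, 59) == "287082"
```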


DOUG. Okay. Next way: “Poor coding in the online login process.”


DUCK. Basically, in your login process, there are typically many ways you can do it, even if you have 2FA and even if it’s mandatory.

Most accounts have some kind of password reset system, or they have some kind of “I don’t have my phone, I want to use one of the backup codes that I printed out and put in my safe.”

So they have typically a number of different ways in which the front end of the authentication system can interact with the back end, including the part that does 2FA.

And it is possible that the 2FA system itself could be working perfectly; that the SMS codes have perfectly random numbers; that the generator sequence seeds have not been stolen… but that there’s some way – say from the website: some weird header you can add to a web request, or some extra secret parameter you can add to the request – that somehow indicates, “I want to skip that part.”

And it’s up to the back end whether it actually calls on the 2FA or not.

The 2FA system itself doesn’t protect the system that it’s supposed to protect if it’s never called upon to do so, due to some kind of mistake!
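
(A toy version of that mistake, in Python – the parameter name and helper functions are invented for the example, but the shape of the bug is realistic: the 2FA check works perfectly and is simply never reached.)

```python
# Hypothetical stand-ins for the real credential and 2FA-code checks
USERS = {"alice": ("s3cret", "123456")}

def check_password(user: str, pw: str) -> bool:
    return USERS.get(user, ("", ""))[0] == pw

def check_2fa_code(user: str, code: str) -> bool:
    return USERS.get(user, ("", ""))[1] == code

def login(user: str, pw: str, params: dict) -> bool:
    if not check_password(user, pw):
        return False
    # BUG: a leftover "debug" parameter silently skips the 2FA step...
    if params.get("skip_2fa") == "1":
        return True
    return check_2fa_code(user, params.get("code", ""))

assert login("alice", "s3cret", {"code": "123456"})       # normal 2FA login
assert not login("alice", "s3cret", {"code": "000000"})   # wrong code rejected
assert login("alice", "s3cret", {"skip_2fa": "1"})        # ...so no code is needed
```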


DOUG. Okay. And then this one is always a challenge: “Weak internal controls to detect risky behavior by support or IT staff.”

The so-called “insider attack”, as it were.


DUCK. Memories of the Twitter attack of 2020, if you remember that one.

What was it? Elon Musk, Joe Biden, Barack Obama, Bill Gates, Apple Computer: about 40-something very high profile accounts all got compromised at the same time.

And it seems that the ultimate reason is that some person or persons unknown inside Twitter… it didn’t look as though they were corrupt, or that they deliberately did anything wrong.

Bitcoin scammer who hacked celeb Twitter accounts gets 3 years

They were just too helpful, and they gave the crooks enough information that the crooks were able to do password resets on those accounts and come in with or without 2FA.

So you can keep 2FA going, but actually lock out the real user and lock yourself in instead, in which case you’d still be inputting the code, but once again it would be the wrong person.

And, as you said, this is a very, very hard thing to defend against, particularly – and perhaps ironically – if you genuinely *do* have a really helpful support department.

Unfortunately, somebody could get into the *spirit* of that inside your organisation without complying with the *letter* of it, and they could let the side down, even though their motivation was the very best.

They weren’t corrupt, they weren’t crooked, they weren’t lazy; they were actually almost trying *too* hard.


DOUG. A nice segue to our final point, and an interesting one: “Fail-open behavior in the authentication process.”


DUCK. I guess that’s the technological version of someone in support being, if you like, too helpful.

When you think about security systems (cybersecurity systems or physical security systems), they’re generally expected to fail cleanly in one of two ways.

Fail open: things like electric circuits.

When your mains trips, it fails *open*, so the current is *off*.

And there are other things, like bank vaults: you’d normally expect them to fail *closed*.

Otherwise, if there was a power failure, someone could sneak in and steal all your gold bars!

And, sometimes, it’s hard to know which is the right one for which circumstance.

For example, if your 2FA back end is relying on some cloud-based service and it completely breaks… do you want *nobody* to be able to log in, and you just say, “We’re really sorry; logins are suppressed until we fix this”?

Or do you actually think, “Well, we’re only treating 2FA as an add-on extra, so to avoid people getting too antsy, we’ll just not ask for the number. Until we fix the backend, we’ll fail back to 1FA.”
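
(The two policies differ by a single decision in code. Here’s a deliberately simplified Python sketch – the backend class is invented for the example – showing how a fail-open choice quietly turns 2FA into 1FA whenever the service is down.)

```python
class CloudTotpBackend:
    """Hypothetical client for a cloud 2FA service; check() raises when it's down."""
    def __init__(self, up: bool = True):
        self.up = up

    def check(self, user: str, code: str) -> bool:
        if not self.up:
            raise ConnectionError("2FA service unreachable")
        return code == "123456"  # stand-in for a real code check

def verify_2fa(user: str, code: str, backend, fail_open: bool = False) -> bool:
    try:
        return backend.check(user, code)
    except ConnectionError:
        # Fail closed: nobody logs in until the backend is fixed.
        # Fail open: everyone is quietly downgraded to 1FA.
        return fail_open

assert verify_2fa("alice", "123456", CloudTotpBackend(up=True))
assert not verify_2fa("alice", "123456", CloudTotpBackend(up=False))  # fail closed
assert verify_2fa("alice", "banana", CloudTotpBackend(up=False), fail_open=True)
```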

And that means, if you have 2FA yourself and you want to go and review, “Hey, am I doing it right?”, it’s not just enough to go, “Did I buy the right product? Did I install it correctly?”

You can’t just do a trial login and say, “Yes, it’s fine”… because there are all the ancillary things about how you integrate it into your business, into your technology, into your customer workflow, that could let you down as well.

And there’s nothing worse than something that gives you an inflated sense of security…

…when in fact you don’t have anything at all.


DOUG. Okay, well, as Crypto.com says, they have migrated to a completely new 2FA infrastructure.

[DRAMATIC] And they did this, Paul, out of “an abundance of caution”, wouldn’t you know?

So…


DUCK. I’ve never got on with those words.


DOUG. [LAUGHS]


DUCK. I know that they’re a must-have in modern data breach notifications.

But if someone’s telling me about a data breach they’ve had, I don’t want to think they’re suddenly having “an abundance of caution”, because it implies they’re just doing things in the hope that they might add some security magic.

That’s how it sounds to me.

And in this case, if they go, “Hey, don’t worry, we’ve got a completely new 2FA backend”…

…because they’re not saying how the bypass happened, it’s not clear whether changing the underlying technology will make *any* difference at all.

I would prefer, in a data breach notification, when it talks about what you have done, that you have taken *appropriate* precautions – ones that you know work – and that you aren’t wasting your time doing things that aren’t going to help but sound good.

Not that I feel strongly about it.

What you sound like after a data breach


DOUG. [LAUGHS] And we have some advice, and this is a good one: “If you’re looking at adding 2FA to your own online services, don’t just test the obvious parts of the system.”


DUCK. Yes, as I said (I hope it wasn’t an overreaction to the words “abundance of caution”), “Hey, we had 2FA problems, so we ripped out the whole 2FA system and put in a brand new one.”

That seems like an obvious fix, but that’s like saying, “You know what: my flat [apartment] got burgled, so I’ve had a new front door put in.”

And then later you find out that actually the person climbed in over the balcony, and it’s your balcony doors – that you leave open all the time – where the problem was.

If you have had a data breach of this sort, then: fix what you’ve got; take appropriate precautions to deal with what happened this time; and then go and review everything, including the things that you might not have thought about before.

Because the only thing worse than suffering one data breach is suffering another data breach shortly afterwards.


DOUG. Aaaaargh!


DUCK. If trust in your business was dented before, you might say that it’s had a hole punched in it the second time.


DOUG. And this is a great one: “If you’re in PR or marketing, make sure the whole company practises how it will react if a breach should occur.”

Have a breach response plan, in other words…


DUCK. Yes!

In the old days, we used to say to people: when it comes to building your anti-virus policy (when it was all about malware and self-spreading viruses), you need to think about what you’re going to say if it turns out that *you’re* the company that’s been massively spreading the next LoveBug…


DOUG. [LAUGHS]


DUCK. …and all the fingers are pointing back at you, and you look very bad.

Because that was an extra-super-bad look, when you were the Typhoid Mary: your business was okay, but everyone else was getting hammered by you.

And of course, if that were to happen, even back then, it was much too late to go and think, “I wonder how we should deal with this.”

And it’s even more important now that data breach notifications are both a moral necessity for your customers and a legal necessity from the regulator.

You can’t afford to have time eaten up – when your techies are actually trying to deal with a breach that has just happened – figuring out: who you need to contact; what you’re going to say; who’s going to say it; how you’re going to say it.

So, planning what you would say if there were an attack… is not an admission that you expect an attack to occur.

It’s just being wise, and recognising that preparation is by definition, *only ever something that you can do in advance*.


DOUG. All right, that is: Cryptocoin broker Crypto.com says 2FA bypass led to $35 million theft.

Cryptocoin broker Crypto.com says 2FA bypass led to $35m theft

And, as the sun begins to set on our show for the week, we leave you with the Oh! No! from Reddit user CityGentry, who writes:

“One from a colleague of mine who looks after support for our telephone and conference equipment.

User calls and says they can’t dial into a phone conference because their phone doesn’t have the correct button on it.

They explain they can dial the general conference number, but they can’t enter the five-digit code to connect them to their specific conference call.

So, colleague asks them for the number and for permission to connect as a test.

User agrees; colleague connects without issue.

Colleague is puzzled and asks the user to go through it again step by step with them, saying what buttons they’re pressing as they’re pressing them.

Everything’s OK until the user gets to the five-digit code, which has a nice sequence: 7-8-9-10.”

[AMUSED] You can see where this is going…

“Easy to remember, easy to type. However, the user explains that their phone keypad only goes from 0 to 9, so they don’t have a ’10’ key.”


DUCK. [LAUGHS]


DOUG. “The colleague goes on mute for a few seconds, and once they’ve stopped laughing, they diplomatically suggest that someone may have given them an incorrect code and to try ‘one-zero’, not ‘ten’.”

That is a very diplomatic reply – good on them!


DUCK. That is *very* well done.


DOUG. Yes!


DUCK. But that’s tech support, isn’t it?


DOUG. It is!


DUCK. For anyone who’s ever done it, “Mysteries never cease.”


DOUG. So true!

All’s well that ends well… and if you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or you can hit us up on social media: @NakedSecurity.

That’s our show for today; thanks very much for listening…

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH. …stay secure!

[MUSICAL MODEM].


Apple patches Safari data leak (oh, and a zero-day) – patch now!

Just under two weeks ago, we wrote about an Apple Safari bug that could allow rogue website operators to track you even if they gave every impression of not doing so, and even if you had strict privacy protection turned on.

In fact, that vulnerability, now known as CVE-2022-22594, showed up in Safari because of a bug in WebKit, the “browser rendering engine”, as these things are generally known, on which the Safari app is based.

And although Safari is the only mainstream WebKit-based browser on Apple’s macOS (Edge and Chromium use Google’s Blink engine; Firefox uses Mozilla’s Gecko renderer), that’s not the case on Apple’s mobile devices.

Any browser or browser-like app in the App Store, which is essentially the only source of software for iPhones, iPads, Apple Watches and so on, must be programmed to use WebKit, even if it uses a third-party rendering engine on other platforms.

As a result, macOS users could simply switch browsers to sidestep the bug, while iDevice users could not.

The CVE-2022-22594 bug was annoyingly simple.

It relied on the fact that although your website couldn’t access any of the data stored locally by my website (a consequence of the Same Origin Policy enforced by browsers to keep web data private to the page that created it in the first place), it could list the names of any databases I’d created for my data.

If I chose a database name unique to my own service, to avoid clashing with anyone else, that name would uniquely identify my site, and would therefore leak the user’s browsing history.

But if I chose a random name in order to avoid clashes while not identifying my website, that name would instead act as a kind of “supercookie” that would uniquely identify the user.

Lose/lose.
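
(A conceptual simulation of that lose/lose in Python – this is not browser code, and the names are invented, but it shows why merely listing database names is enough for tracking either way.)

```python
# Databases whose names are well-known and unique to a particular site
KNOWN_SITE_DBS = {
    "bigbank-maindb": "bigbank.example",
    "socialnet-cache": "socialnet.example",
}

def snoop(visible_db_names: list) -> tuple:
    # Recognisable names reveal which sites the victim has visited...
    sites_visited = [KNOWN_SITE_DBS[n] for n in visible_db_names
                     if n in KNOWN_SITE_DBS]
    # ...while any unrecognised random name is a stable per-user
    # identifier that every site can see: a "supercookie".
    supercookies = [n for n in visible_db_names if n not in KNOWN_SITE_DBS]
    return sites_visited, supercookies

visited, ids = snoop(["bigbank-maindb", "db-9f3a1c"])
assert visited == ["bigbank.example"]
assert ids == ["db-9f3a1c"]
```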

Patches out now

The good news is that CVE-2022-22594 has been patched in Apple’s latest security updates, available as follows:

  • iOS 15.3 and iPadOS 15.3. See security bulletin HT213053.
  • macOS Monterey 12.2. See security bulletin HT213054.
  • tvOS 15.3. See security bulletin HT213057.
  • watchOS 8.4. See security bulletin HT213059.
  • Safari 15.3. This update is automatically included in the four listed above, but needs downloading separately for macOS Big Sur and Catalina. See security bulletin HT213058.

Of course, the big-news Safari “supercookie” bug isn’t the only security hole patched in this batch of updates: numerous other yet-more-serious bugs were patched as well.

There aren’t any updates for iOS 12 or iOS 14, the previous two official versions of Apple’s iDevice platform, but there are bulk patches for both Catalina and Big Sur, the previous two macOS versions:

  • macOS Big Sur 11.6.3. See security bulletin HT213055.
  • macOS Catalina Security Update 2022-001. See security bulletin HT213056.

These security updates can be considered critical, given the number of remote code execution (RCE) bugs that could, in theory at least, be used without your consent to install covert surveillance software, implant malware, steal data, secretly jailbreak your device, and more.

Indeed, on iOS 15, iPadOS 15, Monterey 12 and Big Sur 11, one of the RCE bugs that potentially gives kernel-level control – typically the worst sort of RCE bug you can get – is listed with Apple’s typically understated warning that the company “is aware of a report that this issue may have been actively exploited.”

In plain English, we translate those words as follows: “This is a zero-day bug. An in-the-wild exploit is already doing the rounds.” (Simply put: patch right now, because the crooks are onto this one already.)

What to do?

As we just said above, the equation here is really simple: Zero-day kernel hole in the wild –> Patch right now.

The new version numbers that you should look out for are listed above.

Once again: on a Mac, it’s Apple menu > About this Mac > Software Update… and on an iDevice, it’s Settings > General > Software Update.

Don’t delay; do it today!

(And don’t forget that, on older Macs that aren’t running Monterey 12, there are two updates to install: one for the operating system in general, and a second specifically for WebKit and Safari.)


go top