
S3 Ep106: Facial recognition without consent – should it be banned?

WE’RE SCRAPING YOUR FACES FOR YOUR OWN GOOD! (ALLEGEDLY)


With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT


DOUG.  Cryptology, cops hacking back, Apple updates and… card counting!

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today?


DUCK.  I’m very well, thank you, Douglas.

And I’m very excitedly looking forward to the card-counting bit, not least because it’s not just about counting, it’s also about card shuffling.


DOUG.  All right, very good, looking forward to that!

And in our Tech History segment, we’ll speak about something that was not random – it was very calculated.

This week, on 25 October 2001, Windows XP was released to retail.

It was built on the Windows NT codebase, with XP Professional Edition replacing Windows 2000 and XP Home Edition replacing Windows Millennium Edition.

XP Home was the first consumer version of Windows to not be based on MS-DOS or the Windows 95 kernel.

And, on a personal note, I loved it.

I may just be remembering simpler times… I don’t know if it was actually as good as I remember it, but I remember it being better than what we had before.


DUCK.  I agree with that.

I think there are some rose-tinted spectacles you may be wearing there, Doug…


DOUG.  Umm-hmmm.


DUCK.  …but I would have to agree that it was an improvement.


DOUG.  Let us talk a bit about comeuppance, specifically, comeuppance for unwanted facial recognition in France:

Clearview AI image-scraping face recognition service hit with €20m fine in France


DUCK.  Indeed!

Regular listeners will know that we have spoken about a company called Clearview AI many times, because I think it’s fair to say that this company is controversial.

The French regulator very helpfully publishes its rulings, or has published at least its Clearview rulings, in both French and in English.

So, basically, here’s how they describe it:

Clearview AI collects photographs from many websites, including social media. It collects all the photos that are directly accessible on those networks. Thus, the company has collected over 20 billion images worldwide.

Thanks to this collection, the company markets access to its image database in the form of a search engine in which a person can be found using a photograph. The company offers this service to law enforcement authorities.

And the French regulator’s objection, which was echoed last year by at least the UK and the Australian regulator as well, is: “We consider this unlawful in our country. You can’t go scraping people’s images for this commercial purpose without their consent. And you’re also not complying with GDPR rules, data destruction rules, making it easy for them to contact you and say, ‘I want to opt out’.”

So, firstly, it should be opt in if you want to run this.

And having collected the stuff, you should not hang on to it once people have asked to have their data removed.

And the issue in France, Doug, is that last December the regulator said, “Sorry, you can’t do this. Stop scraping data, and get rid of what you’ve got on everybody in France. Thank you very much.”

Apparently, according to the regulator, Clearview AI just didn’t seem to want to comply.


DOUG.  Uh-oh!


DUCK.  So now the French have come back and said, “You don’t seem to want to listen. You don’t seem to understand that this is the law. Now, the same thing applies, but you also have to pay €20 million. Thanks for coming.”


DOUG.  We’ve got some comments brewing on the article… we’d love to hear what you think; you can comment anonymously.

Specifically, the questions we put forth are: “Is Clearview AI really providing a beneficial and socially acceptable service to law enforcement? Or is it casually trampling on our privacy by collecting biometric data unlawfully and commercialising it for investigative tracking purposes without consent?”

All right, let us stick to this theme of comeuppance, and talk about a bit of comeuppance for the DEADBOLT criminals.

This is an interesting story, involving law enforcement and hacking back!

When cops hack back: Dutch police fleece DEADBOLT criminals (legally!)


DUCK.  Hats off to the cops for doing this, even though, as we’ll explain, it was sort-of a one-off thing.

Regular listeners will remember DEADBOLT – it’s come up a couple of times before.

DEADBOLT is the ransomware gang who basically find your Network Attached Storage [NAS] server if you’re a home user or small business…

…and if it isn’t patched against a vulnerability they know how to exploit, they’ll come in, and they just scramble your NAS box.

They figured that’s where all your backups are, that’s where all your big files are, that’s where all your important stuff is.

“Let’s not worry about having to write malware for Windows and malware for Mac, and worrying what version you’ve got. We’ll just go straight in, scramble your files, and then say, ‘Pay us $600’.”

That’s the current going rate: 0.03 bitcoins, if you don’t mind.

So they’re taking that consumer-oriented approach of trying to hit lots of people and asking for a somewhat affordable amount each time.

And I guess if everything you’ve got is backed up on there, then you might feel, “You know what? $600 is a lot of money, but I can just about afford it. I’ll pay up.”

To simplify matters (and, as we’ve grudgingly admitted, this is the clever part of this particular ransomware)… basically, you tell the crooks you’re interested by sending them a message via the Bitcoin blockchain.

Basically, you pay them the money to a specified, unique-to-you Bitcoin address.

When they get the payment message, they send back a payment of $0 that includes a comment that is the decryption key.

So that’s the *only* interaction they need with you.

They don’t need to use email, and they don’t have to run any dark web servers.

However, the Dutch cops figured the crooks had made a protocol-related blunder!

As soon as your transaction hit the Bitcoin ecosystem, looking for someone to mine it, their script would send the decryption key.

And it turns out that although you cannot double-spend bitcoins (otherwise the system would fall apart), you can put in two transactions at the same time, one with a high transaction fee and one with a very low or a zero transaction fee.

And guess which one the bitcoin miners and ultimately the bitcoin blockchain will accept?

And that’s what the cops did…


DOUG.  [LAUGHS] Very clever, I like it!


DUCK.  They’d stick in a payment with a zero transaction fee, which could take days to get processed.

And then, as soon as they got the decryption key back from the crooks (they had, I think, 155 users that they sort of clubbed together)… as soon as they got the decryption key back, they did a double-spend transaction.

“I want to spend the same Bitcoin again, but this time we’re going to pay it back to ourselves. And now we’ll offer a sensible transaction fee.”

So that transaction was the one that ultimately actually got confirmed and locked into the blockchain…

…and the other one just got ignored and thrown away… [LAUGHS] as always, shouldn’t laugh!
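The fee-based trick Duck describes can be sketched in a few lines of Python. This is a toy model, not real Bitcoin code: the transaction fields and the one-line “miner” are invented purely to illustrate why, of two conflicting transactions spending the same coin, the one with the higher fee wins.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    spends: str    # the coin (UTXO) this transaction spends
    pays_to: str   # recipient address
    fee: float     # miner fee, in BTC

def confirm(mempool):
    """Of all transactions competing to spend the same coin, a miner
    confirms the one offering the highest fee; the rest are discarded."""
    best = {}
    for tx in mempool:
        if tx.spends not in best or tx.fee > best[tx.spends].fee:
            best[tx.spends] = tx
    return set(best.values())

# The police broadcast a zero-fee payment to the crooks' address, which
# could sit unconfirmed for days; the crooks' script saw it and sent
# back the decryption key straight away.
to_crooks = Tx(spends="police-coin", pays_to="deadbolt-address", fee=0.0)

# Key in hand, the police broadcast a conflicting transaction spending
# the very same coin back to themselves, this time with a sensible fee:
back_to_police = Tx(spends="police-coin", pays_to="police-address", fee=0.0005)

confirmed = confirm([to_crooks, back_to_police])
print(back_to_police in confirmed, to_crooks in confirmed)   # True False
```

The crooks’ mistake, in these terms, was treating the mere appearance of `to_crooks` in the mempool as if it were a confirmed payment.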


DOUG.  [LAUGHS]


DUCK.  So, basically, the crooks paid out too soon.

And I guess it’s not *treachery* if you’re law enforcement, and you’re doing it in a legally warranted way… it’s basically a *trap*.

And the crooks walked into it.

As I mentioned at the beginning, this can only work once because, of course, the crooks figured, “Oh, dear, we shouldn’t do it that way. Let’s change the protocol. Let’s wait for the transaction to be confirmed onto the blockchain first, and then once we know that nobody can come along with a transaction that will trump it later, only then will we send out the decryption key.”


But the crooks did get flat-footed to the tune of 155 decryption keys from victims in 13 different countries who called on the Dutch police for help.

So, chapeau [French cycling slang for a “hat doff”], as they say!


DOUG.  That’s great… that’s two positive stories in a row.

And let’s keep the positive vibes rolling with this next story.

It’s about women in cryptology.

They have been honoured by the US Postal Service, which is celebrating World War 2 code breakers.

Tell us all about this – this is a very interesting story, Paul:

Women in Cryptology – USPS celebrates WW2 codebreakers


DUCK.  Yes, it was one of those nice things to write about on Naked Security: Women in cryptology – United States Postal Service celebrates World War 2 codebreakers.

Now, we’ve covered Bletchley Park code breaking, which is the UK’s cryptographic efforts during the Second World War, mainly to try and crack Nazi ciphers such as the well known Enigma machine.

However, as you can imagine, the US faced a huge problem from the Pacific theatre of war, trying to deal with Japanese ciphers, and in particular, one cipher known as PURPLE.

Unlike the Nazis’ Enigma, this was not a commercial device that could be bought.

It was actually a homegrown machine that came out of the military, based on telephone switching relays, which, if you think about it, are kind of like “base ten” switches.

So, in the same way that Bletchley Park in the UK secretly employed more than 10,000 people… I didn’t realise this, but it turned out that there were well over 10,000 women recruited into cryptology, into cryptographic cracking, in the US to try and deal with Japanese ciphers during the war.

By all accounts, they were extremely successful.

There was a cryptographic breakthrough made in the early 1940s by one of the US cryptologists called Genevieve Grotjan, and apparently this led to spectacular successes in reading Japanese secrets.

And I’ll just quote from the US Postal Service, from their stamp series:

They deciphered Japanese fleet communications, helped prevent German U-boats from sinking vital cargo ships, and worked to break the encryption systems that revealed Japanese shipping routes and diplomatic messages.

You can imagine that gives you very, very, usable intelligence indeed… that you have to assume helped to shorten the war.

Fortunately, even though the Japanese had been warned (apparently by the Nazis) that their cipher was either breakable or had already been broken, they refused to believe it, and they carried on using PURPLE throughout the war.

And the women cryptologists of the time definitely made hay secretly while the sun shone.

Unfortunately, just as happened in the UK with all the wartime heroes (again, most of them women) at Bletchley Park…

…after the war, they were sworn to secrecy.

So it was many decades until they got any recognition at all, let alone what you might call the hero’s welcome that they essentially deserved when peace broke out in 1945.


DOUG.  Wow, that is a cool story.

And unfortunate that it took that long to get the recognition, but great that they finally got it.

And I urge anyone who is listening to this to head over to the site to read that.

It’s called: Women in cryptology – USPS celebrates World War 2 codebreakers.

Very good piece!


DUCK.  By the way, Doug, on the stamp series that you can buy (the commemorative series, where you get the stamps on a full sheet)… around the stamps, the USPS has actually put a little cryptographic puzzle, which we’ve repeated in the article.

It is not as difficult as Enigma or PURPLE, so you can actually do it fairly easily with pen and paper, but it’s a good bit of commemorative fun.

So come on over and have a try if you like.

We’ve also put a link to an article that we wrote a couple of years ago (What 2000 years of cryptography can teach us) in which you will find hints that will help you solve the USPS cryptographic puzzle.

Good bit of fun to go with your commemoration!


DOUG.  All right, so let’s stick with randomness and cryptography a little bit, and ask a question that maybe some have wondered before.

How random are those automatic card shufflers you might see at a casino?

Serious Security: How randomly (or not) can you shuffle cards?


DUCK.  Yes, another fascinating story that I picked up thanks to cryptography guru Bruce Schneier, who wrote about it on his own blog, and he entitled his article On the randomness of automatic card shufflers.

The paper we’re talking about goes back, I think, to 2013, and the work that was done, I think, goes back to the early 2000s.

But what fascinated me about the story, and made me want to share it, is that it has incredible teachable moments for people who are currently involved in programming, whether or not in the field of cryptography.

And, even more importantly, in testing and quality assurance.

Because, unlike the Japanese, who refused to believe that their PURPLE cipher might not be working properly, this is a story about a company that made automatic card shuffling machines but figured, “Are they really good enough?”

Or could someone actually figure out how they work, and get an advantage from the fact that they aren’t random enough?

And so they went out of their way to hire a trio of mathematicians from California, one of whom is also an accomplished magician…

…and they said, “We built this machine. We think it’s random enough, with one shuffle of the cards.”

Their own engineers had gone out of their way to devise tests that they thought would show whether the machine was random enough for card shuffling purposes, but they wanted a second opinion, and so they actually went out and got one.

And these mathematicians looked at how the machine worked, and were able to come up, believe it or not, with what’s known as a closed formula.

They analysed it completely: how the thing would behave, and therefore what statistical inferences they could make about how the cards would come out.

They discovered that although the shuffled cards would pass a significant battery of good randomness tests, there were still so many unbroken sequences left in the deck after shuffling that they could predict the next card twice as well as chance.

And they were able to show the reasoning by which they were able to come up with their mental algorithm for guessing the next card twice as well as they should…

…so not only did they do it reliably and repeatably, they actually had the mathematics to show formulaically why that was the case.
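To get a feel for why a single pass through a shuffler can leave exploitable structure, here is an illustrative Python simulation. It is our own sketch, not the machine analysed in the paper: one Gilbert-Shannon-Reeds riffle shuffle, plus a simple “guess the successor of the last card seen” strategy that beats blind guessing by a wide margin because long runs of the original order survive.

```python
import random

def riffle(deck, rng):
    """One Gilbert-Shannon-Reeds riffle: cut the deck binomially, then
    drop cards from each half with probability proportional to the
    number of cards that half has left."""
    cut = sum(rng.random() < 0.5 for _ in deck)
    a, b = deck[:cut], deck[cut:]
    out = []
    while a or b:
        if rng.random() < len(a) / (len(a) + len(b)):
            out.append(a.pop(0))
        else:
            out.append(b.pop(0))
    return out

def guess_through(shuffled, n=52):
    """Guess each next card before it's revealed: try the successor (in
    original order) of the card just seen, else the lowest unseen card."""
    seen, correct, last = set(), 0, None
    for card in shuffled:
        if last is not None and last + 1 < n and last + 1 not in seen:
            guess = last + 1
        else:
            guess = min(c for c in range(n) if c not in seen)
        correct += (guess == card)
        seen.add(card)
        last = card
    return correct

rng = random.Random(1)
trials = 500
hits = sum(guess_through(riffle(list(range(52)), rng)) for _ in range(trials)) / trials
chance = sum(1 / k for k in range(1, 53))   # blind guessing: about 4.5 per deck
print(f"strategy averages {hits:.1f} correct cards per deck vs {chance:.1f} by chance")
```

A frequency test alone would call the riffled deck fine, because every card still appears exactly once; it is the surviving runs that the guesser exploits.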

And the story is perhaps most famous for the earthy but entirely appropriate response from the president of the company that hired them.

He is supposed to have said:

We are not pleased with your conclusions, but we believe them, and that is what we hired you for.

In other words, he’s saying, “I didn’t pay to be made happy. I paid to find out the facts and to act upon them.”

If only more people did that when it came to devising tests for their software!

Because it’s easy to create a set of tests that your product will pass and where if it fails, you know something has definitely gone wrong.

But it’s surprisingly difficult to come up with a set of tests that it is *worth your product passing*.
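As a toy illustration of that distinction (our own example, not one from the paper): a “shuffle” that merely rotates the deck sails through a naive frequency test, while a test that is actually worth passing, one that counts surviving original-order adjacencies, catches it immediately. The functions and threshold below are made up for demonstration.

```python
def bad_shuffle(deck):
    """Rotate the deck by one place: every card still appears exactly
    once, so any test that only checks card frequencies is satisfied."""
    return deck[1:] + deck[:1]

def frequency_test(shuffled, n=52):
    """Weak test: does every card appear exactly once?"""
    return sorted(shuffled) == list(range(n))

def adjacency_test(shuffled, limit=10):
    """Stronger test: count original-order adjacencies that survive.
    A genuinely random permutation preserves about one on average."""
    survived = sum(1 for x, y in zip(shuffled, shuffled[1:]) if y == x + 1)
    return survived <= limit

deck = bad_shuffle(list(range(52)))
print(frequency_test(deck), adjacency_test(deck))   # True False
```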

And that is what this company did, by hiring in the mathematicians to look into how the card shuffling machine worked.

Quite a lot of life lessons in there, Doug!


DOUG.  It’s a fun story and very interesting.

Now, every week we generally talk about some sort of Apple update, but not this week.

No, no!

This week we’ve got for you… an Apple *megaupdate*:

Apple megaupdate: Ventura out, iOS and iPad kernel zero-day – act now!


DUCK.  Unfortunately, if you have an iPhone or an iPad, the update covers a zero-day currently being actively exploited, which, as always, smells of jailbreak/complete spyware takeover.

And as always, and perhaps understandably, Apple is very cagey about exactly what the zero-day is, what it’s being used for, and, just as interestingly, who is using it.

So if you’ve got an iPhone or an iPad, this is *definitely* one for you.

And confusingly, Doug…

I’d better explain this, because it actually wasn’t obvious at first… and thanks to some reader help, thanks Stefaan from Belgium, who has been sending me screenshots and explaining exactly what happened to him when he updated his iPad!

The update for iPhones and iPads said, “Hey, you’ve got iOS 16.1, and iPadOS 16”. (Because iPadOS version 16 was delayed.)

And that’s what the security bulletin says.

When you install the update, the basic About screen just says “iPadOS 16”.

But if you zoom into the main version screen, then both versions actually come out as “iOS/iPadOS 16.1”.

So that’s the *upgrade* to version 16, plus this vital zero-day fix.

That’s the hard and confusing part… the rest is just that there are lots of fixes for other platforms as well.

Except that, because Ventura just came out (that’s macOS 13, with 112 CVE-numbered patches), most people won’t have had the beta, so for them this will be an *upgrade* and an *update* at the same time…

Because macOS 13 came out, that leaves macOS 10 Catalina three versions behind.

And it does indeed look as though Apple is only now supporting previous and pre-previous.

So there *are* updates for Big Sur and Monterey, that’s macOS 11 and macOS 12, but Catalina is notoriously absent, Doug.

And as annoyingly as always, what we cannot tell you…

Does that mean it simply was immune to all these fixes?

Does that mean it actually needs at least some of the fixes, but they just haven’t come out yet?

Or does that mean it’s fallen off the edge of the world and you will never get an update again, whether it needs one or not?

We don’t know.


DOUG.  I feel winded, and I didn’t even do any of the heavy lifting in that story, so thank you for that… that’s a lot.


DUCK.  And you don’t even have an iPhone.


DOUG.  Exactly!

I have got an iPad…


DUCK.  Oh, do you?


DOUG.  …so I’ve got to go and make sure I get it up to date.

And that leads us into our reader question of the day, on the Apple story.

Anonymous Commenter asks:

Will the 15.7 update for iPads resolve this, or do I have to update to 16? I am waiting until the minor nuisance bugs in 16 are resolved before updating.


DUCK.  That’s the second level of confusion, if you like, caused by this.

Now, my understanding is, when iPadOS 15.7 came out, that was exactly the same time as iOS 15.7.

And it was, what, just over a month ago, I think?

So that’s an old-time security update.

And what we now don’t know is…

Is there an iOS/iPadOS 15.7.1 still in the wings that hasn’t come out yet, fixing security holes that do exist in the previous version of operating systems for those platforms?

Or is your update path for security updates for iOS and iPadOS now to go down the version 16 route?

I just don’t know, and I don’t know how you tell.

So it’s looking as though (and I’m sorry if I sound confused, Doug, because I am!)…

…it’s looking as though the *update* and the *upgrade* path for users of iOS and iPadOS 15.7 is to shift to version flavour 16.

And at this current time, that means 16.1.

That would be my recommendation, because then at least you know that you have the latest and greatest build, with the latest and greatest security fixes.

So that’s the long answer.

The short answer is, Doug, “Don’t know.”


DOUG.  Clear as mud.


DUCK.  Yes.

Well, perhaps not that clear… [LAUGHTER]

If you leave mud long enough, eventually the bits settle to the bottom and there’s clear water on the top.

So maybe that’s what you have to do: wait and see, or just bite the bullet and go for 16.1.

They do make it easy, don’t they? [LAUGHS]


DOUG.  All right, we will keep an eye on that, because that could change a little bit between now and next time.

Thank you very much for sending that comment in, Anonymous Commenter.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, and you can hit us up on social @NakedSecurity.

That’s our show for today, thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!


Online ticketing company “See” pwned for 2.5 years by attackers

See Tickets is a major global player in the online event ticketing business: they’ll sell you tickets to festivals, theatre shows, concerts, clubs, gigs and much more.

The company has just admitted to a major data breach that shares at least one characteristic with the amplifiers favoured by notorious rock performers Spinal Tap: “the numbers all go to 11, right across the board.”

According to the email template that See Tickets used to generate the mailshot that went to customers (thanks to Phil Muncaster of Infosecurity Magazine for a link to the Montana Department of Justice website for an official copy), the breach, its discovery, its investigation and remediation (which are still not finished, so this one might yet go all the way to 12) unfolded as follows:

  • 2019-06-25. By this date at the latest, cybercriminals had apparently implanted data-stealing malware on event checkout pages run by the company. (Data at risk included: name, address, zip code, payment card number, card expiry date, and CVV number.)
  • 2021-04. See Tickets “was alerted to activity indicating potential unauthorized access”.
  • 2021-04. Investigation launched, involving a cyberforensics firm.
  • 2022-01-08. Unauthorised activity is finally shut down.
  • 2022-09-12. See Tickets finally concludes that the attack “may have resulted in unauthorised access” to payment card information.
  • 2022-10. (Investigation ongoing.) See Tickets says “we are not certain your information was affected”, but notifies customers.

Simply put, the breach lasted more than two-and-a-half years before it was spotted at all, but not by See Tickets itself.

The breach then continued for nine more months before it was properly detected and remediated, and the attackers kicked out.

The company then waited another eight months before accepting that data “may” have been stolen.

See Tickets then waited one more month before notifying customers, admitting that it still didn’t know how many customers had lost data in the breach.

Even now, well over three years after the earliest date at which the attackers are known to have been in See Tickets’ systems (though the groundwork for the attack may have predated this, for all we know), the company still hasn’t concluded its investigation, so there may yet be more bad news to come.

What next?

The See Tickets notification email includes some advice, but it’s primarily aimed at telling you what you can do for yourself to improve your cybersecurity in general.

As far as telling you what the company itself has done to make up for this long-running breach of customer trust and data, all it has said is, “We have taken steps to deploy additional safeguards onto our systems, including by further strengthening our security monitoring, authentication, and coding.”

Given that See Tickets was alerted to the breach by someone else in the first place, after failing to notice it for two-and-a-half years, you can’t imagine it would take very much for the company to be able to lay claim to “strengthening” its security monitoring, but apparently it has.

As for the advice See Tickets handed out to its customers, this boils down to two things: check your financial statements regularly, and watch out for phishing emails that try to trick you into handing over personal information.

These are good suggestions, of course, but protecting yourself from phishing would have made no difference in this case, given that any personal data stolen was taken directly from legitimate web pages that careful customers would have made sure they visited in the first place.

What to do?

Don’t be a cybersecurity slowcoach: make sure your own threat detection-and-response procedures keep pace with the TTPs (tools, techniques and procedures) of the cyberunderworld.

The crooks are continually evolving the tricks they use, which go way beyond the old-school technique of simply writing new malware.

Indeed, many compromises these days hardly use malware at all (or don’t use it at all), taking the form of what are known as human-led attacks, in which the criminals try to rely as far as they can on system administration tools that are already available on your network.

The crooks have a wide range of TTPs not merely for running malware code, but also for:

  • Breaking in to start with.
  • Tiptoeing round the network once they’re in.
  • Going undetected for as long as possible.
  • Mapping out your network and your naming conventions as well as you know them yourself.
  • Setting up as many sneaky ways as they can of getting back in later if you kick them out.

This sort of attacker is generally known as an active adversary, meaning that they are often just as hands-on as your own sysadmins, and able to blend in with legitimate operations as much as they can.

Just removing any malware the crooks may have implanted is not enough.

You also need to review any configuration or operational changes they may have made, in case they’ve opened up a hidden backdoor through which they (or any other crooks to whom they sell on their knowledge later) may be able to wander back in at their leisure.

Remember, as we like to say on the Naked Security podcast, even though we know it’s a cliche, that cybersecurity is a journey, not a destination.

If you don’t have enough time or expertise to keep pressing ahead with that journey on your own, don’t be afraid to reach out for help with what’s known as MDR (managed detection and response), where you team up with a trusted group of cybersecurity experts to help to keep your own data breach dials well below a Spinal Tap-like “11”.


Clearview AI image-scraping face recognition service hit with €20m fine in France

The Clearview AI saga continues!

If you haven’t heard of this company before, here’s a very clear and concise recap from the French privacy regulator, CNIL (Commission Nationale de l’Informatique et des Libertés), which has very handily been publishing its findings and rulings in this long-running story in both French and English:

Clearview AI collects photographs from many websites, including social media. It collects all the photographs that are directly accessible on these networks (i.e. that can be viewed without logging in to an account). Images are also extracted from videos available online on all platforms.

Thus, the company has collected over 20 billion images worldwide.

Thanks to this collection, the company markets access to its image database in the form of a search engine in which a person can be searched using a photograph. The company offers this service to law enforcement authorities in order to identify perpetrators or victims of crime.

Facial recognition technology is used to query the search engine and find a person based on their photograph. In order to do so, the company builds a “biometric template”, i.e. a digital representation of a person’s physical characteristics (the face in this case). These biometric data are particularly sensitive, especially because they are linked to our physical identity (what we are) and enable us to identify ourselves in a unique way.

The vast majority of people whose images are collected into the search engine are unaware of this feature.
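To make the idea of a “biometric template” concrete, here is a deliberately simplified Python sketch of how an embedding-based search of this general sort works in principle. The three-number templates and names are entirely made up for illustration; a real system would derive high-dimensional vectors from a neural network applied to the scraped photos.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity of two templates: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Each identity is stored as a numeric template; these values are
# purely illustrative stand-ins for real embeddings.
database = {
    "person-A": [0.9, 0.1, 0.3],
    "person-B": [0.2, 0.8, 0.5],
    "person-C": [0.4, 0.4, 0.9],
}

def search(query_template):
    """Return the identity whose stored template best matches the query."""
    return max(database, key=lambda name: cosine_similarity(database[name], query_template))

print(search([0.85, 0.15, 0.25]))   # closest match is person-A
```

The privacy objection follows directly from this design: once your template is in `database`, any photo of you becomes a usable query, whether you consented or not.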

Clearview AI has variously attracted the ire of companies, privacy organisations and regulators over the last few years, including getting hit with:

  • Complaints and class action lawsuits filed in Illinois, Vermont, New York and California.
  • A legal challenge from the American Civil Liberties Union (ACLU).
  • Cease-and-desist orders from Facebook, Google and YouTube, who deemed that Clearview’s scraping activities violated their terms and conditions.
  • Crackdown action and fines in Australia and the UK.
  • A ruling finding its operation unlawful in 2021, by the abovementioned French regulator.

No legitimate interest

In December 2021, CNIL stated, quite bluntly, that:

[T]his company does not obtain the consent of the persons concerned to collect and use their photographs to supply its software.

Clearview AI does not have a legitimate interest in collecting and using this data either, particularly given the intrusive and massive nature of the process, which makes it possible to retrieve the images present on the Internet of several tens of millions of Internet users in France. These people, whose photographs or videos are accessible on various websites, including social media, do not reasonably expect their images to be processed by the company to supply a facial recognition system that could be used by States for law enforcement purposes.

The seriousness of this breach led the CNIL chair to order Clearview AI to cease, for lack of a legal basis, the collection and use of data from people on French territory, in the context of the operation of the facial recognition software it markets.

Furthermore, CNIL formed the opinion that Clearview AI didn’t seem to care much about complying with European rules on collecting and handling personal data:

The complaints received by the CNIL revealed the difficulties encountered by complainants in exercising their rights with Clearview AI.

On the one hand, the company does not facilitate the exercise of the data subject’s right of access:

  • by limiting the exercise of this right to data collected during the twelve months preceding the request;
  • by restricting the exercise of this right to twice a year, without justification;
  • by only responding to certain requests after an excessive number of requests from the same person.

On the other hand, the company does not respond effectively to requests for access and erasure. It provides partial responses or does not respond at all to requests.

CNIL even published an infographic that sums up its decision, and its decision making process:

The Australian and UK Information Commissioners came to similar conclusions, with similar outcomes for Clearview AI: your data scraping is illegal in our jurisdictions; you must stop doing it here.

However, as we said back in May 2022, when the UK reported that it would be fining Clearview AI about £7,500,000 (down from the £17m fine first proposed) and ordering the company not to collect data on UK residents any more, “how this will be policed, let alone enforced, is unclear.”

We may be about to find out how the company will be policed in the future, with CNIL losing patience with Clearview AI for not complying with its ruling to stop collecting the biometric data of French people…

…and announcing a fine of €20,000,000:

Following a formal notice which remained unaddressed, the CNIL imposed a penalty of 20 million Euros and ordered CLEARVIEW AI to stop collecting and using data on individuals in France without a legal basis and to delete the data already collected.

What next?

As we’ve written before, Clearview AI seems not only to be happy to ignore regulatory rulings issued against it, but also to expect people to feel sorry for it at the same time, and indeed to be on its side for providing what it thinks is a vital service to society.

In the UK ruling, where the regulator took a similar line to CNIL in France, the company was told that its behaviour was unlawful, unwanted and must stop forthwith.

But reports at the time suggested that far from showing any humility, Clearview CEO Hoan Ton-That reacted with an opening sentiment that would not be out of place in a tragic lovesong:

It breaks my heart that Clearview AI has been unable to assist when receiving urgent requests from UK law enforcement agencies seeking to use this technology to investigate cases of severe sexual abuse of children in the UK.

As we suggested back in May 2022, the company may find its plentiful opponents replying with song lyrics of their own:

Cry me a river. (Don’t act like you don’t know it.)

What do you think?

Is Clearview AI really providing a beneficial and socially acceptable service to law enforcement?

Or is it casually trampling on our privacy and our presumption of innocence by collecting biometric data unlawfully, and commercialising it for investigative tracking purposes without consent (and, apparently, without limit)?

Let us know in the comments below… you may remain anonymous.


Apple megaupdate: Ventura out, iOS and iPad kernel zero-day – act now!

Apple’s latest collection of security updates has arrived, including the just-launched macOS 13 Ventura, which was accompanied by its own security bulletin listing a whopping 112 CVE-numbered security holes.

Of those, we counted 27 arbitrary code execution holes, of which 12 allow rogue code to be injected right into the kernel itself, and one allows untrusted code to be run with system privileges.

On top of that, there are two elevation-of-privilege (EoP) bugs listed for Ventura that we assume could be used in conjunction with some, many or all of the remaining 14 non-system code execution bugs to form an attack chain that turns a user-level code execution exploit into a system-level one.

iPhone and iPad at real-life risk

That’s not the most critical part of this story however.

The “clear-and-present danger” prize goes to iOS and iPadOS, which get updated to versions 16.1 and 16 respectively, where one of the listed security vulnerabilities allows kernel code execution from any app, and is already being actively exploited.

In short, iPhones and iPads need patching right away because of a kernel zero-day.

Apple hasn’t said which cybercrime group or spyware company is abusing this bug, dubbed CVE-2022-42827, but given the high price that working iPhone zero-days command in the cyberunderworld, we assume that whoever is in possession of this exploit [a] knows how to make it work effectively and [b] is unlikely to draw attention to it themselves, in order to keep existing victims in the dark as much as possible.

Apple has trotted out its usual boilerplate remark to the effect that the company “is aware of a report that this issue may have been actively exploited”, and that’s all.

As a result, we can’t offer you any advice on how to check for signs of attack on your own device – we’re not aware of any so-called IoCs (indicators of compromise), such as weird files in your backup, unexpected configuration changes, or unusual logfile entries that you might be able to search for.

Our only recommendation is therefore our usual urging to patch early/patch often, by heading to Settings > General > Software Update and choosing Download and Install if you haven’t received the fixes already.

Why wait for your device to find and suggest the updates itself when you can jump to the head of the queue and fetch them right away?

Catalina dropped?

As you might have assumed, given that the release of Ventura takes macOS to version 13, three-versions-ago macOS 10 Catalina doesn’t appear in the list this time.

Apple typically provides security updates only for the previous and pre-previous versions of macOS, and that’s how the patches played out here, with patches to take macOS 11 Big Sur to version 11.7.1, and macOS 12 Monterey to version 12.6.1.

However, those versions also get a separate update listed as Safari 16.1, which fixes several dangerous-sounding bugs in Safari and its underlying software library WebKit.

Remember that WebKit is used not only by Safari but also by any other apps that rely on Apple’s underlying code to display any sort of HTML-based content, including help systems, About screens, and built-in “minibrowsers”, commonly seen in messaging apps that offer an option to view HTML files, pages or messages.

Apple watchOS and tvOS also get numerous fixes, and their version numbers update to watchOS 9.1 and tvOS 16.1 respectively.

What to do?

The good news is that only early adopters and software developers are likely to be running Ventura already, as part of Apple’s Beta ecosystem.

Those users should update as soon as possible, without waiting for a system reminder or for auto-updating to kick in, given the huge number of bugs fixed.

If you aren’t on Ventura yet but intend to upgrade right away, your first experience of the new version will already include the 112 CVE patches mentioned above, so the version upgrade itself delivers the needed security fixes.

If you’re planning on sticking with the previous or pre-previous macOS version for a while yet (or if, like us, you have an older Mac that can’t be upgraded), don’t forget that you need two updates: one specific to Big Sur or Monterey, and the other an update for Safari that’s the same for both operating system flavours.

To summarise:

  • On iOS or iPadOS, urgently use Settings > General > Software Update
  • On macOS, use Apple menu > About this Mac > Software Update…
  • macOS 13 Ventura Beta users should update immediately to the full release.
  • Big Sur and Monterey users who upgrade to Ventura get the macOS 13 security fixes at the same time.
  • macOS 11 Big Sur goes to 11.7.1, and needs Safari 16.1 as well.
  • macOS 12 Monterey goes to 12.6.1, and needs Safari 16.1 as well.
  • watchOS goes to 9.1.
  • tvOS goes to 16.1.

Note that macOS 10 Catalina gets no updates, but we assume that’s because it’s the end of the road for Catalina users, not because it’s still supported but was immune to any of the bugs found in later versions.

If we’re right, Catalina users who can’t upgrade their Macs are stuck with running increasingly outdated Apple software forever, or switching to an alternative operating system such as a Linux distro that is still supported on their device.

Quick links to Apple’s security bulletins:

  • APPLE-SA-2022-10-24-1: HT213489 for iOS 16.1 and iPadOS 16
  • APPLE-SA-2022-10-24-2: HT213488 for macOS Ventura 13
  • APPLE-SA-2022-10-24-3: HT213494 for macOS Monterey 12.6.1
  • APPLE-SA-2022-10-24-4: HT213493 for macOS Big Sur 11.7.1
  • APPLE-SA-2022-10-24-5: HT213491 for watchOS 9.1
  • APPLE-SA-2022-10-24-6: HT213492 for tvOS 16.1
  • APPLE-SA-2022-10-24-7: HT213495 for Safari 16.1

Serious Security: You can’t beat the house at Blackjack – or can you?

Cryptoguru Bruce Schneier (where crypto means cryptography, not the other thing!) just published an intriguing note on his blog entitled On the Randomness of Automatic Card Shufflers.

If you’ve ever been to a casino, at least one in Nevada, you’ll know that the blackjack tables don’t take chances with customers known in the trade as card counters.

That term is used to refer to players who have trained their memories to the point that they can keep close track of the cards played so far in a hand, which gives them a theoretical advantage over the house when predicting whether to stand or hit as play progresses.

Card counters can acquire an advantage even if all they do is keep track of the ratio of 10-cards (Ten, Jack, Queen and King) to non-10s left in the dealer’s shoe.

For example, if the dealer is sitting with an Ace, but an above-average number of 10-value cards have already been used up, then the dealer has a below-average chance of making a blackjack (21 points with two cards, i.e. Ace and one of 10-J-Q-K) and winning at once, and an above-average chance of going bust before reaching the stopping point of 17 and above.
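To see why the tens-to-non-tens ratio matters, here’s a minimal Python sketch of the dealer’s blackjack chance when showing an Ace. (The depletion numbers below are hypothetical, purely for illustration.)

```python
from fractions import Fraction

def blackjack_chance(tens_left, cards_left):
    """Chance the dealer's hole card completes a blackjack when an Ace is
    showing: that happens exactly when the hole card is a 10-value card
    (Ten, Jack, Queen or King)."""
    return Fraction(tens_left, cards_left)

# A fresh six-pack shoe holds 6 * 16 = 96 ten-value cards out of 312.
# With just the dealer's Ace removed, 311 cards remain:
fresh = blackjack_chance(96, 311)

# Hypothetical depleted shoe: 102 more cards have been dealt, and an
# above-average 40 of them were tens, leaving 56 tens among 209 cards:
depleted = blackjack_chance(96 - 40, 311 - 102)

print(float(fresh))     # about 0.309
print(float(depleted))  # about 0.268
```

The counter who tracks this shift knows when the dealer’s blackjack threat is weaker than usual, which is exactly the sort of edge the shoe-and-shuffle countermeasures below are designed to blunt.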

If you can balance the probabilities in your head in real time, then you may be able to modify your bets accordingly and come out ahead in the long run.

Don’t actually try this, at least in Nevada: the casino is likely to catch you out pretty quickly, because your pattern of play will diverge noticeably from the choices that even the best-informed player would make without counting cards. You might not end up in court, but you will almost certainly get escorted off the premises, and never let back in again.

Levelling the odds

To reduce the advantage that card counters enjoy (those who haven’t been caught yet, at least), the casinos typically:

  • Deal hands from a shoe loaded with six packs (decks) of 52 cards. This means that each hand dealt out skews the remaining distribution of cards less than if a single pack were used.
  • Shuffle the entire shoe of 312 cards (six packs) before every hand. To save time and to remove suspicion from the dealer, a pseudorandom electromechanical machine shuffles the cards right on the table, in front of all the players.

That immediately raises the question posed by Schneier: just how well-shuffled are the cards when they emerge from the machine?

Notably, with six new packs of cards, which arrive in a predictable order (e.g. Ace to King of Hearts, Ace to King of Clubs, King to Ace of Diamonds, King to Ace of Spades), how much partial ordering is left after the machine has done its work?

Could you “guess” the next card out of the shoe better than chance suggests?

A fully electronic randomiser is limited in its complexity mainly by the speed of the CPU that it uses, which is typically measured in hundreds of millions or billions of arithmetical operations a second.

But an electromechanical card shuffler literally has to move the cards around in real life.

There’s obviously a limit to how quickly it can perform pack splits, card swaps and interleaving operations before the speed of the mechanism starts to damage the cards, which means that there’s a limit to how much randomness (or, more precisely, pseudorandomness) the machine can introduce before it’s time to play the next hand.

Shuffle for too short a time, and the casino might actually make things easier for card counters, if there’s a known bias in the distribution of the cards right from the start.

Shuffle for too long, and play will be too slow, so that players will get bored and wander off, something that casinos desperately try to avoid.

Schneier’s blog post links to a fascinating piece by the BBC that describes how a mathematician/magician called Persi Diaconis of Stanford University, together with Jason Fulman and Susan Holmes, conducted a formal investigation into this very issue earlier this century, in a paper entitled simply: ANALYSIS OF CASINO SHELF SHUFFLING MACHINES.

Levels of complexity

Clearly, there are some shuffling techniques that don’t mix the cards up much at all, such as simply cutting the pack into two parts and moving the bottom part to the top.

Other techniques result in (or feel as though they should result in) better mixing, for example the riffle shuffle, where you split the pack roughly in half, hold one half in each hand, and “flip” the two halves together, interleaving them in a pseudorandom way that alternates between taking a few cards from one side, then a few cards from the other.

The idea is that if you riffle-shuffle the pack several times, you perform a pseudorandom sequence of cuts each time you divide the pack before each riffle, mixed together with a pseudorandomly variable sequence of pseudorandom interleaving operations involving an N-from-the-left-then-M-from-the-right process.
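That cut-then-interleave process can be sketched in code. This is a minimal model along the lines of the well-known Gilbert-Shannon-Reeds riffle (the binomial cut and size-proportional drop probabilities are modelling assumptions, not a claim about any particular human shuffler):

```python
import random

def riffle_shuffle(deck, rng=random):
    """One riffle shuffle: cut the pack near the middle (binomially
    distributed cut point), then drop cards from each half with
    probability proportional to the cards remaining in that half."""
    cut = sum(rng.random() < 0.5 for _ in deck)   # pseudorandom cut point
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        # Pick a half with probability proportional to its current size,
        # which naturally produces runs of a few cards from each side.
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

deck = list(range(52))
for _ in range(7):     # famously, about seven riffles mix 52 cards well
    deck = riffle_shuffle(deck)
```

The “seven shuffles suffice” figure comes from Bayer and Diaconis’s earlier analysis of exactly this model, which is one reason Diaconis was a natural choice to examine the shelf shuffler.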

Intriguingly, however, when skilled human shufflers are involved, none of those assumptions of unpredictability are safe.

Dextrous magicians and crooked dealers (Diaconis himself is the former, but not the latter) can perform what are known as faro shuffles, or perfect shuffles, where they do both of the following things every time they riffle the pack:

  • Split the cards precisely in two, thus getting exactly 26 cards in each hand.
  • Interleave them perfectly, flipping down exactly one card at a time alternately from each hand, every single time.

Diaconis himself can do perfect shuffles (including the rare skill of doing so with just one hand to hold both halves of the pack!), and according to the BBC:

[He] likes to demonstrate the perfect shuffle by taking a new deck of cards and writing the word RANDOM in thick black marker on one side. As he performs his sleight of hand with the cards, the letters get mixed up, appearing now and then in ghostly form, like an imperfectly tuned image on an old TV set. Then, after he does the eighth and final shuffle, the word rematerialises on the side of the deck. The cards are in their exact original sequence, from the Ace of Spades to the Ace of Hearts.

Two types of perfection

In fact, there are two sorts of perfect shuffle, depending on which hand you start riffling from after cutting the cards into two 26-card piles.

You can interleave the cards so they end up in the sequence 1-27-2-28-3-29-…-25-51-26-52, if the first card you flip downwards comes from the hand in which you are holding the bottom half of the pack.

But if the first card you flip down is the bottom card of what was previously the top half of the pack, you end up with 27-1-28-2-29-3-…-51-25-52-26, so the card just past halfway ends up on top afterwards.

The former type is called an out-shuffle, and reorders the pack every eight times you repeat it, as you can see here (the image has 52 lines of pixels, each line corresponding to the edge of one card with the word RANDOM written on it with a marker pen):

Every 8 out-shuffles, the original order of the lines in the image repeats.

The latter type is an in-shuffle, and this, amazingly, takes 52 re-shuffles before it repeats, though you can see clearly here that the pack never really shows any true randomness, and even passes through a perfect reversal half way through:

The in-shuffle repeats in a fascinating way every 52 times.
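Both perfect shuffles are easy to verify in code. Here’s a minimal sketch that checks the cycle lengths described above: 8 repeats for the out-shuffle, 52 for the in-shuffle, with the in-shuffle passing through a perfect reversal at the halfway mark:

```python
def out_shuffle(deck):
    """Perfect out-shuffle: cut exactly in half, interleave starting with
    the top half, so the original top card stays on top (1-27-2-28-...)."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(top, bottom) for card in pair]

def in_shuffle(deck):
    """Perfect in-shuffle: interleave starting with the bottom half, so
    the old top card ends up second (27-1-28-2-...)."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(bottom, top) for card in pair]

deck = list(range(52))

# Eight out-shuffles restore the original order...
d = deck
for _ in range(8):
    d = out_shuffle(d)
assert d == deck

# ...while in-shuffles take 52 repeats, reversing the pack at number 26.
d = deck
for n in range(1, 53):
    d = in_shuffle(d)
    if n == 26:
        assert d == deck[::-1]
assert d == deck
```

Mathematically, the out-shuffle maps position i to 2i mod 51, and the in-shuffle maps position i to 2i+1 mod 53, so the cycle lengths are the multiplicative orders of 2 modulo 51 and 53 respectively, namely 8 and 52.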

What did the mathematicians say?

So, back in 2013, when Diaconis et al. studied the shelf shuffler machine at the manufacturer’s invitation, what did they find?

As the paper explains it, a shelf shuffler is an electromechanical attempt to devise an automated, randomised “multi-cut multi-riffle shuffle”, ideally so that the cards only need to be worked through once, to keep shuffling time short.

The cards in a shelf shuffler are rapidly “dealt out” pseudorandomly, one at a time, onto one of N metal shelves inside the device (whence the name), and each time a card is added to a shelf it’s either slid in at the bottom, or dropped on the top of previous cards. (We assume that trying to poke the card in between two random cards already in the stack would be both slower and prone to damage the cards.)

After all cards have been assigned to a shelf, so that each shelf has about 1/Nth of the cards on it, the cards are reassembled into a single pile in a pseudorandom order.
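The deal-to-shelves-then-reassemble process can be sketched as follows. This is a deliberately simplified model, not the real machine’s mechanism: the shelf count, the 50:50 top-or-bottom placement, and the uniformly random reassembly order are all assumptions made for illustration.

```python
import random

def shelf_shuffle(deck, shelves=10, rng=random):
    """One pass of a simplified shelf shuffler: deal each card to a
    pseudorandomly chosen shelf, placing it either on top of or underneath
    that shelf's pile, then stack the shelves in a pseudorandom order."""
    piles = [[] for _ in range(shelves)]
    for card in deck:
        pile = rng.choice(piles)
        if rng.random() < 0.5:
            pile.append(card)      # drop on top of the shelf's pile
        else:
            pile.insert(0, card)   # slide in underneath it
    rng.shuffle(piles)             # reassemble the shelves in random order
    return [card for pile in piles for card in pile]
```

Because cards dropped onto the same shelf keep (or exactly reverse) their relative order, a single pass leaves rising and falling runs from the original sequence intact, which is precisely the structure the researchers were able to exploit.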

Intuitively, given the pseudorandomness involved, you’d expect that additional re-shuffles would improve the overall randomness, up to a point…

…but in this case, where the machine had 10 shelves, the researchers were specifically asked, “Will one pass of the machine be sufficient to produce adequate randomness?”

Presumably, the company wanted to avoid running the machine through multiple cycles in order to keep the players happy and the game flowing well, and the engineers who had designed the device had not detected any obviously exploitable statistical anomalies during their own tests.

But the company wanted to make sure that the machine hadn’t passed those tests simply because the tests happened to suit it, which would give a false sense of security.

Ultimately, the researchers found not only that the randomness was rather poor, but also that they were able to quantify exactly how poor it was, and thus to devise alternative tests that convincingly revealed the lack of randomness.

In particular, they showed that just one pass of the device left sufficiently many short sequences of cards in the shuffled output that they could reliably predict between 9 and 10 cards on average when a pack of 52 shuffled cards was dealt out afterwards.

As the researchers wrote:

[U]sing our theory, we were able to show that a knowledgeable player could guess about 9-and-a-half cards correctly in a single run through a 52-card deck. For a well-shuffled deck, the optimal strategy gets about 4-and-a-half cards correct. This data did convince the company. The theory also suggested a useful remedy.

[…]

The president of the company responded, “We are not pleased with your conclusions, but we believe them and that’s what we hired you for.” We suggested a simple alternative: use the machine twice. This results in a shuffle equivalent to a 200-shelf machine. Our mathematical analysis and further tests, not reported here, show that this is adequately random.
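The “4-and-a-half cards” baseline for a well-shuffled deck is itself a neat calculation: with full feedback on each guess, when k distinct cards remain, the best you can do is name any unseen card, which is right with probability 1/k, so the expected number of correct guesses is the 52nd harmonic number:

```python
from fractions import Fraction

# Expected correct guesses against a uniformly random 52-card deck:
# sum of 1/k for k = 52 cards remaining down to 1 card remaining.
expected = sum(Fraction(1, k) for k in range(1, 53))
print(float(expected))  # about 4.54, matching the paper's "4-and-a-half"
```

Against the once-shuffled shelf machine, the researchers’ strategy roughly doubled that figure, to between 9 and 10 correct guesses, which is what made the weakness impossible to argue away.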

What to do?

This tale contains several “teachable moments”, and you’d be wise to learn from them, whether you’re a programmer or product manager wrestling specifically with randomness yourself, or a SecOps/DevOps/IT/cybersecurity professional who’s involved in cybersecurity assurance in general:

  • Passing your own tests isn’t enough. Failing your own tests is definitely bad, but it’s easy to end up with tests that you expect your algorithm, product or service to pass, especially if your corrections or “bug fixes” are measured by whether they get you through the tests. Sometimes, you need a second opinion that comes from an objective, independent source. That independent overview could come from a crack team of mathematical statisticians from California, as here; from an external “red team” of penetration testers; or from an MDR (managed detection and response) crew who bring their own eyes and ears to your cybersecurity situation.
  • Listening to bad news is important. The president of the shuffling machine company in this case answered perfectly when he admitted that he was displeased at the result, but that he had paid to uncover the truth, not simply to hear what he hoped.
  • Cryptography in particular, and cybersecurity in general, is hard. Asking for help is not an admission of failure but a recognition of what it takes to succeed.
  • Randomness is too important to be left to chance. Measuring disorder isn’t easy (read the paper to understand why), but it can and should be done.

Short of time or expertise to take care of cybersecurity threat response? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶

