Category Archives: News

Silk Road drugs market hacker pleads guilty, faces 20 years inside

Here’s an important thing to remember about jurisprudential arithmetic, where two negatives definitely don’t make a positive: stealing money from someone who originally acquired it through criminal means doesn’t “cancel out” the criminality.

You can still go to prison for a very lengthy stretch, and here’s one way.

Remember Silk Road?

Not the actual road, or more properly, the web of East-West trading routes linking China to the Middle East and Europe for many centuries until about AD 1450.

We’re talking about the metaphorical Silk Road, one of the first large-scale sell-what-you-want-and-buy-what-you-like online markets that operated from early 2011 to late 2013 on what’s now loosely known as the dark web.

Given that the Silk Road website was very widely used for selling prohibited items, mostly recreational drugs but also stolen identities and other enablers of cybercrime, the adjective dark in the phrase “dark web” came to be interpreted as dark-as-in-devilish-and-dangerous.

In fact, the word more generally reflects the fact that it is a part of the web that is effectively unilluminated, deliberately kept in the dark from the spotlight of conventional searching and geolocation techniques.

Network traffic in a dark web can’t easily be tracked forwards from visitor to server, or backwards from server to visitor, thus providing a measure of anonymity and untraceability.

This makes online clients and servers hard to identify, and their actual computers hard to locate, thus making both the users and the infrastructure hard to take down:


The Onion Router

The most popular dark web implementation is the pseudoanonymous network known loosely as Tor, short for The Onion Router, in which traffic between two points in the network is shuffled through multiple computers chosen in advance from a global collection of about 6000 “onion routers” provided by volunteers.

To make tracking and tracing traffic difficult, users who are connecting via Tor choose their own random sequence of so-called relays.

They then encrypt their desired destination address with the last relay’s public encryption key, encrypt that result with the previous relay’s key, and so on, thus wrapping the communication in a series of protected routing layers, like an onion.

The first relay knows who started the connection, so it can, in theory, identify you, but it has no idea what’s in your message, or where it’s going.

The final relay knows who you’re talking to, and perhaps even what you are saying if the innermost message is itself unencrypted, but has no idea where the message came from, so it doesn’t know who you are.

Any relays in between serve to keep the first and last relays apart, so they can’t identify each other and collude to expose you.

Each relay can only strip off the next layer of encryption, so all it knows is where to forward what’s left of the onion in order to get the data to the next hop in the chain, which was chosen up front by the sender.
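If seeing that in code helps, here’s a tiny C sketch of the wrap-then-peel process. To keep it self-contained, each relay’s “encryption” is just a one-byte XOR, which is emphatically not Tor’s real cryptography (Tor negotiates a proper symmetric key with each relay); the point is merely to show one layer going on per hop at the sender, and one layer coming off per hop along the route:

#include <stdio.h>
#include <string.h>

#define HOPS 3

/* Toy "cipher": one-byte XOR, for illustration only. Real onion     */
/* routing uses per-hop symmetric keys set up via public-key crypto. */
static void xorbuf(unsigned char* buf, size_t len, unsigned char key) {
   for (size_t i = 0; i < len; i++) { buf[i] ^= key; }
}

int main(void) {
   unsigned char msg[] = "GET http://example.onion/";  /* innermost payload */
   unsigned char keys[HOPS] = { 0x5A, 0xC3, 0x7E };    /* one key per relay */
   size_t len = strlen((char*)msg);

   /* The sender wraps in reverse order: the exit relay's layer */
   /* goes on first, and the entry relay's layer goes on last.  */
   for (int hop = HOPS-1; hop >= 0; hop--) { xorbuf(msg,len,keys[hop]); }

   /* Each relay strips exactly one layer and forwards the rest. */
   for (int hop = 0; hop < HOPS; hop++) {
      xorbuf(msg,len,keys[hop]);
      printf("after relay %d: %s\n",
             hop+1, hop == HOPS-1 ? (char*)msg : "(still-wrapped gibberish)");
   }
   return 0;
}

Run it and you’ll see the payload only becomes readable at the final hop, which is exactly why the entry relay learns who you are but not what you said, while the exit relay learns what you said but not who you are.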

As you can imagine, this technology, plus the arrival of online sites where non-technical computer users could buy cryptocurrencies such as Bitcoin, rather than needing to “mine” them for themselves, quickly led to online marketplaces that could circumvent the regulations that applied to regular online retail sites.

Buyers didn’t need credit cards; sellers could sell products that would be banned in regular stores; and the authorities couldn’t easily control the process, or even identify the buyers and sellers involved.

Many a slip ’twixt the cup and the lip

Of course, as the current Web 3.0 and DeFi (decentralised finance) era has reminded us over and over (indeed, very sadly, over and over and over) again, the fact that technology exists to make online trading fast, anonymous, unblockable and libertarian, unbeholden to any national or supranational regulators…

…doesn’t mean that the programmers who implement that technology into new products and services, or who rely on it for their own cybersecurity, will get it right.

The founder and primary operator of Silk Road, for example, was for about two years known only by his online handle Dread Pirate Roberts, and apparently boasted in a tweet in June 2013: “Illegal drugs, home delivered, and our cops are clueless.”

By October 2013, however, his site was shuttered and he was in custody, having been unable to keep himself anonymous for long.

Under his real-life name of Ross Ulbricht, he was found guilty of several serious criminal offences in 2015, and ultimately sent to prison for life (twice over, in fact, as strange as that concept sounds) without parole.

And cybersecurity problems at Silk Road weren’t limited just to Ulbricht’s poor operational security.

The site also suffered a cryptographic crisis in September 2012, when a then-unknown hacker figured out a way to game Silk Road’s accounting system by making a rapid sequence of automated transactions in which multiple outbound payments could be completed immediately after making a single inbound payment.

(We’re assuming that the system failed to wait for the user’s remaining balance to be properly debited between each outgoing transaction, thus inadvertently allowing the same bitcoin deposit to be “spent” repeatedly, only noticing the overspend after it was too late.)

According to the US Department of Justice (and the involvement of the DOJ gives you a hint where this story is going, if you didn’t figure it out already from the headline), the perpetrator:

creat[ed] a string of approximately nine Silk Road accounts […] in a manner designed to conceal his identity; trigger[ed more than] 140 transactions in rapid succession in order to trick Silk Road’s withdrawal-processing system into releasing approximately 50,000 Bitcoin from its Bitcoin-based payment system into [his] accounts; and transferr[ed] this Bitcoin into a variety of separate addresses […], all in a manner designed to prevent detection, conceal his identity and ownership, and obfuscate the Bitcoin’s source.

Simply put, the perpetrator, James Zhong, who was just 22 years old at the time, started with between 200 and 2000 Bitcoins, and quickly ended up with more than BTC 50,000.

He figured out how to “withdraw” each new “deposit” he made five or more times, allowing him to ramp up his stash in a series of rogue trading loops, before exiting in a hurry with everything.
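We don’t know what Silk Road’s withdrawal code actually looked like (it was never published), but here’s a minimal C simulation of the sort of check-then-settle race we’re hypothesising above, where a burst of withdrawal requests all pass the balance check because the debit only happens afterwards:

#include <stdio.h>

int main(void) {
   long balance = 500;   /* a single hypothetical deposit of BTC 500 */
   long paidout = 0;

   /* Five rapid-fire withdrawal requests arrive before settlement. */
   for (int i = 0; i < 5; i++) {
      if (balance >= 500) {   /* the check passes every time...      */
         paidout += 500;      /* ...because the debit comes later on */
      }
   }

   balance -= paidout;   /* settlement only runs after the burst */

   printf("paid out BTC %ld against a deposit of BTC 500\n",paidout);
   printf("final balance: BTC %ld\n",balance);
   return 0;
}

The fix, of course, is to make the check and the debit one atomic step, so that the second request in the burst sees a balance of zero and gets refused.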

At the time, his stolen stash of at least BTC 50,000 was worth about $600,000 (BTC1 = USD12).

Caught red-handed

Intriguingly, it seems that Zhong didn’t so much hold onto most of his ill-gotten gains for about nine years, as find himself unable to do anything with his cold wallet of rogue cryptocoins…

…even (or perhaps especially) at the dizzy heights of Bitcoin’s surge to $20k in late 2017, to over $60k in April 2021, and then to $68k in November 2021.

Ironically, if that is the right word, Zhong was busted right at that more-than-$65,535 Bitcoin peak: “On November 9, 2021, pursuant to a judicially authorized premises search warrant of ZHONG’s Gainesville, Georgia, house, law enforcement seized approximately 50,676.17851897 Bitcoin”, then valued at over $3.36 billion.

Fascinatingly, the bulk of the stolen cryptocurrency was hidden, says the DOJ, “in an underground floor safe, and […] on a single-board computer that was submerged under blankets in a popcorn tin stored in a bathroom closet.”

Technically, that figure of BTC 50,676.17851897 seized doesn’t just sound absurdly precise for an “approximate” amount, it is as precise as you can be in the Bitcoin ecosystem, given that the smallest transactable unit on the Bitcoin blockchain is 1 Satoshi.

A Satoshi is a one-hundred-millionth part of a Bitcoin, or BTC0.00000001, where that 1-digit is in the eighth decimal place.

(At the time of the crime, 8 Satoshis were worth only about one ten-thousandth of a US cent; at the time of the bust, however, 16 Satoshis were worth just over a cent.)
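You can verify that arithmetic for yourself in a few lines of C (the BTC prices below are the approximate figures used in this article, not official exchange rates):

#include <stdio.h>

int main(void) {
   double satoshi = 1e-8;      /* BTC 0.00000001                          */
   double usd2012 = 12.0;      /* approx USD/BTC at the time of the crime */
   double usd2021 = 66000.0;   /* approx USD/BTC at the time of the bust  */

   printf(" 8 Satoshis in 2012: %.6f US cents\n", 8*satoshi*usd2012*100);
   printf("16 Satoshis in 2021: %.4f US cents\n", 16*satoshi*usd2021*100);
   printf("BTC 50,000 in 2012: $%.0f\n", 50000*usd2012);
   printf("BTC 50,676.17851897 in 2021: $%.0f\n", 50676.17851897*usd2021);
   return 0;
}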

Apparently, over the past year, Zhong must have decided to play ball with the investigators: “Beginning in or around March 2022, [he] began voluntarily surrendering to the Government additional Bitcoin that [he] had access to and had not dissipated. In total, [he] voluntarily surrendered 1,004.14621836 additional Bitcoin.”

He has now pleaded guilty to the original crime, and agreed to forfeit $600,000 in cash that was found at his house during his arrest in 2021 (coincidentally, the same amount that his BTC heist had been worth at the time of the crime nine years earlier), plus what the DOJ describes as an “80% interest in RE&D Investments LLC, a Memphis-based company with substantial real estate holdings”.

A weird sort of second-best

As the DOJ wryly notes, Zhong’s BTC stash was the biggest cryptocurrency amount ever recovered in a law enforcement operation, based on rates at the time of the bust, though now it’s considered only second-best.

Apparently, the new record was set just three months later, when the self-proclaimed Crocodile of Wall Street (and wannabe rapper) Heather Morgan and her husband Ilya Lichtenstein were busted after investigators cracked the password on a cold wallet of Lichtenstein’s containing a whopping BTC94,636.

Those funds are alleged to be the after-effects of a 2016 cyberheist against cryptocoin exchange Bitfinex, in which BTC119,756 was stolen, worth about $72m at the time. (The abovementioned suspects weren’t charged with actually pulling off the heist itself, just with ending up with the stolen funds afterwards.)

Even though the cops only recovered 80% of the stolen Bitfinex hoard, and even though BTC values had gone down sharply in the short time since Zhong’s peak-of-the-market bust, the stash recouped from Lichtenstein’s cold wallet nevertheless trumped the Zhong seizure, with a dramatic theoretical value of more than $4 billion.

A final note

Zhong’s confiscated stockpile is down to just under a billion dollars, while the Crocodile Coin Collection is “only” about $1.8 billion now.

In a curious way, it’s just as well that all this is true, because you simply couldn’t make it up…


Public URL scanning tools – when security leads to insecurity

Well-known cybersecurity researcher Fabian Bräunlein has featured not once but twice before on Naked Security for his work in researching the pros and cons of Apple’s AirTag products.

In 2021, he dug into the protocol devised by Apple for keeping tabs on tags and found that the cryptography was good, making it hard for anyone to keep tabs on you via an AirTag that you owned.

Even though the system relies on other people calling home with the current location of AirTags in their vicinity, neither they nor Apple can tell whose AirTag they’ve reported on.

But Bräunlein figured out a way that you could, in theory at least, use this anonymous calling home feature as a sort-of free, very low-bandwidth, community-assisted data reporting service, using public keys for data signalling:

He also looked at AirTags from the opposite direction, namely how likely it is that you’d spot an AirTag that someone had deliberately hidden in your belongings, say in your rucksack, so that they could track you under cover of tracking themselves:

Indeed, the issue of “AirTag stalking” hit the news in June 2022, when an Indiana woman was arrested for running over and killing a man in whose car, she later admitted, she had planted an AirTag in order to keep track of his comings and goings.

In that tragic case, which took place outside a bar, she could probably have guessed where he was anyway, but law enforcement staff were nevertheless obliged to bring the AirTag into their investigations.

When security scans reveal more than they should

Now, Bräunlein is back with another worthwhile warning, this time about the danger of cloud-based security lookup services that give you a free (or paid) opinion about cybersecurity data you may have collected.

Many Naked Security readers will be familiar with services such as Google’s Virus Total, where you can upload suspicious files to see what static virus scanning tools (including Sophos, as it happens) make of them.

Sadly, lots of people use Virus Total to gauge how good a security product might be at blocking a threat in real life, when its primary purpose is to disambiguate threat naming, to provide a simple and reliable way for people to share suspicious files, and to assist with prompt and secure sample sharing across the industry. (You only have to upload the file once.)

This new report by Bräunlein looks at a similar sort of public service, this time urlscan.io, which aims to provide a public query-and-reporting tool for suspicious URLs.

The idea is simple… anyone who’s worried about a URL they just received, for example in what they think is a phishing email, can submit the domain name or URL, either manually via the website, or automatically via a web-based interface, and get back a bunch of data about it.
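On the automated side, a submission is little more than an authenticated HTTP POST. Here’s a bare-bones libcurl sketch in C; the endpoint, the API-Key header and the JSON visibility field are as we understand urlscan.io’s public API documentation, and the key itself is a placeholder, so treat this as illustrative rather than authoritative:

#include <stdio.h>
#include <curl/curl.h>

int main(void) {
   curl_global_init(CURL_GLOBAL_DEFAULT);
   CURL* curl = curl_easy_init();
   if (curl == NULL) { fprintf(stderr,"curl init failed\n"); return 1; }

   struct curl_slist* hdrs = NULL;
   hdrs = curl_slist_append(hdrs,"Content-Type: application/json");
   hdrs = curl_slist_append(hdrs,"API-Key: PUT-YOUR-KEY-HERE");  /* placeholder */

   /* Note the visibility field: "unlisted" or "private", not "public", */
   /* if there is any chance the URL contains data you shouldn't share. */
   const char* body = "{\"url\": \"http://example.com/whatalotoftextthisis\","
                      " \"visibility\": \"unlisted\"}";

   curl_easy_setopt(curl,CURLOPT_URL,"https://urlscan.io/api/v1/scan/");
   curl_easy_setopt(curl,CURLOPT_HTTPHEADER,hdrs);
   curl_easy_setopt(curl,CURLOPT_POSTFIELDS,body);

   CURLcode res = curl_easy_perform(curl);  /* response JSON goes to stdout */
   if (res != CURLE_OK) { fprintf(stderr,"error: %s\n",curl_easy_strerror(res)); }

   curl_slist_free_all(hdrs);
   curl_easy_cleanup(curl);
   curl_global_cleanup();
   return 0;
}

(Build with cc scan.c -lcurl. As you’ll see below, the visibility setting matters much more than you might at first think.)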

Like this, checking to see what the site (and the community at large) think of the URL http://example.com/whatalotoftextthisis:

You can probably see where Fabian Bräunlein went with this if you realise that you, or indeed anyone else with the time to keep an eye on things, may be able to retrieve the URL you just looked up.

Here, I went back in with a different browser via a different IP address, and was able to retrieve the recent searches against example.com, including the one with the full URL I submitted above:

From there, I can drill down into the page content and even access the request headers at the time of the original search:

And no matter how hard urlscan.io tries to detect and avoid saving and retrieving private data that happens to be given away in the original search…

…there’s no way that the site can reliably protect you from “searching” for data that you shouldn’t have revealed to a third-party site.

This shouldn’t-really-have-been-revealed data may leak out as text strings in URLs, perhaps encoded to make them less obvious to casual observers, denoting information such as tracking codes, usernames, “magic codes” for password resets, order numbers, and so on.

Worse still, Bräunlein realised that many third-party security tools, both commercial and open source, perform automated URL lookups via urlscan.io if so configured.

In other words, you might be making your security situation worse while trying to make it better, by inadvertently authorising your security software to give away personally identifiable information in its online security lookups.

Indeed, Bräunlein documented numerous “sneaky searches” that attackers could potentially use to home in on personal information that could be leeched from the system, including but not limited to (in alphabetical order) data that really ought to be kept secret:

  • Account creation links
  • Amazon gift delivery links
  • API keys
  • DocuSign signing requests
  • Dropbox file transfers
  • Package tracking links
  • Password reset links
  • PayPal invoices
  • Shared Google Drive documents
  • Sharepoint invites
  • Unsubscribe links

What to do?

  • Read Bräunlein’s report. It’s detailed, but explains not only what you can do to reduce the risk of leaking data this way by mistake, but also what urlscan.io has done to make it easier to do searches privately, and to get rogue data expired quickly.
  • Read urlscan.io’s own blog post based on lessons learned from the report. The article is entitled Scan Visibility Best Practices and contains plenty of useful advice summarised as how to: “understand the different scan visibilities, review your own scans for non-public information, review your automated submission workflows, enforce a maximum scan visibility for your account and work with us to clean non-public data from urlscan.io”.
  • Review any code of your own that does online security lookups. Be as proactive and as conservative as you can in what you remove or redact from data before you submit it to other people or services for analysis. (There’s a simple example of URL redaction after this list.)
  • Learn what privacy features exist for online submissions. If there’s a way to mark your submissions as “do not share”, use it, unless you’re happy for your submission to be used by the community at large to improve security in general. Use these privacy features as well as, not instead of, redacting the input you submit in the first place.
  • Learn how to report rogue data to online services of this sort if you see it. And if you run a service of this sort that publishes data that you later find out (through no fault of your own) wasn’t supposed to be public, make sure you have a robust and quick way to remove it to reduce potential future harm.
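As promised above, here’s a minimal C sketch of the redaction idea — our illustration, not code from Bräunlein’s report. It keeps the scheme, host and path of a URL but drops the query string and fragment, which is where tracking codes and magic tokens most often lurk. (Secrets can hide in path components too, so treat this as a starting point, not a complete solution.)

#include <stdio.h>
#include <string.h>

/* Truncate a URL at the first '?' or '#', if either is present.   */
/* strcspn() returns the length of the prefix containing neither   */
/* character, which is the whole string if both are absent.        */
static void redact_url(char* url) {
   url[strcspn(url,"?#")] = '\0';
}

int main(void) {
   char url[] = "https://example.com/reset?user=duck&token=SECRET123";
   redact_url(url);
   printf("submitting: %s\n",url);   /* prints https://example.com/reset */
   return 0;
}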

Simply put…

To users of online security scanning services: If in doubt/Don’t give it out.

To the operators of those services: If it shouldn’t be in/Stick it straight in the bin.

And to cybersecurity coders everywhere: Never make your users cry/By how you use an API.

A bin, if you aren’t familiar with that pungently useful word, or rubbish bin in full, is what English-speaking people outside North America call a garbage can.


Twitter Blue Badge email scams – Don’t fall for them!

It’s only a week since Elon Musk’s take-private of Twitter on 28 October 2022…

…but if you take into account the number of news stories about it (and, perhaps ironically under the circumstances, the volume of Twitter threadspace devoted to it), it probably feels a lot longer.

There’s been plenty to set the fur flying, starting with Musk’s curious choice of metaphor in arriving at Twitter HQ on takeover day with a kitchen sink, as though the company’s products and services were already so close to complete that they needed nothing more than the aforementioned dishwashing receptacle to finish things off.

Then there was the peremptory, if not-at-all unexpected, dismissal of the top tier of management; a pair of pranksters carrying cardboard boxes who tricked journalists into reporting they’d just been sacked and escorted offsite; staff who had been sacked apparently finding out when their access codes abruptly stopped working; and Twitter’s apparent rush to switch its well-known Blue Badge into a subscription service, not simply a verification system.

At the time of writing [2022-11-04T17:00Z], however, Twitter’s own documentation still stressed that so-called Verified Accounts are so labelled in order to denote that “an account of public interest is authentic, […] notable, and active.”

In fact, once you’re Verified, at least under today’s rules, you can’t voluntarily cast off your blue badge yourself, though you can have it pulled by Twitter “at any time without notice.”

Where FUD goes…

As you can therefore imagine, or as you’ve probably seen for yourself, Twitter’s current intention to make the blue badge into a pay-to-play service has stirred up plenty of fear, uncertainty and doubt, and where FUD goes…

…cybercriminals love to follow, whether it’s calling you up out of the blue (no pun intended) and telling you “Microsoft” has detected “dangerous viruses” on your computer, or texting you to ask you to reschedule your latest home “delivery”, or emailing you to warn you about an Instagram copyright “infringement” on your account.

Indeed, the Twitter Verified scamming started quickly, with Zack Whittaker at TechCrunch publishing screenshots of blue-badge-themed phishing attacks last weekend:

The emails reported to Whittaker had been sent to journalists, and guessed that Twitter would be charging $20 a month for a blue-badge privilege. (The crooks actually went for $19.99, presumably because round numbers are surprisingly uncommon as prices in the English-speaking world, with that one-cent reduction apparently making a $1000 ripoff look like a bargain when it turns up for just $999.99.)

The crooks in this scam suggested that you could simply “reverify” in order to retain your existing blue badge and thus avoid future charges, and helpfully provided a login button so you could do just that.

Of course, clicking through took you to a fake site that tried to harvest your phone number and Twitter login details, but you can imagine many other approaches that scammers could take, including:

  • Inviting you to “sign up early” to avoid disappointment, and then phishing for your payment card details.
  • Offering to help you stake a claim on an existing account name, and then phishing for significant personal information.
  • Urging you to “pre-apply” to save time later, then requesting similar information.

Elon Musk himself, apparently, has subsequently said, “Power to the people! Blue for $8/month,” which certainly invalidates the first round of scam emails that insisted the price was going to be $19.99…

…but does nothing to prevent the next round of scammers from simply coming up with new verbiage that’s updated for the new terms and conditions.

What to do?

Our usual cybersecurity advice applies, and it will help you avoid phishing scams whether their hook is the Twitter takeover, Black Friday “superdeals”, home delivery “failures”, bank account “problems”, or any other sort of message that tries to lure you in with fear (including fear of missing out), uncertainty and doubt:

  • Use a password manager. This helps stop you putting a real password into a fake site, because your password manager won’t recognise the imposter web pages.
  • Turn on 2FA if you can. Two-factor authentication means you need a one-time code as well as your password, making stolen passwords alone less useful to the crooks.
  • Avoid login links and action buttons in emails. If there’s action you need to take on the website of a service you genuinely use, find your own way to the real site using a URL you already know or can look up securely.
  • Never ask the sender of an uncertain message if they’re legitimate. If they’re genuine, they’ll say so, but if they’re scammers, they’ll say exactly the same thing, so you’ve learned nothing!

Remember: If in doubt, don’t give it out.

If it sounds like a scam, simply assume that it is, and bail out up front.


S3 Ep107: Eight months to kick out the crooks and you think that’s GOOD? [Audio + Text]

WE DON’T KNOW HOW BAD WE WERE, BUT PERHAPS THE CROOKS WEREN’T ANY GOOD?

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Patches galore, horrifying therapy sessions, and case studies in bad cybersecurity.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do?

We’ve got a big show today.


DUCK.  Yes, let’s hope we get through them all, Doug!


DOUG.  Let us do our best!

We will start, of course, with our Tech History segment…

…this week, on 02 November 1815, George Boole was born in Lincolnshire, England.

Paul, TRUE or FALSE: Boole made several great contributions to mathematics, the information age, and beyond?

IF you have some context THEN I will gladly listen to it ELSE we can move on.


DUCK.  Well, Doug, let me just say then, because I prepared something I could read out…

…he wrote a very famous scientific work entitled, and you’ll see why I wrote it down [LAUGHS]:

An Investigation of the Laws of Thought on which are Founded the Mathematical Theories of Logic and Probability


DOUG.  Rolls right off the tongue!


DUCK.  He was right behind symbolic logic, and he influenced Augustus De Morgan. (People may know De Morgan’s laws.)

And De Morgan was Ada Lovelace’s mathematics tutor.

She took these grand ideas of symbolic logic and figured, “Hey, when we get programmable computers, this is going to change the world!”

And she was right! [LAUGHS]


DOUG.  Excellent.

Thank you very much, George Boole, may you rest in peace.

Paul, we have a ton of updates to talk about this week, so if you could update us on all these updates…

Let’s start with OpenSSL:

The OpenSSL security update story – how can you tell what needs fixing?


DUCK.  Yes, it’s the one everyone’s been waiting for.

OpenSSL do the exact opposite of Apple, who say absolutely nothing until the updates just arrive. [LAUGHTER]

OpenSSL say, “Hey, we’re going to be releasing updates on XYZ date, so you might want to get ready. And the worst update in this batch will have the level…”

And this time they wrote CRITICAL in capital letters.

That doesn’t happen often with OpenSSL, and, being a cryptographic library, whenever they say, “Oh, golly, there’s a CRITICAL-level hole”, everyone thinks back to… what was it, 2014?

“Oh, no, it’s going to be as bad as Heartbleed all over again,” because it could be, for all you know:

Anatomy of a data leakage bug – the OpenSSL “Heartbleed” buffer overflow

So we had a week of waiting, and worrying, and “What are we going to do?”

And on 01 November 2022, the updates actually dropped.

Let’s start with the numbers: OpenSSL 1.1.1 goes to version S-for-Sierra, because that uses letters to denote the individual updates.

And OpenSSL 3.0 goes to 3.0.7:

OpenSSL patches are out – CRITICAL bug downgraded to HIGH, but patch anyway!

Now, the critical update… actually, it turned out that while investigating the first update, they found a second related update, so there are actually two of them… those only apply to OpenSSL 3.0, not to 1.1.1.

So I’m not saying, “Don’t patch if you’ve got 1.1.1”, but it’s less urgent, you could say.

And the silver lining is that the CRITICAL level, all in capital letters, was downgraded to HIGH severity, because it’s felt that the bugs, which relate to TLS certificate validation, can almost certainly be used for denial-of-service, but are probably going to be very hard to turn into remote code execution exploits.

There are buffer overflows, but they’re kind of limited.

There are two bugs… let me just give the numbers so you can refer to them.

There’s CVE-2022-3602, where you can overwrite four bytes of the stack: just four bytes, half a 64-bit address.

Although you can write anything you want, the amount of damage you can do is probably, but not necessarily, limited to denial-of-service.

And the other bug is called CVE-2022-3786, and in that one you can do as big a stack overflow as you like, apparently [LAUGHS]… this is quite amusing.

But you can only write dots, hexadecimal 0x2E in ASCII.

So although you can completely corrupt the stack, there’s a limit to how creative you can be in any remote code execution exploit you try and dream up.

The other silver lining is that, generally speaking… not in all cases, but in most cases, particularly for things like web servers, where people might be using OpenSSL and they’re panicking: “What if people can steal secrets from our web server like they could in the Heartbleed days?”

Most web servers don’t ask clients who are connecting, visitors, to provide a certificate to validate themselves.

They don’t care; anyone is welcome to visit.

But the server sends the client a certificate so that the client, if it wishes, can determine, “Hey, I really am visiting Sophos”, or Microsoft, or whatever site I think it is.

So it looks as though the most likely way this will be exploited would be for rogue servers to crash clients, rather than the other way around.

And I think you will agree that servers crashing clients is bad, and you could do bad things with it: for example, you could block somebody from getting updates, because it keeps failing over and over and over and over.

But it doesn’t look as likely that this bug could be exploited for any random person on the Internet just to start scanning all your web servers and crashing them at will.

I don’t think that’s likely.


DOUG.  We do have a reader comment here: “I have no idea what I’m supposed to update. Chrome firefox windows. Help?”

You never know… there are all these different flavours of SSL.


DUCK.  The good news here is that, although some Microsoft products do use and include their own copy of OpenSSL, it’s my understanding that neither Chrome nor Firefox nor Edge use it.

So I think the answer to the question is that although you never know, from a pure Windows, Chrome, Firefox, Edge perspective, I don’t think you need to worry about this one.

It’s if you’re running servers, particularly Linux servers, where your Linux distro comes with either or both versions of OpenSSL, or if you have specific Windows products you’ve installed that happen to come along with OpenSSL… and the product will normally tell you if it does.

Or you can go looking for libcrypto*.dll or libssl*.dll.

And a great example of that, Doug, is Nmap, the very famous and very useful network scanning tool that lots of Red Teams use.

That program comes not only with OpenSSL 1.1.1, packaged along with itself, but also with OpenSSL 3.0, as far as I can see.

And both of them currently, at least when I looked last night, are out of date.

I shouldn’t say this, but…


DOUG.  [INTERRUPTS, LAUGHING] If I’m a Blue Team member…


DUCK.  Exactly! EXACTLY! [LAUGHING]

If you’re a Blue Teamer trying to protect your network and you think, “Oh, the Red Team are going to be scanning like crazy, and they love their Nmap”, you have a fighting chance to counterhack!

[LOUD LAUGHTER]


DOUG.  OK, we’ve got some other updates to talk about: Chrome, Apple and SHA-3 updates.

Let’s start with Chrome, which had an urgent zero-day fix, and they patched it pretty quickly…

…but they weren’t super clear on what was going on:

Chrome issues urgent zero-day fix – update now!


DUCK.  I don’t know whether three lawyers wrote these words, each adding an extra level of indirection, but you know that Google have this weird way of talking about zero-days, just like Apple, where they tell the *literal* truth:

Google is aware of reports that an exploit for this vulnerability, CVE-2022-3723, exists in the wild.

Which is sort of two levels of indirection away from saying, “It’s an 0-day, folks!”

Instead, it’s, “Someone wrote a report that says it exists, and then they told us about the report.”

I think we can all agree it needs patching, and Google must agree, because…

…to be fair to them, they fixed it almost immediately.

Ironically, they did a big security fix on the very day that this bug was reported, which I think was 25 October 2022, and Google had fixed it within what, three days?

Two days, actually.

And Microsoft have themselves followed up with a very clear report in their Edge release notes: on 31 October 2022, they released an update that explicitly said it fixes the bug reported by Google and the Chromium team.


DOUG.  OK, very good.

I am reticent to bring this up, but are we safe to talk about Apple now?

Do we have any more clarity on this Apple zero-day?

Updates to Apple’s zero-day update story – iPhone and iPad users read this!


DUCK.  Well, the critical deal here is when we wrote about the update that included iOS 16.1 and iPadOS 16, which actually turned out to be iPadOS 16.1 after all…

…people are asking us, understandably, “What about iOS 15.7? Do I have to go to iOS 16 if I can? Or is there going to be a 15.7.1? Or have they dropped support for iOS 15 altogether, game over?”

And, lo and behold, as good fortune would have it (I think it was the day after we recorded last week’s podcast [LAUGHS]), they suddenly sent out a notification saying, “Hey, iOS 15.7.1 is out, and it fixes exactly the same holes that iOS 16.1 and iPadOS 16/16.1 did.”

So now we know that if you’re on iOS or iPadOS, you *can* stick with version 15 if you want, and there’s a 15.7.1 that you need to get.

But if you have an older phone that doesn’t support iOS 16, then you definitely need to get 15.7.1 because that’s your only way to fix the zero-day.

And we also seem to have satisfied ourselves that iOS and iPadOS now both have the same code, with the same fixes, and they’re both on 16.1, whatever the security bulletins may have implied.


DOUG.  Alright, great job, everybody, we did it.

Great work… took a few days, but alright!

And last, but certainly not least in our update stories…

…it feels like we keep talking about this, and keep trying to do the right thing with cryptography, but our efforts aren’t always rewarded.

So, case in point, this new SHA-3 bug?

SHA-3 code execution bug patched in PHP – check your version!


DUCK.  Yes, this is a little different from the OpenSSL bugs we just talked about, because, in this case, the problem is actually in the SHA-3 cryptographic algorithm itself… in an implementation known as XKCP, that’s X-ray, Kilo, Charlie, Papa.

And that is, if you like, the reference implementation by the very team that invented SHA-3, which was originally called Keccak [pronounced ‘ketchak’, like ‘ketchup’].

It was approved about ten years ago, and they decided, “Well, we’ll write a collection of standardised algorithms for all the cryptographic stuff that we do, including SHA-3, that people can use if they want.”

Unfortunately, it looks as though their programming wasn’t quite as careful and as robust as their original cryptographic design, because they made the same sort of bug that Chester and I spoke about a few months ago in a product called NetUSB:

Home routers with NetUSB support could have critical kernel hole

So, in the code, they were trying to check: “Are you asking us to hash too much data?”

And the theoretical limit was 4GB minus one byte, except that they forgot that there are supposed to be 200 spare bytes at the end.

So they were supposed to check whether you were trying to hash more than 4GB minus one bytes *minus 200 bytes*.

But they didn’t, and that caused an integer overflow, which could cause a buffer overflow, which could cause either a denial-of-service.

Or, in the worst case, a potential remote code execution.

Or just hash values computed incorrectly, which is always going to end in tears because you can imagine that either a good file might end up being condemned as bad, or a bad file might be misrecognised as good.
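(If you’re wondering how forgetting those 200 bytes of headroom turns into an integer overflow, here’s a toy C version of that sort of mistake — our illustration, not the actual XKCP code:)

#include <stdio.h>
#include <stdint.h>

#define LIMIT    0xFFFFFFFFu   /* 4GB minus one byte     */
#define RESERVED 200u          /* spare bytes at the end */

int ok_broken(uint32_t len) { return len <= LIMIT; }             /* BUG */
int ok_fixed (uint32_t len) { return len <= LIMIT - RESERVED; }  /* OK  */

int main(void) {
   uint32_t len = LIMIT - 100;  /* within 4GB-1, but eats into the reserve */
   printf("broken check says: %s\n", ok_broken(len) ? "allow" : "block");
   printf("fixed  check says: %s\n", ok_fixed(len)  ? "allow" : "block");

   /* The broken check lets len through; later on, len + RESERVED   */
   /* wraps around 32 bits, and offsets computed from it go wrong.  */
   printf("len + RESERVED wraps to: %u\n", len + RESERVED);
   return 0;
}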


DOUG.  So if this is a reference implementation, is this something to panic about on a widespread basis, or is it more contained?


DUCK.  I think it is more contained, because most products, notably including OpenSSL, fortunately, don’t use the XKCP implementation.

But PHP *does* use the XKCP code, so you either want to make sure you have PHP 8.0.25 or later, or PHP 8.1.12 or later.

And the other confusing one is Python.

Now, Python 3.11, which is the latest, shifted to a brand new implementation of SHA-3, which is not this one, so that’s not vulnerable.

Python 3.9 and 3.10… some builds use OpenSSL, and some use the XKCP implementation.

And we’ve got some code in our article, some Python code, that you can use to determine which version your Python implementation is using.

It does make a difference: one can be reliably made to crash; the other can’t.

And Python 3.8 and earlier apparently does have this XKCP code in it.

So you’re going to either want to put mitigations in your own code to do the buffer length check correctly yourself, or to apply any needed updates when they come out.


DOUG.  OK, very good, we’ll keep an eye on that.

And now we’re going to round out the show with two really uplifting stories, starting with what happens when the very private and very personal contents of thousands of psychotherapy sessions get leaked online:

Psychotherapy extortion suspect: arrest warrant issued


DUCK.  The backstory is what is now an infamous, and in fact bankrupt, psychotherapy clinic.

They had a data breach, I believe, in 2018, and another one in 2019.

And it turned out that these intimate sessions that people had had with their psychotherapists, where they revealed their deepest and presumably sometimes darkest secrets, and what they thought about their friends and their family…

…all this stuff that is so personal that you kind of hope it wouldn’t be recorded at all, but would just be listened to and the basics distilled.

But apparently the therapists would type up detailed notes, and then store them for later.

Well, maybe that’s OK if they’re going to store them properly.

But at some point, I guess, they had the “rush to the cloud”.

These things became available on the Internet, and allegedly there was a kind of ueberaccount whereby anybody could access everything if they knew the password.

And, apparently, it was a default.

Oh, dear, how can people still do this?


DOUG.  Oof!


DUCK.  So anybody could get in, and somebody did.

And the company didn’t really seem to do much about it, as far as I can tell, and it wasn’t disclosed or reported…

…because if they’d acted quickly, maybe law enforcement could have got involved early and closed this whole thing down in time.

But it only came out in the wash in October 2020, apparently, when the issue of the breach could be denied no longer.

Because somebody who had acquired the data, either the original intruder or someone who had bought it online, you imagine, started trying to do blackmail with it.

And apparently they first tried to blackmail the company, saying, “Pay us”… I think the amount was somewhere around half-a-million Euros.

“Pay us this lump sum in bitcoins and we’ll make the data go away.”

But, thwarted by the company, the person with the data then decided, “I know what, I’m going to blackmail each person of the tens of thousands in the database individually.”


DOUG.  Oh, boy…


DUCK.  So they started sending emails saying, “Hey, pay me €200 yourself, and I’ll make sure your data doesn’t get exposed.”

Anyway, it seems that the data wasn’t released… and trying to find the silver lining in this, Doug: [A] the Finnish authorities have now issued an arrest warrant, and [B] they are going to go after the CEO of the former company (as I said, it’s now bankrupt), saying that although the company was a victim of crime, the company itself was so far below par in how it dealt with the breach that it needs to face some kind of penalty.

They didn’t report the breach when it might have made a big difference, and they just simply, given the nature of the data that they know they’re holding… they just did everything too shabbily.

And this is not just, “Oh, you could get a regulatory fine.”

Apparently he could face up to twelve months in prison.


DOUG.  OK, well that’s something!

But not to be outdone, we’ve got a case study in cybersecurity ineptitude and a really, really poor post-breach response with this “See Tickets” thing:

Online ticketing company “See” pwned for 2.5 years by attackers


DUCK.  Yes, this is a very big ticketing company… That’s “See”, S-E-E, not “C” as in the programming language.

[GROANING] This also seems like such a comedy of errors, Doug…


DOUG.  It’s really breathtaking.

25 June 2019… by this date, we believe that cybercriminals had implanted data-stealing malware on the checkout pages run by the company.

So this isn’t that people are being phished or tricked, because when you went to check out, your data could have been siphoned.


DUCK.  So this is “malware on the website”?


DOUG.  Yes.


DUCK.  That is pretty intimately connected with your transaction, in real time!


DOUG.  The usual suspects, like name, address, zip code, but then your credit card number…

…so you say, “OK, you got my number, but did they also…?”

And, yes, they have your expiration date, and they have your CVV number, the little three-digit number that you type in to make sure that you’re legit with your credit card.


DUCK.  Yes, because you’re not supposed to store that after you’ve completed the transaction…


DOUG.  No, Sir!


DUCK.  …but you have it in memory *while you’re doing the transaction*, out of necessity.


DOUG.  And then almost two years later, in April of 2021 (two years later!), See Tickets was alerted to activity indicating potential unauthorised access, [IRONIC] and they sprung into action.


DUCK.  Oh, that’s like that SHEIN breach we spoke about a couple of weeks ago, isn’t it?

Fashion brand SHEIN fined $1.9m for lying about data breach

They found out from somebody else… the credit card company said, “You know what, there are a whole lot of dodgy transactions that seem to go back to you.”


DOUG.  They launch an investigation.

But they do not actually shut down all the stuff that’s going on until [DRAMATIC PAUSE] January of 2022!


DUCK.  Eight and a half months later, isn’t it?


DOUG.  Yes!


DUCK.  So that was their threat response?

They had a third party forensics team, they had all the experts in, and more than *eight months* later they said, “Hey, guess what guys, we think we’ve kicked the crooks out now”?


DOUG.  Then they went on to say, in October 2022, that “We’re not certain your information was affected”, but they finally notified customers.


DUCK.  So, instead of saying, “The crooks had malware on the server which aimed to steal everybody’s data, and we can’t tell whether they were successful or not”, in other words, “We were so bad at this that we can’t even tell how good the crooks were”…

…they actually said, “Oh, don’t worry, don’t worry, we weren’t able to prove that your data was stolen, so maybe it wasn’t”?


DOUG.  “This thing that’s been going on for two-and-a-half years under our nose… we’re just not sure.”

OK, so the email that See Tickets sends out to their customers includes some advice, but it’s actually not really advice applicable to this particular situation… [SOUNDING DEFEATED] which was ironic and awful, but sort of funny.


DUCK.  Yes.

Whilst I would agree with their advice, and it’s well worth taking into account, namely: always check your financial statements regularly, and watch out for phishing emails that try and trick you into handing over your personal data…

…you think they might have included a bit of a mea culpa in there, and explained what *they* were going to do in future to prevent what *did* happen, which neither of those things could possibly have prevented, because checking your statements only shows you that you’ve been breached after it happens, and there was no phishing in this case.


DOUG.  So that raises a good question.

The one that a reader brings up… and our comment here on this little kerfuffle is that Naked Security reader Lawrence fairly asks: “I thought PCI compliance required safeguards on all this stuff. Were they never audited?”


DUCK.  I don’t know the answer to that question…

But even if they were compliant, and were checked for compliance, that doesn’t mean that they couldn’t have got a malware infection the day after the compliance check was done.

The compliance check doesn’t involve a complete audit of absolutely everything on the network.

My analogy, which people in the UK will be familiar with, is that if you have a car in the UK, it has to have an annual safety check.

And it’s very clear, when you pass a test, that *this is not a proof that the car is roadworthy*.

It’s passed the statutory tests, which test the obvious stuff that if you haven’t done correctly, means your car is *dangerously* unsafe and shouldn’t be on the road, such as “brakes do not work”, “one headlight is out”, that kind of thing.

Back when PCI DSS was first becoming a thing, lots of people criticised it, saying, “Oh man, it’s too little, too late.”

And the response was, “Well, you have to start somewhere.”

So it’s perfectly possible that they did have the PCI DSS tick of approval, but they still got breached.

And then they just didn’t notice… and then they didn’t respond very quickly… and then they didn’t send a very meaningful email to their customers, either.

My personal opinion is that if I were a customer of theirs, and I received an email like that, given the length of time over which this had unfolded, I would consider that almost nonchalance.

And I don’t think I would be best pleased!


DOUG.  Alright, and I agree with you.

We’ll keep an eye on that – the investigation is still ongoing, of course.

And thank you very much, Lawrence, for sending in that comment.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, or you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure!

[MUSICAL MODEM]


The OpenSSL security update story – how can you tell what needs fixing?

Yesterday, we wrote about the waited-for-with-bated-breath OpenSSL update that attracted many column-kilometres of media attention last week.

The OpenSSL team announced in advance, as it usually does, that a new version of its popular cryptographic library would soon be released.

This notification stated that the update would patch against a security hole with a CRITICAL severity rating, the project’s highest.

Unlike companies such as Apple, who deliberately announce forthcoming security patches simply by releasing them, claiming that this is the best way to protect users, OpenSSL thinks that some sort of advance warning is useful, even though it often can’t say exactly what fixes are coming for fear of giving cybercriminals a head start.

Organisations including Microsoft, Adobe, Oracle and Mozilla also believe in advance notification of patches, albeit that theirs are implicit warnings created by sticking to a well-known schedule that you can plan your life around, such as Microsoft’s Patch Tuesday, Oracle’s Quarterly Updates, and Mozilla’s Every Fourth Tuesdays.

However, when there’s an unspecified OpenSSL bugfix that gets a CRITICAL rating, there’s always the risk of provoking panic, like the difference between knowing that it will probably be rainy next week, and wondering whether there might be a wildly destructive storm.

One reason for that, fairly or unfairly, is that lots of IT teams have long memories that go back to an OpenSSL CRITICAL patch, back in 2014, that closed off the legendary Heartbleed vulnerability:

Heartbleed, unfortunately, was a data leakage bug in OpenSSL that could be triggered by clients, such as random people browsing the internet, against servers almost anywhere.

Worse still, the bug became a sort of countercultural cause célèbre, and it was triggered fast and often by cybercriminals, troublemakers and self-proclaimed “researchers” all round the globe.

Heartbleed attackers went to town trying to take advantage of a bug that was trivial to exploit and that could lead to embarrassment or worse for companies caught out with leaky servers because they hadn’t patched.

Ever since, every time the words CRITICAL and OpenSSL have appeared predictively in the same sentence, the cybersecurity industry has drawn a deep and collective breath, and wondered, “Could this be another XxxxxBleed moment?”

One reason to worry and three reasons to relax

Fortunately, the latest update, once it came out, brought just one piece of mildly worrying news, along with three reasons to feel relieved.

Although what was originally reported as one bug turned out to be two (the second hole was found while researching the first, given that bugs of a similar type often clump together), their impact wasn’t as dramatic as first thought, because:

  • They were downgraded from CRITICAL to HIGH. Both bugs allowed stack buffer overflows, almost certainly exploitable for denial of service (DoS) attacks where an affected program crashes suddenly. But a reliable exploit that could pull off remote code execution feels unlikely, given that one overflow only allows an attacker to alter four bytes in memory, and the other allows overwrites that contain only “dot” characters.
  • The bugs are much more likely to affect clients than servers. Although that’s cold comfort to anyone whose browser, email client or software downloader might crash if they get lured to a booby-trapped server, it’s a huge relief to IT teams running rafts of OpenSSL-secured content servers that are deliberately open to the internet in order to invite and attract visitors.
  • These HIGH-severity bugs exist only in OpenSSL 3.0, not in 1.1.1. The legacy 1.1.1 version is still much more widely used than version 3.0, which reduces the number of servers that these bugs will directly affect.

Nevertheless, the only sensible advice we can give at this stage is, “Update OpenSSL if you have it.”

Where to start?

For SecOps teams and IT staff, that sort of advice makes sense, even if it raises the immediate question, “Where and how to start?”

For everyone else, like Naked Security commenter none, there’s an even more perplexing concern, namely, “I have no idea what I’m supposed to update. Chrome? Firefox? Windows? Help!”

Sadly, there’s no easy answer to that question, because the relationship between Windows and OpenSSL is complicated.

Windows has its own independently developed and maintained encryption library with the wacky name Cryptography API: Next Generation (CNG), so in theory you would not expect to have to worry about OpenSSL on Windows at all.

Yet our default install of Windows 11 has a DLL file called libcrypto.dll in its System folder, which is a filename typically associated with OpenSSL.

Intriguingly, that one turns out to be a false alarm, because it was compiled from the LibreSSL code, a similar but alternative cryptographic library from the OpenBSD team that is loosely compatible with OpenSSL, but doesn’t have these bugs in it.

But even if that Windows system file is nothing to worry about, you may have downloaded Windows apps, or have had them installed for you as part of the supply chain when installing other apps, that quietly brought along their own copies of OpenSSL.

So, even though (as far as we are aware, anyway) the most popular browsers on Windows, namely Edge, Chrome and Firefox, don’t rely on OpenSSL and therefore aren’t at risk…

…what about sysadmins and SecOps teams who want to find out which computers on the network have OpenSSL libraries installed by third-party products, so they can contact the relevant vendors for advice on whether patches are needed, and if so, when they’ll be ready?

Similarly, IT teams looking after Unix and Linux servers will want to know which OpenSSL libraries, if any, are part of their operating system distro, and which products bring their own builds of OpenSSL along for the ride.

Tracking down OpenSSL libraries

Here are some low-level ways to help you answer those questions.

For software that relies on OpenSSL’s dynamically loaded libraries (many if not most programs use OpenSSL this way), you can quickly identify likely OpenSSL code on your system by searching for the most likely names used by the library files.

On Linux, that’s usually libcrypto*.so* and libssl*.so*, and on Windows it’s usually libcrypto*.dll and libssl*.dll. (On macOS, shared libraries sometimes have names with .so, but many have a .dylib extension, so search for both forms.)

Often the filenames will be suffixed (in the places where the wildcard * characters appear above) with some sort of version identifier, e.g. 1.1 or 3, which can help you determine which files are vulnerable to these bugs, and therefore need their updates prioritising.

On Linux, we used a command like this to look for OpenSSL libraries:

$ find / -name 'libcrypto*.so*' 2>/dev/null
/usr/lib64/libcrypto.so.1.1
/usr/lib64/openssl-1.0/libcrypto.so.1
/usr/lib64/openssl-1.0/libcrypto.so.1.0.0
/usr/lib64/openssl-1.0/libcrypto.so
/usr/lib64/libcrypto.so
/lib64/libcrypto.so.1
/lib64/libcrypto.so.1.1
/lib64/libcrypto.so.1.0.0
/opt/mapping/lib/libcrypto.so.1.1
/opt/mapping/lib/libcrypto.so
/home/duck/Builds/openssl-3.0.5/libcrypto.so
/home/duck/Builds/openssl-3.0.5/libcrypto.so.3
/home/duck/Tools/zerobrane/bin/linux/x86/libcrypto.so.1.1
/home/duck/Tools/zerobrane/bin/linux/x64/libcrypto.so.1.1

As you can see, we found a bunch of libraries almost certainly looked after by the distro, in /lib64 and /usr/lib64, plus a bunch of other copies that were apparently brought along with apps we use.

Although we could, in theory, patch our distro and then temporarily copy the centrally updated libcrypto.so.1.1 files over those in the app-specific directories mapping and zerobrane, that might not work well, given that the app might never have been tested with the latest OpenSSL library.

It would also leave us prone to inadvertent downgrades later on if either product noticed it had an interloper file in its midst, and reinstalled what it thought was the right one.

Asking your vendor directly is a good way to ensure you get the most reliable, long-term fix.

(As an aside, we compiled the files in the Builds/openssl-3.0.5 directory specially for this test, in order to ensure we had a recent but not-yet-updated set of OpenSSL 3.0 libraries for completeness.)

On Windows, we used the DIR /S command in a command prompt, and we got this:

C:\Users\duck> dir C:\libcrypto.* /S
 Volume in drive C has no label.
 Volume Serial Number is C001-C0DE

 Directory of C:\Program Files\OpenSSL-Win64

01/11/2022  10:14         5,140,992 libcrypto-3-x64.dll
               1 File(s)      5,140,992 bytes

 Directory of C:\Program Files\OpenSSL-Win64\bin

01/11/2022  10:14         5,140,992 libcrypto-3-x64.dll
               1 File(s)      5,140,992 bytes

 Directory of C:\Program Files (x86)\Nmap

07/08/2021  18:57         2,564,304 libcrypto-1_1.dll
01/09/2022  22:36         3,755,152 libcrypto-3.dll
               2 File(s)      6,319,456 bytes

 Directory of C:\Windows\System32

06/05/2022  14:15         1,783,296 libcrypto.dll
               1 File(s)      1,783,296 bytes

 Directory of C:\Windows\WinSxS\amd64_libressl-components-onecore_31bf3856ad364e35_10.0.22621.1_none_50c3f139c84e05e7

06/05/2022  14:15         1,783,296 libcrypto.dll
               1 File(s)      1,783,296 bytes

     Total Files Listed:
               9 File(s)

This was a recent Windows 11 Enterprise Edition 2022H2 install, on which we’d deliberately installed the Shining Light Productions build of OpenSSL for Windows, to ensure we had at least one 64-bit copy of OpenSSL 3.0 in place.

We’d also installed the popular network scanning tool Nmap, which brought with it 32-bit versions of both OpenSSL 1.1.1 and OpenSSL 3.0.

As mentioned above, we found a libcrypto.dll file in the System folder that we didn’t expect, although the long name of its identical companion in the system WinSxS repository suggested that this wasn’t an OpenSSL-style libcrypto, but a LibreSSL one, which doesn’t have these bugs.

Verifying version numbers on Windows

Now we need to work out which libcrypto files have what version numbers.

On Windows, it’s sometimes enough simply to browse to a libcrypto*.dll sample using File Explorer, right-click on it, and view Properties in order to determine the version details:

But we’ve noticed in the past that some apps insert the version details of the main app into third-party DLLs instead, as a useful way of helping you keep track of which software brought those DLLs along in the first place.

So we devised a more precise way of interrogating a DLL for its OpenSSL version, namely by actually loading the library into a test program and calling the OpenSSL_version() function, if there is one:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

void bail(char* msg) {
   fprintf(stderr,"%s\n",msg);
   exit(1);
}

int main(int argc, char** argv) {
   /* Use DLL name on command line, or a likely default. */
   char* libname = argc > 1 ? argv[1]
                            : "C:\\Windows\\System32\\libcrypto.dll";
   printf("Using library file: %s\n",libname);

   /* Try to load the specified DLL (note: executes DllMain() code). */
   HMODULE testlib = LoadLibrary(libname);
   if (testlib == NULL) {
      fprintf(stderr,"Error: %d\n",GetLastError());
      bail("LoadLibrary() failed on that file");
   }

   /* See if this DLL has an OpenSSL_version() function, which */
   /* should exist in both the OpenSSL 1.1.1 and 3.0 series.   */
   FARPROC getver = GetProcAddress(testlib,"OpenSSL_version");
   if (getver == NULL) {
      bail("Can't find OpenSSL_version() function");
   }

   /* See what it says. String 0 should come out something like this:  */
   /* OpenSSL X.Y.Za Day Month Year, giving full build ID and date.    */
   const char* ver = (const char *)getver(0);
   printf("Version function said: %s\n",ver==NULL?"<no answer>":ver);

   return 0;
}

Note that activating a DLL with LoadLibrary() doesn’t just load it, but also runs its startup code, which is found in the function DllMain() inside any Windows DLL.

In other words, don’t use this technique blindly on untrusted DLLs, because it’s equivalent in risk to running an EXE file directly.

If you don’t have a C compiler installed, you can get a fantastic, free, ready-to-use, minimalistic Windows 64-bit compiler toolkit (under 400KB, including program, headers and libraries!) based on Fabrice Bellard’s Tiny C Compiler (TCC) from here:

https://github.com/pducklin/minimalisti-C/releases

Save the above C source file as cryptchk.c, download and unzip the petcc64-winbin.zip file anywhere on your Windows computer (the program will locate its own include and library files) and run…

C:\Users\duck> petcc64 -stdinc -stdlib cryptchk.c

…to generate cryptchk.exe. (Note that it’s just 2560 bytes in size.)

Now you can check the version data of libcrypto files like this:

C:\Users\duck> cryptchk.exe
Using library file: C:\Windows\System32\libcrypto.dll
Version function said: LibreSSL 3.4.3

C:\Users\duck> cryptchk.exe "C:\Program Files\OpenSSL-Win64\libcrypto-3-x64.dll"
Using library file: C:\Program Files\OpenSSL-Win64\libcrypto-3-x64.dll
Version function said: OpenSSL 3.0.7 1 Nov 2022

As you can now see, the system DLL that we guessed above wasn’t OpenSSL at all is indeed revealed as a LibreSSL component, which isn’t affected by these bugs.

The newly-installed OpenSSL for Windows is confirmed as up to date.

Other output you may see might look like this:

C:\Users\duck\CODE>cryptchk.exe "C:\Windows\System32\kernel32.dll"
Using library file: C:\Windows\System32\kernel32.dll
Can't find OpenSSL_version() function

That’s not an OpenSSL 1.1.1 or OpenSSL 3.0 DLL, so we wouldn’t expect it to have the necessary function to show us its version number.

Or like this:

C:\Users\duck\CODE>cryptchk.exe "C:\Program Files (x86)\Nmap\libcrypto-3.dll"
Using library file: C:\Program Files (x86)\Nmap\libcrypto-3.dll
Error: 193
LoadLibrary() failed on that file

Error 193 is ERROR_BAD_EXE_FORMAT, denoting a file that is “not a valid Win32 application”, because petcc64 is stripped down specifically to build 64-bit Windows executables only, and 64-bit code can’t load 32-bit DLLs.

But all 64-bit Windows versions still support apps compiled in 32-bit mode, which some vendors supply for both platform types so that they can provide just one build that runs on old and new flavours of Windows.

However, if you have access to Visual Studio (the Community Edition is free for individual use, but takes up many gigabytes), you can compile the above code in 32-bit mode, like this:

C:\Users\duck> cl -Fe:cryptchk32.exe cryptchk.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31630 for x86
Copyright (C) Microsoft Corporation.  All rights reserved.

cryptchk.c
Microsoft (R) Incremental Linker Version 14.33.31630.0
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:cryptchk32.exe
cryptchk.obj

C:\Users\duck> cryptchk32.exe "C:\Program Files (x86)\Nmap\libcrypto-1_1.dll"
Using library file: C:\Program Files (x86)\Nmap\libcrypto-1_1.dll
Version function said: OpenSSL 1.1.1k  25 Mar 2021

C:\Users\duck> cryptchk32.exe "C:\Program Files (x86)\Nmap\libcrypto-3.dll"
Using library file: C:\Program Files (x86)\Nmap\libcrypto-3.dll
Version function said: OpenSSL 3.0.5 5 Jul 2022

Those versions do need updating, so if you’re an Nmap for Windows user, keep your eyes out for the next official release.

Verifying version numbers on Linux

On Unix and Linux, you can use this code in your cryptchk.c file to achieve a similar result:

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

void bail(char* msg) {
   fprintf(stderr,"%s\n",msg);
   exit(1);
}

int main(int argc, char** argv) {
   /* Use the command argument as the library name,      */
   /* otherwise pick a sensible default for your distro. */
   char* libname = argc>1 ? argv[1] : "/lib64/libcrypto.so.1.1";
   printf("Using library file: %s\n",libname);

   /* Try to load the library (note: runs code in .so file) */
   void* testlib = dlopen(libname,RTLD_LAZY);
   if (testlib == NULL) {
      bail("Can't dlopen() that file");
   }

   /* See if this library has an OpenSSL_version() function, which */
   /* should exist in both the OpenSSL 1.1.1 and 3.0 series.       */
   const char* (*getver)(int t) = dlsym(testlib,"OpenSSL_version");
   if (getver == NULL) {
      bail("Can't find OpenSSL_version() function");
   }

   /* See what it says. String 0 should give something like this:  */
   /* OpenSSL X.Y.Za Day Month Year, giving full build ID and date. */
   const char* ver = getver(0);
   printf("Version function said: %s\n",ver==NULL?"<no answer>":ver);

   return 0;
}

Where Windows uses LoadLibrary() and GetProcAddress(), the Unix coding style uses dlopen() and dlsym() instead, where dl is short for dynamic library.

Here is some of the output we got on our own Linux system:

$ clang -o cryptchk cryptchk.c    # You can use gcc instead if you don't have clang

$ ./cryptchk /usr/lib64/libcrypto.so.1.1
Using library file: /usr/lib64/libcrypto.so.1.1
Version function said: OpenSSL 1.1.1q  5 Jul 2022

$ ./cryptchk /home/duck/Builds/openssl-3.0.5/libcrypto.so.3
Using library file: /home/duck/Builds/openssl-3.0.5/libcrypto.so.3
Version function said: OpenSSL 3.0.5 5 Jul 2022

$ ./cryptchk /lib64/libcrypto.so.1.0.0
Using library file: /lib64/libcrypto.so.1.0.0
Can't find OpenSSL_version() function

Both the 1.1.1 and 3.0 versions need updating, the former by the distro and the latter by us, while the legacy 1.0.0 library (no, we’re not sure why it’s there, and will now consider removing it) doesn’t support the contemporary OpenSSL_version() function.

What else might be there?

Unfortunately, the OpenSSL code can be statically linked into Windows and Linux/Unix executable files, leaving no obvious .dll or .so files to guide you to potentially buggy packages.

Static linking means that the OpenSSL code is built right into the main .EXE or binary file, mixed in along with everything else.

In theory, you could search binary program files for identifying text strings that typically appear in OpenSSL’s code when it’s compiled, hoping to find the version number at the same time, but that’s an error-prone process so we shan’t cover it here.

Ideally, software that incorporates OpenSSL should declare that it’s using the project’s code somewhere in its installer, documentation or website.

This should help you to track down products that use OpenSSL, but in a way that doesn’t show up obviously, at which point we suggest contacting the vendor for further information.

Happy hunting!

If you have any questions, you can leave them in the comments below, anonymously if you wish.

If you want to contact us privately, you can email tips@sophos.com.

We can’t promise to answer every question, but we’ll give it a good go…

…and if you’d like to see more articles like this, with sample code in a do-it-yourself, “learn by trying” spirit, please let us know.

