As you probably know (or, at least, as you know now!), October is Cybersecurity Awareness Month, which means it’s a great opportunity to do three things: Stop. Think. Connect.
Those three words were chosen many years ago by the US public service as a short and simple motto for cybersecurity awareness.
As we’ve said many times before on Sophos Naked Security, the only thing worse than being hacked is realising, after you’ve been hacked, that you could have spotted the attack before it unfolded – if only you’d taken the time to look.
That’s why the opening week of the 2021 Cybersecurity Awareness Month focuses on what we can all do to help: Do your part. #BeCyberSmart.
“Cybersecurity is ultimately about protecting lives and keeping people secure,” said CISA Director Jen Easterly.
Ten security misperceptions
To start with, take a look at our Top Ten security misperceptions, written by Peter Mackenzie, who leads the Sophos Incident Response Team:
Then, read through our short-and-sharp series of Cybersecurity Hindsight tips by Rob Collins of the Sophos Systems Engineering team.
There are still plenty of obvious preventative cybersecurity measures that we are all perfectly well aware of, but still haven’t implemented for all of our users.
That’s a bit like going to the trouble of locking the front door of your house whenever you go out, but leaving the back door wide open.
Tactics, techniques and procedures
Unfortunately, a typical computer network has plenty of entranceways, and cybercriminals have dozens of different TTPs at their disposal – that’s contemporary cybersecurity jargon that refers to tactics, techniques and procedures.
We need not only to apply hindsight security to stop threats that we’ve known about for years, but also to keep abreast of new cybercrime TTPs and defend against them, too.
Read our Active Adversary Playbook to understand your enemies, and how to protect against them proactively:
And once you know what to look out for, and how to defend against it proactively, take a listen to one of our own in-house cybersecurity experts explaining how to build an effective cybersecurity team of your own:
[embedded content]
Awareness isn’t just for October
Remember that Cybersecurity Awareness Month isn’t a special month for throwing more time and money than usual into defending against cybercriminality in the hope of tiding yourself over until next year…
…but rather a month to look at what you’re already doing, and how you can improve it for the whole year ahead.
Do your part. #BeCyberSmart.
DEFENDING AGAINST RANSOMWARE: WHAT WORKED (AND WHAT DIDN’T)
We also recommend our State of Ransomware 2021 report, where last year’s ransomware victims talk publicly but anonymously about what worked when they landed in trouble, and what didn’t:
In case you’re wondering, paying the blackmail generally doesn’t work out as well as you might think:
You might be forgiven for thinking that cybercrime is almost all about ransomware and cryptocoins these days.
In a ransomware attack, the crooks typically blackmail you to send them cryptocurrency in return for giving you your stolen data back (or for not selling it on to someone else).
In a cryptocoin attack, the crooks typically take your cryptocurrency for themselves, perhaps by exploiting a bug in the trading software you use, or by stealing your private keys so they have direct access to your cryptocurrency wallet.
This sort of criminality sometimes involves amounts reaching tens of millions of dollars, or even hundreds of millions of dollars, in a single attack.
But gift card fraud still fills a distressing niche in the cybercrime ecosystem, where a gang of crooks redeem gift cards that you paid for, either because you were convinced that those cards were earmarked for something else, or because the crooks got temporary access to one of your online accounts that allowed them to buy gift cards on your dime.
Indeed, the US Department of Justice announced this week the indictment of four suspected gift card scammers, and alleges that these four ended up with more than 5000 fraudulently obtained cards to spend on themselves.
If we reasonably assume an average of $200 a gift card (we know that in many scams, crooks come away with more than that on each card), we’re still looking at $1,000,000 of ill-gotten gains in this court case alone.
And the people who lose money in these scams aren’t multinational companies, or cyberinsurers, or megacorporations with financial reserves to tide them over.
The victims here are almost always people just like you, or your grandmother, or your favourite aunt, or your innocent and well-meaning friends.
Gift cards – always for someone else
Buying or acquiring gift cards with someone else’s money is a sneaky trick, because gift cards are generally intended to be sent to someone else rather than to show up at the purchaser’s house.
Cybercriminals who had a few minutes of access to the online account you have with your favourite consumer goods retailer, for example, might not be able to make much money out of you by directly ordering a bunch of brand new smart TVs or games consoles.
Sure, jobbing crooks love products of that sort because they’re easy to “flip” as second-hand items on online trading sites. (We’ve heard of crooks boasting that they can “sell” hot items like phones and widescreen TVs online before they actually steal them, thus not only matching supply to demand but also minimising the time needed to “hold” the hooky items.)
But blindly ordering such products online using someone else’s account leaves the crooks with a tricky problem: how to effect delivery?
If the delivery service will only supply items to the address that the card is registered to, the crooks have to hang around your property in the hope of intercepting the delivery before you notice it yourself and realise something is afoot.
If the delivery service will accept alternative addresses, then the crooks are still stuck with using a location at which they can be caught in the act of acquiring property that they can’t reasonably account for.
Gift cards, however, are intended to be bought by person X and then transmitted, typically electronically, to recipient Y for them to spend on themselves as they choose, perhaps even in another country.
These days, you typically just receive a “here’s a gift for you” email containing a magic code or web URL you can use to redeem the card, with the expectation that you’ll spend it on yourself, either online or in a store of your choice in a location that suits you.
Gift card scammers and how they work
Indeed, some artisan cybergangs seem to specialise in gift card scams, like the group that the Sophos Rapid Response Team came across in the runup to Christmas last year.
In this scam, the crooks got into a company network, but rather than scouring the servers for data to steal or automatically launching a ransomware attack across the whole network, they logged in manually but systematically to computer after computer, as end user after end user.
As they tried out each computer, they fired up the local user’s browser to check whether that user had left themselves logged into their email account.
If so, the crooks attempted to access a wide range of likely personal accounts for that user, either getting straight in because the user hadn’t logged out from those accounts either, or doing an immediate password reset and capturing the reply via the already-compromised email account.
Then, for each user, hundreds in all, the crooks attempted to buy gift card after gift card, for which they needed to supply little more than an email address for the recipient of the “gift”.
Fortunately, in this case, few of the users thus hacked had left credit card details on file for the e-commerce sites involved, so the crooks didn’t get away with much…
…and thus the trick was rumbled (and Sophos Rapid Response called in) because numerous users noticed suspicious uncompleted purchases in their virtual shopping carts, and raised the alarm.
Romance scammers also like to arrange for gift card “payments”, luring their victims – who have often been tragically tricked into thinking they’ve found a genuine friend, or even their future spouse, via a fraudulent profile on a dating site – to remit them money this way.
Asking for gift cards no doubt feels more intimate, and is perhaps less widely linked with fraud in victims’ minds, than the old-school approach of demanding cash money paid via a wire transfer service.
LEARN MORE ABOUT ROMANCE SCAMMERS
[embedded content]
Video not visible above? Watch directly on YouTube, or read the transcript. Click on the cog to speed up playback or turn on subtitles.
What happens to the gift cards?
In this recent DOJ indictment, the scam was operated using the sort of network of “affiliates” or “associates” that commonly crops up in modern cybercriminality, everywhere from malware-as-a-service gangs to mobile phone fleeceware scammers.
The DOJ alleges that:
[Three of the defendants] obtained over 5,000 gift cards from a group known as the “Magic Lamp.” [These defendants] caused the gift cards to be distributed to “runners” like [the fourth defendant], who used the funds on the cards at Target stores in Los Angeles and Orange County and elsewhere to purchase, among other items, consumer electronics and other gift cards. Through the purchases, returns and other transactions at multiple Target stores, the defendants and their co-conspirators sought to conceal the fact that the gift cards had been originally funded with fraudulent proceeds. [. . .]
[The perpetrators] induced victims to send proceeds to defendants’ associates, and defendants then conspired to launder the proceeds.
What to do?
If you haven’t watched our “romance scammers” video above, please do so – not just to stop yourself from getting waylaid by golden-tongued false friends, but also to learn some tips for how to approach any friend or family member who gets sucked in by these manipulative criminals.
Scammers of the “send me a gift” sort aren’t just slick at parting their fake sweethearts from their money, but also well-practised in coaching their victims on how to reject any suggestions from their genuine friends that they are part of a fraud.
In some cases, this ultimately results in the victim not only being drained of money but also alienated from their friends and family.
And never use gift cards as a payment option for non-personal matters, no matter how convincing the person at the other end might sound about how gift cards are a convenient way of saving time, avoiding bank fees, speeding up payment, circumventing possible corruption at a specific government office, or any of a number of excuses that are commonly trotted out by crooks.
In the words of Acting US Attorney Tracy Wilkison from California:
This case offers an important reminder to consumers that gift cards are for presents to friends and loved ones – they should never be used for payments to any government or corporate entity. Don’t be fooled by callers claiming to be with a government agency, a bank or any other institution demanding that you purchase gift cards. There is no reason to purchase a gift card to resolve a problem with an account, your Social Security number or a supposed criminal case.
This advice seems so obvious when it’s written down in plain English, but don’t forget that if you or one of your more vulnerable friends or family members get into the habit of talking to one of these scamming “associates” on a regular basis, it’s easy to end up yielding to their blandishments when they act lovingly, or feeling threatened when they pile on the verbal pressure.
This sort of scammer works at this sort of crime all day, every day as if it were a regular job, so you can be sure that they not only have the gift of the gab, but also know all the social engineering tricks that lure people into doing things they usually never would.
Simply put: if in doubt, don’t give it out.
LEARN MORE ABOUT SOCIAL ENGINEERING
Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.
A not-yet-published paper from researchers in the UK has been making media headlines because of its dramatic claims about Apple Pay.
Apple-centric publication 9to5Mac covered it with a headline that was almost a story in itself:
Apparent flaw allows hackers to steal money from a locked iPhone, when a Visa card is set up with Apple Pay Express Transit.
If you haven’t heard of Express Transit (known as Express Travel in the UK, where the word “transit” is not used to refer to public transport), it’s one of those clever ideas that unavoidably trades off cybersecurity against convenience.
Simply put, it lets you complete some types of touch-to-pay transaction, even when your phone is locked.
You can tell where this is going.
(Express Transit is not enabled by default, so unless you have deliberately set it up on your device, the risks in this story don’t apply to you.)
Tin foil hats
Express Transit makes Apple Pay and your iPhone work a bit like a regular credit card, which doesn’t need unlocking with a PIN code for low-value transactions (in the UK, the limit is currently £45, or about $60).
Just tapping your credit card on or near a payment terminal – any terminal, whether it’s at a supermarket, in a newsagent, or at a coffee shop – triggers a rapid and entirely automated cryptographic exchange via the chip in your card that bills your account for the amount shown on the terminal’s screen.
The technology used in this process is known as NFC, short for near-field communication, and relies on a magnetic field emitted by the reader, which induces a minuscule current in a metal coil looped inside the credit card.
That tiny burst of electrical energy produces just enough power to activate the chip for long enough to authorise a single transaction and transmit the verification data wirelessly back to the terminal.
In theory, anyone who has a payment terminal of their own, and a payment provider willing to process their dodgy transactions, could wave the terminal close enough to your credit or debit card, for example while you are waiting in a queue or jammed into a train or bus, and trick your card into making a payment you weren’t aware of.
Likewise, anyone who steals your card, even temporarily, could use it to approve a payment for their goods against your card, because they don’t need to know your PIN, merely to have your card in their hand for long enough to wave it at a legitimate terminal.
In practice, however, the use of bogus payment terminals to extract fraudulent transactions by bringing the terminal to the card (rather than the other way around) seems to be extremely unusual.
So, for all the horror stories you may have heard about “payment leeches” prowling urban trains stealing money from unsuspecting commuters, we have never seen credible reports of successful, systematic frauds carried out this way.
We’re guessing that the stakes are simply too high for prospective crooks, given the obviously intrusive and anti-social behaviour required to pull off the occasional low-value fake transaction by rubbing up against total strangers on a crowded train.
Back in 2018, we tested several different types of credit card “shielding wallets”, from simple metal-coated cardboard covers, through to bulky metal card holders, and while we weren’t convinced you really needed to use one, we found that they all worked as advertised. Even our home-made shield hastily folded from tin foil (which is actually made of aluminium, of course) stopped our cards being activated, no matter how close we came to the reader or how many times we tried.
When “locked” means “sort of”
But what about NFC payment transactions via your mobile phone?
Unlike your credit card, which you mostly keep in your wallet and only fish out when you are actually at a payment terminal, your phone is almost certainly often on public display, sometimes held in front of you, sometimes just sitting temptingly nearby on a desk, table or counter-top.
We’re much more likely to leave our phones in a bus, train, taxi, someone else’s office, shop, hotel or even at the beach than we are to lose our credit card in the same way.
It’s this “time exposed to danger” that persuades us to put lock codes on our mobile devices, so that they can’t instantly be used and abused by anyone who happens to pick them up and turn them on.
Even if we don’t like typing in a full-on lock code every time we want to use our devices throughout the day, most of us will configure some sort of alternative authentication mechanism such as fingerprint or facial recognition, to keep total strangers out.
Most of us also set a timeout period that locks our devices automatically after a few minutes, even if we forget to press the lock button before we put the device down.
Unfortunately, if rather obviously, every phone feature that you activate on the lock screen works directly against the security that the lock screen is supposed to provide in the first place.
Whether it’s allowing notifications and personal messages to appear while your phone is locked, or using the Apple Pay Express Transit feature to authorise tap-and-go payments while your phone is locked…
…anything that you authorise on the lock screen makes a bit of a mockery of the concept of “locking” your phone in the first place.
Speed is of the essence
Despite the risks, we understand the motivation behind Express Transit.
Public transport is often a crowded, high-pressure environment where the availability and convenience of tap-to-pay terminals has made your fellow commuters even less patient, and where fumbling with your unlock code or not getting your face recognised first time when you are in a crush at the ticketing machine or at the platform gate…
…can lead to delay, unpleasantness, insults, jostling, aggression or worse from the swarming multitudes around you.
Thus the selective Express Transit, also known as Express Travel, option in Apple Pay.
As we understand it, this “automatic unlock” doesn’t open up your account totally, so it doesn’t turn your phone into an instantly-pay-for-anything device like the credit card tucked up in your wallet.
The feature is only supposed to work with selected public transport services, so that (in the UK, for instance) you can make Express Transit payments at Transport for London terminals and to the First Group bus company, but no one else.
In the relaxed environment of your favourite Camden Town coffee shop, you’ll still need to unlock your phone to make a payment, but in the rush hour squish at Mornington Crescent underground railway station, you won’t.
What’s the deal?
As explained above, enabling Express Transit doesn’t, in theory, open up your locked phone to abuse while you are wandering through a department store, for example, or going to a movie, or paying for fuel at a service station.
In practice, however, the researchers behind this yet-to-be-published paper claim that they were able to trick iPhones into making fraudulent payments under carefully prepared circumstances, by setting up their own payment terminal and passing it off as belonging to a public transport company that was part of the Express Transit payment scheme.
Apparently, they only managed to pull this off with Visa card accounts (presumably, other payment providers were stricter about deciding whether payment terminal X really belonged to company Y), but it wasn’t limited by the usual NFC credit card payment limit.
Indeed, the researchers claim that by using a fraudulent payment terminal they could get transactions approved up to £1000, well beyond the £45 tap-and-pay limit that currently applies to regular credit cards in the UK.
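The flaw, as described, boils down to a confused-deputy problem: the locked phone decides whether to approve a payment based on data that the terminal itself supplies. The sketch below is our own invention, not Apple’s or Visa’s actual logic – the function, the merchant codes and the whitelist are all made up for illustration – but it captures the shape of the problem: a rogue terminal that claims a transit merchant code skips both the lock check and the usual limit.

```python
# Hypothetical sketch of a lock-screen payment decision. Everything here
# (names, codes, limit) is invented for illustration; the crux is that the
# merchant code is self-reported by the terminal and taken on trust.

TRANSIT_MERCHANTS = {"TFL", "FIRSTGROUP"}   # made-up whitelist codes
TAP_TO_PAY_LIMIT_GBP = 45                   # UK contactless limit at the time

def approve_locked_payment(amount_gbp, claimed_merchant_code):
    """Return True if a payment is approved while the phone is locked."""
    if claimed_merchant_code not in TRANSIT_MERCHANTS:
        return False        # normal case: a locked phone refuses to pay
    # Transit payments skip the usual per-transaction limit check, so a
    # terminal that merely *claims* a transit code gets a free pass...
    return True

# A genuine coffee-shop terminal is refused while the phone is locked:
print(approve_locked_payment(3.50, "COFFEESHOP"))   # False
# ...but a rogue terminal replaying a transit code is not, even for £1000:
print(approve_locked_payment(1000, "TFL"))          # True
```

The fix, presumably, is for the card network to verify that the terminal really belongs to the transit operator it claims, rather than trusting the self-reported code.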
What to do?
Should you be worried?
We don’t think so, but that’s because we avoid all “make things work at the lockscreen” features on all our mobile devices.
We have embraced the minor irritation of having to type in our (long!) lock code every time we want to use our phone to do anything more than check the time.
We’ve worked the it-only-takes-a-few-seconds unlock process into the way we conduct our digital lifestyle, even if that occasionally means standing aside from a quickly moving queue for a few moments to get our phone ready to use at the pinch point.
So, in this case, our recommendations are as follows:
Avoid Express Transit and any other “active on the lockscreen” features if you can. These options unavoidably sacrifice cybersecurity for convenience. Practise unlocking your phone so you’re comfortable doing it regularly, and you may find that you can do without Express Transit altogether.
Avoid using Visa cards with Express Transit if you are worried. To be fair to Visa, we’re assuming that, with enough effort, similar bypass tricks could be found to target other payment providers. So, merely avoiding Visa can be considered “security through obscurity”. But these researchers have apparently already found a way to bypass Visa’s checks, so you might as well defend against what seems to be the known part of the problem so far. If you’re really worried, and you simply can’t live without Express Transit, consider setting it up with a prepaid debit card, and keeping a modest balance that you spend only on public transport.
Don’t leave your phone unattended if you can avoid it. Try treating your phone a bit more like your credit card, which you almost certainly don’t get out unless you need it, and put away once you’ve finished with it. If you like to have your phone handy at all times, keep it close (and, ideally, keep it in your hand).
Set the longest lock code and the shortest auto-lock timeout you can tolerate. A locked phone is a minor inconvenience for you, but a major barrier against crooks, even technically knowledgeable ones. An unlocked phone, on the other hand, is an open target for anyone, including unsophisticated, opportunist criminals.
Check your bank and payment card statements regularly. If you use Express Transit for regular and predictable commuting payments, you should be able to spot any anomalous debits fairly easily, because they probably won’t fit your regular pattern.
You’ve probably heard of Let’s Encrypt, an organisation that makes it easy and cheap (in fact, free) to get HTTPS certificates for your web servers.
HTTPS, short for secure HTTP, relies on the encryption protocol known as TLS, which is short for transport layer security.
TLS encrypts and protects the data you send back and forth during a network session so that it can’t easily be snooped on in transit, and so it can’t sneakily be altered along the way.
Because of these features, protecting both the confidentiality of your browsing and the integrity of the data you download, most of us agree these days that HTTPS is vitally important when we use the web.
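TLS achieves its integrity protection with authenticated encryption negotiated during the handshake, which involves much more machinery than we can show here. But the tamper-detection half of the story can be loosely illustrated with a keyed checksum (a MAC), using Python’s standard hmac module and a made-up key – this is an analogy for the idea, not TLS itself:

```python
import hashlib
import hmac

# Illustrative only: in real TLS, keys come from the handshake, and
# integrity protection is built into the record encryption itself.
key = b"made-up shared secret"
message = b"GET /report.pdf HTTP/1.1"

# The sender transmits the message along with a keyed checksum (a MAC)...
mac = hmac.new(key, message, hashlib.sha256).digest()

# ...and the receiver recomputes it; any in-transit tampering changes the MAC,
# and without the key, an attacker can't forge a matching one.
tampered = b"GET /malware.pdf HTTP/1.1"

print(hmac.compare_digest(mac, hmac.new(key, message, hashlib.sha256).digest()))   # True
print(hmac.compare_digest(mac, hmac.new(key, tampered, hashlib.sha256).digest()))  # False
```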
Even if the data you’re looking at is neither private nor secret, crooks can learn a lot about you by keeping an eye on your interests, so why make it easy for them to learn more than they need to know?
Likewise, if you’re downloading a report (or an app) from a site you trust, why make it easy for cybercriminals to switch out your download along the way with a fake news document (or to poke malware-laden content into the middle of an otherwise harmless program)?
Why not simply use HTTPS for everything, just in case, in the same way that you wear your seatbelt every time you travel in a car, instead of using it only when you think road conditions are at their most dangerous?
When HTTPS “was a hassle”
If you go back a decade or so, HTTPS was not universally or even widely used, and there were two main reasons:
TLS certificates were a hassle to acquire and use, and cost money. Sites run by charities, hobbyists and small businesses resented having to pay what they saw as a “web tax”, especially given that certificates need renewing regularly.
TLS network connections were slower than unencrypted ones. Many high-traffic sites were afraid of HTTPS because of the extra time taken by the “cryptographic dance” demanded by the protocol every time a visitor arrived at the site, and because of the need to encrypt and decrypt every byte sent and received thereafter.
Indeed, until about 2010, our attitudes to HTTPS were much more lax than today, to the point that even mainstream websites such as social media, webmail and online shopping only bothered with TLS at so-called “critical moments”.
Before 2010, the page at the start of a browsing session that asked for your password, for example, might use HTTPS (on well-known websites, at any rate), to prevent your password being sniffed out.
Likewise, the page that took your credit card details at the end would probably (though sadly not always) be encrypted, too.
But the vast bulk of your browsing would skip HTTPS altogether, because a snappy browsing experience was considered much more important than a secure one.
This minimalist approach to online security, protecting only “truly personal” data such as passwords and payment card details, was widely considered satisfactory at the time.
Following the sheep
If high-traffic, big-name sites could get away with using HTTPS only some of the time, it was unsurprising that many other sites followed suit, or didn’t bother with HTTPS at all.
Then along came Firesheep, a Firefox plugin released in 2010. This provocative toolkit was created in the hope of convincing us not to be a bunch of sheep, and not blindly to follow the crowd in accepting unencrypted web content as an unavoidable necessity.
With Firesheep, anyone who felt like having a go at being a network hacker could sit in a coffee shop, open up an innocent looking Firefox browsing session, and let the plugin monitor all the unencrypted traffic on the network.
Firesheep’s goal was to watch out for data going to and from other users on the network who had already completed their “secure” login to a site such as Facebook or Twitter.
Those unencrypted network packets didn’t reveal the user’s actual password, but they did expose the current authentication token, or secret session cookie, that was added to each request to prove that this user had already logged in.
Firesheep would automatically slurp up these authentication tokens and inject them into fake Facebook and Twitter traffic in order to compromise other people’s accounts.
By this means, a wannabe attacker – who could already read your posts anyway, even though they were not intended for public broadcast, because they were transmitted without HTTPS – could subvert your account by making posts in your name.
Directly from the click-to-hack simplicity of a browser window.
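The underlying weakness is easy to demonstrate: an unencrypted HTTP request carries the session cookie in plaintext, so anyone who can sniff the packet can copy the token into a request of their own. Here’s a self-contained sketch of that idea (the header names are real HTTP, but the token, site and helper function are invented for illustration):

```python
# A captured, unencrypted HTTP request, as a sniffer might have seen it:
sniffed_request = (
    "GET /home HTTP/1.1\r\n"
    "Host: social.example\r\n"
    "Cookie: session_id=8f2a9c0d1b34\r\n"   # the "already logged in" token
    "\r\n"
)

def extract_cookie(raw_request):
    """Pull the Cookie header value out of a plaintext HTTP request."""
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            return line.split(":", 1)[1].strip()
    return None

stolen = extract_cookie(sniffed_request)

# The attacker replays the token in a brand-new request of their own; no
# password is needed, because the cookie alone proves "already logged in":
forged_request = (
    "POST /post-update HTTP/1.1\r\n"
    "Host: social.example\r\n"
    f"Cookie: {stolen}\r\n"
    "\r\n"
    "status=Posted by an imposter"
)
print(stolen)   # session_id=8f2a9c0d1b34
```

With HTTPS in place, the attacker never sees the cookie at all, which is why encrypting only the login page was never enough.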
The cost of security
This dramatic reminder that we ought to be using web encryption all the time, not just some of the time, resulted in increasingly strident calls for “HTTPS everywhere”.
(Naked Security’s contribution to the early debate was an open letter to Facebook, penned back in 2011; similar pressure from voices across the internet gradually led to networking giants such as Facebook, Microsoft and Google adopting HTTPS for everything.)
Indeed, those early adopters of “HTTPS for everything” showed that on modern computing hardware, the computational overhead imposed by TLS was much less dramatic than many people had feared…
…but this didn’t solve the problem of cost, notably for websites run by enthusiasts and small businesses.
Each server certificate you needed might cost $100 a year, and you had to make sure you didn’t forget to renew the certificates in time, or else your visitors would start getting scary looking “untrusted site – certificate has expired” warnings every time.
Why the cost?
Creating your own TLS certificates, as it happens, takes seconds and can be done for free.
For example, these OpenSSL commands will generate a public/private keypair in the file key.pem, and then create a TLS certificate for the website naksec.test, signed with the newly created private key:
# Generate new Edwards-448 keypair into key.pem
$ openssl genpkey -algorithm ED448 -out key.pem

# Self-sign webcert for CN (Common Name) 'naksec.test' into cert.pem
$ openssl req -x509 -key key.pem -subj '/CN=naksec.test' -days 1 -out cert.pem

# Print out details of new certificate in text format
$ openssl x509 -in cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: [. . .]
        Signature Algorithm: ED448
        Issuer: CN = naksec.test
        Validity
            Not Before: Sep 28 17:14:31 2021 GMT
            Not After : Sep 29 17:14:31 2021 GMT
        Subject: CN = naksec.test
        Subject Public Key Info:
            Public Key Algorithm: ED448
                ED448 Public-Key: [. . .]
        X509v3 extensions:
            X509v3 Subject Key Identifier: [. . .]
            X509v3 Authority Key Identifier: [. . .]
            X509v3 Basic Constraints: critical
                CA:TRUE
    Signature Algorithm: ED448
    [. . .]
$
The problem is that a self-signed certificate (in the example above, the matching Subject and Issuer fields make it clear that we signed the certificate ourselves) won’t get you very far if you want to run a webserver in the real world.
You need to get someone (an organisation known as a CA, short for certificate authority) to carry out at least a basic check that you actually operate the website you’re claiming to represent, and then to sign your certificate in order to vouch for you.
And you need to use a CA that is already trusted by the vast majority of browsers out there, so that your newly signed certificate will automatically work unimpeded for the vast majority of users.
Otherwise your visitors would be confronted with scary looking “Attackers might be trying to steal your information” or “Warning: potential security risk ahead” messages every time.
Default Firefox warning for website with self-signed certificate.
Certificates for free
Started back in 2014, a non-profit organisation called Let’s Encrypt set out to change the HTTPS landscape not only by acting as a CA that offered TLS certificates for free, but also by automating and therefore greatly simplifying the process of acquiring and renewing them.
(Let’s Encrypt wasn’t the first project to do free certificates, but it has been one of the most successful at making its free certificates widely accepted and easy to use.)
The only slightly unusual thing about Let’s Encrypt certificates at the outset was that Let’s Encrypt, being brand new, couldn’t act as a trusted CA in its own right, which made it tricky to gain interest and acceptance.
Most HTTPS certificates have a “chain of trust” three links long:
The certificate you generated for your website.
A digital signature made by the key-holder of what’s called an intermediate certificate.
A digital signature made by the key-holder of a root CA certificate trusted by major browsers.
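A browser’s chain check essentially walks those links: each certificate must have been signed by the next one up, and the walk must end at a root the browser already trusts. Stripping away the actual cryptography, the bookkeeping looks roughly like this (the names are illustrative, and real validation also checks signatures, expiry dates, revocation and much more):

```python
# Minimal sketch: each "certificate" records only who it names (subject)
# and who signed it (issuer). Real X.509 validation does far more.
def chain_is_trusted(chain, trusted_roots):
    """Walk a leaf-first chain, checking each link, until a trusted root."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False              # broken link in the chain
    return chain[-1]["subject"] in trusted_roots

site  = {"subject": "CN=naksec.test",          "issuer": "CN=Example Intermediate"}
inter = {"subject": "CN=Example Intermediate", "issuer": "CN=Example Root"}
root  = {"subject": "CN=Example Root",         "issuer": "CN=Example Root"}  # self-signed

print(chain_is_trusted([site, inter, root], {"CN=Example Root"}))  # True
print(chain_is_trusted([site, inter, root], set()))                # False: root not trusted
```

Note that the root vouches for itself, which is exactly why browsers ship with a pre-installed list of roots they are willing to believe.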
Here’s the verification chain for the website of Digicert, for example, a well-known CA:
The certificate chain-of-trust runs from left to right at the top. DigiCert signing keys have been used at the intermediate and the root level.
In contrast, we’ve shown below how an older browser might decide to trust letsencrypt.com.
In the command below, we’ve used a home-made utility that excludes all of our operating system’s trusted root CAs and relies entirely on a CA from Digital Signature Trust Co., also known as IdenTrust, a company that helped Let’s Encrypt get started by acting as a “CA for the new kid on the block”, starting back in 2015.
(The letters CN=... below are standard nomenclature for Common Name is ..., and the IdenTrust root certificate that gave Let’s Encrypt its leg-up is the final one in the verification chain, denoted CN=DST Root CA X3, where the letters DST are short for Digital Signature Trust Co.)
The HTTPS interaction below simulates the situation back when Let’s Encrypt was unknown, and needed the injection of faith from IdenTrust in order for its certificates to be trusted automatically by browsers:
$ manualverify.lua -nobuiltincas -mycas=dst-root-x3.cert letsencrypt.com
+++ Preloaded 0 CA certificates
+++ Added command-line CA from: dst-root-x3.cert
--- Trying: letsencrypt.com:443
--- Chain-of-trust claimed by server
---   1 /CN=lencr.org
      Valid for: lencr.org letsencrypt.com letsencrypt.org
                 www.lencr.org www.letsencrypt.co www.letsencrypt.org
---   2 /C=US/O=Let's Encrypt/CN=R3
---   3 /C=US/O=Internet Security Research Group/CN=ISRG Root X1
--- Chain-of-trust as resolved in OpenSSL verify()
---   1 /CN=lencr.org
---   2 /C=US/O=Let's Encrypt/CN=R3
---   3 /C=US/O=Internet Security Research Group/CN=ISRG Root X1
---   4 /O=Digital Signature Trust Co./CN=DST Root CA X3
In this example, similar to what an outdated operating system or browser might experience, the final “your computer found a root CA to vouch for everything below it” step (step 4 above) required the system to refer to IdenTrust’s DST Root CA X3 to put the stamp of approval on Let’s Encrypt’s certificate chain from that point downwards.
The good news
The good news is that most browsers and operating systems now directly trust the third certificate in the verification chain above.
If we re-run the command, but get it to use our operating system’s built-in CAs automatically (in this case, by omitting the -nobuiltincas option), we get this:
+++ Preloaded 130 CA certificates
--- Trying letsencrypt.org:443
--- Chain-of-trust claimed by server
---    1 /CN=lencr.org
---      [. . .]
---    2 /C=US/O=Let's Encrypt/CN=R3
---    3 /C=US/O=Internet Security Research Group/CN=ISRG Root X1
--- Chain-of-trust as resolved in OpenSSL verify()
---    1 /CN=lencr.org
---    2 /C=US/O=Let's Encrypt/CN=R3
---    3 /C=US/O=Internet Security Research Group/CN=ISRG Root X1
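One way to picture the difference between the two runs above is that a verifier can stop walking the chain as soon as it reaches a certificate that is already in its trust store. Here’s a sketch of that idea (the `resolve_depth` helper is our own illustrative code, not a real API):

```python
# Sketch: a verifier stops at the first certificate it already trusts.
# With ISRG Root X1 trusted directly, resolution stops at link 3; with
# only DST Root CA X3 trusted, the verifier must follow the
# cross-signature one link further.
# (Hypothetical simplification of the two runs shown in the article.)

def resolve_depth(chain, trusted_roots):
    for depth, cert in enumerate(chain, start=1):
        if cert["subject"] in trusted_roots:
            return depth          # trusted anchor found at this link
    return None                   # no trusted root anywhere in the chain

chain = [
    {"subject": "CN=lencr.org"},
    {"subject": "CN=R3"},
    {"subject": "CN=ISRG Root X1"},
    {"subject": "CN=DST Root CA X3"},   # cross-signature link
]

print(resolve_depth(chain, {"CN=ISRG Root X1"}))    # modern trust store: 3
print(resolve_depth(chain, {"CN=DST Root CA X3"}))  # older trust store: 4
```

That’s why the expiry of DST Root CA X3 only matters to systems whose trust stores never learned about ISRG Root X1: everyone else stopped needing the fourth link long ago.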
And that’s just as well, because the DST Root CA X3 certificate that started it all will expire shortly after 3pm UK time tomorrow, Wednesday 30 September 2021 [2021-09-30T14:01:15Z].
The date to look for appears in the Validity field, labelled Not After:
$ openssl x509 -in dst-root-x3.cert -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            44:af:b0:80:d6:a3:27:ba:89:30:39:86:2e:f8:40:6b
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: O = Digital Signature Trust Co., CN = DST Root CA X3
        Validity
            Not Before: Sep 30 21:12:19 2000 GMT
            Not After : Sep 30 14:01:15 2021 GMT
        Subject: O = Digital Signature Trust Co., CN = DST Root CA X3
        [. . .]
        X509v3 extensions:
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Key Usage: critical
                Certificate Sign, CRL Sign
            X509v3 Subject Key Identifier:
                C4:A7:B1:A4:7B:2C:71:FA:DB:E1:4B:90:75:FF:C4:15:60:85:89:10
        [. . .]
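If you want to check a Not After timestamp programmatically rather than by eye, Python’s standard library can parse the GMT date format that OpenSSL prints. A small sketch using the real ssl.cert_time_to_seconds function:

```python
import ssl
import time
from datetime import datetime, timezone

# Parse the "Not After" timestamp exactly as OpenSSL prints it; the
# standard-library helper interprets the time as GMT and returns
# seconds since the Unix epoch.
not_after = "Sep 30 14:01:15 2021 GMT"
expiry = ssl.cert_time_to_seconds(not_after)

print(datetime.fromtimestamp(expiry, tz=timezone.utc))
# -> 2021-09-30 14:01:15+00:00

# The certificate is expired if its Not After moment is in the past:
print(expiry < time.time())   # True when run after September 2021
```

The same comparison is what your browser or TLS library does, silently, every time it validates a certificate chain.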
Why does this matter?
Given that Let’s Encrypt’s root CA is just one of more than 140 currently trusted by Firefox, for instance, why focus on this one “magic” expiry date?
We felt this was a timely story because it’s a good reminder of why cryptographic progress is often so slow: earned consensus can take years to achieve what a dictatorial pronouncement could decree overnight, but only consensus actually builds collective trust.
Simply put, trust is understandably hard and time-consuming to acquire, but easy to lose.
So, well done to Let’s Encrypt for sticking to its plan of making HTTPS easy and cheap to add even to the tiniest website, and thanks to IdenTrust for vouching for Let’s Encrypt back in the early days.
And thanks to everyone who decided to bite the bullet and adopt HTTPS back when there were still plenty of detractors out there suggesting that HTTPS was just a needless complexity thrust on us by online giants such as Facebook and Google, who clearly had the staff and the budget to do it easily.
Lastly, to those who still claim that HTTPS is an unnecessary evil that plays into the hands of cybercriminals because they, too, can now easily get HTTPS certificates if they want, don’t forget that crooks who wanted to use HTTPS were perfectly able to do so long before Let’s Encrypt came onto the scene.
After all, in the days when cybercriminals still had to stump up credit card details to pay $99 for a TLS certificate to make their sites look “mainstream”…
…do you think for a moment that they were using their own credit cards?
Or do you think they were using card details that they’d slurped up with ease from websites that refused to use HTTPS “because to do so would play into the hands of cybercriminals”?