HOW MANY CRYPTOGRAPHERS?
With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.
READ THE TRANSCRIPT
DOUG. Leaky light bulbs, WinRAR bugs, and “Airplane mode, [HIGH RISING TONE] question mark?”
All that and more on the Naked Security podcast.
Welcome to the podcast, everybody.
I am Doug Aamoth; he is Paul Ducklin.
Paul, your thoughts?
DUCK. My thoughts are, Doug, that…
…that was a very good representation of an interrogation mark.
DOUG. Yeah, I turned my head almost into landscape mode.
DUCK. [LAUGHS] And then one little woodpecker blow just at the bottom, PLOCK, just for full effect.
DOUG. Well, speaking of questions, we have a great one… I am so excited for This Week in Tech History.
DUCK. Very good one there!
The Seguemeister is back!
DOUG. If anyone has ever heard of Miss Manners, she is advice columnist Judith Martin.
She’s 84 years young and still doling out advice.
So in her 26 August 1984 column, she answers a very important question.
Now, I need to read this verbatim because the write-up is too good: this is from computerhistory.org, which is a great site if you’re into tech history.
Miss Manners confronts a new realm of etiquette in her August 26 column…
Remember, this is 1984!
…as she responded to a reader’s concern about typing personal correspondence on a personal computer.
The concerned individual said that using the computer was more convenient, but that they were worried about the poor quality of their dot matrix printer and about copying parts of one letter into another.
Miss Manners replied that computers, like typewriters, generally are inappropriate for personal correspondence.
The recipient may mistake the letter for a sweepstakes entry.
DUCK. [LOUD LAUGHTER] Do you have four aces?
Here are three… scratch off your lucky letter and see. [MORE LAUGHTER]
DOUG. And she noted:
If any of your friends ever sees that your letter to another contains identical ingredients, you will have no further correspondence problems.
As in, you’re done corresponding with this friend because the friendship is over.
DUCK. Yes, the question will answer itself. [LAUGHTER]
Alright, let’s get into it.
Here we have a pair of WinRAR bugs… remember WinRAR?
One is, “A security issue involving an out-of-bounds write.”
And number two, “WinRAR could start a wrong file after a user double-clicked an item in a specially crafted archive.”
Paul, what’s going on here with WinRAR?
DUCK. Well, WinRAR… lots of people will remember that from the old days, when archives typically came on multiple floppies, or they came as lots and lots of separate small text-encoded posts in an internet forum.
WinRAR, if you like, set the standard for making it easy to collate lots of separate sources, putting them back together for you and having what I believe it refers to as a “recovery volume”.
That was one or more additional parts so that if one or more of the original parts is damaged, corrupted or even (as you can imagine in the case of floppy disks or uploaded chunks in an online forum) missing completely, the program could automatically reconstruct the missing part based on error correction data in this recovery volume.
And, unfortunately, in (I believe) the older code in the product that dealt with the old-style error recovery system…
…as far as I can understand it (obviously they’re not giving away the exact details of this), you send someone an archive that has a corrupt part which forces WinRAR to go and use its recovery volume to try and deal with the bit that’s been damaged.
And in handling the recovery data, there’s a buffer overflow which writes beyond the end of the buffer, which could cause remote code execution.
This is CVE-2023-40477, where trying to recover from a fault causes a fault that can be exploited for remote code execution.
So if you are a WinRAR user, make sure that you have patched.
Because there was a coordinated disclosure of this by the Zero Day Initiative and by WinRAR recently; everyone knows that this bug is out there by now.
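[Editor’s note: the bug class Duck describes, an out-of-bounds write while processing attacker-supplied recovery data, typically comes down to trusting a length field embedded in the file. The sketch below is purely illustrative (a made-up record format, not WinRAR’s actual RAR layout or code) and shows the kind of validation that prevents it:]

```python
import struct

BLOCK_SIZE = 512  # hypothetical fixed size of a recovery block


def parse_recovery_record(data: bytes) -> bytes:
    """Parse a (made-up) recovery record: 4-byte little-endian length,
    then payload.

    The attacker controls 'data', so the declared length must be
    checked against what was actually received before it is used.
    """
    if len(data) < 4:
        raise ValueError("record too short for header")
    (declared_len,) = struct.unpack("<I", data[:4])

    # The vulnerable pattern is to copy 'declared_len' bytes into a
    # fixed-size buffer without these two checks.
    if declared_len > BLOCK_SIZE:
        raise ValueError("declared length exceeds block size")
    if declared_len > len(data) - 4:
        raise ValueError("declared length exceeds actual data")

    return data[4 : 4 + declared_len]
```

A record that lies about its own length (say, claiming 10,000 bytes of payload when only 2 arrived) gets rejected instead of triggering an overflowing copy.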
DOUG. The second bug is less serious, but still a bug nonetheless…
DUCK. Apparently this one was used by crooks for tricking people into installing data-stealing malware or cryptocurrency roguery, who would have thought?
Given that I’m not a WinRAR user, I couldn’t test this, but my understanding is that you can open an archive and when you go to access something in the archive, *you get the wrong file* by mistake.
DOUG. OK, so version 6.23 if you’re still using WinRAR.
Our next story is from the “how in the world did they find this bug?” file.
Researchers have discovered how to trick you into thinking your iPhone is in Airplane mode while actually leaving mobile data turned on.
DUCK. I was minded to write this up because it is a fascinating reminder that when you are relying on visual indicators provided by the operating system or by an app, say in a status bar or, on the iPhone, in the so-called Control Center, which is the buttons you get when you swipe up from the bottom of the screen…
There’s a little icon of an aircraft, and if you tap it, you go into Aeroplane mode.
And so researchers at Jamf figured, given that that’s the workflow that most people do if they temporarily want to make sure their phone is offline, “How strongly can you rely on indicators like that Control Center that you swipe up on your iPhone?”
And they discovered that you can actually trick most of the people most of the time!
They found a way that, when you tap on the aircraft icon, it’s supposed to go orange and all the other icons that show radio connection are supposed to dim out… well, they found that they could get that aircraft to become orange, but suppress the part where mobile data actually gets turned off.
So it looks like you’re in Aeroplane mode, but in fact your mobile data connection is still valid in the background.
And then they reasoned that if someone really was serious about security, they’d figure, “Well, I want to make sure that I am disconnected.”
And I would have followed exactly the workflow that they suggest in their research article, namely: I would open my browser, and I’d browse to a site (nakedsecurity.sophos.com, for example), and I would check that the system gave me an error saying, “You’re in Aeroplane mode. You can’t get online.”
I would have been inclined, at that point, to believe that I really had disconnected my phone from the network.
But the researchers found a way of tricking individual apps into convincing you that you were in Aeroplane mode when in fact all they’d done is deny mobile data access to that specific app.
Normally, when you go into Safari and you’ve said that Safari is not allowed to use your mobile data, what you’re supposed to get is an error message along the lines of, “Mobile data is turned off for Safari.”
If you saw that message when you were testing connectivity, you would realise, “Hey, that means mobile data is still on in general; it’s only off for this specific app. That’s not what I want: I want it off for everybody.”
So they found a way of faking that message.
It displays the one that says, “You’re in Aeroplane mode. You can’t get online.”
It is a great reminder that sometimes you can’t believe what you see on the screen.
It helps to have two ways of checking that your computer is in the security status, or at the security level, that you want it to be in.
Just in case someone is pulling the wool over your eyes.
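[Editor’s note: one concrete way to get that second, independent check is to ask the network stack directly whether a connection succeeds, rather than trusting an on-screen indicator or an in-app error message. A rough sketch; the host and port below are just example values:]

```python
import socket


def probe_connectivity(host: str = "1.1.1.1", port: int = 443,
                       timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This queries the network stack directly, so it gives an
    independent second opinion on whether the device is really
    offline, instead of relying on a UI indicator that could be
    spoofed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Of course, on a fully compromised device even this could in principle be intercepted; the point is simply that two independent checks are harder to fake consistently than one.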
DOUG. Alright, it gives me great pleasure to announce that we will keep an eye on that.
And last, but certainly not least, anyone who set up a smart device knows the process by now.
The device transmits itself as an access point.
You connect to that access point with your phone, tell it what *your* access point is, complete with Wi-Fi password.
And what could possibly go wrong?
Well, several things, it turns out, Paul, could go wrong!
DUCK. In this particular paper, the researchers focused on a product called the TP-Link Tapo L530E.
Now, I don’t want to point fingers particularly at TP-Link here… in the paper, they said they chose that one because, as far as they could see (and the researchers are all, I think, Italian), that was the most widely sold so-called smart light bulb via Amazon in Italy.
DOUG. Well, that’s what’s interesting, too… we talk about these IoT devices and all the security problems they have, because not a lot of thought goes into securing them.
But a company like TP-Link is big and reasonably well regarded.
And you would assume that, of the IoT device companies, this would be one that would be putting a little extra wood behind security.
DUCK. Yes, there were definitely some coding blunders that should not have been made in these vulnerabilities, and we’ll get to that.
And there are some authentication-related issues that are somewhat tricky to solve for a small and simple device like a light bulb.
The good news is that, as the researchers wrote in their paper, “We contacted TP-Link via their vulnerability research program, and they’re now working on some sort of patch.”
Now, I don’t know why they chose to disclose it and publish the paper right now.
They didn’t say whether they’d agreed on a disclosure date, and they didn’t say when they told TP-Link and how long they’ve given them so far, which I thought was a bit of a pity.
If they were going to disclose because they thought TP-Link had taken too long, they could have said that.
If it hasn’t been very long, they could have waited a little while.
But they didn’t give any copy-and-paste code that you can use to exploit these vulnerabilities, so there are nevertheless some good lessons to learn from it.
The main one seems to be that when you’re setting up the light bulb for the first time, there is some effort put into making sure that the app and the light bulb can each check that they are communicating with the right sort of code at the other end.
But even though there’s some effort to do that, it relies on what we might jokingly call a “keyed cryptographic hash”… but the key is hard-wired and, as the researchers found, they didn’t even need to go and disassemble the code to find the key, because it was only 32 bits long.
So they were able to recover it by brute force in 140 minutes.
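[Editor’s note: to see why a 32-bit key falls to brute force in a couple of hours, here is a scaled-down sketch using a deliberately tiny 16-bit key and a made-up keyed-hash construction (not TP-Link’s actual scheme). Every possible key is simply tried until one matches:]

```python
import hashlib


def keyed_hash(key: int, message: bytes) -> bytes:
    """Toy keyed hash: SHA-256 over the key bytes plus the message."""
    return hashlib.sha256(key.to_bytes(2, "big") + message).digest()


def brute_force_key(message: bytes, target: bytes) -> int:
    """Recover a 16-bit key by exhaustive search.

    A real 32-bit key has 2**32 possibilities instead of 2**16,
    i.e. 65536 times more work - but that is still only a matter
    of hours on a single modern machine.
    """
    for candidate in range(2 ** 16):
        if keyed_hash(candidate, message) == target:
            return candidate
    raise ValueError("key not found")
```

The moral: a key that can be enumerated end-to-end on commodity hardware isn’t really a key at all.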
DOUG. To be clear, an attacker would need to be within range of you, and set up a rogue access point that looks like your light bulb, and have you connect to it.
And then they’d be able to get you to type in your Wi-Fi password, and your password to your TP-Link account, and they’d get that stuff.
But they would need to be physically within range of you.
DUCK. The attack can’t be mounted remotely.
It’s not like somebody could just send you some dubious link from the other side of the world and get all that data.
But there were some other bugs as well, Doug.
DOUG. Yes, several things went wrong, as mentioned.
It seems that this lack of authentication carried through to the setup process as well.
DUCK. Obviously what’s really important when the setup actually starts is that the traffic between the app and the device gets encrypted.
The way it works in this case is that the app sends an RSA public key to the light bulb, and the light bulb uses that to encrypt and send back a one-time 128-bit AES key for the session.
The problem is that, once again, just like with that initial exchange, the light bulb makes no effort to communicate to the app, “Yes, I really am a light bulb.”
By creating that fake access point in the first place, and knowing the magic key for the “are you there?/yes, I am here” exchange… by exploiting that hole, an imposter could lure you to the wrong access point.
And then there’s no further authentication.
An imposter light bulb can come back and say, “Here’s the super-secret key that only you know and I know.”
So you are communicating securely…
…with the imposter!
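[Editor’s note: the missing step in that exchange is any proof of who the replier is. As a generic illustration only (a textbook pattern, not TP-Link’s protocol or their eventual fix), here is what an authenticated reply could look like: the bulb MACs the fresh session key together with the app’s nonce using a per-device secret, and the app verifies it before trusting the key. How such a secret gets provisioned is a separate, non-trivial problem:]

```python
import hashlib
import hmac
import os


def bulb_send_session_key(device_secret: bytes, app_nonce: bytes):
    """Bulb side: choose a one-time 128-bit session key and prove
    knowledge of the per-device secret by MACing it with the app's
    fresh nonce."""
    session_key = os.urandom(16)
    tag = hmac.new(device_secret, app_nonce + session_key,
                   hashlib.sha256).digest()
    return session_key, tag


def app_accept(device_secret: bytes, app_nonce: bytes,
               session_key: bytes, tag: bytes) -> bool:
    """App side: the check the protocol as described lacks - verify
    the reply came from a device that actually knows the secret."""
    expected = hmac.new(device_secret, app_nonce + session_key,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

An imposter who doesn’t know the per-device secret can still send back a session key, but the MAC check fails and the app refuses to proceed.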
DOUG. Surely, by now, we’re done with the problems, right?
DUCK. Well, there were two further vulnerabilities they found, and in a way, the third of these is the one that worried me the most.
Once you’d established this session key for the secure communication, you’d assume that you would get the encryption process right.
And my understanding is that the coders at TP-Link made a fundamental cryptographic implementation blunder.
They used AES in what’s called CBC, or “cipher block chaining” mode.
That’s a mode that is meant to ensure that if you send a packet with exactly the same data two, three, four or more times, you can’t recognise that it’s the same data.
Without that protection, with repeated data, even if an attacker doesn’t know what the data is, they can see that the same thing is happening over and over.
When you’re using AES in CBC mode, the way you do that is you prime the encryption process with what’s called an IV or an “initialization vector” before you start encrypting each packet.
Now, the key has to be a secret.
But the initialization vector doesn’t: you actually put it in the data at the start.
The important thing is it needs to be different every time.
Otherwise, if you repeat the IV, then when you encrypt the same data with the same key, you get the same ciphertext every time.
That produces patterns in your encrypted data.
And encrypted data should never display any patterns; it should be indistinguishable from a random stream of stuff.
It seems that what these programmers did was to generate the key and the initialisation vector right at the start, and then whenever they had data to send, they would reuse the same key and the same initialisation vector.
[VERY SERIOUS] Don’t do that!
And a good aide-memoire is to remember another word in cryptographic jargon: “nonce”, which is short for “number used once.”
And the hint is right there in the name, Doug.
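[Editor’s note: the effect of reusing a key/IV pair is easy to demonstrate. Python’s standard library has no AES, so this sketch uses a simple hash-based stream construction as a stand-in; it is NOT real cryptography, but the lesson - identical plaintext plus identical key plus identical IV gives identical, recognisable ciphertext - carries over directly to AES-CBC:]

```python
import hashlib
import os


def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """Stand-in cipher: XOR the plaintext with a keystream derived
    from SHA-256(key || iv || counter). Illustrative only - it just
    shows how key/IV reuse leaks patterns."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(plaintext):
        block = hashlib.sha256(
            key + iv + counter.to_bytes(4, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))


key = os.urandom(16)

# Reused IV: two identical packets encrypt to identical ciphertext,
# so an eavesdropper can tell the same command was sent twice.
fixed_iv = os.urandom(16)
c1 = toy_encrypt(key, fixed_iv, b"turn the light on")
c2 = toy_encrypt(key, fixed_iv, b"turn the light on")
assert c1 == c2

# Fresh IV per packet: same plaintext, unrelated-looking ciphertext.
c3 = toy_encrypt(key, os.urandom(16), b"turn the light on")
assert c3 != c1
```

Same key, same IV, same plaintext: the attacker sees the pattern. A fresh IV per packet makes every ciphertext look unrelated, which is the whole point.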
DOUG. OK, have we covered everything now, or is there still one more problem?
DUCK. The last problem that the researchers found, which is a problem whether or not initialisation vectors are used correctly (although it’s a more acute problem if they are not), is that none of the requests and replies being sent back and forth were timestamped reliably, which meant that it was possible to re-send an old data packet without knowing what it was all about.
Remember, it’s encrypted; you can’t read inside it; you can’t construct one of your own… but you could take an old packet, say from yesterday, and replay it today, and you can see (even when an attacker doesn’t know what that data packet is likely to do) why that is likely to create havoc.
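[Editor’s note: a standard defence against replay, offered here as a generic sketch rather than anything TP-Link has announced, is to bind each message to a timestamp (or a monotonic counter) under a MAC, so yesterday’s packet fails verification today:]

```python
import hashlib
import hmac
import time

MAX_AGE_SECONDS = 30  # example freshness window


def seal(key: bytes, payload: bytes, now=None) -> bytes:
    """Prefix the payload with a timestamp and MAC the two together,
    so neither can be tampered with independently."""
    ts = int(now if now is not None else time.time()).to_bytes(8, "big")
    tag = hmac.new(key, ts + payload, hashlib.sha256).digest()
    return ts + tag + payload


def open_sealed(key: bytes, message: bytes, now=None) -> bytes:
    """Reject messages with a bad MAC or a stale timestamp."""
    ts, tag, payload = message[:8], message[8:40], message[40:]
    expected = hmac.new(key, ts + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad MAC")
    current = now if now is not None else time.time()
    if current - int.from_bytes(ts, "big") > MAX_AGE_SECONDS:
        raise ValueError("stale message - possible replay")
    return payload
```

A captured “toggle the light” packet replayed a day later is rejected as stale, even though the attacker never learned what was inside it.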
DOUG. All right, so it sounds like the TP-Link engineering team has a fun challenge on their hands the next couple of weeks or months.
And speaking of fun, Richard chimes in on this story and asks a new version of an old question:
How many cryptographers does it take to update a light bulb?
That question tickled me greatly.
DUCK. Me, too. [LAUGHS]
I thought, “Oh, I should have foreseen that.”
DOUG. And your answer:
At least 2⁸⁰ for legacy fittings and up to 2²⁵⁶ for contemporary lighting.
Beautifully answered! [LAUGHTER]
DUCK. That’s an allusion to current cryptographic standards, where you’re supposed to have what’s broadly known as 128 bits of security at least for current implementations.
But, apparently, in legacy systems, 80 bits of security, at least for the time being, is just about enough.
So that was the background to that joke.
Alright, thank you very much, Richard, for sending that in.
If you have an interesting story, comment, or question you’d like to submit, we’d love to read it out on the podcast.
You can email email@example.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.
That’s our show for today; thanks very much for listening.
For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…
BOTH. Stay secure!