Update on Naked Security

Dear Naked Security readers,

Firstly, thank you for your interest, your time, and your contributions to the Naked Security community. Your invaluable engagement and expertise have helped improve cybersecurity for everyone.

We have recently added the extensive catalog of Naked Security articles to the Sophos News blog platform, enabling us to provide all Sophos security research, insights, and intelligence in a single location.

We are redirecting articles from Naked Security to Sophos News and you can continue to access the Naked Security article library whenever you need it.

As adversary behaviors evolve, we’re seeing increased demand for deep threat research to help defenders keep pace with today’s fast-changing, innovative attacks. In response, we’re increasing our content focus on proprietary research by our Sophos X-Ops team.

Sophos X-Ops is a cross-functional team of more than 500 threat intelligence and security experts with visibility into attacker behaviors and other important topics including ransomware attacker trends, cluster attacks, software vulnerabilities, supply chain attacks, zero days, and much more. You can find their articles in the Security Operations, Threat Research and AI Research sections of this blog.

We’re extremely proud of Naked Security’s contributions to the security community over the last decade. Paul Ducklin will continue to share his security expertise on X (formerly Twitter), Facebook, Instagram, and LinkedIn.

If you don’t already, we invite you to subscribe to receive our Sophos News blog email (sign-up at the bottom of this page) and follow Sophos X-Ops on X (Twitter) and Mastodon.

Whether you’re a threat hunter, security administrator, IT/security generalist, home user or more, we know you will find Sophos X-Ops a valuable resource.

Thank you for your support.

Sophos

Mom’s Meals issues “Notice of Data Event”: What to know and what to do

US food delivery company PurFoods, which trades as Mom’s Meals, has just admitted to a cyberintrusion that took place from 2023-01-16 to 2023-02-22.

The company stated officially that:

[The] cyberattack […] included the encryption of certain files in our network.

Because the investigation identified the presence of tools that could be used for data exfiltration (the unauthorized transfer of data), we can’t rule out the possibility that data was taken from one of our file servers.

PurFoods says it has contacted everyone who was affected, or at least everyone whose data appeared in one or more of the scrambled files, which we assume are the files that the company thinks the attackers would have stolen, if indeed any data was exfiltrated.

What’s at risk

The company didn’t say how many people were caught up in this incident, but a recent report on IT news site The Register puts the total at more than 1,200,000 individuals.

PurFoods listed those affected as:

Clients of PurFoods who received one or more meal deliveries, as well as some current and former employees and independent contractors.

The information in the files included date of birth, driver’s license/state identification number, financial account information, payment card information, medical record number, Medicare and/or Medicaid identification, health information, treatment information, diagnosis code, meal category and/or cost, health insurance information, and patient ID number.

Social Security numbers [SSNs] were involved for less than 1% of the [individuals], most of which are internal to PurFoods.

We’re guessing that the company didn’t collect SSNs for customers, though we’d expect them to need SSN data for employees, which is why the at-risk SSNs are listed as “internal”.

But if you’re wondering why a food delivery company would need to collect customers’ medical details, including health and treatment information…

…well, we wondered that, too.

It seems that the company specialises in providing meals for people with specific dietary needs, such as those with diabetes, kidney problems and other medical conditions, for whom food ingredients need to be chosen carefully.

Mom’s Meals therefore needs medical details for some, if not all, of its customers, and that data was mixed in with plenty of other personally identifiable information (PII) that may now be in the hands of cybercriminals.

What to do?

If you’re one of the more than a million affected customers:

  • Consider replacing your payment card if yours was listed as possibly stolen. Most banks will issue new payment cards promptly, thus automatically invalidating your old card and making the old card details useless to anyone who has them now or buys them up later on the dark web.
  • Watch your statements carefully. You should do this anyway, so that you spot anomalies as soon as you can, but it’s worth keeping a closer eye on what’s happening with your financial accounts if there’s evidence you might be at a greater-than-usual risk of identity theft or card abuse.
  • Consider implementing a credit freeze. This adds an extra layer of authorisation from you that’s needed before anything in your credit report can be released to anyone. This makes it harder for crooks to acquire loans, credit cards and the like in your name (although this obviously makes it harder – and thus takes longer – for you to get a new loan, credit card or mortgage, too). Unfortunately, activating a credit freeze means you need to send a large amount of PII, including a copy of your photo ID and your SSN, to one of three main credit bureaus.

If you’re a company that handles vital PII of this sort:

  • Act immediately when any anomalies are detected in your network. In this attack, the criminals were apparently inside the PurFoods network for more than a month, but were only spotted after they’d got as far as scrambling files, presumably as a basis for extorting money from the company.
  • Consider using a Managed Detection and Response (MDR) service if you can’t keep up on your own. Good threat hunting tools not only search for and prevent the activation of malware, but also help you to detect weak spots in your network such as unprotected or unpatched computers, and to identify and isolate behaviour that’s commonly seen in the build-up to a full-blown attack. Having threat hunting experts on hand all the time makes it much more likely that you’ll spot any danger signals before it’s too late.
  • Be as quick and as transparent as you can in any data breach notifications. Despite the suggestion that this was a two-pronged steal-data-and-then-scramble-it attack, known in the jargon as double extortion, PurFoods hasn’t made it clear what really happened, even though the company took several months to investigate and publish its report. For example, we still don’t know whether the company received any blackmail demands, whether there was any “negotiation” with the attackers, or whether any money changed hands in return for hushing up the incident or for buying back decryption keys to recover the scrambled files.

According to the data in the latest Sophos Active Adversary report, the median average dwell time in ransomware attacks (the time it takes between the crooks first breaking into your network and getting themselves into a position to compromise all your files in one simultaneous strike) is now down to just five days.

That means that if your company does get “chosen” by ransomware criminals for their next money-grabbing attack, there’s a better than 50% chance that you’ll have less than a week to spot the crooks sneaking around getting ready for your network doomsday event.

Worse still, the final hammer blow unleashed by ransomware attackers is likely to be at a deeply inconvenient time for your own IT team, with the file-scrambling denouement typically unleashed between 21:00 and 06:00 (9pm to 6am) in your local timezone.

To counter-paraphrase Mr Miyagi of Karate Kid fame: Best way to avoid punch is to be there all the time, monitoring and reacting as soon as you can.


Short of time or expertise to take care of cybersecurity threat response? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶


S3 Ep149: How many cryptographers does it take to change a light bulb?

HOW MANY CRYPTOGRAPHERS?

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Leaky light bulbs, WinRAR bugs, and “Airplane mode, [HIGH RISING TONE] question mark?”

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, your thoughts?


DUCK.  My thoughts are, Doug, that…

…that was a very good representation of an interrogation mark.


DOUG.  Yeah, I turned my head almost into landscape mode.


DUCK.  [LAUGHS] And then one little woodpecker blow just at the bottom, PLOCK, just for full effect.


DOUG.  Well, speaking of questions, we have a great one… I am so excited for This Week in Tech History.


DUCK.  Very good one there!

The Seguemeister is back!


DOUG.  If anyone has ever heard of Miss Manners, she is advice columnist Judith Martin.

She’s 84 years young and still doling out advice.

So in her 26 August 1984 column, she answers a very important question.

Now, I need to read this verbatim because the write-up is too good: this is from computerhistory.org, which is a great site if you’re into tech history.

Miss Manners confronts a new realm of etiquette in her August 26 column…

Remember, this is 1984!

…as she responded to a reader’s concern about typing personal correspondence on a personal computer.

The concerned individual said that using the computer was more convenient, but that they were worried about the poor quality of their dot matrix printer and about copying parts of one letter into another.

Miss Manners replied that computers, like typewriters, generally are inappropriate for personal correspondence.

The recipient may confuse the letter for a sweepstakes entry.


DUCK.  [LOUD LAUGHTER] Do you have four aces?

Here are three… scratch off your lucky letter and see. [MORE LAUGHTER]


DOUG.  And she noted:

If any of your friends ever sees that your letter to another contains identical ingredients, you will have no further correspondence problems.

As in, you’re done corresponding with this friend because the friendship is over.


DUCK.  Yes, the question will answer itself. [LAUGHTER]


DOUG.  Exactly.

Alright, let’s get into it.

Here we have a pair of WinRAR bugs… remember WinRAR?

One is, “A security issue involving an out-of-bounds write.”

And number two, “WinRAR could start a wrong file after a user double-clicked an item in a specially crafted archive.”

Paul, what’s going on here with WinRAR?

Using WinRAR? Be sure to patch against these code execution bugs…


DUCK.  Well, WinRAR… lots of people will remember that from the old days, when archives typically came on multiple floppies, or they came as lots and lots of separate small text-encoded posts in an internet forum.

WinRAR, if you like, set the standard for making it easy to collate lots of separate sources, putting them back together for you and having what I believe it refers to as a “recovery volume”.

That was one or more additional parts so that if one or more of the original parts is damaged, corrupted or even (as you imagine in the case of floppy disks or uploaded chunks in an online forum) missing completely, the program could automatically reconstruct the missing part based on error correction data in this recovery volume.

And, unfortunately, in (I believe) the older code in the product that dealt with the old-style error recovery system…

…as far as I can understand it (obviously they’re not giving away the exact details of this), you send someone an archive that has a corrupt part which forces WinRAR to go and use its recovery volume to try and deal with the bit that’s been damaged.

And in handling the recovery data, there’s a buffer overflow which writes beyond the end of the buffer, which could cause remote code execution.

This is CVE-2023-40477, where trying to recover from a fault causes a fault that can be exploited for remote code execution.

So if you are a WinRAR user, make sure that you have patched.

Because there was a coordinated disclosure of this by the Zero Day Initiative and by WinRAR recently; everyone knows that this bug is out there by now.


DOUG.  The second bug is less serious, but still a bug nonetheless…


DUCK.  Apparently this one was used by crooks for tricking people into installing data-stealing malware or cryptocurrency roguery, who would have thought?

Given that I’m not a WinRAR user, I couldn’t test this, but my understanding is that you can open an archive and when you go to access something in the archive, *you get the wrong file* by mistake.


DOUG.  OK, so version 6.23 if you’re still using WinRAR.

Our next story is from the “how in the world did they find this bug?” file.

Researchers have discovered how to trick you into thinking your iPhone is in Airplane mode while actually leaving mobile data turned on.

“Snakes in airplane mode” – what if your phone says it’s offline but isn’t?


DUCK.  I was minded to write this up because it is a fascinating reminder that when you are relying on visual indicators provided by the operating system or by an app, say in a status bar or, on the iPhone, in the so called Control Center, which is the buttons you get when you swipe up from the bottom of the screen…

There’s a little icon of an aircraft, and if you tap it, you go into Aeroplane mode.

And so researchers at Jamf figured, given that that’s the workflow that most people do if they temporarily want to make sure their phone is offline, “How strongly can you rely on indicators like that Control Center that you swipe up on your iPhone?”

And they discovered that you can actually trick most of the people most of the time!

They found a way that, when you tap on the aircraft icon, it’s supposed to go orange and all the other icons that show radio connection are supposed to dim out… well, they found that they could get that aircraft to become orange, but they could suppress the mobile data bit being turned off.

So it looks like you’re in Aeroplane mode, but in fact your mobile data connection is still valid in the background.

And then they reasoned that if someone really was serious about security, they’d figure, “Well, I want to make sure that I am disconnected.”

And I would have followed exactly the workflow that they suggest in their research article, namely: I would open my browser, and I’d browse to a site (nakedsecurity.sophos.com, for example), and I would check that the system gave me an error saying, “You’re in Aeroplane mode. You can’t get online.”

I would have been inclined, at that point, to believe that I really had disconnected my phone from the network.

But the researchers found a way of tricking individual apps into convincing you that you were in Aeroplane mode when in fact all they’d done is deny mobile data access to that specific app.

Normally, when you go into Safari and you’ve said that Safari is not allowed to use my mobile data, what you’re supposed to get is an error message along the lines of, “Mobile data is turned off for Safari.”

If you saw that message when you were testing connectivity, you would realise, “Hey, that means mobile data is still on in general; it’s only off for this specific app. That’s not what I want: I want it off for everybody.”

So they found a way of faking that message.

It displays the one that says, “You’re in Aeroplane mode. You can’t get online.”

It is a great reminder that sometimes you can’t believe what you see on the screen.

It helps to have two ways of checking that your computer is in the security status, or at the security level, that you want it to be in.

Just in case someone is pulling the wool over your eyes.


DOUG.  Alright, it gives me great pleasure to announce that we will keep an eye on that.

And last, but certainly not least, anyone who set up a smart device knows the process by now.

The device transmits itself as an access point.

You connect to that access point with your phone, tell it what *your* access point is, complete with Wi-Fi password.

And what could possibly go wrong?

Well, several things, it turns out, Paul, could go wrong!

Smart light bulbs could give away your password secrets


DUCK.  Yes.

In this particular paper, the researchers focused on a product called the TP-Link Tapo L530E.

Now, I don’t want to point fingers particularly at TP-Link here… in the paper, they said they chose that one because, as far as they could see (and the researchers are all, I think, Italian), that was the most widely sold so-called smart light bulb via Amazon in Italy.


DOUG.  Well, that’s what’s interesting, too… we talk about these IoT devices and all the security problems they have, because not a lot of thought goes into securing them.

But a company like TP-Link is big and reasonably well regarded.

And you would assume that, of the IoT device companies, this would be one that would be putting a little extra wood behind security.


DUCK.  Yes, there were definitely some coding blunders that should not have been made in these vulnerabilities, and we’ll get to that.

And there are some authentication-related issues that are somewhat tricky to solve for a small and simple device like a light bulb.

The good news is that, as the researchers wrote in their paper, “We contacted TP-Link via their vulnerability research program, and they’re now working on some sort of patch.”

Now, I don’t know why they chose to disclose it and publish the paper right now.

They didn’t say whether they’d agreed on a disclosure date, and they didn’t say when they told TP-Link and how long they’ve given them so far, which I thought was a bit of a pity.

If they were going to disclose because they thought TP-Link had taken too long, they could have said that.

If it hasn’t been very long, they could have waited a little while.

But they didn’t give any copy-and-paste code that you can use to exploit these vulnerabilities, so there are nevertheless some good lessons to learn from it.

The main one seems to be that when you’re setting up the light bulb for the first time, there is some effort put into making sure that the app and the light bulb each reason that they are communicating with the right sort of code at the other end.

But even though there’s some effort to do that, it relies on what we might jokingly call a “keyed cryptographic hash”… but the key is hard-wired and, as the researchers found, they didn’t even need to go and disassemble the code to find the key, because it was only 32 bits long.

So they were able to recover it by brute force in 140 minutes.
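To see why such a short key is within trivial reach of brute force, here’s a scaled-down sketch of our own (a hypothetical 16-bit key and a made-up keyed-hash construction, not TP-Link’s actual scheme, so the demo finishes in moments rather than hours) showing an attacker recovering the key from a single message-and-tag pair:

```python
import hashlib

# Hypothetical handshake check: a short, hard-wired key mixed into a hash.
# We use a 16-bit key here for speed; the bulb's real key was 32 bits,
# which is still only about 4 billion guesses.
def keyed_hash(key_int: int, message: bytes) -> bytes:
    return hashlib.sha256(key_int.to_bytes(2, "big") + message).digest()

secret = 0x1337                       # the "hard-wired" key an attacker wants
tag = keyed_hash(secret, b"hello bulb")

# Brute force: try every possible key until the observed tag matches
found = next(k for k in range(2**16)
             if keyed_hash(k, b"hello bulb") == tag)
assert found == secret
```

The lesson isn’t subtle: if the only secret is short enough to enumerate, it isn’t a secret at all.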


DOUG.  To be clear, an attacker would need to be within range of you, and set up a rogue access point that looks like your light bulb, and have you connect to it.

And then they’d be able to get you to type in your Wi-Fi password, and your password to your TP-Link account, and they’d get that stuff.

But they would need to be physically within range of you.


DUCK.  The attack can’t be mounted remotely.

It’s not like somebody could just send you some dubious link from the other side of the world and get all that data.

But there were some other bugs as well, Doug.


DOUG.  Yes, several things went wrong, as mentioned.

It seems that this lack of authentication carried through to the setup process as well.


DUCK.  Yes.

Obviously what’s really important when the setup actually starts is that the traffic between the app and the device gets encrypted.

The way it works in this case is that the app sends an RSA public key to the light bulb, and the light bulb uses that to encrypt and send back a one-time 128-bit AES key for the session.

The problem is that, once again, just like with that initial exchange, the light bulb makes no effort to communicate to the app, “Yes, I really am a light bulb.”

By creating that fake access point in the first place, and knowing the magic key for the “are you there?/yes, I am here” exchange… by exploiting that hole, an imposter could lure you to the wrong access point.

And then there’s no further authentication.

An imposter light bulb can come back and say, “Here’s the super-secret key that only you know and I know.”

So you are communicating securely…

…with the imposter!


DOUG.  Surely, by now, we’re done with the problems, right?


DUCK.  Well, there were two further vulnerabilities they found, and in a way, the third of these is the one that worried me the most.

Once you’d established this session key for the secure communication, you’d assume that you would get the encryption process right.

And my understanding is that the coders at TP-Link made a fundamental cryptographic implementation blunder.

They used AES in what’s called CBC, or “cipher block chaining” mode.

That’s a mode that is meant to ensure that if you send a packet with exactly the same data two, three, four or more times, you can’t recognise that it’s the same data.

With repeated data, even if an attacker doesn’t know what the data is, they can see that the same thing is happening over and over.

When you’re using AES in CBC mode, the way you do that is you prime the encryption process with what’s called an IV or an “initialization vector” before you start encrypting each packet.

Now, the key has to be a secret.

But the initialization vector doesn’t: you actually put it in the data at the start.

The important thing is it needs to be different every time.

Otherwise, if you repeat the IV, then when you encrypt the same data with the same key, you get the same ciphertext every time.

That produces patterns in your encrypted data.

And encrypted data should never display any patterns; it should be indistinguishable from a random stream of stuff.

It seems that what these programmers did was to generate the key and the initialisation vector right at the start, and then whenever they had data to send, they would reuse the same key and the same initialisation vector.

[VERY SERIOUS] Don’t do that!

And a good aide-memoire is to remember another word in cryptographic jargon: “nonce”, which is short for “number used once.”

And the hint is right there in the name, Doug.
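To make the IV-reuse danger concrete, here’s a toy Python sketch (a deliberately simplified stream construction of our own, not real AES-CBC) showing how a repeated IV makes identical messages produce identical ciphertexts, while a fresh IV hides the repetition:

```python
import hashlib

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher (NOT real AES-CBC, illustration only): derive a
    # keystream from SHA-256(key || iv || counter) and XOR it with the data.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key = b"secret-key"
msg = b"turn the light on"

# Reusing the IV: identical plaintexts give identical ciphertexts...
c1 = toy_encrypt(key, b"same-iv", msg)
c2 = toy_encrypt(key, b"same-iv", msg)
assert c1 == c2        # the pattern leaks, even though the bytes look random

# ...whereas a fresh IV per message hides the repetition
c3 = toy_encrypt(key, b"fresh-iv", msg)
assert c1 != c3
```

An eavesdropper who sees `c1` and `c2` can’t read either one, but they can see that the same command was sent twice, which is exactly the leakage CBC mode is supposed to prevent.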


DOUG.  OK, have we covered everything now, or is there still one more problem?


DUCK.  The last problem that the researchers found, which is a problem whether or not initialisation vectors are used correctly (although it’s a more acute problem if they are not), is that none of the requests and replies being sent back and forth were timestamped reliably, which meant that it was possible to re-send an old data packet without knowing what it was all about.

Remember, it’s encrypted; you can’t read inside it; you can’t construct one of your own… but you could take an old packet, say from yesterday, and replay it today, and you can see (even when an attacker doesn’t know what that data packet is likely to do) why that is likely to create havoc.
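A common defence against replay (a generic sketch of our own, not necessarily what TP-Link will implement) is to bind a timestamp into each authenticated message, so that yesterday’s packet fails verification today:

```python
import hashlib
import hmac
import json

def make_request(key: bytes, payload: str, now: int):
    # Sender binds a timestamp into the body before computing the MAC
    body = json.dumps({"payload": payload, "ts": now}).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, tag

def accept(key: bytes, body: bytes, tag: str, now: int, max_age: int = 30) -> bool:
    # Receiver checks the MAC first, then rejects stale (replayed) messages
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    return now - json.loads(body)["ts"] <= max_age

key = b"session-key"
body, tag = make_request(key, "light on", now=1000)
assert accept(key, body, tag, now=1010)        # fresh request: accepted
assert not accept(key, body, tag, now=90000)   # yesterday's packet: rejected
```

Because the timestamp is inside the MAC’ed body, an attacker can’t simply update it on a captured packet without invalidating the tag.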


DOUG.  All right, so it sounds like the TP-Link engineering team has a fun challenge on their hands the next couple of weeks or months.

And speaking of fun, Richard chimes in on this story and asks a new version of an old question:

How many cryptographers does it take to update a light bulb?

That question tickled me greatly.


DUCK.  Me, too. [LAUGHS]

I thought, “Oh, I should have foreseen that.”


DOUG.  And your answer:

At least 2⁸⁰ for legacy fittings and up to 2²⁵⁶ for contemporary lighting.

Beautifully answered! [LAUGHTER]


DUCK.  That’s an allusion to current cryptographic standards, where you’re supposed to have what’s broadly known as 128 bits of security at least for current implementations.

But, apparently, in legacy systems, 80 bits of security, at least for the time being, is just about enough.

So that was the background to that joke.


DOUG.  Excellent.

Alright, thank you very much, Richard, for sending that in.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it out on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Using WinRAR? Be sure to patch against these code execution bugs…

The venerable RAR program, short for Roshal’s Archiver after its original creator, has been popular in file sharing and software distribution circles for decades, not least because of its built-in error recovery and file reconstruction features.

Early internet users will remember, with little fondness, the days when large file transfers were shipped either as compressed archives split across multiple floppy disks, or uploaded to size-conscious online forums as a series of modestly-sized chunks that were first compressed to save space and then expanded into an ASCII-only text-encoded form.

If one floppy went missing or wouldn’t read back properly, or if one chunk of a 12-part archive upload got deleted from the server by mistake, you were out of luck.

RAR, or WinRAR in its contemporary Windows form, helped to deal with this problem by offering so-called recovery volumes.

These stored error correction data such that multi-part archives could be recovered automatically and completely even if one entire chunk (or more, depending on how much recovery information was kept) ended up lost or irretrievable.

Keeping a spare wheel in the boot/trunk

Apparently, RAR archives up to and including version 4 used so-called parity correction; newer versions use a computationally more complex but more powerful error correction system known as Reed-Solomon codes.

Parity-based correction relies on the XOR operation, which we’ll denote here with the symbol ⊕ (a plus sign inside a circle).

XOR is short for exclusive OR, which denotes “either X is true or Y is true, but not both at the same time”, thus following this truth table, which we construct by assuming that X and Y can only have the values 0 (false) or 1 (true):

If X=0 and Y=0 then X ⊕ Y = 0 (two falses make a false)
If X=1 and Y=0 then X ⊕ Y = 1 (one can be true, but not both)
If X=0 and Y=1 then X ⊕ Y = 1 (one can be true, but not both)
If X=1 and Y=1 then X ⊕ Y = 0 (you can't have both at once)

The XOR function works a bit like the question, “Would you like coffee or tea?”

If you say “yes”, you then have to choose coffee alone, or choose tea alone, because you can’t have one cup of each.

As you can work out from the truth table above, XOR has the convenient characteristics that X ⊕ 0 = X, and X ⊕ X = 0.

Now imagine that you have three data chunks labelled A, B, and C, and you compute a fourth chunk P by XORing A and B and C together, so that P = (A ⊕ B ⊕ C).

Given the truth table above, and given that XOR is what’s known as commutative, meaning that the order of the values in a calculation can be swapped around if you like, so that X ⊕ Y = Y ⊕ X, or A ⊕ B ⊕ C = C ⊕ B ⊕ A = B ⊕ C ⊕ A and so on, we can see that:

A ⊕ B ⊕ C ⊕ P = A ⊕ B ⊕ C ⊕ (A ⊕ B ⊕ C) = (A⊕A) ⊕ (B⊕B) ⊕ (C⊕C) = 0 ⊕ 0 ⊕ 0 = 0

Now look what happens if any one of A, B or C is lost:

A ⊕ B ⊕ P = A ⊕ B ⊕ (A ⊕ B ⊕ C) = (A⊕A) ⊕ (B⊕B) ⊕ C = 0 ⊕ 0 ⊕ C = C  <-- the missing chunk returns!
A ⊕ C ⊕ P = A ⊕ C ⊕ (A ⊕ B ⊕ C) = (A⊕A) ⊕ (C⊕C) ⊕ B = 0 ⊕ 0 ⊕ B = B  <-- the missing chunk returns!
B ⊕ C ⊕ P = B ⊕ C ⊕ (A ⊕ B ⊕ C) = (B⊕B) ⊕ (C⊕C) ⊕ A = 0 ⊕ 0 ⊕ A = A  <-- the missing chunk returns!

Also, if P is lost, we can ignore it because we can compute A ⊕ B ⊕ C anyway.

Simply put, having the parity data chunk P means we can always reconstruct any missing chunk, regardless of which one it is.
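The parity trick above is easy to try out for yourself. Here’s a minimal Python sketch (our own illustration, not RAR’s actual code) that builds a parity block from three equal-sized chunks and then reconstructs a “lost” one:

```python
import functools
import operator

def parity(chunks):
    # XOR equal-length chunks together byte-by-byte; the result is the P block
    assert len({len(c) for c in chunks}) == 1, "chunks must be the same size"
    return bytes(functools.reduce(operator.xor, col) for col in zip(*chunks))

a, b, c = b"AAAA", b"BBBB", b"CCCC"
p = parity([a, b, c])          # the recovery ("parity") chunk

# Lose chunk B, then rebuild it from the survivors plus P
recovered = parity([a, c, p])
assert recovered == b
```

Note that exactly the same `parity()` function serves for both creation and recovery, because XORing any chunk with itself cancels it out, as the truth table above shows.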

The error recovery error

Well, after what we assume is many years unnoticed, a bug now dubbed CVE-2023-40477 has surfaced in WinRAR.

This bug can be triggered (ironically, perhaps) when the product makes use of this data recovery system.

As far as we can see, a booby-trapped parity data chunk inserted into an archive can trick the WinRAR code into writing data outside of the memory area allocated to it.

This leads to an exploitable buffer overflow vulnerability.

Data written where it doesn’t belong ends up being treated as program code that gets executed, rather than as plain old data to be used in the dearchiving process.
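We don’t know the internal details of WinRAR’s parser, but the general defence against this class of bug is to validate any length field read from the file itself before copying data based on it. Here’s a hypothetical sketch (a made-up chunk format of our own, purely for illustration):

```python
def read_chunk(buf: bytes, offset: int) -> bytes:
    # Hypothetical format: a 4-byte little-endian length, then that many bytes
    declared = int.from_bytes(buf[offset:offset + 4], "little")
    start = offset + 4
    # The crucial check: never trust a length field taken from the file itself,
    # because an attacker controls it
    if declared > len(buf) - start:
        raise ValueError("chunk length exceeds available data")
    return buf[start:start + declared]

good = (5).to_bytes(4, "little") + b"hello"
assert read_chunk(good, 0) == b"hello"
```

In a memory-unsafe language such as C, skipping a check like this before a `memcpy()` is exactly how attacker-controlled data ends up written beyond the end of a buffer.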

This bug didn’t get a 10/10 severity score on the CVSS “danger scale”, clocking in at 7.8/10 on the grounds that the vulnerability can’t be exploited without some sort of assistance from the user who’s being targeted.

Bug the second

Interestingly, a second security bug was patched in the latest WinRAR release, and although this one sounds less troublesome than the CVE-2023-40477 flaw mentioned above, TechCrunch suggests that it has been exploited in real life via booby-trapped archives “posted on at least eight public forums [covering] a wide range of trading, investment, and cryptocurrency-related subjects.”

We can’t find a CVE number for this one, but WinRAR describes it simply as:

 WinRAR could start a wrong file after a user double-clicked an item in a specially crafted archive.

In other words, a user who opened up an archive and decided to look at an apparently innocent file inside it (a README text file, for example, or a harmless-looking image) might unexpectedly launch some other file from the archive instead, such as an executable script or program.

That’s a bit like receiving an email containing a safe-looking attachment along with a risky-looking one, deciding to start by investigating only the safe-looking one, but unknowingly firing up the risky file instead.

From what we can tell, and in another irony, this bug existed in WinRAR’s code for unpacking ZIP files, not in the code for processing its very own RAR file format.

Two-faced ZIP files have been a cybersecurity problem for years, because the index of files and directories in any ZIP archive appears twice: once in a series of data blocks interleaved throughout the file, and again in a single chunk of data at the end.

Code that verifies files based on one index but extracts and uses them based on the other, without checking that the two indices are consistent, has led to numerous exploitable vulnerabilities over the years.

We don’t know whether this double-index issue is the root cause of the recent WinRAR bug, but it’s a reminder that unpacking archive files can be a complex and error-prone process that needs careful attention to security, even at the cost of extra processing and reduced performance.
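As a defensive illustration (our own sketch, not WinRAR’s code), here’s how you might cross-check a ZIP file’s two indices in Python, comparing the filenames recorded in the central directory at the end of the archive against the names stored in the local headers scattered through it:

```python
import io
import struct
import zipfile

def local_header_names(data: bytes) -> list:
    # Walk the *local* file headers directly, using the offsets recorded in
    # the central directory, and collect the filenames stored in each one
    names = []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for info in zf.infolist():
            off = info.header_offset
            sig, = struct.unpack_from("<I", data, off)
            assert sig == 0x04034B50           # "PK\x03\x04" local header magic
            name_len, _extra_len = struct.unpack_from("<HH", data, off + 26)
            names.append(data[off + 30:off + 30 + name_len].decode("utf-8"))
    return names

def indices_consistent(data: bytes) -> bool:
    # The names in the central directory (what zipfile reports) must match
    # the names in the local headers (what a naive extractor might trust)
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        central = [i.filename for i in zf.infolist()]
    return central == local_header_names(data)
```

A well-formed archive passes this check; a booby-trapped one that lists `README.txt` in one index but `evil.exe` in the other would fail it.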

What to do?

If you’re a WinRAR user, make sure you’re on the latest version, which is 6.23 at the time of writing [2023-08-23T16:30Z].

Apparently, there’s no automatic update system in the WinRAR software, so you need to download the new installer and run it yourself to replace an old version.

If you’re a programmer, remember to review legacy code that’s still in your software but looked upon as “retired” or “no longer recommended for new users”.

As far as we can see, WinRAR doesn’t generate old-style recovery data any more, and has used smarter error correction algorithms since version 5, but for reasons of backwards compatibility still processes old-style files if they’re presented.

Remember that when attackers create booby-trapped files hoping to trip up your software, they’re generally not using your software to create those files anyway, so testing your own input routines only against files that your own output routines originally created is never enough.

If you haven’t considered fuzzing, a jargon term that refers to a testing technique in which millions of permuted, malformed and deliberately incorrect inputs are presented to your software while monitoring it for misbehaviour…

…then now might be the time to think about it.

Good fuzzers not only run your code over and over again, but also try to adapt the tweaks, hacks and modifications they make to their fake input data so that as much of your code as possible gets tried out.

This helps you get what’s known as good code coverage during testing, including forcing your program down rare and unusual code paths that hardly ever get triggered in regular use, and where unexplored vulnerabilities may have lurked unnoticed for years.
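To give you a feel for the idea, here’s a minimal mutation fuzzer in Python (all the names here are our own, and real-world coverage-guided fuzzers are vastly more sophisticated). It corrupts a few random bytes of a known-good ZIP file and feeds the result to Python’s own zipfile module, treating a clean BadZipFile rejection as expected behaviour and logging anything else for investigation:

```python
import io
import random
import zipfile

def mutate(data: bytes, rng: random.Random) -> bytes:
    # flip a handful of random bytes in an otherwise-valid input
    out = bytearray(data)
    for _ in range(rng.randint(1, 8)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

# build a known-good seed input in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "seed data for the fuzzer")
seed = buf.getvalue()

rng = random.Random(1234)   # fixed seed so runs are reproducible
tested = 0
unexpected = []             # anything that isn't a clean rejection

for i in range(2000):
    sample = mutate(seed, rng)
    tested += 1
    try:
        with zipfile.ZipFile(io.BytesIO(sample)) as zf:
            for name in zf.namelist():
                zf.read(name)
    except zipfile.BadZipFile:
        pass                # malformed input cleanly rejected: expected
    except Exception as exc:
        unexpected.append((i, type(exc).__name__))
```

Anything that ends up in the `unexpected` list, such as a surprise crash or an uncontrolled error deep inside the parser, is a lead worth chasing down.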


Smart light bulbs could give away your password secrets

A trio of researchers split between Italy and the UK have recently published a paper about cryptographic insecurities they found in a widely-known smart light bulb.

The researchers seem to have chosen their target device, the TP-Link Tapo L530E, on the basis that it is “currently [the] best seller on Amazon Italy,” so we don’t know how other smart bulbs stack up, but their report has plenty to teach us anyway.

The researchers say that:

We dutifully contacted TP-Link via their Vulnerability Research Program (VRP), reporting all four vulnerabilities we found.

They acknowledged all of them and informed us that they started working on fixes both at the app and at the bulb firmware levels, planning to release them in due course.

For better or for worse (the authors of the paper don’t say whether any disclosure dates were agreed with TP-Link, so we don’t know how long the company has been working on its patches), the researchers have now revealed how their attacks work, albeit without providing any copy-and-pastable attack code for wannabe home-hackers to exploit at will.

We therefore thought that the paper was worth looking into.

Wireless setup

Like many so-called “smart” devices, the Tapo L530E is designed so it can be set up quickly and easily over Wi-Fi.

Although wireless-based configuration is common even for battery-powered devices that can be charged and set up via built-in USB ports, such as cameras and bike accessories, light bulbs generally don’t have USB ports, not least for space and safety reasons, given that they’re designed to be plugged into and left in a mains light socket.

By turning a Tapo L530E light bulb on and off repeatedly at the wall switch for one second at a time, you can force it into setup mode (apparently, the bulb automatically blinks three times to tell you when it’s ready for configuration).

As with most automatically configurable devices, this makes the smart bulb turn itself into a Wi-Fi access point with an easy-to-recognise network name of the form Tapo Bulb XXXX, where the X’s form a string of digits.

You then connect to that temporary access point, which isn’t password protected, from an app on your smartphone.

Then you tell the bulb how to connect both to your password-protected home Wi-Fi network and to your TP-Link cloud account in future, after which the bulb’s firmware can reboot and connect itself up to the internet, allowing you to manage it from the app on your phone.

The bulb can join your home network, which means you can contact it directly via your own Wi-Fi when you’re at home, even if your ISP is offline at the time.

And the bulb can connect over the internet to your cloud account, so you can also send commands to it indirectly via your cloud account while you’re on the road, for example to turn lights on and off if you’re late getting back in order to give the impression that there’s someone at home.

Beware imposters

You can probably guess where this is going.

If the app on your phone doesn’t have any cryptographically strong way of figuring out that it really has connected to a genuine light bulb when you go through the setup process…

…then a nearby attacker who just happens to start up a fake Tapo Bulb XXXX access point at the right moment could lure you into sending those important setup secrets to their “imposter bulb” device instead of to the real thing, thus capturing both your Wi-Fi password and your TP-Link account details.

The good news is that the researchers noticed that both the Tapo app and the L530E firmware included a basic safety check to help the app and your bulbs to find each other reliably, thus reducing the risk that the app would blurt out your passwords when it shouldn’t.

But the bad news is that the protocol used for this are you really a light bulb? exchange was clearly designed to avoid mistakes rather than to prevent attacks.

Loosely put, the app locates any light bulbs on its network by broadcasting special UDP packets to port 20002 and seeing which devices reply, if any.

To help any listening light bulbs decide that an are you there? request came from the Tapo app, rather than from some other unknown product or service that just happens to use port 20002 as well, the request includes what’s known in the jargon as a keyed hash.

The I am here! reply from the light bulb includes the same sort of keyed checksum to help the app filter out unexpected and unwanted UDP replies.

Simply put, the keyed hash is a checksum based not only on the data in the UDP packet but also some additional key bytes that are folded into the checksum as well.

Unfortunately, the Tapo protocol uses fixed key bytes for its checksum, with the same “key” hard-wired into the app and into the firmware of every Tapo bulb.

In other words, once someone has decompiled either the app, or the light bulb firmware, or both, and recovered this “key”, you should assume that anybody and everybody will know what it is, making those are you there?/I am here! messages trivial to forge.

Worse, the researchers found that they didn’t need to decompile anything, because this not-so-secret “key” is only 32 bits long, which means that by setting your own Tapo bulb into setup mode and then feeding it are you there? messages using all 2^32 possible checksum keys, you’ll eventually hit on the right key by what’s known as brute force.

That’s the cryptographic equivalent of spinning the dials to try every combination on a bike lock, say from 000 to 999, until you get lucky and the lock pops open. (On average, you’ll open the lock after trying half the possible combinations, but it will never take you more than 1000 goes.)

In fact, they didn’t need to send 2^32 messages from the app to a light bulb to crack the key.

By capturing just one known-genuine message with a valid keyed hash in it, they could then test all possible keys offline until they produced a message that had the same keyed hash as the one they’d saved.

That means the brute force attack could proceed at CPU speed, not merely at Wi-Fi network packet speed, and the researchers state that “in our setup, the brute force attack always succeeded in 140 minutes on average.”

(We’re assuming they tried it repeatedly just to test that their cracking code was working correctly, although with a hard-wired key shared by all Tapo bulbs, just their first crack would have been enough.)
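As an illustration of why an offline brute-force attack is so quick, here’s a Python sketch (our own toy construction, not TP-Link’s actual algorithm) that recovers a hard-wired checksum key from a single captured message. We’ve used a 16-bit key to keep the demo near-instant; the same loop with a 32-bit key is just 65536 times longer, which is exactly the sort of job that runs at CPU speed rather than network speed:

```python
import hashlib
import struct

def keyed_hash(key: bytes, message: bytes) -> bytes:
    # toy keyed checksum: hash(key || message), truncated -- demo only
    return hashlib.sha256(key + message).digest()[:8]

# pretend the device ships with a fixed 16-bit key baked into its firmware
# (the real Tapo key is 32 bits; 16 bits keeps this demo fast)
secret_key = struct.pack("<H", 0xBEEF)
message = b"are you there?"
captured_digest = keyed_hash(secret_key, message)   # one sniffed packet

# offline brute force: try every possible key against the captured packet
recovered = None
for candidate in range(2 ** 16):
    key = struct.pack("<H", candidate)
    if keyed_hash(key, message) == captured_digest:
        recovered = key
        break

assert recovered == secret_key
```

Note that the attacker never needs to talk to a real bulb while the loop runs, which is why no amount of rate-limiting in the device itself would help once a single genuine packet has been captured.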

As long as you’ll speak securely, I don’t care who you are

The next cryptographic problem turned up in the next stage of the light bulb setup process, and was a similar sort of mistake.

After accepting a light bulb as genuine based on a keyed-hash-that-doesn’t-actually-have-a-key, the app agrees on a session key to encrypt its traffic with the “genuine” bulb…

…but once again has no way of checking whether the key agreement took place with a real bulb or an imposter.

Agreeing on a session key is important, because it ensures that no one else on the network can snoop on the Wi-Fi and Tapo passwords when they are subsequently sent from the Tapo app to what it thinks is a Tapo light bulb.

But having no verification process for the key agreement itself is a bit like connecting to a website over HTTPS, and then not bothering to perform even the most basic check on the web certificate that it sends back: your traffic will be secure in transit, but could nevertheless be going straight into the hands of a crook.

The Tapo app identifies itself to the light bulb (or what it thinks is a light bulb) by sending it an RSA public key, which the other end uses to encrypt a randomly generated AES key to secure the data exchanged during the session.

But the light bulb device doesn’t provide any sort of identification, not even a checksum with a hard-wired 32-bit key, back to the Tapo app.

So, the app has no choice but to accept the session key without knowing whether it came from a real light bulb or an imposter device.
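To see why an unverified key agreement is dangerous, consider this Python sketch using textbook RSA with toy-sized numbers (hopelessly insecure, and purely our own illustration of the protocol shape, not TP-Link’s actual code). Anyone who receives the app’s public key can complete the handshake, and the app has no way of telling the replies apart:

```python
import random

# textbook RSA with toy primes (wildly insecure; illustration only)
N, E, D = 3233, 17, 2753   # n = 61 * 53, public exponent e, private exponent d

def respond_to_handshake(n: int, e: int) -> tuple[int, int]:
    # ANY device that receives the app's public key can finish the handshake;
    # nothing here proves that the responder is a genuine bulb
    session_key = random.randrange(2, n)
    return session_key, pow(session_key, e, n)   # key, key encrypted to the app

genuine_key, genuine_blob = respond_to_handshake(N, E)
imposter_key, imposter_blob = respond_to_handshake(N, E)

# the app decrypts both replies successfully and cannot tell them apart
assert pow(genuine_blob, D, N) == genuine_key
assert pow(imposter_blob, D, N) == imposter_key
```

This is the protocol-level equivalent of accepting any HTTPS certificate without looking at it: the pipe is encrypted, but you have no idea who is on the other end of it.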

The combined outcome of these two flaws is that an attacker on your network could first convince you that their rogue access point was a genuine light bulb waiting to be configured, and thus lure you to the wrong place, and then convince you to send it an encrypted copy of your own Wi-Fi and Tapo passwords.

Ironically, those leaked passwords really would be secure against everyone… except the imposter with the rogue access point.

Number-used-once that’s used over and over

Unfortunately, there’s more.

When we said above that “those leaked passwords really would be secure,” that wasn’t entirely correct.

The session key that’s established during the key agreement process we described earlier isn’t handled correctly, because the programmers made a blunder in their use of AES.

When the app encrypts each request that it sends to a light bulb, it uses an encryption mode called AES-128-CBC.

We won’t explain CBC (cipher-block chaining) here, but we’ll just mention that CBC mode is designed so that if you encrypt the same chunk of data more than once (such as repeated requests to turn light on and turn light off, where the raw data in the request is the same each time), you don’t get the same output every time.

If every light on and light off request came out identically, then once an attacker had guessed what a turn it off packet looked like, they could not only recognise those packets in future without decrypting them, but also replay those same packets without needing to know how to encrypt them in the first place.

As it happens, CBC-based encryption effectively relies on “seeding” the encryption process for each chunk of data by first mixing a unique, randomly-chosen block of data into the encryption process, thus creating a unique sequence of encrypted data in the rest of the chunk.

This “seed” data is known in the jargon as an IV, short for initialisation vector, and although it isn’t meant to be secret, it does need to be unpredictably different every time.

Simply put: same key + unique IV = unique ciphertext output, but same key + same IV = predictable encryption.

The TP-Link coders, unfortunately, generated an IV at the same time that they created their AES session key, and then used the same IV over and over again for every subsequent data packet, even when previous data was repeated exactly.

That’s a cryptographic no-no.
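Python’s standard library doesn’t include AES, so to show the effect of IV re-use we’ve built CBC mode over a stand-in block “cipher” of our own (a truncated hash, which isn’t a real cipher, but is deterministic and keyed, which is all the demo needs). The point is that a re-used IV makes identical plaintexts produce identical ciphertexts, while a fresh IV per message hides the repetition:

```python
import hashlib
import os

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # stand-in for AES-128 block encryption (NOT a real cipher; demo only)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # CBC: each plaintext block is XORed with the previous ciphertext block
    # (or the IV for the first block) before being encrypted
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)
        out += prev
    return out

key = b"sixteen byte key"
msg = b"turn light off!!"   # exactly one 16-byte block

# re-using one IV for every message: identical requests leak as identical ciphertext
fixed_iv = b"\x00" * BLOCK
assert cbc_encrypt(key, fixed_iv, msg) == cbc_encrypt(key, fixed_iv, msg)

# a fresh random IV per message hides the repetition
c1 = cbc_encrypt(key, os.urandom(BLOCK), msg)
c2 = cbc_encrypt(key, os.urandom(BLOCK), msg)
assert c1 != c2
```

With a fixed IV, an eavesdropper who has guessed what one turn light off packet looks like can spot every subsequent one without decrypting anything at all.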

Did I send six packets, or only five?

The last cryptographic problem that the researchers found is one that could still harm security even if the initialisation vector problem were fixed, namely that old messages, whether an attacker knows what they mean or not, can be played back later as if they were new.

Typically, this type of replay attack is handled in cryptographic protocols by some sort of sequence number, or timestamp, or both, that’s included in each data packet in order to limit its validity.

Like the date on a train ticket that will give you away if you try to use it two days in a row, even if the ticket itself never gets cancelled by a ticket machine or punched by a ticket inspector, sequence numbers and timestamps in data packets serve two important purposes.

Firstly, attackers can’t record traffic today and simply play it back later to create havoc.

Secondly, buggy code that sends requests repeatedly by mistake, for example due to dropped replies or missing network acknowledgements, can reliably be detected and controlled.
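One common defence (sketched here in Python as a generic technique, not as any specific vendor’s protocol) is to put a monotonically increasing sequence number under the message authentication tag, so that a replayed packet fails the freshness check even though its tag is still cryptographically valid:

```python
import hashlib
import hmac

SECRET = b"shared session key"   # agreed during setup (hypothetical)
TAGLEN = 32                      # HMAC-SHA-256 tag length in bytes

def make_packet(seq: int, payload: bytes) -> bytes:
    # the sequence number is authenticated along with the payload
    header = seq.to_bytes(8, "big")
    tag = hmac.new(SECRET, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify(packet: bytes, last_seq: int):
    header, payload, tag = packet[:8], packet[8:-TAGLEN], packet[-TAGLEN:]
    expected = hmac.new(SECRET, header + payload, hashlib.sha256).digest()
    seq = int.from_bytes(header, "big")
    if not hmac.compare_digest(tag, expected) or seq <= last_seq:
        return None, last_seq    # forged, tampered with, or replayed
    return payload, seq

last = 0
packet = make_packet(1, b"light on")

payload, last = verify(packet, last)   # first delivery: accepted
assert payload == b"light on" and last == 1

payload, last = verify(packet, last)   # byte-for-byte replay: rejected
assert payload is None
```

Because the counter is inside the authenticated data, an attacker can’t simply bump the sequence number on a recorded packet without invalidating its tag.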

What to do?

If you’re a Tapo light bulb user, keep your eyes open for firmware updates from TP-Link that address these issues.

If you’re a programmer responsible for securing network traffic and network-based product setups, read through the research paper to ensure that you haven’t made any similar mistakes.

Remember the following rules:

  • Cryptography isn’t only about secrecy. Encryption is just one part of the cryptological “holy trinity” of confidentiality (encrypt it), authenticity (verify who’s at the other end), and integrity (make sure no one tampered with it along the way).
  • Ensure any one-time keys or IVs are truly unique. The common jargon term nonce, used for this sort of data, is short for number used once, a name that clearly reminds you that IVs must never be re-used.
  • Protect against replay attacks. This is a special aspect of ensuring the authenticity and integrity we mentioned above. An attacker should not be able to capture a request you’re making now and blindly replay it later without getting spotted. Remember that an attacker doesn’t need to be able to understand a message if they can replay it and potentially create havoc.
