
S3 Ep102: Cutting through cybersecurity news hype [Audio + Transcript]

CUTTING THROUGH CYBERSECURITY NEWS HYPE

With Paul Ducklin and Chester Wisniewski

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MUSICAL MODEM]

DUCK.  Hello, everybody.

Welcome to another episode of the Naked Security podcast.

I’m Paul Ducklin, and I’m joined by my friend and colleague Chester Wisniewski from Vancouver.

Hello, Chet!


CHET.  Hello Duck.

Good to be back on the podcast.


DUCK.  Unfortunately, the reason you’re back on this particular one is that Doug and his family have got the dreaded lurgy…

…they’re having a coronavirus outbreak in their household.

Thank you so much for stepping up at very short notice, literally this afternoon: “Chet, can you jump in?”

So let’s crack straight on to the first topic of the day, which is something that you and I discussed in part in the mini-podcast episode we did last week, and that’s the issue of the Uber breach, the Rockstar breach, and this mysterious cybercrime group known as LAPSUS$.

Where are we now with this ongoing saga?


CHET.  Well, I think the answer is that we don’t know, but certainly there have been things that I will say have been perceived to be developments, which is…

…I have not heard of any further hacks after the Rockstar Games hack or Take-Two Interactive hack that occurred just over a week ago, as of the time of this recording.

An underage individual in the United Kingdom was arrested, and some people have drawn some dotted lines saying he’s sort of the linchpin of the LAPSUS$ group, and that that person is detained by the UK police.

But because they’re a minor, I’m not sure we really know much of anything.


DUCK.  Yes, there were a lot of conclusions jumped to!

Some of them may be reasonable, but I did see a lot of articles that were talking as though facts had been established when they hadn’t.

The person who was arrested was a 17-year-old from Oxfordshire in England – exactly the same age and location as the person arrested in March who was allegedly connected to LAPSUS$.

But we still don’t know whether there’s any truth in that, because the main source for placing a LAPSUS$ person in Oxfordshire is some other unknown cybercriminal that they fell out with, who doxxed them online.

So I think we have to be, as you say, very careful about claiming as facts things that may well be true but may well not be true…

…and in fact don’t really affect the precautions you should be taking anyway.


CHET.  No, and we’ll talk about this again in one of the other stories in a minute.

But when the heat gets turned up after one of these big attacks, a lot of times people go to ground whether anyone’s been arrested or not.

And we certainly saw that before – I think in the other podcast we mentioned the Lulzsec hacking group that was quite famous ten years or so ago for doing similar… “stunt hacks”, I would call them – just things to embarrass companies and publish a bunch of information about them publicly, even if they perhaps didn’t intend to extort them or do some other crime to gain any financial advantage for themselves.

Several times, different members of that group… one member would be arrested, but there clearly were, I think, in the end, five or six different members of that group, and they would all stop hacking for a few weeks.

Because, of course, the police were suddenly very interested.

So this is not unusual.

The fact is, all of these organisations have succumbed to social engineering in some way, with the exception… actually, I won’t say “with the exception” because, again, we don’t know – we don’t really understand how they got into Rockstar Games.

But I think this is an opportunity to go back and review how and where you’re using multi-factor authentication [MFA] and perhaps to turn the dial up a notch on how you might have deployed it.

In the case of Uber, they were using a push notification system which displays a prompt on your phone that says, “Somebody’s trying to connect to our portal. Do you want to Allow or Block?”

And it’s as simple as just tapping the big green button that says [Allow].

It sounds like, in this case, they fatigued someone into getting so annoyed after getting 700 of these prompts on their phone that they just said [Allow] to make it stop happening.

I wrote a piece on the Sophos News blog discussing a few of the different lessons that can be taken away from Uber’s lapse, and what Uber might be able to implement to prevent these same things from occurring again.


DUCK.  Unfortunately, I think the reason that a lot of companies go for that, “Well, you don’t have to put in a six-digit code, you just tap the button” is that it’s the only way that they could make employees willing enough to want to do 2FA at all.

Which seems a little bit of a pity…


CHET.  Well, the way we’re asking you to do it today beats the heck out of carrying an RSA token on your keychain like we used to do before.


DUCK.  One for every account! [LAUGHS]


CHET.  Yes, I don’t miss carrying the little fob on my key ring. [LAUGHS]

I think I have one around here somewhere that says “Dead bat” on the screen, but they didn’t spell “dead” with an A.

It was dEdbAt.


DUCK.  Yes, it’s only six digits, right?


CHET.  Exactly. [LAUGHS]

But things have improved, and there’s a lot of very sophisticated multifactor tools out there now.

I always recommend using FIDO tokens whenever possible.

But outside of that, even in software systems, these things can be designed to work in different ways for different applications.

Sometimes, maybe you just need to click [OK] because it’s not something super-sensitive.

But when you’re doing the sensitive thing, maybe you do have to enter a code.

And sometimes the code goes in the browser, or sometimes the code goes into your phone.

But all of it… I’ve never spent more than 10 seconds authorising myself to get into something when multifactor has popped up, and I can spare 10 seconds for the safety and security of not just my company’s data, but our employees’ and our customers’ data.


DUCK.  Couldn’t agree more, Chester!

Our next story concerns a very large telco in Australia called Optus.

Now, they got hacked.

That wasn’t a 2FA hack – it was perhaps what you might call “lower-hanging fruit”.

But in the background, there was a whole lot of shenanigans when law enforcement got involved, wasn’t there?

So… tell us what happened there, to the best of your knowledge.


CHET.  Exactly – I’m not read-in on this in any detailed manner, because we’re not involved in the attack.


DUCK.  And I think they’re still investigating, obviously, aren’t they?

Because it was, what, millions of records?


CHET.  Yes.

I don’t know the precise number of records that were stolen, but it impacted over 9 million customers, according to Optus.

And that could be because they’re not quite sure which customers’ information may have been accessed.

And it was sensitive data, unfortunately.

It included names, addresses, email addresses, birthdates and identity documents, which is presumably passport numbers and/or Australian-issued driving licences.

So that is a pretty good trove for somebody looking to do identity theft – it is not a good situation.

The advice to victims who receive a notification from Optus is that if they had used their passport as ID, they ought to replace it.

That is not a cheap thing to do!

And, unfortunately, in this case, the perpetrator is alleged to have gotten the data by using an unauthenticated API endpoint, which in essence means a programmatic interface facing the internet that did not require even a password…

…an interface that allowed him to serially walk through all of the customer records, and download and siphon out all that data.


DUCK.  So that’s like I go to example.com/user­record/000001 and I get something and I think, “Oh, that’s interesting.”

And then I go, -2, -3, -4, 5, -6… and there they all are.


CHET.  Absolutely.

And we were discussing, in preparation for the podcast, how this kind of echoed the past, when a hacker known as Weev had done a similar attack against AT&T during the launch of the original iPhone, enumerating many celebrities’ personal information from an AT&T API endpoint.

Apparently, we don’t always learn lessons, and we make the same mistakes again…


DUCK.  Because Weev famously, or infamously, was charged for that, and convicted, and went to prison…

…and then it was overturned on appeal, wasn’t it?

I think the court formed the opinion that although he may have broken the spirit of the law, he hadn’t actually done anything that really involved any sort of digital “breaking and entering”.


CHET.  Well, the precise law in the United States, the Computer Fraud and Abuse Act, is very specific about the fact that you’re breaching that Act when you exceed your authority or you have unauthorised access to a system.

And it’s hard to say it’s unauthorised when it’s wide open to the world!


DUCK.  Now my understanding in the Optus case is that the person who is supposed to have got the data seemed to have expressed an interest in selling it…

…at least until the Australian Federal Police [AFP] butted in.

Is that correct?


CHET.  Yes. He had posted to a dark market forum offering up the records, which he claimed were on 11.2 million victims, offering it for sale for $1,000,000.

Well, I should say one million not-real-dollars… 1 million worth of Monero.

Obviously, Monero is a privacy token that is commonly used by criminals to avoid being identified when you pay the ransom or make a purchase from them.

Within 72 hours of the AFP starting its investigation and making a public statement, he seems to have rescinded his offer to sell the data.

So perhaps he’s gone to ground, as I said in the previous story, in hopes that maybe the AFP won’t find him.

But I suspect that whatever digital cookie crumbs he’s left behind, the AFP is hot on the trail.


DUCK.  So if we ignore the data that’s gone, and the criminality or otherwise of accessing it, what’s the moral of the story for people providing RESTful APIs, web-based access APIs, to customer data?


CHET.  Well, I’m not a programming expert, but it seems like some authentication is in order… [LAUGHTER]

…to ensure that people are only accessing their own customer record if there’s a reason for that to be publicly accessible.

In addition to that, it would appear that a significant number of records were stolen before anything was noticed.

And just as we should monitor, say, rate limits on our own authentication services – our VPNs or our web apps – to ensure that somebody is not making a brute-force attack against them…

…you would hope that once you queried a million records through a service that seems to be designed for you to look up one, perhaps some monitoring is in order!


DUCK.  Absolutely.

That’s a lesson that we could all have learned from way back in the Chelsea Manning hack, isn’t it, where she copied, what was it?

30 years’ worth of State Department cables onto a CD… with headphones on, pretending it was a music CD?


CHET.  Britney Spears, if I recall.


DUCK.  Well, that was written on the CD, wasn’t it?


CHET.  Yes. [LAUGHS]


DUCK.  So it gave a reason why it was a rewriteable CD: “Well, I just put music on it.”

And at no point did any alarm bell go off.

You can imagine, maybe, if you copied the first month worth of data, well, that might be okay.

A year, a decade maybe?

But 30 years?

You’d hope that by then the smoke alarm would be ringing really loudly.


CHET.  Yes.

“Unauthorised backups”, you might call them, I guess.


DUCK.  Yes…

…and this is, of course, a huge issue in modern day ransomware, isn’t it, where a lot of the crooks are exfiltrating data in advance to give them extra blackmail leverage?

So when you come back and say, “I don’t need your decryption key, I’ve got backups,” they say, “Yes, but we have your data, so we’ll spill it if you don’t give us the money.”

In theory, you’d hope that it would be possible to spot the fact that all your data was being backed up but wasn’t following the usual cloud backup procedure that you use.

It’s easy to say that… but it is the kind of thing that you need to look out for.


CHET.  There was a report this week that, in fact, as bandwidth has become so prolific, one of the ransom groups is no longer encrypting.

They’re taking all your data off your network, just like the extortion groups have done for a while, but then they’re wiping your systems rather than encrypting it and going, “No, no, no, we’ll give you the data back when you pay.”


DUCK.  That’s “Exmatter”, isn’t it?


CHET.  Yes.


DUCK.  Why bother with all the complexity of elliptic curve cryptography and AES?

There’s so much bandwidth out there that instead of [LAUGHING]… oh, dear, I shouldn’t laugh… instead of saying, “Pay us the money and we’ll send you the 16-byte decryption key”, it’s “Send us the money and we’ll give you the files back.”


CHET.  It emphasises again how we need to be looking for the tools and the behaviours of someone doing malicious things in our network, because they may be authorised to do some things (like Chelsea Manning), or they may be intentionally open, unauthenticated things that do have some purpose.

But we need to be watching for the behaviour of their abuse, because we can’t just watch for the encryption.

We can’t just watch for somebody password guessing.

We need to watch for these larger activities, these patterns, that indicate something malicious is occurring.


DUCK.  Absolutely.

As I think you said in the minisode that we did, it’s no longer enough just to wait for alerts to pop up on your dashboard to say something bad happened.

You need to be aware of the kind of behaviours that are going on in your network that might not yet be malicious, but are a good sign that something bad is about to happen, because, as always, prevention is an awful lot better than cure.

Chester, I’d like to move on to another item – that story is something I wrote up on Naked Security today, simply because I myself had got confused.

My newsfeed was buzzing with stories about WhatsApp having a zero-day.

Yet when I looked into all the stories, they all seemed to have a common primary source, which was a fairly generic security advisory from WhatsApp itself going back to the beginning of the month.

The clear and present danger that the news headlines led me to believe in…

…turned out, as far as I could see, not to be true at all.

Tell us what happened there.


CHET.  You say, “Zero-day.”

I say, “Show me the victims. Where are they?” [LAUGHTER]


DUCK.  Well, sometimes you may not be able to reveal that, right?


CHET.  Well, in that case, you would tell us that!

That is a normal practice in the industry for disclosing vulnerabilities.

You’ll frequently see, on Patch Tuesday, Microsoft making a statement such as, “This vulnerability is known to have been exploited in the wild”, meaning somebody out there figured out this flaw, started attacking it, then we found out and went back and fixed it.

*That’s* a zero-day.

Finding a software flaw that is not being exploited, or there’s no evidence has ever been exploited, and proactively fixing it is called “Good engineering practice”, and it’s something that almost all software does.

In fact, I recall you mentioning the recent Firefox update proactively fixing a lot of vulnerabilities that the Mozilla team fortunately documents and reports publicly – so we know they’ve been fixed despite the fact no one out there was known to ever be attacking them.


DUCK.  I think it’s important that we keep back that word “zero-day” to indicate just how clear and present a danger is.

And calling everything a zero-day because it could cause remote code execution loses the effect of what I think is a very useful term.

Would you agree with that?


CHET.  Absolutely.

That’s not to diminish the importance of applying these updates, of course – anytime you see “remote code execution”, somebody may now go back and figure out how to attack those bugs and the people that haven’t updated their app.

So it’s still an urgent thing to make sure that you do get the update.

But because of the nature of a zero-day, it really does deserve its own term.


DUCK.  Yes.

Trying to make zero-day stories out of things that are interesting and important but not necessarily a clear and present danger is just confusing.

Particularly if the fix actually came out a month before, and you’re presenting it as a story as though “this is happening right now”.

Anyone going to their iPhone or their Android is going to be saying, “I’ve got a version number way ahead of that. What is going on here?”

Confusion does not help when it comes to trying to do the right thing in cybersecurity.


CHET.  And if you find a security flaw that could be a zero-day, please report it, especially if there’s a bug bounty program offered by the organisation that develops the software.

I did see, this afternoon, that somebody had discovered a vulnerability over the weekend in OpenSea, which is a platform for trading non-fungible tokens or NFTs… which I can’t recommend to anyone. It was a critical unpatched vulnerability in their system; the researcher reported it, and received a $100,000 bug bounty today.

So it is worth being ethical and turning these things in when you do discover them, to prevent them from turning into a zero-day when somebody else finds them.


DUCK.  Absolutely.

You protect yourself, you protect everybody else, you do the right thing by the vendor… yet through responsible disclosure you do provide that “mini-Sword of Damocles” that means that unethical vendors, who in the past might have swept bug reports under the carpet, can’t do so because they know that they’re going to get outed in the end.

So they actually might as well do something about it now.

Chester, let’s move on to our last topic for this week, and that is the issue of what happens to data on devices when you don’t really want them anymore.

And the story I’m referring to is the $35,000,000 fine that was issued to Morgan Stanley for an incident going all the way back to 2016.

There are several aspects to the story… it’s fascinating reading, actually, the way it all unfolded, and the sheer length of time that this data lived on, floating around in unknown locations on the internet.

But the main part of the story is that they had… I think it was something like 4900 hard disks, including disks coming out of RAID arrays, server disks with client data on.

“We don’t want these anymore, so we’ll send them away to a company which will wipe them and then sell them, so we’ll get some money back.”

And in the end, the company may have wiped some of them, but some of them they just sent for sale on an auction site without wiping them at all.

We keep making the same old mistakes!


CHET.  Yes.

The very first HIPAA violation, I believe, that was found in the United States – the healthcare legislation about protecting patient information – was for stacks of hard disks in a janitorial closet that were unencrypted.

And that’s the key word to begin the process of what to do about this, right?

There’s not a disk in the world that should not be full-disk encrypted at this point.

Every iPhone has been for as long as I can remember.

Almost all Androids have been for as long as I can remember, unless you’re still picking up Chinese burner phones with Android 4 on them.

And desktop computers, unfortunately, are not encrypted frequently enough.

But they should be no different than those server hard disks, those RAID arrays.

Everything should be encrypted to begin with, to make the first step in the process difficult, if not impossible…

…followed by the destruction of that device if and when it reaches the end of its useful life.


DUCK.  For me, one of the key things in this Morgan Stanley story is that five years after this started… it started in 2016, and in June last year, disks from that auction site that had gone into the great unknown were still being bought back by Morgan Stanley.

They were still unwiped, unencrypted (obviously), working fine, and with all the data intact.

Unlike bicycles that get thrown in the canal, or garden waste that you put in the compost bin, data on hard disks may not decay, possibly for a very long time.

So if in doubt, rub it out completely, eh?


CHET.  Yes, pretty much.

Unfortunately, that’s the way it is.

I like to see things get reused as much as possible to reduce our e-waste.

But data storage is not one of those things where we can afford to take that chance…


DUCK.  It could be a real data saver, not just for you, but for your employer, and your customers, and the regulator.

Chester, thank you so much for stepping up again at very, very short notice.

Thank you so much for sharing with us your insights, particularly your look at that Optus story.

And, as usual, until next time…


BOTH.  Stay secure.

[MUSICAL MODEM]


Optus breach – Aussie telco told it will have to pay to replace IDs

Last week’s cyberintrusion at Australian telco Optus, which has about 10 million customers, has drawn the ire of the country’s government over how the breached company should deal with stolen ID details.

Darkweb screenshots surfaced quickly after the attack, with an underground BreachForums user going by the plain-speaking name of optusdata offering two tranches of data, alleging that they had two databases as follows:

  • 11,200,000 user records with name, date of birth, mobile number and ID; 4,232,652 of those records included some sort of ID document number, and 3,664,598 of the IDs were from driving licences.
  • 10,000,000 address records with email, date of birth, ID and more; 3,817,197 of those records had ID document numbers, and 3,238,014 of the IDs were from driving licences.

The seller wrote, “Optus if you are reading! Price for us to not sale [sic] data is 1,000,000$US! We give you 1 week to decide.”

Regular buyers, the seller said, could have the databases for $300,000 as a job lot, if Optus didn’t take up its $1m “exclusive access” offer within the week.

The seller said they expected payment in the form of Monero, a popular cryptocurrency that’s harder to trace than Bitcoin.

Monero transactions are mixed together as part of the payment protocol, making the Monero ecosystem into a sort-of cryptocoin tumbler or anonymiser in its own right.

What happened?

The data breach itself was apparently down to missing security on what’s known in the jargon as an API endpoint. (API is short for application programming interface, a predefined way for one part of an app, or collection of apps, to request some sort of service, or to retrieve data, from another.)

On the web, API endpoints normally take the form of special URLs that trigger specific behaviour, or return requested data, instead of simply serving up a web page.

For example, a URL like https://www.example.com/about might simply feed back a static web page in HTML form, such as:

 <HTML>
    <BODY>
       <H2>About this site</H2>
       <P>This site is just an example, as the URL implies.
    </BODY>
 </HTML>

Visiting the URL with a browser would therefore result in a web page that looks just as you would expect.

But a URL such as https://api.example.com/userdata?id=23de6731e9a7 might return a database record specific to the specified user, as though you had done a function call in a C program along the lines of:

 /* Typedefs and prototypes */

 typedef struct USERDATA UDAT;

 UDAT* alloc_new_userdata(void);
 int   get_userdata(UDAT* buff, const char* uid);

 /* Get a record */

 UDAT* datarec = alloc_new_userdata();
 int   err     = get_userdata(datarec,"23de6731e9a7");

Assuming the requested user ID existed in the database, calling the equivalent function via an HTTP request to the endpoint might produce a reply in JSON format, like this:

 { "userid" : "23de6731e9a7", "nickname" : "duck", "fullname" : "Paul Ducklin", "IDnum" : "42-4242424242" }

In an API of this sort, you’d probably expect several cybersecurity precautions to be in place, such as:

  • Authentication. Each web request might need to include an HTTP header specifying a random (unguessable) session cookie issued to a user who had recently proved their identity, for example with a username, password and 2FA code. This sort of session cookie, typically valid for a limited time only, acts as a temporary access pass for lookup requests subsequently performed by the pre-authenticated user. API requests from unauthenticated or unknown users can therefore instantly be rejected.
  • Access restrictions. For database lookups that might retrieve personally identifiable information (PII) such as ID numbers, home addresses or payment card details, the server accepting API endpoint requests might impose network-level protection to filter out requests coming directly from the internet. An attacker would therefore need to compromise an internal server first, and wouldn’t be able to probe for data directly over the internet.
  • Hard-to-guess database identifiers. Although security through obscurity (also known as “they’ll never guess that”) is a poor underlying basis for cybersecurity, there’s no point in making things easier than you have to for the crooks. If your own userid is 00000145, and you know that a friend who signed up just after you got 00000148, then it’s a good guess that valid userid values start at 00000001 and go up from there. Randomly-generated values make it harder for attackers who have already found a loophole in your access control to run a loop that tries over and over to retrieve likely userids.
  • Rate limiting. Any repetitive sequence of similar requests can be used as a potential IoC, or indicator of compromise. Cybercriminals who want to download 11,000,000 database items generally don’t use a single computer with a single IP number to do the entire job, so bulk download attacks aren’t always immediately obvious just from traditional network flows. But they will often generate patterns and rates of activity that simply don’t match what you’d expect to see in real life. (See the sketch after this list.)
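
To make that last point concrete, here’s a minimal sketch of the classic token bucket technique for rate limiting – our own illustrative C code, with all names and numbers made up, not anything from Optus’s systems. Each client gets a steadily refilling allowance of lookups, so that a bulk download burns through its allowance almost at once, at which point the server can refuse further requests and raise the alarm:

 /* ratelimit.c - token-bucket rate limiter sketch.
    Hypothetical example code: names and numbers are made up. */

 #include <stdio.h>
 #include <time.h>

 typedef struct {
     double tokens;      /* current allowance */
     double max_tokens;  /* bucket capacity, i.e. permitted burst size */
     double refill_rate; /* tokens added per second */
     time_t last;        /* when we last topped up the bucket */
 } bucket_t;

 /* Return 1 if this request is allowed, 0 if it should be rejected. */
 int allow_request(bucket_t *b)
 {
     time_t now = time(NULL);
     b->tokens += (double)(now - b->last) * b->refill_rate;
     if (b->tokens > b->max_tokens) {
         b->tokens = b->max_tokens;
     }
     b->last = now;
     if (b->tokens >= 1.0) {
         b->tokens -= 1.0;   /* spend one token on this request */
         return 1;
     }
     return 0;               /* bucket empty: send HTTP 429 and raise an alert */
 }

 int main(void)
 {
     /* Allow bursts of up to 10 lookups, refilling at 1 per second. */
     bucket_t b = { 10.0, 10.0, 1.0, time(NULL) };
     for (int i = 0; i < 15; i++) {
         printf("request %2d: %s\n", i + 1,
                allow_request(&b) ? "allowed" : "REJECTED (rate limited)");
     }
     return 0;
 }

Because all 15 test lookups arrive within the same second, the first 10 are allowed and the rest are rejected – exactly the sort of pattern that deserves an entry in your security logs.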

Apparently, few or none of these protections were in place during the Optus attack, notably including the first one…

…meaning that the attacker was able to access PII without ever needing to identify themselves at all, let alone to steal a legitimate user’s login code or authentication cookie to get in.

Somehow, it seems, an API endpoint with access to sensitive data was opened up to the internet at large, where it was discovered by a cybercriminal and abused to extract information that should have been behind some sort of cybersecurity portcullis.
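
We don’t know exactly what the vulnerable endpoint looked like, but it’s worth seeing just how little skill “serially walking through the customer records” requires. This hypothetical sketch, using the well-known libcurl library against a made-up URL, fetches ten sequentially numbered records in a loop – no password, no cookie, no exploit code at all:

 /* enumerate.c - how little "hacking" an unauthenticated, sequentially
    numbered endpoint needs. Hypothetical URL; illustrative only.
    Build with: gcc enumerate.c -lcurl */

 #include <stdio.h>
 #include <curl/curl.h>

 int main(void)
 {
     char url[256];

     curl_global_init(CURL_GLOBAL_DEFAULT);
     CURL *curl = curl_easy_init();
     if (curl == NULL) { return 1; }

     /* No password, no session cookie, no 2FA: just count upwards
        through the record numbers at the end of the URL. */
     for (long id = 1; id <= 10; id++) {
         snprintf(url, sizeof(url),
                  "https://api.example.com/userrecord/%08ld", id);
         curl_easy_setopt(curl, CURLOPT_URL, url);
         curl_easy_perform(curl);    /* each reply goes to stdout */
     }

     curl_easy_cleanup(curl);
     curl_global_cleanup();
     return 0;
 }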

Also, if the attacker’s claim to have retrieved a total of more than 20,000,000 database records from two databases is to be believed, we’re assuming [a] that Optus userid codes were easily computed or guessed, and [b] that no “database access has hit unusual levels” warnings went off.
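
On the “easily computed or guessed” point: identifiers that end up in URLs don’t have to be sequential. Here’s a minimal sketch of generating a random 48-bit ID instead, assuming a Linux or BSD system where the getentropy() function is available:

 /* randomid.c - generate hard-to-guess identifiers instead of
    sequential ones. Sketch only: getentropy() is declared in
    <unistd.h> on Linux (glibc 2.25+) and the BSDs. */

 #include <stdio.h>
 #include <stdint.h>
 #include <unistd.h>

 int main(void)
 {
     uint8_t raw[6];   /* 48 random bits */

     if (getentropy(raw, sizeof(raw)) != 0) {
         perror("getentropy");
         return 1;
     }

     /* Print as 12 hex characters, like the made-up userid
        "23de6731e9a7" in the JSON example above. In real life,
        you'd also check each new ID for collisions before use. */
     for (size_t i = 0; i < sizeof(raw); i++) {
         printf("%02x", raw[i]);
     }
     printf("\n");

     return 0;
 }

With 10,000,000 users scattered across a 48-bit space, an attacker would need many millions of lookups, on average, for every valid record hit – the sort of traffic that rate limiting and monitoring ought to flag long before any meaningful amount of data leaks out.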

Unfortunately, Optus hasn’t been terribly clear about how the attack unfolded, saying merely:

Q. How did this happen?

A. Optus was the victim of a cyberattack. […]

Q. Has the attack been stopped?

A. Yes. Upon discovering this, Optus immediately shut down the attack.

In other words, it looks as though “shutting down the attack” involved closing the loophole against further intrusion (e.g. by blocking access to the unauthenticated API endpoint) rather than intercepting the initial attack early on after only a limited number of records had been stolen.

We suspect that if Optus had detected the attack while it was still under way, the company would have stated in its FAQ just how far the crooks had got before their access was shut down.

What next?

What about customers whose passport or driving licence numbers were exposed?

Just how much of a risk does leaking an ID document number, rather than more complete details of the document itself (such as a high-resolution scan or certified copy), pose to the victim of a data breach like this?

How much identification value should we give to ID numbers alone, given how widely and frequently we share them these days?

According to the Australian government, the risk is significant enough that victims of the breach are being advised to replace affected documents.

And with possibly millions of affected users, the document renewal charges alone could run to hundreds of millions of dollars, and necessitate the cancellation and reissuing of a significant proportion of the country’s driving licences.

We estimate that about 16 million Aussies have licences, and are inclined to use them as ID inside Australia instead of carrying round their passports. So, if the optusdata BreachForums poster was telling the truth, and close to 4 million licence numbers were stolen, close to 25% of all Australian licences might need replacing.

We don’t know how useful this might actually be in the case of Australian driving licences, which are issued by individual states and territories. In the UK, for instance, your driving licence number is quite obviously derived algorithmically from your name and date of birth, with a very modest amount of shuffling and just a few random characters inserted. A new licence therefore gets a new number that is very similar to the previous one.

Those without licences, or visitors who had bought SIM cards from Optus on the basis of a foreign passport, would need to replace their passports instead – an Australian passport replacement costs close to AU$193, a UK passport is £75 to £85, and a US renewal is $130 to $160.

(There’s also the question of waiting times: Australia currently advises that a replacement passport will take at least six weeks [2022-09-28T13:50Z], and that’s without a sudden surge caused by breach-related processing; in the UK, due to existing backlogs, His Majesty’s Government is presently telling applicants to allow 10 weeks for passport renewal.)

Who carries the cost?

Of course, if replacing all potentially compromised IDs is deemed necessary, the burning question is, “Who will pay?”

According to the Australian Prime Minister, Anthony Albanese, there’s no doubt where the money to replace passports should come from.

There’s no word from the federal legislature on replacing driving licences, that being a matter handled by State and Territory governments…

…and no word on whether “replace all documents” will become a routine reaction whenever a breach involving ID documents is reported, something that could easily swamp the public service, given that licences and passports are usually expected to last 10 years each.

Watch this space – this looks set to get interesting!


WhatsApp “zero-day exploit” news scare – what you need to know

For the last day or two, our news feed has been buzzing with warnings about WhatsApp.

We saw many reports linking to two tweets that claimed the existence of two zero-day security holes in WhatsApp, giving their bug IDs as CVE-2022-36934 and CVE-2022-27492.

One article, apparently based on those tweets, breathlessly insisted not only that these were zero-day bugs, but also that they’d been discovered internally and fixed by the WhatsApp team itself.

By definition, however, a zero-day refers to a bug that attackers discovered and figured out how to exploit before a patch was available, so that there were zero days on which even the most proactive sysadmin with the most progressive attitude to patching could have been ahead of the game.

In other words, the whole idea of stating that a bug is a zero-day (often written with just a digit, as 0-day) is to persuade people that the patch is at least as important as ever, and perhaps more important than that, because installing the patch is more a question of catching up with the crooks than of keeping in front of them.

If developers uncover a bug themselves and patch it of their own accord in their next update, it’s not a zero-day, because the Good Guys got there first.

Likewise, if security researchers follow the principle of responsible disclosure, where they reveal the details of a new bug to a vendor but agree not to publish those details for an agreed period of time to give the vendor time to create a patch, it’s not a zero-day.

Setting a responsible disclosure deadline for publishing a writeup of the bug serves two purposes, namely that the researcher ultimately gets to take credit for the work, while the vendor is prevented from sweeping the issue under the carpet, knowing that it will be outed anyway in the end.

So, what’s the truth?

Is WhatsApp currently under active attack by cybercriminals? Is this a clear and current danger?

How worried should WhatsApp users be?

If in doubt, consult the advisory

As far as we can tell, the reports circulating at the moment are based on information directly from WhatsApp’s own 2022 security advisory page, which says [2022-09-27T16:17:00Z]:

WhatsApp Security Advisories

2022 Updates

September Update

CVE-2022-36934: An integer overflow in WhatsApp for Android prior to v2.22.16.12, Business for Android prior to v2.22.16.12, iOS prior to v2.22.16.12, Business for iOS prior to v2.22.16.12 could result in remote code execution in an established video call.

CVE-2022-27492: An integer underflow in WhatsApp for Android prior to v2.22.16.2, WhatsApp for iOS prior to v2.22.15.9 could have caused remote code execution when receiving a crafted video file.

Both the bugs are listed as potentially leading to remote code execution, or RCE for short, meaning that booby-trapped data could force the app to crash, and that a skilled attacker might be able to rig up the circumstances of the crash to trigger unauthorised behaviour along the way.

Typically, when an RCE is involved, that “unauthorised behaviour” means running malicious program code, or malware, to subvert and take some form of remote control over your device.
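
WhatsApp hasn’t published the internal details of these bugs, so the fragment below is entirely made up, but it shows in general terms why an integer overflow in media-handling code is dangerous: a 32-bit size calculation that wraps around produces a buffer that’s far too small, followed by a heap-corrupting write:

 /* overflow.c - a generic integer overflow of the kind that can turn
    "app crash" into code execution. Deliberately simplified, and
    entirely made up: this is NOT WhatsApp's actual code. */

 #include <stdint.h>
 #include <stdlib.h>
 #include <string.h>

 #define CHUNKSIZE 16

 void process_chunks(const uint8_t *data, uint32_t numchunks)
 {
     /* If numchunks exceeds 0x0FFFFFFF, this 32-bit multiplication
        wraps around, so a huge chunk count in a booby-trapped file
        produces a tiny memory allocation... */
     uint32_t bytes = numchunks * CHUNKSIZE;

     uint8_t *buf = malloc(bytes);
     if (buf == NULL) { return; }

     /* ...but the copy loop below trusts numchunks, writing far past
        the end of buf and corrupting the heap - corruption that a
        skilled attacker may be able to aim precisely enough to take
        over the app. */
     for (uint32_t i = 0; i < numchunks; i++) {
         memcpy(buf + (size_t)i * CHUNKSIZE,
                data + (size_t)i * CHUNKSIZE, CHUNKSIZE);
     }

     free(buf);
 }

In a fragment like this, the value of numchunks would come straight from the booby-trapped file, which is exactly what puts the attacker in charge of the arithmetic.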

From the descriptions, we assume that the first bug required a connected call before it could be triggered, while the second bug sounds as though it could be triggered at other times, for example while reading a message or viewing a file already downloaded to your device.

Mobile apps are usually regulated much more strictly by the operating system than apps on laptops or servers, where local files are generally accessible to, and commonly shared between, multiple programs.

This, in turn, means that the compromise of a single mobile app generally poses less of a risk than a similar malware attack on your laptop.

On your laptop, for example, your podcast player can probably peek at your documents by default, even if none of them are audio files, and your photo program can probably rootle around in your spreadsheet folder (and vice versa).

On your mobile device, however, there’s typically a much stricter separation between apps, so that, by default at least, your podcast player can’t see documents, your spreadsheet program can’t browse your photos, and your photo app can’t see audio files or documents.

However, even access to a single “sandboxed” app and its data can be all that an attacker wants or needs, especially if that app is the one you use for communicating securely with your colleagues, friends and family, like WhatsApp.

WhatsApp malware that could read your past messages, or even just your list of contacts, and nothing else, could provide a treasure trove of data for online criminals, especially if their goal is to learn more about you and your business in order to sell that inside information on to other crooks on the dark web.

A software bug that opens up cybersecurity holes is known as a vulnerability, and any attack that makes practical use of a specific vulnerability is known as an exploit.

And any known vulnerability in WhatsApp that might be exploitable for snooping purposes is well worth patching as soon as possible, even if no one ever figures out a working exploit for stealing data or implanting malware.

(Not all vulnerabilities end up being exploitable for RCE – some bugs turn out to be sufficiently capricious that even if they can reliably be triggered to provoke a crash, or denial of service, they can’t be tamed well enough to take over the crashed app completely.)

What to do?

The good news here is that the bugs listed here were apparently patched close to a month ago, even though the latest reports we’ve seen imply that these flaws represent a clear and current danger to WhatsApp users.

As the WhatsApp advisory page points out, these two so-called “zero-day” holes are patched in all flavours of the app, for both Android and iOS, with version numbers 2.22.16.12 or later.

According to Apple’s App Store, the current version of WhatsApp for iOS (both Messenger and Business flavours) is already 2.22.19.78, with at least five intervening updates released since the first fix that patched the abovementioned bugs, which already dates back a month.

On Google Play, WhatsApp is already up to 2.22.19.76 (version numbers don’t always align exactly between different operating systems, but are often close).

In other words, if you have set your device to autoupdate, then you ought to have been patched against these WhatsApp threats for about a month already.

To check the apps you have installed, when they last updated, and their version details, open the App Store app on iOS, or the Play Store app on Android.

Tap on your account icon to access the list of apps installed on your device, including details of when they last updated and the current version number you’ve got.


Uber and Rockstar – has a LAPSUS$ linchpin just been busted (again)?

The curious name LAPSUS$ made huge headlines in March 2022 as the nickname of a hacking gang, or, in unvarnished words, as the label for a notorious and active collective of cybercriminals:

The name was somewhat unusual for a cybercrime crew, who commonly adopt soubriquets that sound edgy and destructive, such as DEADBOLT, Satan, Darkside, and REvil.

As we mentioned back in March, however, lapsus is as good a modern Latin word as any for “data breach”, and the trailing dollar sign signifies both financial value and programming, being the traditional way of denoting that a BASIC variable is a text string, not a number.

The gang, team, crew, posse, collective, gaggle, call it what you will, of attackers apparently presented a similar sort of ambiguity in their cybercriminality.

Sometimes, they seemed to show that they were serious about extorting money or ripping off cryptocurrency from their victims, but at other times they seemed simply to be showing off.

Microsoft admitted at the time that it had been infiltrated by LAPSUS$, though the software giant referred to the group as DEV-0537, with the criminals apparently stealing gigabytes of source code.

Okta, a 2FA service provider, was another high-profile victim, where the hackers acquired RDP access to a support techie’s computer, and were therefore able to access a wide range of Okta’s internal systems as if they were logged in directly to Okta’s own network.

That support techie didn’t work for Okta, but for a company contracted by Okta, so that the attackers were essentially able to breach Okta’s network without breaching Okta itself.

Intriguingly, even though Okta’s breach happened in January 2022, neither Okta nor its contractor made any public admission of the breach for about two months, while a forensic examination took place…

…until LAPSUS$ apparently decided to pre-empt any official announcement by dumping screenshots to “prove” the breach, ironically on the very same day that Okta received the final forensic report from the contractor (how, or if, LAPSUS$ got advance warning of the report’s delivery is unknown):

Next on the attack docket was graphics chip vendor Nvidia, who apparently also suffered a data heist, followed by one of the weirdest ransomware-with-a-difference extortion demands on record – open-source your graphics driver code, or else:

As we said in the Naked Security podcast (S3 Ep73):

Normally, the connection between cryptocurrency and ransomware is the crooks figure, “Go and buy some cryptocurrency and send it to us, and we’ll decrypt all your files and/or delete your data.” […]

But in this case, the connection with cryptocurrency was they said, “We’ll forget all about the massive amount of data we stole if you open up your graphics cards so that they can cryptomine at full power.”

Because that goes back to a change that Nvidia made last year [2021], which was very popular with gamers [by discouraging cryptominers from buying up all the Nvidia GPUs on the market for non-graphics purposes].

A different sort of cybercriminal?

For all that the online activities attributed to LAPSUS$ have been seriously and unashamedly criminal, the group’s post-exploitation behaviour often seemed rather old-school.

Unlike today’s multimillion-dollar ransomware attackers, whose primary motivations are money, money and more money, LAPSUS$ apparently aligned more closely with the virus-writing scene of the late 1980s and 1990s, where attacks were commonly conducted simply for bragging rights and “for the lulz”.

(The phrase for the lulz translates roughly as in order to provoke insultingly mirthful laughter, based on the acronym LOL, short for “laughing out loud”.)

So, when the City of London Police announced, just two days after the not-so-mirthful-at-all screenshots of the Okta attack appeared, that it had arrested what sounded like a motley bunch of youngsters in the UK for allegedly being members of a hacking group…

…the world’s IT media quickly made a connection with LAPSUS$:

As far as we’re aware, UK law enforcement has never used the word LAPSUS$ in connection with the suspects in that arrest, noting back in March 2022 simply that “our enquiries remain ongoing.”

Nevertheless, an apparent link with LAPSUS$ was inferred from the fact that one of the youngsters busted was said to be 17 years old, and to hail from Oxfordshire in England.

Fascinatingly, a hacker of that age who allegedly lived in a town just outside Oxford, the city from which the surrounding county gets its name, had been outed by a disgruntled cybercrime rival not long before, in what’s known as a doxxing.

Doxxing is where a cybercriminal releases stolen personal documents and details on purpose, often in order to put an individual at risk of arrest by law enforcement, or in danger of retribution by ill-informed or malevolent opponents.

The doxxer leaked what he claimed was his rival’s home address, together with personal details and photos of him and close family members, as well as a bunch of allegations that he was some kind of linchpin in the LAPSUS$ crew.

LAPSUS$ back in the spotlight

As you can imagine, the recent Uber hacking stories revived the name LAPSUS$, given that the attacker in that case was widely claimed to be 18 years old, and was apparently only interested in showing off:

As Chester Wisniewski explained in a recent podcast minisode:

[I]n this case, […] it seems to be “for the lulz”. […T]he person who did it was mostly collecting trophies as they bounced through the network – in the form of screenshots of all [the] different tools and utilities and programs that were in use around Uber – and posting them publicly, I guess for the street cred.

Shortly after the Uber hack, nearly an hour’s worth of what seemed to be video clips from the forthcoming game GTA6, apparently screen captures made for debugging and testing purposes, were leaked following an intrusion at Rockstar Games.

Once again, the same young hacker, with the same presumed connection to LAPSUS$, was implicated in the attack.

This time, reports suggest that the hacker had more in mind than merely bragging rights, allegedly saying that they were “looking to negotiate a deal.”

So, when City of London Police tweeted earlier this week that they had “arrested a 17-year-old in Oxfordshire on suspicion of hacking”…

…you can imagine what conclusions the Twittersphere quickly reached.

It must be the same person!

After all, what’s the chance that we’re talking about two different and unconnected suspects here?

The only thing we don’t know is quite where the LAPSUS$ moniker comes into it, if indeed it’s involved at all.

O, what a tangled web we weave/When first we practise to deceive.




Morgan Stanley fined millions for selling off devices full of customer PII

Morgan Stanley, which bills itself in its website title tag as the “global leader in financial services”, and states in the opening sentence of its main page that “clients come first”, has been fined $35,000,000 by the US Securities and Exchange Commission (SEC)…

…for selling off old hardware devices online, including thousands of disk drives, that were still loaded with personally identifiable information (PII) belonging to its clients.

Strictly speaking, it’s not a criminal conviction, so the penalty isn’t technically a fine, but it’s “not a fine” in much the same sort of way that car owners in England no longer get parking fines, but officially pay penalty charge notices instead.

Also, strictly speaking, Morgan Stanley didn’t directly sell off the offending devices itself.

But the company contracted someone else to do the work of wiping-and-selling-off the superannuated equipment, and then didn’t bother to keep its eye on the process to ensure that it was done properly.

The full story

The SEC’s official document on the matter, Administrative Proceeding File Number 3-21112, actually makes really useful reading for anyone in SecOps or cybersecurity.

At 11 pages, it’s not too long to read in full, and the story it tells is a fascinating one, revealing numerous twists and turns, unauthorised switches in subcontractors, lack of oversight and follow-up, and reckless shortcuts.

If you have anything to do with the secure disposal of redundant equipment, be sure to read the SEC’s final document, and make sure that your own policies and procedures take into account the failings described in the report.

Notably, ensure that you have done, are doing, and will do a better job than Morgan Stanley with:

  • The equipment retirement and data destruction policies you adopt up front.
  • The way you choose your data-destruction contractors for old devices.
  • The procedures you follow to keep tabs on progress.

As you will see from the SEC’s tales of woeful wilfulness (the second word is one that the SEC uses officially and formally in respect of Morgan Stanley), there’s an awful lot that can go wrong when you are getting rid of old IT kit.

Nevertheless, the main points of the story are simply told in the SEC’s summary, namely that Morgan Stanley, via a contractor:

  • Sold approximately 4,900 information technology assets containing client PII, many of which still had that PII on them when they reached their new owners.
  • Decommissioned 500 network caching devices containing client PII that were at best partially encrypted, of which 42 were unaccounted for after their alleged “disposal”.

Dirty deeds and they’re done dirt cheap

In the first case, dating back to 2016, it seems that the contractor chosen by Morgan Stanley, perhaps realising that the company wasn’t checking up on how faithfully the wiping-and-selling-on process was being followed, decided to switch to a new (and unapproved) subcontractor who apparently skipped the “wipe it first” part, and directly put the retired devices up for sale on an on-line auction site.

Someone in Oklahoma bought a few of the old drives, presumably as hot spares for their own IT operation, and realised that they were still full of Morgan Stanley client data.

According to the SEC, the purchaser contacted Morgan Stanley and said, “[y]ou are a major financial institution and should be following some very stringent guidelines on how to deal with retiring hardware. Or at the very least getting some kind of verification of data destruction from the vendors you sell equipment to.”

Morgan Stanley ultimately bought back those drives, but that didn’t deal with any of the other disks that had been sold on elsewhere.

Indeed, the SEC notes that 14 more data-tainted disks were bought back from someone else by Morgan Stanley as recently as June 2021, still unwiped, still working fine, and still containing “at least 140,000 pieces of customer PII”.

As the SEC wryly notes, “the vast majority of the hard drives from the 2016 Data Center Decommissioning remain missing.”

We are certain that we may have encrypted something

In the second case, the retired devices were WAN (wide area network) caching servers used by branch offices to optimise internet bandwidth in order to accelerate access to common documents.

Ironically, these devices had an encrypt-any-stored-data-packets option that would have simplified decommissioning greatly.

After all, if you can show that you turned the encryption option on, and that you wiped all known copies of the decryption key, then data protection regulators in many countries will treat the encrypted data as wiped, too.

Data that’s considered undecryptable is no more meaningful than digital shredded cabbage.
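
In code terms, the “treat it as wiped” argument boils down to what’s sometimes called crypto-erasure: destroy every copy of the key, and the ciphertext left behind is indistinguishable from random noise. Here’s a minimal, hypothetical sketch of the key-shredding step – real systems would keep the key in sealed hardware or a managed key store, not in a plain memory buffer:

 /* cryptoshred.c - the "crypto-erasure" idea: destroy every copy of
    the key, and the encrypted data left on the disk becomes noise.
    Hypothetical key handling, for illustration only. */

 #include <stdio.h>

 #define KEYLEN 32   /* e.g. a 256-bit full-disk encryption key */

 static unsigned char diskkey[KEYLEN];

 void shred_key(void)
 {
     /* A plain memset() can be optimised away as a "dead store";
        writing through a volatile pointer forces the overwrite
        to happen for real. */
     volatile unsigned char *p = diskkey;
     for (int i = 0; i < KEYLEN; i++) {
         p[i] = 0;
     }
 }

 int main(void)
 {
     /* Imagine diskkey was loaded and used for full-disk crypto... */
     shred_key();
     printf("Key destroyed: without it, the ciphertext is just noise.\n");
     return 0;
 }

The crucial part, of course, is being able to prove that the buffer above really was the only remaining copy of the key – which is exactly the evidence Morgan Stanley couldn’t produce.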

But Morgan Stanley apparently didn’t activate the encryption option until at least one year after the devices went into use…

…and the encryption only applied to new data subsequently written to the device, not to anything that was there before.

So all that Morgan Stanley can “prove”, for the 42 devices that are still out there somewhere, is that each device almost certainly contains at least some client PII that definitely isn’t encrypted.

What to do?

  • You can outsource your cybersecurity, but you can’t outsource your responsibility. Make sure that you comply with data protection regulations by keeping track of how your contractors are complying with them, too. Part of the SEC’s complaint against Morgan Stanley is that it should have been obvious that their chosen operator had deviated from the official plan, and thus that the company could easily have avoided becoming non-compliant and putting their clients at risk.
  • Full-device encryption can help you comply with data protection rules. Properly-scrambled data without the decryption key is effectively just random noise, so many data protection regulators treat “undecryptable” disks as if they’d been wiped, or never contained any data at all. But you need to be able to show both that you activated the encryption correctly in the first place, and that anyone who acquires the disk in future will be unable to acquire the decryption key.
  • If in doubt, go for device destruction, not for wiping-and-selling-on. There are sound environmental reasons for not blindly destroying and recycling every computing device that you retire from service, but there are diminishing returns from reusing old kit. Even large devices can be physically “shredded”, leaving their metals open to recovery but not their data. If you can’t usefully reuse it, don’t bother selling it on to someone else who might not ultimately dispose of it as soundly as you. Dispose of it responsibly yourself.
  • Mishandled PII can show up years after you lost it. Unlike garden waste in the compost bin or old bicycles dumped in the canal, misplaced data storage devices can show up in perfect working order, with all their original data intact, for years after you might have assumed they were lost without trace, or degraded beyond repair.

We can’t resist ending with the rhyme we often use to warn people about the risks of oversharing on social media, because it applies equally well to data stored by the biggest IT department.

If in doubt / Don’t give it out.


WATCH THE SPARKS FLY – A DISK SHREDDER IN ACTION


(Watch directly on YouTube if the video won’t play here.)

