
Morgan Stanley fined millions for selling off devices full of customer PII

Morgan Stanley, which bills itself in its website title tag as the “global leader in financial services”, and states in the opening sentence of its main page that “clients come first”, has been fined $35,000,000 by the US Securities and Exchange Commission (SEC)…

…for selling off old hardware devices online, including thousands of disk drives, that were still loaded with personally identifiable information (PII) belonging to its clients.

Strictly speaking, it’s not a criminal conviction, so the penalty isn’t technically a fine, but it’s “not a fine” in much the same sort of way that car owners in England no longer get parking fines, but officially pay penalty charge notices instead.

Also, strictly speaking, Morgan Stanley didn’t directly sell off the offending devices itself.

But the company contracted someone else to do the work of wiping-and-selling-off the superannuated equipment, and then didn’t bother to keep its eye on the process to ensure that it was done properly.

The full story

The SEC’s official document on the matter, Administrative Proceeding File Number 3-21112, actually makes really useful reading for anyone in SecOps or cybersecurity.

At 11 pages, it’s not too long to read in full, and the story it tells is a fascinating one, revealing numerous twists and turns, unauthorised switches in subcontractors, lack of oversight and follow-up, and reckless shortcuts.

If you have anything to do with the secure disposal of redundant equipment, be sure to read the SEC’s final document, and make sure that your own policies and procedures take into account the failings described in the report.

Notably, ensure that you have done, are doing, and will do a better job than Morgan Stanley with:

  • The equipment retirement and data destruction policies you adopt up front.
  • The way you choose your data-destruction contractors for old devices.
  • The procedures you follow to keep tabs on progress.

As you will see from the SEC’s tales of woeful wilfulness (the second word is one that the SEC uses officially and formally in respect of Morgan Stanley), there’s an awful lot that can go wrong when you are getting rid of old IT kit.

Nevertheless, the main points of the story are simply told in the SEC’s summary, namely that Morgan Stanley, via a contractor:

  • Sold approximately 4,900 information technology assets containing client PII, many of which still had that PII on them when they reached their new owners.
  • Decommissioned 500 network caching devices containing client PII that were at best partially encrypted, of which 42 were unaccounted for after their alleged “disposal”.
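If you're wondering how 42 devices can end up "unaccounted for", the bookkeeping is conceptually no more than a set difference between what you handed over and what came back certified as destroyed. Here's a minimal, hypothetical sketch (the serial numbers and function name are ours, not Morgan Stanley's):

```python
# Hypothetical sketch: reconcile an asset-retirement inventory against the
# destruction certificates returned by a disposal contractor.
# All serial numbers below are invented for illustration.

def unaccounted_assets(retired, certified):
    """Return serials handed over for disposal but never certified destroyed."""
    return sorted(set(retired) - set(certified))

retired = ["WAN-0001", "WAN-0002", "WAN-0003", "WAN-0004"]
certified = ["WAN-0001", "WAN-0003"]

missing = unaccounted_assets(retired, certified)
print(f"{len(missing)} device(s) unaccounted for: {missing}")
```

The hard part, of course, isn't the set difference: it's making sure both lists actually get collected, which is exactly the oversight step that was skipped here.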

Dirty deeds and they’re done dirt cheap

In the first case, dating back to 2016, it seems that the contractor chosen by Morgan Stanley, perhaps realising that the company wasn’t checking up on how faithfully the wiping-and-selling-on process was being followed, decided to switch to a new (and unapproved) subcontractor who apparently skipped the “wipe it first” part, and directly put the retired devices up for sale on an on-line auction site.

Someone in Oklahoma bought a few of the old drives, presumably as hot spares for their own IT operation, and realised that they were still full of Morgan Stanley client data.

According to the SEC, the purchaser contacted Morgan Stanley and said, “[y]ou are a major financial institution and should be following some very stringent guidelines on how to deal with retiring hardware. Or at the very least getting some kind of verification of data destruction from the vendors you sell equipment to.”

Morgan Stanley ultimately bought back those drives, but that didn’t deal with any of the other disks that had been sold on elsewhere.

Indeed, the SEC notes that 14 more data-tainted disks were bought back from someone else by Morgan Stanley as recently as June 2021, still unwiped, still working fine, and still containing “at least 140,000 pieces of customer PII”.

As the SEC wryly notes, “the vast majority of the hard drives from the 2016 Data Center Decommissioning remain missing.”

We are certain that we may have encrypted something

In the second case, the retired devices were WAN (wide area network) caching servers used by branch offices to optimise internet bandwidth in order to accelerate access to common documents.

Ironically, these devices had an encrypt-any-stored-data-packets option that would have simplified decommissioning greatly.

After all, if you can show that you turned the encryption option on, and that you wiped all known copies of the decryption key, then data protection regulators in many countries will treat the encrypted data as wiped, too.

Data that’s considered undecryptable is no more meaningful than digital shredded cabbage.
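As a toy illustration only (a one-time pad, not the full-disk encryption a real device would use), here's why destroying the key is as good as destroying the data:

```python
import secrets

# Toy illustration of "crypto-shredding" using a one-time-pad XOR cipher.
# This is NOT production cryptography -- real devices use proper full-disk
# encryption. The point is simply that once every copy of the key is gone,
# the ciphertext left on the disk is indistinguishable from random noise.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

pii = b"Account 12345678: balance $9,999"
key = secrets.token_bytes(len(pii))       # key as long as the data

ciphertext = xor_bytes(pii, key)          # what actually sits on the disk
assert xor_bytes(ciphertext, key) == pii  # recoverable while the key exists

key = None  # "wipe all known copies of the decryption key"
# With the key gone, every possible plaintext of the same length is equally
# consistent with the ciphertext -- digital shredded cabbage.
```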

But Morgan Stanley apparently didn’t activate the encryption option until at least one year after the devices went into use…

…and the encryption only applied to new data subsequently written to the device, not to anything that was there before.

So all that Morgan Stanley can “prove”, for the 42 devices that are still out there somewhere, is that each device almost certainly contains at least some client PII that definitely isn’t encrypted.

What to do?

  • You can outsource your cybersecurity, but you can’t outsource your responsibility. Make sure that you comply with data protection regulations by keeping track of how your contractors are complying with them, too. Part of the SEC’s complaint against Morgan Stanley is that it should have been obvious that its chosen operator had deviated from the official plan, and thus that the company could easily have avoided becoming non-compliant and putting its clients at risk.
  • Full-device encryption can help you comply with data protection rules. Properly-scrambled data without the decryption key is effectively just random noise, so many data protection regulators treat “undecryptable” disks as if they’d been wiped, or never contained any data at all. But you need to be able to show both that you activated the encryption correctly in the first place, and that anyone who acquires the disk in future will be unable to acquire the decryption key.
  • If in doubt, go for device destruction, not for wiping-and-selling-on. There are sound environmental reasons for not blindly destroying and recycling every computing device that you retire from service, but there are diminishing returns from reusing old kit. Even large devices can be physically “shredded”, leaving their metals open to recovery but not their data. If you can’t usefully reuse it, don’t bother selling it on to someone else who might not ultimately dispose of it as soundly as you. Dispose of it responsibly yourself.
  • Mishandled PII can show up years after you lost it. Unlike garden waste in the compost bin or old bicycles dumped in the canal, misplaced data storage devices can show up in perfect working order, with all their original data intact, for years after you might have assumed they were lost without trace, or degraded beyond repair.

We can’t resist ending with the rhyme we often use to warn people about the risks of oversharing on social media, because it applies equally well to data stored by the biggest IT department.

If in doubt / Don’t give it out.



S3 Ep101: Uber and LastPass breaches – is 2FA all it’s cracked up to be? [Audio + Text]


With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


DOUG.  Uber hacked, more on the LastPass breach, and Firefox 105.

All that, and more, on the Naked Security Podcast.


Welcome to the podcast, everybody, I am Doug Aamoth.

With me, as always, is Paul Ducklin…

[DRAMATIC VOICE] …the host of Security SOS Week, a star-studded lineup of interviews with security experts running from 26-29 September 2022.

DUCK.  I like the sound of that, Doug. [LAUGHS]

DOUG.  Yes!

DUCK.  Please join us next week, folks.

It’s the last week of September.

We chose that because it’s the week leading up to Cybersecurity Awareness Month – that’s not a coincidence.

So, 26, 27, 28, and 29 September 2022.

Each day there’s a 30-minute interview with one of four different experts at Sophos.

We’ve got Fraser Howard, malware expert extraordinaire.

We’ve got Greg Rosenberg, who will explain the challenges of detecting that someone is in your network to start with, so you can head them off before it goes wrong.

There’s Peter Mackenzie from our Incident Response Team, who will tell you some fascinating, scary, but very educational stories about attackers that he’s been sent in to bat against.

And we wrap it all up with Craig Jones, who will tell you how to set up a SecOps team of your own.

Craig is the Senior Director of Security Operations *at Sophos itself*, Doug, so he does cybersecurity in a cybersecurity company.

He’s a lovely chap, and well worth listening to.

The URL is:

DOUG.  Can’t wait… I will be there!

Please join me, everyone – it will be a rollicking good time.

And speaking of a rollicking good time, it’s time for our This Week in Tech History segment.

Something that’s near and dear to my heart – this week, on 23 September 2008, the world’s first Android phone was released.

It was called the T-Mobile G1, and it featured a 3.2-inch flip-out screen that revealed a full hardware keyboard.

It also had a trackball and no standard headphone jack.

Early reviews were mixed, but hopeful.

Thanks to Android’s relatively open nature, G1 went on to sell a million handsets in six months, and at one point accounted for two-thirds of devices on T-Mobile’s 3G network.

I had one of those devices… it was one of my favorite phones of all time.

DUCK.  Aaaaah, trackballs on phones, eh?

Remember the BlackBerries?

It was the thing, wasn’t it… that trackball was really great.

DOUG.  It was good for scrolling.

DUCK.  Then they went, “Out with moving parts,” and it was an infrared sensor or something.

DOUG.  Yes.

DUCK.  How times change, Doug.

DOUG.  Yes… I miss it.

DUCK.  Like you, I liked those slide-out keyboards that the early phones had.

There’s something reassuring about actually hearing the click-click-click.

I think what I really liked about it is that when you popped out the keyboard, it didn’t obscure half the screen.

DOUG.  Exactly!

DUCK.  It wasn’t like half the email you’re reading disappeared when you wanted to reply.

Which I guess we’ve just got used to now… that’s the way of the world, I suppose.

DOUG.  That was a long time ago – simpler times.

Let’s talk about the Firefox 105 release.

What is new from a security standpoint here, Paul?

DUCK.  Fortunately, nothing that appears to be in the wild and nothing that rates a critical level of vulnerability.

But there are a few intriguing vulnerabilities.

One in which an individual web page that’s split into a bunch of separate IFRAMES could have security permission leakage between those components.

So, you might have a less-privileged frame from a subdomain on your site, for example, that isn’t supposed to be able to access the webcam (because this bug is about device permissions), yet it looks as though you might actually be able to do so.

And another similar sounding bug, where a subdomain of your website – a blog or a microsite or something like that – could actually mess with cookies in the parent site.

Oh, and a good old “stack buffer overflow when initialising graphics”… just a reminder that memory bugs are still a problem!

And of course, there’s the usual “memory safety bugs fixed in Firefox 105”, and in the Extended Support Release, which is 102.3.

Remember that in the Extended Support Release, the two numbers add together: 102+3 = 105.

So, the Extended Support Release is everything from the main version number, plus three updates worth of security fixes, but with the feature fixes held back.

So get it while it’s fresh.
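That numbering rule fits in a two-line helper, if you fancy checking it (the function name is ours):

```python
# Sketch of the Firefox ESR numbering rule described above: the level of
# security fixes is the ESR base version plus the number of point updates.

def esr_security_level(esr_version: str) -> int:
    base, updates = (int(x) for x in esr_version.split(".")[:2])
    return base + updates

print(esr_security_level("102.3"))  # -> 105, same security fixes as Firefox 105
```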

DOUG.  Please do!

Let’s move on to the story of the century, breathlessly reported: “Uber has been hacked.”

Looking a little closer at it… yes, it’s bad, it’s embarrassing, but it could have been much, much worse.

DUCK.  Yes, Uber has come out with a follow up report, and it seems that they’re suggesting that a hacking group like LAPSUS$ was responsible.

We’ve spoken about LAPSUS$ on the podcast before.

It’s a sort of a “let’s do it for the lulz” kind of thing, where it doesn’t look as though they’re actually after selling the data, although they might give it away for free or certainly embarrass the company with it.

As I say, the embarrassment comes from the apparent extent of the breach, fortunately, rather than its depth.

It seems like the attackers wanted to wander around through the network as quickly as possible, grabbing screenshots, saying, “Hey, look, here’s me in all sorts of things”…

…including Slack workspaces; Uber’s threat protection software (in old language, the anti-virus); an AWS console; company travel and expenses.

There was a screenshot that I saw published that showed who’d put in the biggest T&E [travel and expenses] claims in recent times. [LAUGHTER]

We laugh, but there are employee names in there, so that’s a bad look because it’s implying that the person could have got at employee data.

A vSphere virtual server console; Google workspaces; and the place where it seems the hacker actually put in the “UBER HAS BEEN HACKED” in capital letters that made the headlines (it even made the Naked Security headline).

Apparently that was posted to… (oh, dear, Doug [LAUGHS] – it’s not funny, yet it is)

…to Uber’s own bug bounty service, which is a very embarrassing look.

DOUG.  It feels like someone got a hold of an Uber polo shirt and put it on, and sweet-talked their way past the reception desk saying, “Oh, my badge isn’t working,” or something, got into the headquarters and then just started taking pictures of stuff.

Then they wrote on the bulletin board in the employee break room that they’ve been hacked.

It feels like this person could have been an Initial Access Broker [jargon term for hacker who steals passwords and sells them on] if they really wanted to.

They could have done so many additional bad things while they were in there.

But they just took pictures, and it was an embarrassment to Uber.

DUCK.  Yes.

I think the key detail that we could add to your analogy of “getting through the main security checkpoint” is that, on the way in, it also seems that they were able to reach into the super-secure secret cabinet where the access-all-areas passes are kept, and purloin one.


DUCK.  In other words, they found a password in a PowerShell script, on an openly visible network share…

…so they only needed low level access, and that allowed them to get into what was essentially the password manager for Uber’s networks.
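Hunting for that sort of blunder in your own network is scriptable. Here's a hypothetical, minimal scanner (the regex, file globs and example path are our own guesses, and a dedicated secret-scanning tool would do a far more thorough job):

```python
import re
from pathlib import Path

# Hypothetical minimal scanner for hard-coded credentials in scripts on a
# file share. The pattern below is a rough heuristic for illustration only;
# a real audit would use a purpose-built secret-scanning tool.

SECRET_PATTERN = re.compile(
    r"""(password|passwd|pwd|secret|apikey)\s*[=:]\s*['"]?\S+""",
    re.IGNORECASE,
)

def scan_for_secrets(root, globs=("*.ps1", "*.sh", "*.cfg")):
    """Return (path, line number, line) for every line that looks secret-ish."""
    hits = []
    for pattern in globs:
        for path in Path(root).rglob(pattern):
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), 1):
                if SECRET_PATTERN.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits

# Example (hypothetical share): scan_for_secrets(r"\\fileserver\it-scripts")
```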

DOUG.  Yes.

So it’s not as though this was unavoidable.

If we get to the advice in your article here, we have several things that Uber could have done differently.

Starting with: “Password managers and two-factor authentication are not a panacea.”

Just because you have those… that’s a security gate, but it’s not the end-all and be-all to keeping someone out.

DUCK.  Absolutely.

We’ll be talking about the LastPass breach in a minute, where it seems that the attackers didn’t actually need to bother with the 2FA side of things.

They just waited until the user that they were shadowing had gone through that exercise themselves, and then “borrowed their pass”.

So, indeed, 2FA doesn’t mean, “Oh, now I don’t have to worry about outsiders getting in.”

It does make that initial access a bit harder, and may make the social engineering more complicated and more likely to stand out.

But as you say, it’s an additional security gate.

DOUG.  And the next one, on the same note, is: “Once you’re in, you can’t just let people wander around.”

Security belongs everywhere in the network, not just at the edge.

DUCK.  Do I hear you saying the words Zero Trust, Douglas?

DOUG.  [LAUGHS] I was going to…

DUCK.  I know that sounds like a bit of a sales schpiel, and (surprise, surprise) Sophos has a Zero Trust Network Access product.

But we have that product because I think it’s something that’s demanded by the way that modern networks operate, so that you only get the access you actually need for the task in hand.

And, if you think about it, that doesn’t just benefit the company that’s dividing up its network.

It’s also good for users, because it means they can’t make unfortunate blunders even though they think they’re trying to do the right thing.

DOUG.  And we also talk about: “Regular cybersecurity measurement and testing”.

If you’re not able to do that in-house, consider hiring it out, because you need eyes on this around the clock.

DUCK.  Yes.

Two cliches, if I may, Doug?

DOUG.  You may. [LAUGHS]

DUCK.  Cybersecurity is a journey, not a destination.

You continually have to revisit to make sure [A] that you did correctly what you intended, and [B] that what you planned to do yesterday is still a valid and useful defence today.

And the idea of having somebody to help you review what’s happening, particularly when you think something bad has just happened, is that you don’t end up with security incidents becoming major distractions to your regular IT and Security Operations team.

Such distractions could even be deliberately seeded by the crooks as cover for the attack that they’ve got planned for later…

DOUG.  And then finally, we rounded it out with a couple of tips for your staff: “Set up a cyber security hotline for your staff to report incidents”, and trust them to help you out by reporting such incidents.

DUCK.  Yes.

A lot of people have decided that people are the biggest problem.

I think that’s the wrong way to look at it.

People are, in fact, one of the best ways that you can notice things that you didn’t expect.

It’s always the things that you didn’t expect that will catch you out, because if you had expected them, you would probably have prevented them in the first place!

Take the goal of turning everyone in your organisation into eyes and ears for your own security team.

DOUG.  Very good!

And we’ve got more Uber coverage.

Paul, you and Chester Wisniewski did a great minisode, S3 Ep100.5.

Pure thunder, if I may.

It’s called: Uber breach – An expert speaks.

You can hear Paul and Chet talking about this particular breach in a little bit more depth:

DUCK.  I think the most important thing that came out of that minisode of the podcast is what you alluded to earlier, “What if this had been an Initial Access Broker?”

In other words, if they went in specifically to get the passwords and got out quietly.

This kind of broad-but-shallow attack is actually surprisingly common, and in many cases, as you suggested, the problem is that you don’t realise it’s happened.

These crooks go out of their way to keep as quiet as possible, and the idea is they take all those access passwords, access tokens, information they’ve got…

…and sell it on the darkweb for other crooks who want to do very specific things in specific parts of your network.

DOUG.  All right, we will stay on the breach train, but we’re just going to switch cars on the train.

We’re going to walk across and be careful not to fall out onto the platform… but we’re going to get into the LastPass car.

They’ve got a post mortem out.

They still don’t know how the criminals got in, but at least they admitted it.

And it seems like it wasn’t necessarily for the lulz… thus similar but different to the Uber breach.

DUCK.  Indeed, it seems that this one, you might say, was deeper but narrower.

I think the report is a good example of how to provide information that’s actually useful after an attack.

As you say, they seem to have come out with information that makes it clear what they think happened.

They admitted to the “known unknowns”.

For example, they said, “It looks as though the crooks implanted malware that was able to masquerade as a developer who had already logged in with their password and 2FA code.”

They figured that out, but they don’t know how that implant happened in the first place, and they were decent enough to say they didn’t know.

And I think that’s quite good, rather than just going, “Oh, well, we’ve definitely fixed all the problems and this won’t happen again.”

If I were a LastPass user, it would make me more inclined to believe the other things that I have to rely on them to state…

…namely that the development network where their code was stolen is separate from their other networks, so that the attackers were not able to reach out and get things like customer data or password hashes.

And I’m also inclined to accept LastPass’s explanation (because they’re able to justify it) that even if the crooks *had* been able to jump from the developer network to the cloud storage parts of the network, and even if they had been able to run off with password hashes, it would have been very difficult for them to do anything with it.

Because LastPass simply doesn’t know your master password.

And they have a little diagram that explains why they believe that to be the case.

So, I think, if I were a LastPass user, that I would be inclined to believe them.

DOUG.  I *am* a LastPass user, and I found this to be more reassuring than not.

I wasn’t too worried about this before, and now I’m slightly less worried, and certainly not worried enough to dump it wholesale, change all my passwords, and that kind of stuff.

So I thought it was pretty good.

DUCK.  Indeed, one of the concerns that people came out with when we first reported on this breach is, “Well, the crooks got into the source code control system. If they were able to download all this intellectual property, what if they were able to upload some sneaky and unauthorised changes at the same time?”

Maybe they ran off with the code so they could sell the intellectual property, so industrial espionage was their primary vehicle…

…but what if there was a supply chain attack as well?

And LastPass did attempt to answer that question by saying, “We’ve reviewed source code changes and as far as we can see, the attackers were not able to, or did not, make any.”

Also, they explain how even if the crooks had made changes, there are checks and balances that prevent those changes just flowing automatically into the software that you might download, or that their own cloud services might use.

In other words, they have a physical separation between the developer network and the production network, and a full-and-proper code review and testing process is required each time for something essentially to jump across that gap.

I found that reassuring.

They’ve taken precautions that make it less likely that a supply chain attack in the development network could reach customers.

And they appear to have gone out of their way to verify that no such changes were made anyway.

DOUG.  Alright, there’s more on that, including a link to the LastPass briefing itself.

Let us now turn to one of our listeners!

Naked Security Podcast listener Jonas writes in…

…and this is an oldie-but-a-goodie.

I wouldn’t have believed this myself – I’ve heard this story before in different contexts, and I actually witnessed this as I was working as a computer technician back in the early 2000s.

This is a real problem, and it happens.

Jonas writes:

“In the early 1990s, our computer classroom had a lot of Apple Macintosh Classics with the 3.5-inch floppy drives.

In those days, when you needed to install things, you did so with a bunch of disks – Insert disk 1; Insert disk 2; and so on.

Well, one of my classmates took the installation instructions too literally.

She started with the first diskette, and after a while the installation process instructed her to ‘Please insert disk 2’, and she did.”

Just let that sit there for a little bit…

DUCK.  [LAUGHS A BIT TOO LOUDLY] We shouldn’t laugh, eh?

The instructions could have been clearer!

DOUG.  Jonas continues:

“When retelling the story, she said, ‘The second disk was a bit harder to get in, but I managed to force it in. But it still kept asking for the second disk.’

So she didn’t understand why it still needed disk 2 when she had already inserted disk 1 *and* disk 2… and it was quite hard to get the two disks out.

And even then, the floppy drive never worked again on that Mac anyway.

It needed to be replaced, but the whole class learned you needed to remove the previous disk before inserting the next one.”

DUCK.  Well, there you have it!

DOUG.  I always remember my days as a technician at CompUSA.

We had a counter.

People would lug their desktop computers in, put the desktop up on the counter, and tell us what was wrong.

I saw a customer come in and immediately saw a diskette wedged in the 3.5-inch floppy drive, and I thought, “Oh my God. I’ve heard this story. I’ve read about it on the internet and I’m finally experiencing it in real life.”

It didn’t get all the way in, but they managed to halfway jam a second diskette into the floppy drive, and they couldn’t get it out.

So we had to open the case of the computer, disconnect and unscrew the floppy drive, pull the floppy drive out of the front of the computer, and then it took a couple of us to dislodge that diskette.

And, of course, the disk drive had to be replaced…

Thank you very much, Jonas, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today.

Thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…

BOTH.  Stay secure!


Interested in cybersecurity? Join us for Security SOS Week 2022!

Sophos Security SOS Week is back by popular demand, from 26-29 September 2022!

Four top security experts are once again stepping up to share their expertise in a series of daily 30-minute interviews.

This year, for the first time, we’re filming the interviews, giving you the option to watch our experts in action.

Of course, you don’t have to watch if you don’t want to – there are no PowerPoint decks, no pie charts, no pew-pew graphs and no sales schpiel…

…so you can minimise the window if you like, and simply sit back (or carry on working) and listen:


The theme for 2022 is Prevention, Detection and Response, and we’ll be talking to:

  • Fraser Howard from Sophos Labs. Fraser is a world-renowned malware researcher whom we describe quite unashamedly here on Naked Security as a “specialist in everything”. Learn how to handle the complexities of contemporary malware prevention.
  • Greg Rosenberg from Sophos Sales Engineering. Learn how to take on the challenges of detecting in advance that something bad is about to happen to your network, so you can head it off at the pass.
  • Peter Mackenzie from Sophos Incident Response. Peter is a threat-hunter extraordinaire who’ll be sharing a range of fascinating, scary but educational tales of attackers that he himself has been sent in to bat against.
  • Craig Jones from Sophos Security Operations. On the final day, Craig will explain how you can put together prevention, detection and response to build a successful SecOps team of your own.

To sign up now, simply visit

There are two time slots – one perfect for Europe, Africa and the Middle East, and a second slot that aligns with the Americas.

Sophos Security SOS Week, 26-29 September 2022:

Four short but fascinating talks with world experts.

Learn about protection, detection and response,
and how to set up a successful SecOps team of your own:

LastPass source code breach – incident response report released

If the big story of this month looks set to be Uber’s data breach, where a hacker was allegedly able to roam widely through the ride-sharing company’s network…

…the big story from last month was the LastPass breach, in which an attacker apparently got access to just one part of the LastPass network, but was able to make off with the company’s proprietary source code.

Fortunately for Uber, their attacker seemed determined to make a big, quick PR splash by grabbing screenshots, spreading them liberally online, and taunting the company with shouty messages such as UBER HAS BEEN HACKED, right in its own Slack and bug bounty forums:

The attacker or attackers at LastPass, however, seem to have operated more stealthily, apparently tricking a LastPass developer into installing malware that the cybercriminals then used to hitch a ride into the company’s source code repository:

LastPass has now published an official follow-up report on the incident, based on what it has been able to figure out about the attack and the attackers in the aftermath of the intrusion.

We think that the LastPass article is worth reading even if you aren’t a LastPass user, because it’s a reminder that a good incident response report is as useful for what it admits you were unable to figure out as for what you were able to.

What we now know

The boldface sentences below provide an outline of what LastPass is saying:

  • The attacker “gained access to the [d]evelopment environment using a developer’s compromised endpoint.” We’re assuming this was down to the attacker implanting system-snooping malware on a programmer’s computer.
  • The trick used to implant the malware couldn’t be determined. That’s disappointing, because knowing how your last attack was actually carried out makes it easier to reassure customers that your revised prevention, detection and response procedures are likely to block it next time. Many potential attack vectors spring to mind, including: unpatched local software, “shadow IT” leading to an insecure local configuration, a phishing click-through blunder, unsafe downloading habits, treachery in the source code supply chain relied on by the coder concerned, or a booby-trapped email attachment opened in error. Hats off to LastPass for admitting to what amounts to a “known unknown”.
  • The attacker “utilised their persistent access to impersonate the developer once the developer had successfully authenticated using multi-factor authentication.” We assume this means that the hacker never needed to acquire the victim’s password or 2FA code, but simply used a cookie-stealing attack, or extracted the developer’s authentication token from genuine network traffic (or from the RAM of the victim’s computer) in order to piggy-back on the programmer’s usual access:
  • LastPass didn’t notice the intrusion immediately, but did detect and expel the attacker within four days. As we noted in a recent article about the risks of timestamp ambiguity in system logs, being able to determine the precise order in which events occurred during an attack is a vital part of incident response:
  • LastPass keeps its development and production networks physically separate. This is a good cybersecurity practice because it prevents an attack on the development network (where things are inevitably in an ongoing state of change and experimentation) from turning into an immediate compromise of the official software that’s directly available to customers and the rest of the business.
  • LastPass doesn’t keep any customer data in its development environment. Again, this is good practice given that developers are, as the job name suggests, generally working on software that has yet to go through a full-on security review and quality assurance process. This separation also makes it believable for LastPass to claim that no password vault data (which would have been encrypted with users’ private keys anyway) could have been exposed, which is a stronger claim than simply saying “we couldn’t find any evidence that it was exposed.” Keeping real-world data out of your development network also prevents well-meaning coders from inadvertently grabbing data that’s meant to be under regulatory protection and using it for unofficial test purposes.
  • Although source code was stolen, no unauthorised code changes were left behind by the attacker. Of course, we only have LastPass’s own claim to go on, but given the style and tone of the rest of the incident report, we can see no reason not to take the company at its word.
  • Source code moving from the development network into production “can only happen after the completion of rigorous code review, testing, and validation processes”. This makes it believable for LastPass to claim that no modified or poisoned source code would have reached customers or the rest of the business, even if the attacker had managed to implant rogue code in the version control system.
  • LastPass never stores or even knows its users’ private decryption keys. In other words, even if the attacker had made off with password data, it would have ended up as just so much shredded digital cabbage. (LastPass also provides a public explanation of how it secures password vault data against offline cracking, including using client-side PBKDF2-HMAC-SHA256 for salting-hashing-and-stretching your offline password with 100,100 iterations, thus making password cracking attempts very much harder even if attackers make off with locally-stored copies of your password vault.)
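The salt-hash-and-stretch approach described above can be illustrated with nothing more than Python’s standard library. The algorithm and iteration count (PBKDF2-HMAC-SHA256, 100,100 iterations) follow LastPass’s published description; the password and salt below are, of course, made-up examples:

```python
import hashlib, hmac, os

def stretch_password(password, salt, iterations=100_100):
    # PBKDF2-HMAC-SHA256: each offline guess now costs ~100,100 hash
    # computations instead of one, slowing down cracking attempts.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)                    # a random per-user salt defeats precomputed tables
key1 = stretch_password("correct horse battery staple", salt)
key2 = stretch_password("correct horse battery staple", salt)

assert hmac.compare_digest(key1, key2)   # same password + same salt -> same derived key
assert len(key1) == 32                   # SHA-256 output: a 256-bit derived key
```

Note that the derived key never reveals the password itself: an attacker who steals a locally-stored vault still has to grind through the full iteration count for every guess.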

What to do?

We think it’s reasonable to say that our early assumptions were correct, and that although this is an embarrassing incident for LastPass, and might reveal trade secrets that the company considered part of its shareholder value…

…this hack can be thought of as LastPass’s own problem to deal with, because no customer passwords were reached, let alone cracked, in this attack:

This attack, and LastPass’s own incident report, are also a good reminder that “divide and conquer”, also known by the jargon term Zero Trust, is an important part of contemporary cyberdefence.

As Sophos expert Chester Wisniewski explains in his analysis of the recent Uber hack, there’s a lot more at stake if crooks who get access to some of your network can roam around wherever they like in the hope of getting access to all of it:


S3 Ep100.5: Uber breach – an expert speaks [Audio + Text]


With Paul Ducklin and Chester Wisniewski

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.



DUCK.  Hello, everybody.

Welcome to this special mini-episode of the Naked Security podcast.

My name is Paul Ducklin, and I’m joined today by my friend and colleague Chester Wisniewski.

Chester, I thought we should say something about what has turned into the big story of the week… it’ll probably be the big story of the month!

I’ll just read you the headline I used on Naked Security:

“UBER HAS BEEN HACKED, boasts hacker – how to stop it happening to you.”


Tell us all about it….

CHET.  Well, I can confirm that the cars are still driving.

I’m coming to you from Vancouver, I’m downtown, I’m looking out the window, and there’s actually an Uber sitting outside the window…

DUCK.  It hasn’t been there all day?

CHET.  No, it hasn’t. [LAUGHS]

If you press the button to hail a car inside the app, rest assured: at the moment, it appears that you will actually have someone come and give you a ride.

But it’s not necessarily so assured, if you’re an employee at Uber, that you’re going to be doing much of anything for the next few days, considering the impact on their systems.

We don’t know a lot of details, actually, Duck, of exactly what happened.

But, at a very high level, the consensus appears to be that there was some social engineering of an Uber employee that allowed someone to get a foothold inside of Uber’s network.

And they were able to move laterally, as we say, or pivot, once they got inside in order to find some administrative credentials that ultimately led them to have the keys to the Uber kingdom.

DUCK.  So this doesn’t look like a traditional data stealing, or nation state, or ransomware attack, does it?

CHET.  No.

That’s not to say someone else may not also have been in their network using similar techniques – you never really know.

In fact, when our Rapid Response team responds to incidents, we often find that there’s been more than one threat actor inside a network, because they exploited similar methods of access.

DUCK.  Yes… we even had a story of two ransomware crooks, basically unknown to each other, who got in at the same time.

So, some of the files were encrypted with ransomware-A-then-ransomware-B, and some with ransomware-B-followed-by-ransomware-A.

That was an unholy mess…

CHET.  Well, that’s old news, Duck. [LAUGHS]

We’ve since published another one where *three* different ransomwares were on the same network.

DUCK.  Oh, dear! [BIG LAUGH] I keep laughing at this, but that’s wrong. [LAUGHS]

CHET.  It’s not uncommon for multiple threat actors to be in, because, as you say, if one person is able to discover a flaw in your approach to defending your network, there’s nothing to suggest that other people may not have discovered the same flaw.

But in this case, I think you’re right, in that it seems to be “for the lulz”, if you will.

I mean, the person who did it was mostly collecting trophies as they bounced through the network – in the form of screenshots of all these different tools and utilities and programs that were in use around Uber – and posting them publicly, I guess for the street cred.

DUCK.  Now, in an attack done by somebody who *didn’t* want bragging rights, that attacker could have been an IAB, an initial access broker, couldn’t they?

In which case, they wouldn’t have made a big noise about it.

They would have collected all the passwords and then got out and said, “Who would like to buy them?”

CHET.  Yes, that’s super-super dangerous!

As bad as it seems to be for Uber right now, in particular for someone on Uber’s PR or internal security teams, it’s actually the best possible outcome…

…which is just that the outcome of this is going to be embarrassment, probably some fines for losing sensitive employee information, that kind of thing.

But the truth of the matter is for almost everyone else that this type of an attack victimises, the end result ends up being ransomware or multiple ransomwares, combined with cryptominers and other kinds of data theft.

That is far, far more costly to the organisation than simply being embarrassed.

DUCK.  So this idea of crooks getting in and being able to wander around at will and pick and choose where they go…

…is sadly not unusual.

CHET.  It really emphasises the importance of actively looking for problems, as opposed to waiting for alerts.

Clearly, this person was able to breach Uber security without triggering any alerts initially, which allowed them the time to wander around.

That’s why threat hunting, as the terminology goes, is so critical these days.

Because the closer to minute-zero or day-zero that you can detect suspicious activity – people poking around in file shares, suddenly logging into a whole bunch of systems one after another, or lots of RDP connections flying around the network from accounts that are not normally associated with that activity…

…those types of suspicious things can help you limit the amount of damage that person can cause, by limiting the amount of time they have to unravel any other security mistakes you may have made that allowed them to gain access to those administrative credentials.

This is a thing that a lot of teams are really struggling with: how to see these legitimate tools being abused?

That’s a real challenge here.

Because, in this example, it sounds like an Uber employee was tricked into inviting someone in, in a disguise that looked like them in the end.

You’ve now got a legitimate employee’s account, one that accidentally invited a criminal into their computer, running around doing things that employee is probably not normally associated with.

So that really has to be part of your monitoring and threat hunting: knowing what normal really is, so that you can detect “anomalous normal”.

Because they didn’t bring malicious tools with them – they’re using tools that are already there.

We know they looked at PowerShell scripts, that kind of thing – the stuff you probably already have.

What’s unusual is this person interacting with that PowerShell, or this person interacting with that RDP.

And those are things that are much harder to watch out for than simply waiting for an alert to pop up in your dashboard.

DUCK.  So, Chester, what is your advice for companies that don’t want to find themselves in Uber’s position?

Although this attack has understandably got a massive amount of publicity, because of the screenshots that are circulating, because it seems to be, “Wow, the crooks got absolutely everywhere”…

…in fact, it’s not a unique story as far as data breaches go.

CHET.  You asked about the advice, what would I tell an organisation?

And I have to think back to a good friend of mine who was a CISO of a major university in the United States about ten years ago.

I asked him what his security strategy was and he said: “It’s very simple. Assumption of breach.”

I assume I’m breached, and that people are in my network that I don’t want in my network.

So I have to build everything with the assumption that somebody’s already in here who shouldn’t be, and ask, “Do I have the protection in place even though the call is coming from inside the house?”

Today we have a buzzword for that: Zero Trust, which most of us are sick of saying already. [LAUGHS]

But that is the approach: assumption of breach; zero trust.

You should not have the freedom to simply roam around because you put on a disguise that appears to be an employee of the organisation.

DUCK.  And that’s really the key of Zero Trust, isn’t it?

It doesn’t mean, “You must never trust anybody to do anything.”

It’s kind of a metaphor for saying, “Assume nothing”, and, “Don’t authorise people to do more than they need to do for the task in hand.”

CHET.  Precisely.

On the assumption that your attackers don’t get as much joy from outing the fact that you were hacked as happened in this case…

…you probably want to make sure you have a good way for staff members to report anomalies when something doesn’t seem right, to make sure that they can give a heads-up to your security team.

Because, talking about data breach dwell times from our Active Adversary Playbook, the criminals are most often on your network for at least ten days:

So you’ve got a solid week-to-ten-days, typically, where if you just have some eagle eyes that are spotting things, you’ve got a real good chance at shutting it down before the worst happens.

DUCK.  Indeed, because if you think about how a typical phishing attack works, it’s very rare that the crooks will succeed on the first attempt.

And if they don’t succeed on the first attempt, they don’t just pack up their bags and wander off.

They try the next person, and the next person, and the next person.

If they’re only going to succeed when they try the attack on the 50th person, then if any of the previous 49 spotted it and said something, you could have intervened and fixed the problem.

CHET.  Absolutely – that’s critical!

And you talked about tricking people into giving away 2FA tokens.

That’s an important point here – there was multi-factor authentication at Uber, but the person seems to have been convinced to bypass it.

And we don’t know what that methodology was, but most multi-factor methods, unfortunately, do have the ability to be bypassed.

All of us are familiar with the time-based tokens, where you get the six digits on the screen and you’re asked to put those six digits into the app to authenticate.

Of course, there’s nothing stopping you from giving the six digits to the wrong person so that they can authenticate.

So, two-factor authentication is not an all-purpose medicine that cures all disease.

It is simply a speed bump that is another step along the path to becoming more secure.

DUCK.  A well-determined crook who’s got the time and the patience to keep on trying may eventually get in.

And like you say, your goal is to minimise the time they have to maximise the return on the fact that they got in the first place…

CHET.  And that monitoring needs to happen all the time.

Companies like Uber are large enough to have their own 24/7 security operations centre to monitor things, though we’re not quite sure what happened here, and how long this person was in, and why they weren’t stopped.

But most organisations are not necessarily in a position to be able to do that in-house.

It’s super-handy to have external resources available that can monitor – *continuously* monitor – for this malicious behaviour, shortening even further the amount of time that the malicious activity is happening.

For folks that maybe have regular IT responsibilities and other work to do, it can be quite hard to see these legitimate tools being used, and spot one particular pattern of them being used as a malicious thing…

DUCK.  The buzzword that you’re talking about there is what we know as MDR, short for Managed Detection and Response, where you get a bunch of experts either to do it for you or to help you.

And I think there are still quite a lot of people out there who imagine, “If I’m seen to do that, doesn’t it look like I’ve abrogated my responsibility? Isn’t it an admission that I absolutely don’t know what I’m doing?”

And it isn’t, is it?

In fact, you could argue it’s actually doing things in a more controlled way, because you’re choosing people to help you look after your network *who do that and only that* for a living.

And that means that your regular IT team, and even your own security team… in the event of an emergency, they can actually carry on doing all the other things that need doing anyway, even if you’re under attack.

CHET.  Absolutely.

I guess the last thought I have is this…

Don’t perceive a brand like Uber being hacked as meaning that it’s impossible for you to defend yourself.

Big company names are almost trophy hunts for people like the person involved in this particular hack.

And just because a big company maybe didn’t have the security they should doesn’t mean you can’t!

There was a lot of defeatist chatter amongst a lot of organisations I talked to after some previous big hacks, like Target, and Sony, and some of these hacks that we had in the news ten years ago.

And people were like, “Aaargh… if with all the resources of Target they can’t defend themselves, what hope is there for me?”

And I don’t really think that’s true at all.

In most of these cases, they were targeted because they were very large organisations, and there was a very small hole in their approach that somebody was able to get in through.

That doesn’t mean that you don’t have a chance at defending yourself.

This was social engineering, followed by some questionable practices of storing passwords in PowerShell files.

These are things that you can very easily watch for, and educate your employees on, to ensure that you’re not making the same mistakes.

Just because Uber can’t do it doesn’t mean you can’t!

DUCK.  Indeed – I think that’s very well put, Chester.

Do you mind if I end with one of my traditional cliches?

(The thing about cliches is that they generally become cliches by being true and useful.)

After incidents like this: “Those who cannot remember history are condemned to repeat it – don’t be that person!”

Chester, thank you so much for taking time out of your busy schedule, because I know you actually have an online talk to do tonight.

So, thank you so much for that.

And let us finish in our customary way by saying, “Until next time, stay secure.”

