
S3 Ep101: Uber and LastPass breaches – is 2FA all it’s cracked up to be? [Audio + Text]

LISTEN NOW

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Uber hacked, more on the LastPass breach, and Firefox 105.

All that, and more, on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody, I am Doug Aamoth.

With me, as always, is Paul Ducklin…

[DRAMATIC VOICE] …the host of Security SOS Week, a star-studded lineup of interviews with security experts running from 26-29 September 2022.


DUCK.  I like the sound of that, Doug. [LAUGHS]


DOUG.  Yes!


DUCK.  Please join us next week, folks.

It’s the last week of September.

We chose that because it’s the week leading up to Cybersecurity Awareness Month – that’s not a coincidence.

So, 26, 27, 28, and 29 September 2022.

Each day there’s a 30-minute interview with one of four different experts at Sophos.

We’ve got Fraser Howard, malware expert extraordinaire.

We’ve got Greg Rosenberg, who will explain the challenges of detecting that someone is in your network to start with, so you can head them off before it goes wrong.

There’s Peter Mackenzie from our Incident Response Team, who will tell you some fascinating, scary, but very educational stories about attackers that he’s been sent in to bat against.

And we wrap it all up with Craig Jones, who will tell you how to set up a SecOps team of your own.

Craig is the Senior Director of Security Operations *at Sophos itself*, Doug, so he does cybersecurity in a cybersecurity company.

He’s a lovely chap, and well worth listening to.

The URL is: https://sophos.com/sosweek


DOUG.  Can’t wait… I will be there!

Please join me, everyone – it will be a rollicking good time.

And speaking of a rollicking good time, it’s time for our This Week in Tech History segment.

Something that’s near and dear to my heart – this week, on 23 September 2008, the world’s first Android phone was released.

It was called the T-Mobile G1, and it featured a 3.2-inch flip-out screen that revealed a full hardware keyboard.

It also had a trackball and no standard headphone jack.

Early reviews were mixed, but hopeful.

Thanks to Android’s relatively open nature, the G1 went on to sell a million handsets in six months, and at one point accounted for two-thirds of devices on T-Mobile’s 3G network.

I had one of those devices… it was one of my favorite phones of all time.


DUCK.  Aaaaah, trackballs on phones, eh?

Remember the BlackBerries?

It was the thing, wasn’t it… that trackball was really great.


DOUG.  It was good for scrolling.


DUCK.  Then they went, “Out with moving parts,” and it was an infrared sensor or something.


DOUG.  Yes.


DUCK.  How times change, Doug.


DOUG.  Yes… I miss it.


DUCK.  Like you, I liked those slide-out keyboards that the early phones had.

There’s something reassuring about actually hearing the click-click-click.

I think what I really liked about it is that when you popped out the keyboard, it didn’t obscure half the screen.


DOUG.  Exactly!


DUCK.  It wasn’t like half the email you’re reading disappeared when you wanted to reply.

Which I guess we’ve just got used to now… that’s the way of the world, I suppose.


DOUG.  That was a long time ago – simpler times.

Let’s talk about the Firefox 105 release.

What is new from a security standpoint here, Paul?


DUCK.  Fortunately, nothing that appears to be in the wild and nothing that rates a critical level of vulnerability.

But there are a few intriguing vulnerabilities.

One in which an individual web page that’s split into a bunch of separate IFRAMES could have security permission leakage between those components.

So, you might have a less-privileged frame from a subdomain on your site, for example, that isn’t supposed to be able to access the webcam (because this bug is about device permissions), yet it looks as though you might actually be able to do so.

And another similar sounding bug, where a subdomain of your website – a blog or a microsite or something like that – could actually mess with cookies in the parent site.

Oh, and a good old “stack buffer overflow when initialising graphics”… just a reminder that memory bugs are still a problem!

And of course, there’s the usual “memory safety bugs fixed in Firefox 105”, and in the Extended Support Release, which is 102.3.

Remember that in the Extended Support Release, the two numbers add together: 102+3 = 105.

So, the Extended Support Release is everything from the main version number, plus three updates worth of security fixes, but with the feature fixes held back.

So get it while it’s fresh.
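
(If you like seeing that arithmetic written down, here’s a tiny Python sketch of the ESR numbering rule described above – our own illustration, not anything from Mozilla’s release tooling.)

```python
# A minimal sketch of the Firefox ESR version arithmetic described above:
# the ESR base version plus the number of point updates gives the mainline
# release that contains the same security fixes. Illustration only.

def esr_security_equivalent(esr_base: int, esr_point: int) -> int:
    return esr_base + esr_point

# Example from the podcast: ESR 102.3 carries the same security fixes
# as mainline Firefox 105.
assert esr_security_equivalent(102, 3) == 105
```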


DOUG.  Please do!

Let’s move on to the story of the century, breathlessly reported: “Uber has been hacked.”

Looking a little closer at it… yes, it’s bad, it’s embarrassing, but it could have been much, much worse.


DUCK.  Yes, Uber has come out with a follow up report, and it seems that they’re suggesting that a hacking group like LAPSUS$ was responsible.

We’ve spoken about LAPSUS$ on the podcast before.

It’s a sort of a “let’s do it for the lulz” kind of thing, where it doesn’t look as though they’re actually after selling the data, although they might give it away for free or certainly embarrass the company with it.

As I say, the embarrassment comes from the apparent extent of the breach, fortunately, rather than its depth.

It seems like the attackers wanted to wander around through the network as quickly as possible, grabbing screenshots, saying, “Hey, look, here’s me in all sorts of things”…

…including Slack workspaces; Uber’s threat protection software (in old language, the anti-virus); an AWS console; company travel and expenses.

There was a screenshot that I saw published that showed who’d put in the biggest T&E [travel and expenses] claims in recent times. [LAUGHTER]

We laugh, but there are employee names in there, so that’s a bad look because it’s implying that the person could have got at employee data.

A vSphere virtual server console; Google Workspaces; and the place where it seems the hacker actually put in the “UBER HAS BEEN HACKED” in capital letters that made the headlines (it even made the Naked Security headline).

Apparently that was posted to… (oh, dear, Doug [LAUGHS] – it’s not funny, yet it is)

…to Uber’s own bug bounty service, which is a very embarrassing look.


DOUG.  It feels like someone got a hold of an Uber polo shirt and put it on, and sweet-talked their way past the reception desk saying, “Oh, my badge isn’t working,” or something, got into the headquarters and then just started taking pictures of stuff.

Then they wrote on the bulletin board in the employee break room that they’ve been hacked.

It feels like this person could have been an Initial Access Broker [jargon term for hacker who steals passwords and sells them on] if they really wanted to.

They could have done so many additional bad things while they were in there.

But they just took pictures, and it was an embarrassment to Uber.


DUCK.  Yes.

I think the key detail that we could add to your analogy of “getting through the main security checkpoint” is that, on the way in, it also seems that they were able to reach into the super-secure secret cabinet where the access-all-areas passes are kept, and purloin one.


DOUG.  Yes. [LAUGHS]


DUCK.  In other words, they found a password in a PowerShell script, on an openly visible network share…

…so they only needed low level access, and that allowed them to get into what was essentially the password manager for Uber’s networks.


DOUG.  Yes.

So it’s not as though this was unavoidable.

If we get to the advice in your article here, we have several things that Uber could have done differently.

Starting with: “Password managers and two-factor authentication are not a panacea.”

Just because you have those… that’s a security gate, but it’s not the be-all and end-all of keeping someone out.


DUCK.  Absolutely.

We’ll be talking about the LastPass breach in a minute, where it seems that the attackers didn’t actually need to bother with the 2FA side of things.

They just waited until the user that they were shadowing had gone through that exercise themselves, and then “borrowed their pass”.

So, indeed, 2FA doesn’t mean, “Oh, now I don’t have to worry about outsiders getting in.”

It does make that initial access a bit harder, and may make the social engineering more complicated and more likely to stand out.

But as you say, it’s an additional security gate.


DOUG.  And the next one, on the same note, is: “Once you’re in, you can’t just let people wander around.”

Security belongs everywhere in the network, not just at the edge.


DUCK.  Do I hear you saying the words Zero Trust, Douglas?


DOUG.  [LAUGHS] I was going to…


DUCK.  I know that sounds like a bit of a sales schpiel, and (surprise, surprise) Sophos has a Zero Trust Network Access product.

But we have that product because I think it’s something that’s demanded by the way that modern networks operate, so that you only get the access you actually need for the task in hand.

And, if you think about it, that doesn’t just benefit the company that’s dividing up its network.

It’s also good for users, because it means they can’t make unfortunate blunders even though they think they’re trying to do the right thing.


DOUG.  And we also talk about: “Regular cybersecurity measurement and testing”.

If you’re not able to do that in-house, consider hiring it out, because you need eyes on this around the clock.


DUCK.  Yes.

Two cliches, if I may, Doug?


DOUG.  You may. [LAUGHS]


DUCK.  Cybersecurity is a journey, not a destination.

You continually have to revisit to make sure [A] that you did correctly what you intended, and [B] that what you planned to do yesterday is still a valid and useful defence today.

And the idea of having somebody to help you review what’s happening, particularly when you think something bad has just happened, is that it means security incidents don’t end up being major distractions for your regular IT and Security Operations team.

Distractions could actually be deliberately seeded by the crooks to act as cover for the attack that they’ve got planned for later…


DOUG.  And then finally, we round it out with a couple of tips for your staff: “Set up a cybersecurity hotline for your staff to report incidents”, and trust them to help you out by reporting such incidents.


DUCK.  Yes.

A lot of people have decided that people are the biggest problem.

I think that’s the wrong way to look at it.

People are, in fact, one of the best ways that you can notice things that you didn’t expect.

It’s always the things that you didn’t expect that will catch you out, because if you had expected them, you would probably have prevented them in the first place!

Take the goal of turning everyone in your organisation into eyes and ears for your own security team.


DOUG.  Very good!

And we’ve got more Uber coverage.

Paul, you and Chester Wisniewski did a great minisode, S3 Ep100.5.

Pure thunder, if I may.

It’s called: Uber breach – An expert speaks.

You can hear Paul and Chet talking about this particular breach in a little bit more depth:


DUCK.  I think the most important thing that came out of that minisode of the podcast is what you alluded to earlier, “What if this had been an Initial Access Broker?”

In other words, if they went in specifically to get the passwords and got out quietly.

This kind of broad-but-shallow attack is actually surprisingly common, and in many cases, as you suggested, the problem is that you don’t realise it’s happened.

These crooks go out of their way to keep as quiet as possible, and the idea is they take all those access passwords, access tokens, information they’ve got…

…and sell it on the darkweb for other crooks who want to do very specific things in specific parts of your network.


DOUG.  All right, we will stay on the breach train, but we’re just going to switch cars on the train.

We’re going to walk across and be careful not to fall out onto the platform… but we’re going to get into the LastPass car.

They’ve got a post mortem out.

They still don’t know how the criminals got in, but at least they admitted it.

And it seems like it wasn’t necessarily for the lulz… thus similar but different to the Uber breach.


DUCK.  Indeed, it seems that this one, you might say, was deeper but narrower.

I think the report is a good example of how to provide information that’s actually useful after an attack.

As you say, they seem to have come out with information that makes it clear what they think happened.

They admitted to the “known unknowns”.

For example, they said, “It looks as though the crooks implanted malware that was able to masquerade as a developer who had already logged in with their password and 2FA code.”

They figured that out, but they don’t know how that implant happened in the first place, and they were decent enough to say they didn’t know.

And I think that’s quite good, rather than just going, “Oh, well, we’ve definitely fixed all the problems and this won’t happen again.”

If I were a LastPass user, it would make me more inclined to believe the other things that I have to rely on them to state…

…namely that the development network where their code was stolen is separate from their other networks, so that the attackers were not able to reach out and get things like customer data or password hashes.

And I’m also inclined to accept LastPass’s explanation (because they’re able to justify it) that even if the crooks *had* been able to jump from the developer network to the cloud storage parts of the network, and even if they had been able to run off with password hashes, it would have been very difficult for them to do anything with it.

Because LastPass simply doesn’t know your master password.

And they have a little diagram that explains why they believe that to be the case.

So, I think, if I were a LastPass user, that I would be inclined to believe them.


DOUG.  I *am* a LastPass user, and I found this to be more reassuring than not.

I wasn’t too worried about this before, and now I’m slightly less worried, and certainly not worried enough to dump it wholesale, change all my passwords, and that kind of stuff.

So I thought it was pretty good.


DUCK.  Indeed, one of the concerns that people came out with when we first reported on this breach is, “Well, the crooks got into the source code control system. If they were able to download all this intellectual property, what if they were able to upload some sneaky and unauthorised changes at the same time?”

Maybe they ran off with the code so they could sell the intellectual property, so industrial espionage was their primary vehicle…

…but what if there was a supply chain attack as well?

And LastPass did attempt to answer that question by saying, “We’ve reviewed source code changes and as far as we can see, the attackers were not able to, or did not, make any.”

Also, they explain how even if the crooks had made changes, there are checks and balances that prevent those changes just flowing automatically into the software that you might download, or that their own cloud services might use.

In other words, they have a physical separation between the developer network and the production network, and a full-and-proper code review and testing process is required each time for something essentially to jump across that gap.

I found that reassuring.

They’ve taken precautions that make it less likely that a supply chain attack in the development network could reach customers.

And they appear to have gone out of their way to verify that no such changes were made anyway.


DOUG.  Alright, there’s more on that on nakedsecurity.sophos.com, including a link to the LastPass briefing itself.

Let us now turn to one of our listeners!

Naked Security Podcast listener Jonas writes in…

…and this is an oldie-but-a-goodie.

I wouldn’t have believed this myself – I’ve heard this story before in different contexts, and I actually witnessed this as I was working as a computer technician back in the early 2000s.

This is a real problem, and it happens.

Jonas writes:

“In the early 1990s, our computer classroom had a lot of Apple Macintosh Classics with the 3.5-inch floppy drives.

In those days, when you needed to install things, you did so with a bunch of disks – Insert disk 1; Insert disk 2; and so on.

Well, one of my classmates took the installation instructions too literally.

She started with the first diskette, and after a while the installation process instructed her to ‘Please insert disk 2’, and she did.”

Just let that sit there for a little bit…


DUCK.  [LAUGHS A BIT TOO LOUDLY] We shouldn’t laugh, eh?

The instructions could have been clearer!


DOUG.  Jonas continues:

“When retelling the story, she said, ‘The second disk was a bit harder to get in, but I managed to force it in. But it still kept asking for the second disk.’

So she didn’t understand why it still needed disk 2 when she had already inserted disk 1 *and* disk 2… and it was quite hard to get the two disks out.

And even then, the floppy drive never worked again on that Mac anyway.

It needed to be replaced, but the whole class learned you needed to remove the previous disk before inserting the next one.”


DUCK.  Well, there you have it!


DOUG.  I always remember my days as a technician at CompUSA.

We had a counter.

People would lug their desktop computers in, put the desktop up on the counter, and tell us what was wrong.

I saw a customer come in and immediately saw a diskette wedged in the 3.5-inch floppy drive, and I thought, “Oh my God. I’ve heard this story. I’ve read about it on the internet and I’m finally experiencing it in real life.”

It didn’t get all the way in, but they managed to halfway jam a second diskette into the floppy drive, and they couldn’t get it out.

So we had to open the case of the computer, disconnect and unscrew the floppy drive, pull the floppy drive out of the front of the computer, and then it took a couple of us to dislodge that diskette.

And, of course, the disk drive had to be replaced…

Thank you very much, Jonas, for sending that in.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today.

Thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Interested in cybersecurity? Join us for Security SOS Week 2022!

Sophos Security SOS Week is back by popular demand, from 26-29 September 2022!

Four top security experts are once again stepping up to share their expertise in a series of daily 30-minute interviews.

This year, for the first time, we’re filming the interviews, giving you the option to watch our experts in action.

Of course, you don’t have to watch if you don’t want to – there are no PowerPoint decks, no pie charts, no pew-pew graphs and no sales schpiel…

…so you can minimise the window if you like, and simply sit back (or carry on working) and listen:

[embedded content]

The theme for 2022 is Prevention, Detection and Response, and we’ll be talking to:

  • Fraser Howard from SophosLabs. Fraser is a world-renowned malware researcher whom we describe quite unashamedly here on Naked Security as a “specialist in everything”. Learn how to handle the complexities of contemporary malware prevention.
  • Greg Rosenberg from Sophos Sales Engineering. Learn how to take on the challenges of detecting in advance that something bad is about to happen to your network, so you can head it off at the pass.
  • Peter Mackenzie from Sophos Incident Response. Peter is a threat-hunter extraordinaire who’ll be sharing a range of fascinating, scary but educational tales of attackers that he himself has been sent in to bat against.
  • Craig Jones from Sophos Security Operations. On the final day, Craig will explain how you can put together prevention, detection and response to build a successful SecOps team of your own.

To sign up now, simply visit https://sophos.com/sosweek

There are two time slots – one perfect for Europe, Africa and the Middle East, and a second slot that aligns with the Americas.


Sophos Security SOS Week, 26-29 September 2022:

Four short but fascinating talks with world experts.

Learn about protection, detection and response,
and how to set up a successful SecOps team of your own:


LastPass source code breach – incident response report released

If the big story of this month looks set to be Uber’s data breach, where a hacker was allegedly able to roam widely through the ride-sharing company’s network…

…the big story from last month was the LastPass breach, in which an attacker apparently got access to just one part of the LastPass network, but was able to make off with the company’s proprietary source code.

Fortunately for Uber, their attacker seemed determined to make a big, quick PR splash by grabbing screenshots, spreading them liberally online, and taunting the company with shouty messages such as UBER HAS BEEN HACKED, right in its own Slack and bug bounty forums:

The attacker or attackers at LastPass, however, seem to have operated more stealthily, apparently tricking a LastPass developer into installing malware that the cybercriminals then used to hitch a ride into the company’s source code repository:

LastPass has now published an official follow-up report on the incident, based on what it has been able to figure out about the attack and the attackers in the aftermath of the intrusion.

We think that the LastPass article is worth reading even if you aren’t a LastPass user, because we think it’s a reminder that a good incident response report is as useful for what it admits you were unable to figure out as for what you were able to.

What we now know

The boldface sentences below provide an outline of what LastPass is saying:

  • The attacker “gained access to the [d]evelopment environment using a developer’s compromised endpoint.” We’re assuming this was down to the attacker implanting system-snooping malware on a programmer’s computer.
  • The trick used to implant the malware couldn’t be determined. That’s disappointing, because knowing how your last attack was actually carried out makes it easier to reassure customers that your revised prevention, detection and response procedures are likely to block it next time. Many potential attack vectors spring to mind, including: unpatched local software, “shadow IT” leading to an insecure local configuration, a phishing click-through blunder, unsafe downloading habits, treachery in the source code supply chain relied on by the coder concerned, or a booby-trapped email attachment opened in error. Hats off to LastPass for admitting to what amounts to a “known unknown”.
  • The attacker “utilised their persistent access to impersonate the developer once the developer had successfully authenticated using multi-factor authentication.” We assume this means that the hacker never needed to acquire the victim’s password or 2FA code, but simply used a cookie-stealing attack, or extracted the developer’s authentication token from genuine network traffic (or from the RAM of the victim’s computer) in order to piggy-back on the programmer’s usual access:
  • LastPass didn’t notice the intrusion immediately, but did detect and expel the attacker within four days. As we noted in a recent article about the risks of timestamp ambiguity in system logs, being able to determine the precise order in which events occurred during an attack is a vital part of incident response:
  • LastPass keeps its development and production networks physically separate. This is a good cybersecurity practice because it prevents an attack on the development network (where things are inevitably in an ongoing state of change and experimentation) from turning into an immediate compromise of the official software that’s directly available to customers and the rest of the business.
  • LastPass doesn’t keep any customer data in its development environment. Again, this is good practice given that developers are, as the job name suggests, generally working on software that has yet to go through a full-on security review and quality assurance process. This separation also makes it believable for LastPass to claim that no password vault data (which would have been encrypted with users’ private keys anyway) could have been exposed, which is a stronger claim than simply saying “we couldn’t find any evidence that it was exposed.” Keeping real-world data out of your development network also prevents well-meaning coders from inadvertently grabbing data that’s meant to be under regulatory protection and using it for unofficial test purposes.
  • Although source code was stolen, no unauthorised code changes were left behind by the attacker. Of course, we only have LastPass’s own claim to go on, but given the style and tone of the rest of the incident report, we can see no reason not to take the company at its word.
  • Source code moving from the development network into production “can only happen after the completion of rigorous code review, testing, and validation processes”. This makes it believable for LastPass to claim that no modified or poisoned source code would have reached customers or the rest of the business, even if the attacker had managed to implant rogue code in the version control system.
  • LastPass never stores or even knows its users’ private decryption keys. In other words, even if the attacker had made off with password data, it would have ended up as just so much shredded digital cabbage. (LastPass also provides a public explanation of how it secures password vault data against offline cracking, including using client-side PBKDF2-HMAC-SHA256 for salting-hashing-and-stretching your offline password with 100,100 iterations, thus making password cracking attempts very much harder even if attackers make off with locally-stored copies of your password vault.)
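
If you’re wondering what the “salting-hashing-and-stretching” mentioned in the last point looks like in code, here’s a minimal Python sketch of PBKDF2-HMAC-SHA256 key derivation with 100,100 iterations. To be clear, this is our own illustration of the general technique using Python’s standard library, not LastPass’s actual implementation:

```python
# A minimal sketch of client-side salt-hash-and-stretch key derivation
# using PBKDF2-HMAC-SHA256 with 100,100 iterations. Illustration only;
# not LastPass's actual code.

import hashlib, os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 100_100) -> bytes:
    # Only the derived key (or a further hash of it) ever needs to leave
    # the client; the master password itself never does.
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        iterations,
    )

salt = os.urandom(16)   # per-user random salt
key = derive_vault_key("correct horse battery staple", salt)
print(key.hex())
```

Each extra iteration multiplies the work an attacker has to do for every single password guess against a stolen vault, which is the whole point of the stretching.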

What to do?

We think it’s reasonable to say that our early assumptions were correct, and that although this is an embarrassing incident for LastPass, and might reveal trade secrets that the company considered part of its shareholder value…

…this hack can be thought of as LastPass’s own problem to deal with, because no customer passwords were reached, let alone cracked, in this attack:

This attack, and LastPass’s own incident report, are also a good reminder that “divide and conquer”, also known by the jargon term Zero Trust, is an important part of contemporary cyberdefence.

As Sophos expert Chester Wisniewski explains in his analysis of the recent Uber hack, there’s a lot more at stake if crooks who get access to some of your network can roam around wherever they like in the hope of getting access to all of it:

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.


S3 Ep100.5: Uber breach – an expert speaks [Audio + Text]

CYBERSECURITY: “THEY DIDN’T BUT YOU CAN!”

With Paul Ducklin and Chester Wisniewski

Intro and outro music by Edith Mudge.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

[MUSICAL MODEM]

DUCK.  Hello, everybody.

Welcome to this special mini-episode of the Naked Security podcast.

My name is Paul Ducklin, and I’m joined today by my friend and colleague Chester Wisniewski.

Chester, I thought we should say something about what has turned into the big story of the week… it’ll probably be the big story of the month!

I’ll just read you the headline I used on Naked Security:

“UBER HAS BEEN HACKED, boasts hacker – how to stop it happening to you.”

So!

Tell us all about it….


CHET.  Well, I can confirm that the cars are still driving.

I’m coming to you from Vancouver, I’m downtown, I’m looking out the window, and there’s actually an Uber sitting outside the window…


DUCK.  It hasn’t been there all day?


CHET.  No, it hasn’t. [LAUGHS]

If you press the button to hail a car inside the app, rest assured: at the moment, it appears that you will actually have someone come and give you a ride.

But it’s not necessarily so assured, if you’re an employee at Uber, that you’re going to be doing much of anything for the next few days, considering the impact on their systems.

We don’t know a lot of details, actually, Duck, of exactly what happened.

But, at a very high level, the consensus appears to be that there was some social engineering of an Uber employee that allowed someone to get a foothold inside of Uber’s network.

And they were able to move laterally, as we say, or pivot, once they got inside in order to find some administrative credentials that ultimately led them to have the keys to the Uber kingdom.


DUCK.  So this doesn’t look like a traditional data stealing, or nation state, or ransomware attack, does it?


CHET.  No.

That’s not to say someone else may not also have been in their network using similar techniques – you never really know.

In fact, when our Rapid Response team responds to incidents, we often find that there’s been more than one threat actor inside a network, because they exploited similar methods of access.


DUCK.  Yes… we even had a story of two ransomware crooks, basically unknown to each other, who got in at the same time.

So, some of the files were encrypted with ransomware-A-then-ransomware-B, and some with ransomware-B-followed-by-ransomware-A.

That was an unholy mess…


CHET.  Well, that’s old news, Duck. [LAUGHS]

We’ve since published another one where *three* different ransomwares were on the same network.


DUCK.  Oh, dear! [BIG LAUGH] I keep laughing at this, but that’s wrong. [LAUGHS]


CHET.  It’s not uncommon for multiple threat actors to be in, because, as you say, if one person is able to discover a flaw in your approach to defending your network, there’s nothing to suggest that other people may not have discovered the same flaw.

But in this case, I think you’re right, in that it seems to be “for the lulz”, if you will.

I mean, the person who did it was mostly collecting trophies as they bounced through the network – in the form of screenshots of all these different tools and utilities and programs that were in use around Uber – and posting them publicly, I guess for the street cred.


DUCK.  Now, in an attack done by somebody who *didn’t* want bragging rights, that attacker could have been an IAB, an initial access broker, couldn’t they?

In which case, they wouldn’t have made a big noise about it.

They would have collected all the passwords and then got out and said, “Who would like to buy them?”


CHET.  Yes, that’s super-super dangerous!

As bad as it seems to be for Uber right now, in particular for someone on Uber’s PR or internal security teams, it’s actually the best possible outcome…

…which is just that the outcome of this is going to be embarrassment, probably some fines for losing sensitive employee information, that kind of thing.

But the truth of the matter is for almost everyone else that this type of an attack victimises, the end result ends up being ransomware or multiple ransomwares, combined with cryptominers and other kinds of data theft.

That is far, far more costly to the organisation than simply being embarrassed.


DUCK.  So this idea of crooks getting in and being able to wander around at will and pick and choose where they go…

…is sadly not unusual.


CHET.  It really emphasises the importance of actively looking for problems, as opposed to waiting for alerts.

Clearly, this person was able to breach Uber security without triggering any alerts initially, which allowed them the time to wander around.

That’s why threat hunting, as the terminology goes, is so critical these days.

Because the closer to minute-zero or day-zero that you can detect the suspicious activity of people poking around in file shares and suddenly logging into a whole bunch of systems serially in a row – those types of activities, or lots of RDP connections flying around the network from accounts that are not normally associated with that activity…

…those types of suspicious things can help you limit the amount of damage that person can cause, by limiting the amount of time they have to unravel any other security mistakes you may have made that allowed them to gain access to those administrative credentials.

This is a thing that a lot of teams are really struggling with: how to see these legitimate tools being abused?

That’s a real challenge here.

Because, in this example, it sounds like an Uber employee was tricked into inviting someone in, in a disguise that looked like them in the end.

You’ve now got a legitimate employee’s account, one that accidentally invited a criminal into their computer, running around doing things that employee is probably not normally associated with.

So that really has to be part of your monitoring and threat hunting: knowing what normal really is, so that you can detect “anomalous normal”.

Because they didn’t bring malicious tools with them – they’re using tools that are already there.

We know they looked at PowerShell scripts, that kind of thing – the stuff you probably already have.

What’s unusual is this person interacting with that PowerShell, or this person interacting with that RDP.

And those are things that are much harder to watch out for than simply waiting for an alert to pop up in your dashboard.


DUCK.  So, Chester, what is your advice for companies that don’t want to find themselves in Uber’s position?

Although this attack has understandably got a massive amount of publicity, because of the screenshots that are circulating, because it seems to be, “Wow, the crooks got absolutely everywhere”…

…in fact, it’s not a unique story as far as data breaches go.


CHET.  You asked about the advice, what would I tell an organisation?

And I have to think back to a good friend of mine who was a CISO of a major university in the United States about ten years ago.

I asked him what his security strategy was and he said: “It’s very simple. Assumption of breach.”

I assume I’m breached, and that people are in my network that I don’t want in my network.

So I have to build everything with the assumption that somebody’s already in here who shouldn’t be, and ask, “Do I have the protection in place even though the call is coming from inside the house?”

Today we have a buzzword for that: Zero Trust, which most of us are sick of saying already. [LAUGHS]

But that is the approach: assumption of breach; zero trust.

You should not have the freedom to simply roam around because you put on a disguise that appears to be an employee of the organisation.


DUCK.  And that’s really the key of Zero Trust, isn’t it?

It doesn’t mean, “You must never trust anybody to do anything.”

It’s kind of a metaphor for saying, “Assume nothing”, and, “Don’t authorise people to do more than they need to do for the task in hand.”


CHET.  Precisely.

On the assumption that your attackers don’t get as much joy from outing the fact that you were hacked as happened in this case…

…you probably want to make sure you have a good way for staff members to report anomalies when something doesn’t seem right, to make sure that they can give a heads-up to your security team.

Because talking about data breach dwell times from our Active Adversary Playbook, the criminals most often are on your network for at least ten days:

So you’ve got a solid week-to-ten-days, typically, where if you just have some eagle eyes that are spotting things, you’ve got a real good chance at shutting it down before the worst happens.


DUCK.  Indeed, because if you think about how a typical phishing attack works, it’s very rare that the crooks will succeed on the first attempt.

And if they don’t succeed on the first attempt, they don’t just pack up their bags and wander off.

They try the next person, and the next person, and the next person.

If they’re only going to succeed when they try the attack on the 50th person, then if any of the previous 49 spotted it and said something, you could have intervened and fixed the problem.


CHET.  Absolutely – that’s critical!

And you talked about tricking people into giving away 2FA tokens.

That’s an important point here – there was multi-factor authentication at Uber, but the person seems to have been convinced to bypass it.

And we don’t know what that methodology was, but most multi-factor methods, unfortunately, do have the ability to be bypassed.

All of us are familiar with the time-based tokens, where you get the six digits on the screen and you’re asked to put those six digits into the app to authenticate.

Of course, there’s nothing stopping you from giving the six digits to the wrong person so that they can authenticate.

So, two-factor authentication is not an all-purpose medicine that cures all disease.

It is simply a speed bump that is another step along the path to becoming more secure.
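
(For readers who want to see how those six digits come about, here’s a minimal Python sketch of a time-based one-time code in the style of RFC 6238. It’s our own illustration of the general scheme, not any particular vendor’s app, and it shows why reading the digits out to the wrong person within the time window defeats the protection.)

```python
# A minimal sketch of a time-based one-time password (TOTP, RFC 6238).
# Illustration of the general scheme only, not any vendor's implementation.

import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // timestep            # changes every 30 seconds
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Your authenticator app and the server share 'secret', so both can compute
# the same six digits within the current time window - and so can anyone
# you read those digits out to.
print(totp(b"shared-secret-key"))
```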


DUCK.  A well-determined crook who’s got the time and the patience to keep on trying may eventually get in.

And like you say, your goal is to minimise the time they have to maximise the return on the fact that they got in in the first place…


CHET.  And that monitoring needs to happen all the time.

Companies like Uber are large enough to have their own 24/7 security operations centre to monitor things, though we’re not quite sure what happened here, and how long this person was in, and why they weren’t stopped.

But most organizations are not necessarily in a position to be able to do that in-house.

It’s super-handy to have external resources available that can monitor – *continuously* monitor – for this malicious behaviour, shortening even further the amount of time that the malicious activity is happening.

For folks that maybe have regular IT responsibilities and other work to do, it can be quite hard to see these legitimate tools being used, and spot one particular pattern of them being used as a malicious thing…


DUCK.  The buzzword that you’re talking about there is what we know as MDR, short for Managed Detection and Response, where you get a bunch of experts either to do it for you or to help you.

And I think there are still quite a lot of people out there who imagine, “If I’m seen to do that, doesn’t it look like I’ve abrogated my responsibility? Isn’t it an admission that I absolutely don’t know what I’m doing?”

And it isn’t, is it?

In fact, you could argue it’s actually doing things in a more controlled way, because you’re choosing people to help you look after your network *who do that and only that* for a living.

And that means that your regular IT team, and even your own security team… in the event of an emergency, they can actually carry on doing all the other things that need doing anyway, even if you’re under attack.


CHET.  Absolutely.

I guess the last thought I have is this…

Don’t perceive a brand like Uber being hacked as meaning that it’s impossible for you to defend yourself.

Big company names are almost like trophy hunting for people like the person involved in this particular hack.

And just because a big company maybe didn’t have the security they should doesn’t mean you can’t!

There was a lot of defeatist chatter amongst a lot of organisations I talked to after some previous big hacks, like Target, and Sony, and some of these hacks that we had in the news ten years ago.

And people were like, “Aaargh… if with all the resources of Target they can’t defend themselves, what hope is there for me?”

And I don’t really think that’s true at all.

In most of these cases, they were targeted because they were very large organizations, and there was a very small hole in their approach that somebody was able to get in through.

That doesn’t mean that you don’t have a chance at defending yourself.

This was social engineering, followed by some questionable practices of storing passwords in PowerShell files.

These are things that you can very easily watch for, and educate your employees on, to ensure that you’re not making the same mistakes.

Just because Uber can’t do it doesn’t mean you can’t!


DUCK.  Indeed – I think that’s very well put, Chester.

Do you mind if I end with one of my traditional cliches?

(The thing about cliches is that they generally become cliches by being true and useful.)

After incidents like this: “Those who cannot remember history are condemned to repeat it – don’t be that person!”

Chester, thank you so much for taking time out of your busy schedule, because I know you actually have an online talk to do tonight.

So, thank you so much for that.

And let us finish in our customary way by saying, “Until next time, stay secure.”

[MUSICAL MODEM]

UBER HAS BEEN HACKED, boasts hacker – how to stop it happening to you

By all accounts, and sadly there are many of them, a hacker – in the break-and-enter-your-network-illegally sense, not in a solve-super-hard-coding-problems-in-a-funky-way sense – has broken into ride-sharing company Uber.

According to a report from the BBC, the hacker is said to be just 18 years old, and seems to have pulled off the attack for the same sort of reason that famously drove British mountain climber George Mallory to keep trying (and ultimately dying in the attempt) to summit Mount Everest in the 1920s…

“because it’s there.”

Uber, understandably, hasn’t said much more so far [2022-09-16T15:45Z] than to announce on Twitter:

How much do we know so far?

If the scale of the intrusion is as broad as the alleged hacker has suggested, based on the screenshots we’ve seen plastered on Twitter, we’re not surprised that Uber hasn’t offered any specific information yet, especially given that law enforcement is involved in the investigation.

When it comes to cyberincident forensics, the devil really is in the details.

Nevertheless, publicly available data, allegedly released by the hacker himself and distributed widely, seems to suggest that this hack had two underlying causes, which we’ll describe with a medieval analogy.

The intruder:

  • Tricked an insider into letting them into the courtyard, or bailey. That’s the area inside the outermost castle wall, but separate from the best-defended part.
  • Found unattended details explaining how to access the keep, atop the motte. As the name suggests, the keep is the central defensive stronghold of a traditional medieval European castle.

The initial breakin

The jargon term for blagging your way into the 21st century equivalent of the castle courtyard is social engineering.

As we all know, there are many ways that attackers with time, patience and the gift of the gab can persuade even a well-informed and well-meaning user to help them bypass the security processes that are supposed to keep them out.

Automated or semi-automated social engineering tricks include email and IM-based phishing scams.

These scams lure users into entering their login details, often including their 2FA codes, on counterfeit web sites that look like the real deal but actually deliver the needed access codes to the attackers.

For a user who is already logged in, and is thus temporarily authenticated for their current session, attackers may attempt to get at so-called cookies or access tokens on the user’s computer.

By implanting malware that hijacks existing sessions, for example, attackers may be able to masquerade as a legitimate user for long enough to take over completely, without needing any of the usual credentials that the user themselves required to login from scratch:
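
To make the cookie-stealing idea concrete, here’s a minimal Python sketch of why a stolen session cookie is so valuable: a service typically treats whoever presents it as the already-authenticated user. The URL and cookie name below are made up purely for illustration; this is not Uber’s API or any specific product:

```python
# A minimal sketch of session-cookie replay: whoever presents a valid
# session cookie is treated as the already-authenticated user.
# The URL and cookie name here are hypothetical, for illustration only.

import requests

stolen_cookie = {"session_id": "hypothetical-token-value"}

# No password prompt, no 2FA challenge: the request simply rides on the
# victim's existing, already-authenticated session.
response = requests.get(
    "https://intranet.example.com/dashboard",
    cookies=stolen_cookie,
    timeout=10,
)
print(response.status_code)
```

This is why session tokens deserve the same protection as passwords, and why short session lifetimes and re-authentication for sensitive actions help limit the damage.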

And if all else fails – or perhaps even instead of trying the mechanical methods described above – the attackers can simply call up a user and charm them, or wheedle, or beg, or bribe, or cajole, or threaten them instead, depending on how the conversation unfolds.

Skilled social engineers are often able to convince well-meaning users not only to open the door in the first place, but also to hold it open to make it even easier for the attackers to get in, and perhaps even to carry the attacker’s bags and show them where to go next.

That’s how the infamous Twitter hack of 2020 was carried out, where 45 blue-flag Twitter accounts, including those of Bill Gates, Elon Musk and Apple, were taken over and used to promote a cryptocurrency scam.

That hacking wasn’t so much technical as cultural, carried out via support staff who tried so hard to do the right thing that they ended up doing exactly the opposite:

Full-on compromise

The jargon term for the equivalent of getting into the castle’s keep from the courtyard is elevation of privilege.

Typically, attackers will deliberately look for and use known security vulnerabilities internally, even though they couldn’t find a way to exploit them from the outside because the defenders had taken the trouble to protect against them at the network perimeter.

For example, in a survey we published recently of intrusions that the Sophos Rapid Response team investigated in 2021, we found that in only 15% of initial intrusions – where the attackers get over the external wall and into the bailey – were the criminals able to break in using RDP.

(RDP is short for remote desktop protocol, and it’s a widely used Windows component that’s designed to let user X work remotely on computer Y, where Y is often a server that doesn’t have a screen and keyboard of its own, and may indeed be three floors underground in a server room, or across the world in a cloud data centre.)

But in 80% of attacks, the criminals used RDP once they were inside to wander almost at will throughout the network:

Just as worryingly, when ransomware wasn’t involved (because a ransomware attack makes it instantly obvious you’ve been breached!), the median average time that the criminals were roaming the network unnoticed was 34 days – more than a calendar month:

The Uber incident

We’re not yet certain how the initial social engineering (shortened to SE in hacking jargon) was carried out, but threat researcher Bill Demirkapi has tweeted a screenshot that seems to reveal (with precise details redacted) how the elevation of privilege was achieved.

Apparently, even though the hacker started off as a regular user, and therefore had access only to some parts of the network…

…a bit of wandering-and-snooping on unprotected shares on the network revealed an open network directory that included a bunch of PowerShell scripts…

…that included hard-coded security credentials for admin access to a product known in the jargon as a PAM, short for Privileged Access Manager.

As the name suggests, a PAM is a system used to manage credentials for, and control access to, all (or at least a lot of) the other products and services used by an organisation.

Wryly put, the attacker, who probably started out with a humble and perhaps very limited user account, stumbled on an ueber-ueber-password that unlocked many of the ueber-passwords of Uber’s global IT operations.
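
One practical takeaway: even a crude sweep of your own network shares can turn up hard-coded credentials of this sort before an attacker does. Here’s a rough Python sketch of the idea – the file types, patterns and share path are our own illustrative assumptions, and a dedicated secret-scanning tool will do a far more thorough job:

```python
# A rough sketch of sweeping scripts on a share for hard-coded credentials.
# Patterns, file types and the path are illustrative assumptions only;
# use a proper secret scanner for real-world coverage.

import re
from pathlib import Path

SUSPECT = re.compile(
    r"""(password|passwd|pwd|secret|api[_-]?key|token)\s*[:=]\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)

def scan_for_secrets(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".ps1", ".psm1", ".sh", ".py", ".cfg", ".yml", ".yaml"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SUSPECT.search(line):
                print(f"{path}:{lineno}: possible hard-coded credential")

scan_for_secrets(r"\\fileserver\shared\scripts")   # hypothetical network share
```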

We’re not sure just how broadly the hacker was able to roam once they’d prised open the PAM database, but Twitter postings from numerous sources suggest that the attacker was able to penetrate much of Uber’s IT infrastructure.

The hacker allegedly dumped data to show that they’d accessed at least the following business systems: Slack workspaces; Uber’s threat protection software (what is often still casually referred to as an anti-virus); an AWS console; company travel and expense information (including employee names); a vSphere virtual server console; a listing of Google Workspaces; and even Uber’s own bug bounty service.

(Apparently, and ironically, the bug bounty service was where the hacker bragged loudly in capital letters, as shown in the headline, that UBER HAS BEEN HACKED.)

What to do?

It’s easy to point fingers at Uber in this case and imply that this breach should be considered much worse than most, simply because of the loud and very public nature of it all.

But the unfortunate truth is that many, if not most, contemporary cyberattacks turn out to have involved the attackers getting exactly this degree of access…

…or at least potentially having this level of access, even if they didn’t ultimately poke around everywhere that they could have.

After all, many ransomware attacks these days represent not the beginning but the end of an intrusion that probably lasted days or weeks, and may have lasted for months, during which time the attackers probably managed to promote themselves to have equal status with the most senior sysadmin in the company they’d breached.

That’s why ransomware attacks are often so devastating – because, by the time the attack comes, there are few laptops, servers or services the criminals haven’t wrangled access to, so they’re almost literally able to scramble everything.

In other words, what seems to have happened to Uber in this case is not a new or unique data breach story.

So here are some thought-provoking tips that you can use as a starting point to improve overall security on your own network:

  • Password managers and 2FA are not a panacea. Using well-chosen passwords stops crooks guessing their way in, and 2FA security based on one-time codes or hardware access tokens (usually small USB or NFC dongles that a user needs to carry with them) make things harder, often much harder, for attackers. But against today’s so-called human-led attacks, where “active adversaries” involve themselves personally and directly in the intrusion, you need to help your users change their general online behaviour, so they are less likely to be talked into sidestepping procedures, regardless of how comprehensive and complex those procedures might be.
  • Security belongs everywhere in the network, not just at the edge. These days, very many users need access to at least some part of your network – employees, contractors, temporary staff, security guards, suppliers, partners, cleaners, customers and more. If a security setting is worth tightening up at what feels like your network perimeter, then it almost certainly needs tightening up “inside” as well. This applies especially to patching. As we like to say on Naked Security, “Patch early, patch often, patch everywhere.”
  • Measure and test your cybersecurity on a regular basis. Never assume that the precautions you thought you put in place really are working. Don’t assume; always verify. Also, remember that because new cyberattack tools, techniques and procedures show up all the time, your precautions need reviewing regularly. In simple words, “Cybersecurity is a journey, not a destination.”
  • Consider getting expert help. Signing up for a Managed Detection and Response (MDR) service is not an admission of failure, or a sign that you don’t understand cybersecurity yourself. MDR is not an abrogation of your responsibility – it’s simply a way to have dedicated experts on hand when you really need them. MDR also means that in the event of an attack, your own staff don’t have to drop everything they are currently doing (including regular tasks that are vital to the continuity of your business), and thus potentially leave other security holes open.
  • Adopt a zero-trust approach. Zero-trust doesn’t literally mean that you never trust anyone to do anything. It’s a metaphor for “make no assumptions” and “never authorise anyone to do more than they strictly need”. Zero-trust network access (ZTNA) products don’t work like traditional network security tools such as VPNs. A VPN generally provides a secure way for someone outside to get general admission to the network, after which they often enjoy much more freedom than they really need, allowing them to roam, snoop and poke around looking for the keys to the rest of the castle. Zero-trust access takes a much more granular approach, so that if all you really need to do is browse the latest internal price list, that’s the access you’ll get. You won’t also get the right to wander into support forums, trawl through sales records, or poke your nose into the source code database (see the sketch after this list).
  • Set up a cybersecurity hotline for staff if you don’t have one already. Make it easy for anyone to report cybersecurity issues. Whether it’s a suspicious phone call, an unlikely email attachment, or even just a file that probably shouldn’t be out there on the network, have a single point of contact (e.g. securityreport@yourbiz.example) that makes it quick and easy for your colleagues to call it in.
  • Never give up on people. Technology alone cannot solve all your cybersecurity problems. If you treat your staff with respect, and if you adopt the cybersecurity attitude that “there is no such thing as a silly question, only a stupid answer”, then you can turn everyone in the organisation into eyes and ears for your security team.
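
As promised in the zero-trust item above, here’s a minimal Python sketch of the “no more access than you strictly need” idea. The roles, resources and actions are invented purely for illustration; real ZTNA products enforce this sort of policy at the identity and network layer, not in application code like this:

```python
# A minimal sketch of per-resource, per-action authorisation: anything not
# explicitly granted is denied. Roles and resources are illustrative only.

ALLOWED = {
    ("sales",   "price-list"):    {"read"},
    ("support", "support-forum"): {"read", "write"},
}

def authorise(role: str, resource: str, action: str) -> bool:
    """Grant access only where an explicit (role, resource) entry allows it."""
    return action in ALLOWED.get((role, resource), set())

assert authorise("sales", "price-list", "read") is True
assert authorise("sales", "source-code", "read") is False   # no blanket roaming
```

Contrast that with VPN-style access, where getting “onto the network” is often enough to reach far more than the task in hand requires.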

Why not join us from 26-29 September 2022 for this year’s Sophos Security SOS Week:

Four short but fascinating talks with world experts.

Learn about protection, detection and response,
and how to set up a successful SecOps team of your own:

