S3 Ep56: Cryptotrading rodent, ransomware hackback, and a Docusign phish [Podcast]

[00’29”] Don’t miss our cybersecurity podcast minisodes!
[01’46”] Bliss is a hill in wine country.
[03’37”] Lessons from a cryptotrading hamster.
[08’46”] Ransomware gang hacked back.
[20’27”] Docusign phishers go after 2FA codes.
[30’23”] Oh! No! Sleep mode considered harmful.

With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.

LISTEN NOW

You can listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


Apple ships Monterey with security updates, fixes 0-day in Watch and TV products, updates iDevices

First thing this morning, just after midnight, we received the latest slew of Apple Security Bulletins by email.

As often seems to happen with Cupertino’s patches, the emails were informative and confusing in equal measure, offering an intriguing mix of security update information:

  • The latest macOS 12 Monterey emerges as 12.0.1. We’re assuming that the security patches in the otherwise brand-new Monterey release are listed for the benefit of anyone who’s been using the Beta version, because there are 37 listed fixes covering everything from AppKit to zsh. 15 of these were of the “malicious application may be able to execute arbitrary code” sort, and 9 of those involved code execution in the kernel itself.
  • Phones and tablets get related updates. Both iOS and iPadOS make a simultaneous jump to version 15.1, fixing many of the same bugs mentioned for macOS 12.0.1, including potential kernel-mode code execution exploits, as loved by jailbreakers, surveillance software makers and cybercriminals alike.
  • The previous iOS 14 flavour gets updated as well. For those who haven’t moved or won’t be moving from iOS 14 to iOS 15, there’s version 14.8.1, fixing a smaller number of bugs than the iOS 15 update. Presumably some of the iOS 15 bugs are unique to new code added for feature purposes.
  • The Big Sur and Catalina strains of macOS are patched. Big Sur gets a version-bump to 11.6.1, while Catalina gets an old-style patch labelled Security Update 2021-007, but no version number change.
  • The watchOS and tvOS flavours get version updates. WatchOS goes to 8.1, while tvOS matches with the iOS and iPadOS version number, and gets 15.1. Importantly, these updates retrofit the iOS 15.0.2 patch to the Watch and TV product lines. The 15.0.2 update appeared more than two weeks ago, and closed a zero-day kernel code execution vulnerability dubbed CVE-2021-30883.
  • Old and now superseded updates get updated update notes. As well as announcing and documenting the abovementioned 8.1 and 15.1 versions for watchOS and tvOS, two bulletins provide “catchup” documentation for the previous updates numbered watchOS 8 and tvOS 15. These bulletins are useful for the purposes of completeness, but would have been more useful still if they had been published when the original updates came out. A similar “catchup” note for Safari 15 is also provided for those who want to know what was fixed there.
  • Not a word about iOS 12. It doesn’t seem to have been officially dropped, but it isn’t getting an update this time round, even though at least one of the recent zero-day bugs patched by Apple is said to be exploitable at least as far back as the iOS 12 branch of Apple’s code.

What to do?

The bad news this time round is the late arrival of the zero-day patches for watchOS and tvOS, and the neither-confirmed-nor-denied update status of iOS 12.

The good news is the arrival, at last, of the zero-day patches for watchOS and tvOS, and the fact that none of the other updates mention the dreaded words “this issue may have been actively exploited”.

As usual, check that you have the latest versions:

  • Use Settings > General and choose Software Update on your iPhone or iPad.
  • Use Apple menu > System Preferences > Software Update on your MacBook or desktop Mac.

Versions and update names to check for:
---------------------------------------
Monterey (macOS 12) emerges into the daylight as 12.0.1.
Big Sur (macOS 11) should now be 11.6.1
Catalina (macOS 10) should still be 10.15.7 but with Security Update 2021-007
iOS 15 should now be 15.1
iPadOS 15 should now be 15.1
iOS 14 should now be 14.8.1
iPadOS 14 should now be 14.8.1
iOS 12 should still be 12.5.5 (no update shipped, same OS for iPhones and iPads)
tvOS should now be 15.1
watchOS should now be 8.1
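If you prefer to check programmatically on a Mac, here’s a minimal sketch in Python (our own illustration, not an Apple tool) that reads the reported macOS version via the standard library and compares it against the expected versions in the table above:

    import platform

    # platform.mac_ver() returns ('', ...) on non-Mac systems; note that some
    # older Python builds may report Big Sur as '10.16' for compatibility reasons
    release, _, _ = platform.mac_ver()      # e.g. '12.0.1' on a patched Monterey

    # Expected fully-patched versions, taken from the table above
    # (Catalina stays at 10.15.7 - check for Security Update 2021-007 separately)
    expected = {'12': '12.0.1', '11': '11.6.1', '10': '10.15.7'}

    major = release.split('.')[0] if release else None
    if major in expected:
        status = 'up to date' if release == expected[major] else 'needs updating'
        print('macOS', release, '-', status, '(expecting', expected[major] + ')')
    else:
        print('Not macOS, or no version reported')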

Banking scam uses Docusign phish to thieve 2FA codes

Two weeks ago was Cybersecurity Awareness Month’s “Fight the Phish” week, a theme that the #Cybermonth organisers chose because this age-old cybercrime is still a huge problem.

Even though lots of us receive many phishing scams that are obvious when we look at them ourselves…

…it’s easy to forget that the “obviousness” of many scam emails comes from the fact that the crooks never intended those scams for us in the first place.

The crooks simply sent them to everyone as a crude way of sending them to someone.

So most scams might be obvious to most people, but some scams are believable to some people, and, once in a while, “some people” might just include you!

When 0.1% is more than enough

For example, we received a phish this morning that specifically targeted one of the main South African banks.

(We won’t say which bank by name, as a way of reminding you that it could have been any brand that was targeted, but you will recognise the bank’s own website background image if you are a customer yourself.)

There’s no possible reason for any crook to associate Sophos Naked Security with that bank, let alone with an account in South Africa.

So, this was obviously a widely-spammed out global phishing campaign, with the cybercriminals using quantity instead of quality to “target” their victims.

Let’s do some power-of-ten approximations to show what we mean.

Assume the population of South Africa is 100 million – it’s short of that, but we are just doing order-of-magnitude estimations here.

Assume there are 10 billion people in the world, so that South Africans make up about 1% of the people on the planet.

And assume that 10% of South Africans bank with this particular bank and use its website for their online transactions.

At a quick guess, we can therefore say that this phish was believable to at most 1-in-1000 (10% of 1%) of everyone on earth.
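To make the arithmetic concrete, here’s the same power-of-ten estimate as a few lines of Python (the numbers are the deliberately rounded assumptions above, not census data):

    world_population = 10_000_000_000   # assume ~10 billion people on earth
    za_population    =    100_000_000   # assume ~100 million South Africans
    bank_share       = 0.10             # assume ~10% use this bank online

    believable_to = (za_population / world_population) * bank_share
    print(believable_to)                # 0.001, i.e. 1-in-1000, or 0.1%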

In other words, a phish like this one gives itself away immediately to at least 99.9% of the people who receive it.

Therefore you might wonder to yourself, perhaps with just a touch of smugness, “If 99.9% of them are utterly trivial to detect, how hard can the other 0.1% be?”

On the other hand, the crooks knew all along that 999 people in every 1000 who received this email would know at once that it was bogus and delete it without a second thought…

..and yet it was still worth their while to spam it out.

Are you thinking clearly?

The ultimate believability of phishing scams like this one actually depends on many factors.

These factors include: Do you have an account with the company concerned? Have you done a transaction recently? Are you in the middle of some sort of contract negotiations right now? Did you have a late night? Is your train due in two minutes? Are you thinking clearly today?

After all, the crooks aren’t aiming to fool all of us all the time, just a few of us some of the time.

This scam starts, like many phishing scams, with an email:

The email itself comes from cloud-based document and contract-signing service Docusign, and includes a link to a genuine Docusign page. (We have labelled the Docusign screenshot below as FAKE because the content is made up, in the same way we label emails FAKE even if they appear in your trusted email app.)

The Docusign page itself isn’t dangerous, because it doesn’t contain any clickable links, and just seeing the curious text in it should make you realise that this is exactly what it looks like: a suspicious and unlikely document about nothing:

It’s not a contract: there’s nothing to identify the person at the other end, or to reveal what the document is about. The Docusign link is therefore a red herring, though it does add a sense of legitimacy-mixed-with-curiosity to the scam.

“Is this some kind of imposter?”, you are probably wondering, “And what on earth are they talking about given that Docusign only has a page for me to view, not an actual contract to process?”

So you might be inclined to open the attached PDF, which is indeed just a replica of the document in the Docusign window:

Except that the link in the PDF version of the document is live, and if you’re still wondering what’s going on, you might be inclined to click it, given that the PDF probably opened in your chosen PDF viewer (e.g. Preview, Adobe Reader or your browser)…

…so it doesn’t feel like the you-know-it’s-risky option of “clicking links in emails” any more.

You ought to notice that the URL seems unlikely for a major bank, given that it’s a DNS redirector service in the Philippines, and that the site it redirects to is even more unlikely, given that it’s a hacked agricultural company in Bulgaria.
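If you’re curious (and careful), you can trace a redirect chain like this without opening the link in your mail client. Here’s a minimal sketch using Python and the requests library; the URL is a made-up placeholder, and you should only ever try this from a safe, isolated environment, because fetching a crook’s URL confirms to them that the link reached a live mailbox:

    import requests

    url = 'https://example.com/suspicious-link'    # hypothetical placeholder

    r = requests.get(url, allow_redirects=True, timeout=10)
    for hop in r.history:                          # one entry per redirect
        print(hop.status_code, hop.url)
    print('Final destination:', r.url)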

But one thing is certain, namely that the visuals are surprisingly close to the bank’s regular login page:

Perhaps the bank is trying to draw your attention to a transaction that hasn’t gone through yet, given that you’ve not actually “signed” anything yet via Docusign?

Of course, if you do try to log in, the crooks will lead you on a merry but visually agreeable online dance, asking for your password:

The next step asks for your phone number, so that the crooks get that much even if the final step fails. Then comes a short animated delay, presumably while one of the crooks (if they’re online, or an automated system if they aren’t) starts trying to log in with your credentials, followed by a fraudulent request for your 2FA code:

If the crooks get this far, and you do enter your 2FA code, then they almost certainly have enough to get into your account.

If all else fails, or if you’re suspicious about handling the matter online, as we hope you would be, there’s a fallback South African phone number listed in the “invoice” that you can call for help.

It’s not the bank’s real call centre, of course – in fact, it’s a VoIP (internet telephony) connection, so you could end up anywhere in the world.

We didn’t try calling it, but we don’t doubt that if you were to do so, the phone would be answered by someone claiming to be from the very bank against which this scam is being worked.

We’re guessing that a polite and helpful person at the other end would simply explain to you how to connect to the fraudulent site by typing in the URL yourself, and patiently wait with you as you went through the process.

That “helpful” person would probably log into the bank with your credentials in parallel with your call, copying the password and 2FA code as soon as you’d handed them over, and then they’d be helping themselves for real, instead of pretending to “help” you.

What to do?

Here are our tips to avoid getting caught out, even if it’s only those 1-in-1000 emails that you need to worry about:

  • Check those URLs. Copying the look-and-feel of a brand’s website is easy, but hacking into that brand’s own servers to run the scam is much harder. If you can’t see the URL clearly, for example because you are on a mobile phone, consider switching to a laptop, where details such as full web addresses are much easier to check out.
  • Avoid links in emails or attachments. You might be willing to click a Docusign link, assuming you are expecting one and the URL checks out. That means taking what amounts to a well-informed risk. But for services such as banks, webmail and courier companies where you already have an account, bookmark the company’s true website for yourself well in advance. Then you never need to rely on links that could have come from anyone, and probably did.
  • Use a password manager. Password managers not only choose random, complex and different passwords for every site, so you can’t use the same password twice by mistake, but also associate each password with a specific URL. This means that when you click through to a fake site, the password manager simply doesn’t know which password to use, so it doesn’t try to log you in at all. (See the sketch after this list for how that URL matching works.)
  • Never call the crooks back. Just as you should avoid links in emails, you should also avoid phone numbers offered by someone you don’t know. After all, whether the number is genuine or not, the person at the other end is going to greet you as though it is. Find the right number to call by looking it up yourself, ideally without using the internet at all, e.g. from existing printed records or off the back of your credit card.
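Here’s a minimal sketch, in Python, of the URL-matching behaviour described in the password manager tip above. The vault contents and hostnames are invented for illustration; real password managers are more sophisticated, but the principle is the same: no exact match, no autofill:

    from urllib.parse import urlsplit

    # Hypothetical vault: credentials keyed to the exact hostname they were saved for
    vault = {'www.examplebank.co.za': ('you@example.com', 'S3cret!')}

    def credentials_for(url):
        host = urlsplit(url).hostname
        return vault.get(host)    # None unless the hostname matches exactly

    print(credentials_for('https://www.examplebank.co.za/login'))           # match: autofill
    print(credentials_for('https://examplebank.secure-login.example/'))     # None: nothing to fill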

Listen up 4 – CYBERSECURITY FIRST! Purple teaming – learning to think like your adversaries

Article 4 of Week 4 of Cybersecurity Awareness Month!

To access all four presentations on one page, please go to:
https://nakedsecurity.sophos.com/tag/sos-2021

We sign off from this article series with a fascinating interview with Michelle Farenci, Information Security Engineer at Sophos.

Michelle knows her stuff – she’s a cybersecurity practitioner inside a cybersecurity company!

Learn why thinking like an attacker makes you a better defender:

LISTEN TO THE AUDIO

You can listen directly on Soundcloud.

READ THE TRANSCRIPT

[FX: MORSE CODE GREETING AND SYNTH VOICE]

PD. Hello, everybody, welcome to the Security SOS 2021 webinar series.

I’m Paul Ducklin, and today my guest is Michelle Farenci.

Hello, Michelle.


MF. Hello!


PD. And our topic today is all about “Purple Teaming”.

So, let’s start right at the beginning with, “What are we talking about here?”

Then we’re going to look at, “Why is this important?”

Even if you have a very small business, how can this help you?

And then, importantly, we’re going to finish by looking at how you do it.

So, Michelle, it sounds very mysterious: “Purple Teaming”.

I think a lot of people would have heard of Red Teams and Blue Teams, these synthetic opponents…

But why don’t you kick us off by telling us, firstly, a little bit about yourself, and what you do for a living at Sophos, and then tell us, “What is a red team? What is a blue team?”

And, most importantly, “What is this perhaps slightly more modern concept of purple team”?


MF. So, I actually got my start over in network security originally, when I got into information security.

And from there, I moved around, mostly doing security engineering threat detection blue team work, which I’ll be explaining shortly, as you mentioned.

And I spent a little bit of time getting to see the audit side, but I’ve almost entirely lived in this blue team sphere: looking at the alerts, threats, anything that comes up.

And if we go onto the concept of red team and blue team, I am definitely a blue teamer… it goes back to military war games where you have attackers and defenders. Attacking, offensive, is red side or Red Team, and defensive side is Blue Team.


PD. So in that context, in the military context, it was actually all your own people, but they were just pretending to attack each other so that they could practice various scenarios of attack and defense, right?


MF. This is a massively important distinction you’re making.

Because the nature of this concept of purple teaming, where you have your attackers and defenders, is they come together and they use the red and blue sides against each other, and play against one another, to suss out where the weaknesses are, the gaps, and try to get one over on the other guy.

And most often, you’re not going to see this within your own people, because most companies are not large enough to support staffing both a red team and blue team.


PD. Right.


MF. But, in a broader sense, the purple teaming that we’re talking about also refers to the ability to be on one team and understand the mindset and think like the other team.

This is something that’s much easier to learn how to do if you’ve had this purple team experience and can understand and see how the other side works.


PD. So the idea, simply put, is: purple (or is it magenta?) is basically what you get when you mix red and blue together…


MF. When you tell the two siloed-off teams to get together, and not play nice, but don’t break anything… this is what happens! [LAUGHS]


PD. I imagine if you didn’t sit down and talk to the other side – because you can when they’re not actually the enemy…

Then you’d end up with people that are quite good at reading each other, but it wouldn’t give them much of a start against what might happen when some attacker they’ve never met before came along and maybe mixed things up a little bit differently?


MF. I mean, training against the same… if you’ll excuse my terminology, training against the same “target dummy” repeatedly just teaches you how to keep attacking that target dummy.

So, if your targets are always the same, you might learn what they particularly are bad at identifying and will miss, and play to that, as opposed to a much more realistic example where you don’t know what you’re going to get, because the entire point of these more sophisticated attacks… you want it to mimic reality, attackers don’t want to be noticed.

Purple team, you’re training people up or you’re identifying gaps.

You’re really finding where the weaknesses are on both sides.

And defenders are also used to working within really strict controls and parameters so that they don’t break things.


PD. Yes, of course.


MF. It’s a totally opposite mindset, which was pointed out to me by a red teamer, where the red team doesn’t have to live within these constraints.

What they’re trying to do is get in and get the data, ideally without being noticed.

They have many ways that they can do that, and they’re not too worried about the controls that the blue team has to operate within for their detections.

So, they can use any method to get around this detection, or multiple detections, that have been set up.

But the blue team… they would have to set up another detection to catch the new red team going around their controls.


PD. Now, in last year’s SOS Week, when we actually had Craig [Jones] from our blue team and Luke [Grove] from our red team, and we sort of played them off against one another in the webinar, which was quite fun…

One thing that we thought it was important to mention, so maybe you can just say something about that, is that even if you get the luxury, or the fun, or the hacking experience of working in a red team, like, “Hey, you have to pretend you’re the hacker and you have to try and break into the company,” it’s still not a free-for-all, is it?

For example, you can’t break the law like a crook could.

You couldn’t go, “Well, I want to get into the server room – I’ll just smash the door down with a sledgehammer.”

And there are going to be some things where you know, if you tried that attack, you might break something, so you might’ve been told that’s off limits.

But most importantly, you may never, ever do anything like that without formal permission, in advance, from the person who owns and operates the network, whether that’s your employer or somebody who’s hired you in from outside…


MF. So, most commonly that permission that’s been given comes in the form of more or less a permission slip called the “Scope of Work”, where the person who is having the penetration testing or red team work put against their infrastructure… they are telling the red team what they can hit.

And there might be certain hosts they can hit, but not with all attacks.

Password spraying in certain areas is usually one of the common ones, so users aren’t locked out and unable to do their job during the day.


PD. So this is very much a similar idea, I guess, to what you see in a lot of so-called bug bounty programs, where companies say:

“We don’t mind you hacking on our services, our servers, our products, and these things are in scope.

But there are some things we already know that there’s a problem there; we don’t need you to prove it again.

So for example, running a distributed denial of service attack against our servers to crash them?

No thank you, that’s off limits.”


MF. Even the researchers have to act within the social contract that the red teamers have to act within.

So, if a researcher submits a bounty through the established bounty program, but they’re holding a sub-domain or domain hostage, or holding something hostage from the company until they get their payout, or doing anything that just seems really immoral and unethical, they can be disqualified from the bounty in some of these programs, for sure.


PD. Yes, I remember there was a case… I think it was in the UK recently, where a company was very proud of itself for a phishing simulation they did, where they wanted to teach their users that crooks might send them emails that were very, very tempting.

So they sent one saying, “Hello, all staff. We know it’s been a tough year because of coronavirus, but everyone’s getting a bonus. Click here.”

And, of course, those who reported it as a phish didn’t get a bonus, but they just got a little sticker from teacher.

And those who did click, thinking they were getting a bonus, got into trouble.

Although it was very clever from a phishing point of view, from a social contract point of view it was pretty poor for morale – a pretty low ebb.

So I guess the moral of the story is, “Don’t do that.”


MF. Yes!

Generally speaking, being decent to each other is even more important when you’re breaking into somebody else’s estate and rifling around. [LAUGHS]


PD. And I remember last year… Craig and Luke were talking about this: occasionally, the red team might be asked to mount an attack in a particular way.

In other words, the blue team actually know what’s coming, but they won’t necessarily know when.

So, it’s not as realistic as if crooks tried it, because the crooks can do whatever they want, but the motivation there was that they had put some mitigations, some detections, some alerts, some sensors, whatever you might call them, in place…

And they wanted to verify that if someone came in with that sort of attack, that the defenses that they had put in place would actually work.

It’s not all just newfangled “hack-it-and-see”, is it?

There is a sense of intellectual order, even if you work in a red team.


MF. Yes.

And I think that leads rather nicely to the benefit for red teamers in thinking more like the blue team.

Especially the fact that they’re being given a specific attack to test against a detection; especially because red teamers generally don’t have to worry about an environment that looks the same day after day.

Most red teamers that you’ll hear talked about are penetration testers.

They go on various engagements; they’re not looking at the same environment even every week.

So they don’t have to worry about what the network looks like, or what can be detected, what’s a noisy move.


PD. Michelle, just to finish up this idea of “What is purple teaming?”… is it just a case that it’s a cheap way of having a red team and a blue team, because you can actually get by maybe with one or two people and they do a bit of both?

And that, if you’re a big rich company, you wouldn’t do such a thing, and you’d have, say, four red teamers and four blue teamers, and never would they mix…

Or is there a bit more to purple teaming than that?


MF. Trying to hire a purple team isn’t really something you can do.

If you can find someone who has the skills to be an effective red teamer and blue teamer in the same person, they are probably extremely expensive.

So purple teaming is really more to get experience – either with spinning up your own blue team, and helping to get them trained up and think like attackers, so that they can then go and build better detections and learn from that…

Or you’re using purple teaming to keep everybody on their toes and fresh, and working on something that they might not see every single day or even rarely.

It gives them practice.


PD. Now, I know this came up last year and I’ll ask you again…

There seems to be a sense, when you talk to people who are thinking of getting into cybersecurity, that the red team is the glamorous side, and the blue team… well, that’s just boring, running the reports and dumping the log files.

But nothing could really be further from the truth, could it?

Both sides have their challenges, and the need to think on your feet.

And both sides, for better or for worse (and get used to this if you want to get into cybersecurity) have plenty of report writing and explaining things, hopefully in plain English, as part of the job.

It’s not just that if you get a red team gig, all you have to do is get out of bed, hack a bit and nothing else…


MF. It’s really not!

Both sides, as you mentioned, have to write reports.

Red Team, it’s penetration test engagements, where it’s all of their findings in a sizable report, hopefully explained well and in a way that’s easy to understand.

On the Blue Team side, you are also writing reports, but they’re probably incident or analysis type reports, which it would also behoove you to write so other people can understand – and it’s not as easy as it sounds!


PD. No, you’re not wrong there!

Because IT and cybersecurity love their jargon, perhaps as much as any other field you can think of off the top of your head.

What you need to be able to do is to explain things in simple enough terms that it’s obvious what the benefit to the business would be of doing some things, and what the risk, the quantifiable risk, would be of not doing certain other things.


MF. Yes.

And it’s definitely important, because security defenses are generally not money-making purchases.

They are reputation and money saving purchases.

As someone put it to me once: defensive purchases are an investment and an insurance policy: the staff, and your EDR, and your SIEM, are going to be cheaper than fines, legal fees and business loss from reputation damage.


PD. Yes.

It’s like, “The only backup you will ever regret is the one you didn’t make” type of argument…


MF. Exactly!


PD. OK, Michelle, so it sounds like, particularly being on the red team, “Hey, you get to hack for money and you don’t go to prison for it, as long as you keep to the rules.”

So, that sounds like fun.

There is an unglamorous side, but which is actually the really important side… the new things you’ve learned to do, can you quantify those in a way that is easily understandable to other people?

I think it’s obvious why that is important, given that today’s cybercrooks: [A] have a lot of money at their disposal, [B] have plenty of time, and [C] they don’t have to play by any rules at all.

But the burning question, particularly for a small or medium-sized business that relies very heavily on IT, is: how do you get into this aspect of cybersecurity if you are a small business?

Do you have to say, “I’m going to go and try and get some security hacker types,” or can you actually do it with partnerships with other people if you want?


MF. So, the best way to go about it is going to almost invariably come down to your available budget, because security teams are, by their very nature, expensive.

As I’ve previously mentioned, they are not generally your moneymaker – they’re investments.


PD. On the other hand, as you said, if it means that you don’t have to pay a $4 million fine to the regulator for leaking your customers’ data, and then spend three years trying to rebuild your business… [LAUGHS] maybe they do pay for themselves?


MF. In a sense, yes, maybe they do pay for themselves.

But similar to how you can hire penetration testing services, you can effectively hire blue team services.

You can hire other organizations to be a remote security operations center for your organization, and hand over as much or as little of that as you’d like, depending on the vendor.


PD. That’s largely the idea behind the Sophos Managed Threat Response service, isn’t it?

We’re not saying, “Well, we want to take over your operations,” we’re just saying, “We’re quite good at cybersecurity. We’re good at noticing the signs of certain sorts of attack like ransomware’s coming in two days, believe us,” and you can get us to help you a little, medium, a lot, either reactively or proactively…

It’s a service that you can buy as much or as little of as you want, to fit the needs of your business at any moment.


MF. And one of the brilliant things about purchasing the service for this is that you benefit from an organization that *did* have the budget to hire someone who already has the expertise of being able to, in theory, think like the other team, as well as do their own job.


PD. And may indeed have dealt with similar sorts of attacks in other networks.

So they have (I hate the paramilitary jargon of cybersecurity, sometimes)… in a way, they’re kind of “battle hardened”, right?

Which is quite valuable when the pressure’s on!


MF. They’ve had the benefit of this, as we’ve been calling it, purple teaming.

Really, it’s the benefit of understanding the other side’s mindset, and applying that to protecting you.

Or, in the case of hiring for penetration testing, red teaming, then they’ve seen so many environments that they have a really good idea of what they think will go undetected in your own network, and what may work as an exploit, because they’ve seen it however many times before.


PD. That’s not something that a company could just learn instantaneously.

So, I imagine that by outsourcing your red teaming, blue teaming, purple teaming, even if only for a while…

It’s actually a great way not just to get started, but if you want to build that expertise yourself, to get those people essentially trained on the job, “learn while doing.”

Is that right?


MF. For purple team exercises, that would definitely help.

You’d probably have to do a decent amount of looking to determine what is the right fit in terms of where you would hire for the correct experience for the level that your team is at, and what can be provided for them in terms of support if it’s needed.


PD. Now, Michelle, another thing – certainly we hear this a lot in comments on Naked Security…

There is this sense that, if you need to build a blue team of your own, some people feel that that’s like an admission of defeat.


MF. I mean, I would say that it’s really an insurance policy.

Sure, they might get in, but at least you insured yourself against dumber ways they could have gotten in that would be even worse for your company’s reputation if it got out that that was what had happened…


PD. Or more importantly, they might get in, but only be able to achieve 10% of their results.

For example, they might be able to set up some accounts where they think they’re going to get back in later… but if you get them in time, before they get around to scrambling your files and trashing your backups, then you’ve headed off that demand, “Hey, you have to pay us $4 million or we won’t give you your data back.”


MF. I mean, there’s always going to be something bigger and worse…

But the problem we’re now running into, as an industry, and as technology improves, is that malware can become so much more complicated in trying to avoid detection.

The only way to pick up something like that is with behavior-based detections.

These are extremely difficult to code for, because human behavior doesn’t translate directly into the data logs, and the data logs from your detections are usually how you then build your alerting.
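As an aside, here’s a minimal, made-up sketch in Python of the kind of behavior-based rule Michelle is describing: instead of matching a malware signature, it flags a successful login that follows a burst of failures. The log format and threshold are invented for illustration:

    from collections import defaultdict

    # Hypothetical authentication log: (user, outcome) pairs in time order
    events = [
        ('alice', 'fail'), ('alice', 'fail'), ('alice', 'fail'),
        ('alice', 'fail'), ('alice', 'fail'), ('alice', 'success'),
        ('bob', 'success'),
    ]

    THRESHOLD = 5          # invented: how many failures count as a "burst"
    failures = defaultdict(int)

    for user, outcome in events:
        if outcome == 'fail':
            failures[user] += 1
        else:
            if failures[user] >= THRESHOLD:
                print(f'ALERT: success after {failures[user]} failures for {user}')
            failures[user] = 0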


PD. Where the attackers are bringing malware, it’s not as though once they’ve released one bit of malware, that’s the only attack they’re going to try.

They might have been in your network for hours, or days, or weeks for all you know… they may even be the equal of your own sysadmins.

The crooks may even have mapped your own network out better than you have.

Therefore, the fact that you see any sign of anomalous behavior… it’s not just, “Oh, we stopped it. We did a good job.”

If you stopped it, that’s great… but you really need to be answering the question, “Where did that come from, and what might happen next time?”


MF. I would say, generally speaking, the most common human element that you’re going to see is in phishing, which is never going to die.

It’s low effort and a low time-sink, with a potentially high payoff if it’s successful, and that is purely down to human behavior.

It’s all social engineering.


PD. Yes!

My understanding is that a lot of the phishing service sellers these days are offering what are essentially human-backed services.

They won’t just offer you, who knows nothing about technology but would like to get into cybercrime… they’re not just saying, “Oh, we’ll write you an email and we’ll put some logos in, and then we’ll run a little website for you.”

They’re offering a whole package where, when someone lands on the phishing page that they’ve made for you, you’ll get an alert.

You can even have a help button where you go in and help the person “phish themselves”, or where, when they give you their two-factor authentication code, you’re actually right there looking at the screen, ready to try it yourself in the 30-second window.

The crooks have learned a lot of patience, it seems, that perhaps they didn’t have when it was all about those super-extra fast-spreading Code Red/SQL Slammer viruses, where the volume was the exciting part.

So, I guess if you want to defend against that, you have to be thinking about more than just, “Oh, well, I’ve got some scanning technology and I’ll look at the reports tomorrow.” [LAUGHS]


MF. Well, part of why phishing isn’t going away is because it’s one of the easiest ways to gain a foothold.

And gaining a foothold is that much harder with most organizations that do have security monitoring, and engage in penetration testing and red team engagements, or have a blue team, and they have had a purple team engagement so that blue teamers can learn to think like red teamers or attackers.


PD. Michelle, I’d like to finish off by giving advice of a slightly different sort.

To those of our listeners who might be interested in getting into this sort of career in cybersecurity, there’s obviously a crying demand for active cybersecurity practitioners.

If you don’t know much about it, but you’d like to get going, where would you recommend that people start?


MF. I recommend the best place to start is with some online research, and looking into free open source training tools.

From there, you can also join online communities and, coronavirus notwithstanding, things may open up into in-person groups in the future again.

If not, there are always the online groups and security communities in every major city for sure, even a lot of the larger towns.

And just meet up with folks there!

You can connect and network, and even find mentors, and learn more about the different areas of the field to find what you’re really wanting to do in it.


PD. Yes, I think you will find that that part of the cybersecurity industry is surprisingly co-operative.

Because our competition in cybersecurity is really the crooks, it’s not each other!


MF. No, it is definitely not each other!


PD. You don’t have to go to the million dollar conferences, do you?

There are lots of events that are fairly low cost.

And, like you say, there are loads of free tools and free training materials online that mean that you don’t have to decide in advance, “Oh, I’m going to spend four years at college or university getting a diploma or a degree,” and then find out that you don’t like it.

You can actually learn as you go… and the community would love to have you, I’d say.


MF. I’ve found that people in cybersecurity are more than happy to share the knowledge they’ve accumulated with anybody who will listen…

To prevent them from making the same newbie mistakes that they did!


PD. Great point!

And lastly, if I may say so, if you are determined to get into cybersecurity and you do want to try things like offensive security, hacking, penetration testing…

*Never* fall into the temptation of doing it on somebody else’s network without their full, express permission in advance.

Or you may never get that job in cybersecurity because nobody’s going to trust you again.


MF. [PRETENDING TO BE FURTIVE] Isn’t that also a crime?


PD. [LAUGHS] Well, yes, there is that, now that you mention it.


MF. [LAUGHS] It may be hard to be hired anywhere else, if you also committed a crime.


PD. Michelle is quite right there, folks!

Digging around in other people’s networks without permission, in most countries of the world, is a criminal offense.

And even just looking is not allowed.

So even if you say, “Oh, I didn’t change anything,” or, “I was doing it with the best will in the world”… unless they said you could, you can’t!

Michelle, thank you so much for your time.


MF. Thank you for having me.


PD. It’s great to hear your passion for building this sort of security expertise that is absolutely real-world.

Particularly hearing it from someone who doesn’t just have a security role, but has a cybersecurity role *inside a cybersecurity company*!

It doesn’t get much more difficult than that.

So, thank you very much for your time, and thanks to everybody who attended this webinar.

And it remains only for me to say…

Until next time, stay secure.


MF. Stay secure.

[FX: MORSE CODE SIGNOFF]



Listen up 2 – CYBERSECURITY FIRST! How to protect yourself from supply chain attacks

Here’s the second in our series of Naked Security Podcast minisodes for Week 4 of Cybersecurity Awareness Month.

To access all four presentations on one page, please go to:
https://nakedsecurity.sophos.com/tag/sos-2021

This article is an interview with Chester Wisniewski, Principal Research Scientist at Sophos, and it’s full of useful and actionable advice on dealing with supply chain attacks.

This year’s big-news cyberattacks on Kaseya and SolarWinds remind us just how hard it is to defend against these threats, so Chester explains how to control the risk.

LISTEN TO THE AUDIO

You can listen directly on Soundcloud.

READ THE TRANSCRIPT

[FX: MORSE CODE GREETING AND SYNTH VOICE]

PD. Hello, everybody, welcome to this Security SOS 2021 webinar.

I’m Paul Ducklin, and today’s guest is Chester Wisniewski.

Hello, Chester.


CW. Hey, Duck!


PD. We’re going to be talking about supply chain attacks.

I think we’ve all got a vague idea what “supply chain attack” means, because it’s an old term, from physical supply chain days…

But there’s a bit more to it when it comes to IT and cybersecurity, isn’t there?


CW. Yes, absolutely.

When I think of “supply chain”, the first thing that comes to mind is a foreign government wanting to break into a military contractor.

Probably pretty difficult to get into one of your Tier One military contractors…

So, instead, you might target a supplier to that company, that maybe provides remote IT access for those people to provide some sort of service, and you come in through the side door, if you will.

And certainly, when we’re talking about IT in particular, more and more IT is being provided as a service – outsourced management, this kind of thing.

And that certainly increases the amount of access that a lot of organizations provide to those trusted third parties that can now be targeted as a side door on the way in.

I don’t want to call it a “backdoor”, but it’s certainly not coming in the front door!


PD. There’s no need for that to be just one step up the chain either, is there?

If you know that you rely on a company to provide you with updates, that are in turn provided by another company, and that they get those updates built by a software vendor somewhere else… you could go after that software vendor’s build process.

They poison the vendor that licenses their stuff.

They poison the update server that your outsourcer relies upon.

And they poison you.


CW. Sure!

And, not only that, you also can look at it as how wide a net might be cast by any given type of supply chain attack.

ASUS computers had some poisoned software, used for driver updating, that appeared to hit millions of computers, but we never figured out which ones the adversaries were eventually going to put their payload on.

They were very scattershot in that case.

Whereas, other times we see much more specificity, only affecting people that are directly victims.

Actually, one I know you wrote about, Duck… I think there was an NPM package, a JavaScript package, in the NPM repository a couple of years ago, that was poisoned to steal cryptocurrency wallets.

Of course, that package might have ended up on thousands of people’s computers, but only a fraction of them maybe had a cryptocurrency wallet that would have kicked that poison-pill package into action to then start stealing those wallets.

Historically, it’s a big national security concern, as it should be, whether other governments might be poison-pilling some of our software and supply chains…

But it’s a whole different kettle of fish now that we see ransomware criminals and others getting involved in the supply chain game, and the outcomes are going to be far more impactful and concerning for the average person’s online safety if criminals continue down this path.


PD. So, in the Kaseya incident that happened recently, where, with essentially one ransomware attack, thousands of networks got ransomed at the same time… I think it’s reasonable to assume that the intention of what was essentially a supply chain attack there was the amplification.

It wasn’t, “Well let’s try and reach everybody so we’ll just get the few people we’re targeting.”

It was, “Let’s get everybody in one go.”

It does show the scale of that problem that, with essentially one intrusion, a thousand networks got hit.


CW. It almost reminds me of a New Age worm, right?

We used to have worms because we had lots of software exposed to the internet that had remotely exploitable holes in it, and then things like WannaCry happened.

Now, as it’s getting harder and harder to write wormable malware, rather than worming through exploits, maybe it’s more efficient to worm through trusted suppliers.


PD. Chester, let’s move on to Part Two, which is, “How do these things typically happen?”

If you like, what are the ingress points that the crooks can use?

Because I think a lot of people have the idea that supply chain attacks… they’re physical things, like they would be if you wanted to substitute defective products in the old days.

Most supply chain attacks – the ones that make the news and that we probably need to be most concerned about – don’t really involve hardware at all, do they?

They could, but actually the clear and present danger is the risk that comes through automatic software updating that percolates downwards through many layers…


CW. Yes, and the origins have been around for a while, but there’s lots of different ways that happens, right?

I mean, we’ve been writing about malware that would automatically infect people’s projects when they were compiling them, for example, through compromising the build environment.

And that’s, of course, one of the ways these attacks still happen.

But the example I raised a few minutes ago of that NPM package being compromised is another way that you might be able to slip in code that does just about anything.

I mean, the example we used was stealing cryptocoin wallets, but there’s nothing that would have stopped that code from providing a backdoor, or delivering further malicious software packages, or being ransomware itself.

The options are limitless when you have the opportunity to introduce code into legitimate software.


PD. In at least one case that I’m aware of, the poisoning of open source package management tools, like Ruby Gems, Packagist packages for PHP, NPM… the crooks who wanted to put the additional “naughty software components” into packages that lots of people used actually joined the community first, and made themselves useful, and hung around a bit, and were willingly given the community keys to the castle.

That was when they unleashed their malicious code on everybody else.

These can be plots that take a long time in the hatching!


CW. Obviously, most code these days needs to be signed in some way, so you need to find a way of getting a signature on it so it will be accepted by the updating system.

There are quite a few different ways of doing it, right?

The example you used is you just “pretend to be friends”, until somebody gives you the keys. [LAUGHS]

We’ve seen nation states for years using malware to break into organizations and steal their signing keys, and then using those keys to sign their own malware.


PD. Yes.


CW. We’ve seen people acquire legitimate certificates from certificate authorities by impersonating legitimate organizations.

And of course the final way is the first example I used, which was compromising the build environment itself so that the company that’s delivering the poisoned payload is inadvertently signing it themselves.
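As an aside on what the consumer of an update can do about this, here’s a minimal sketch in Python of one defence-in-depth check: verifying a downloaded update against a checksum that the vendor publishes out of band. The filename and digest are placeholders; note that this catches tampering in transit or on a download mirror, but not a poisoned build that the vendor itself unknowingly signed and hashed:

    import hashlib

    # Hypothetical digest, fetched separately from the vendor's own site
    PUBLISHED_SHA256 = '<hex digest published by the vendor>'

    h = hashlib.sha256()
    with open('vendor-update.pkg', 'rb') as f:            # hypothetical filename
        for chunk in iter(lambda: f.read(1 << 20), b''):  # read in 1 MB chunks
            h.update(chunk)

    if h.hexdigest() != PUBLISHED_SHA256:
        raise SystemExit('Checksum mismatch - do not install!')
    print('Checksum matches the published value')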


PD. Yes!

That was a huge problem going back more than a decade now, to anybody who remembers the W32/Induc virus, which infected your Delphi build environment, if you were a programmer.

And then every program that you compiled thereafter had this virus in it.

I remember, in our support group, having terrible trouble explaining to people that the reason that this virus was spreading so continually in their organization was that it was coming from inside the house, as it were.


CW. It did reconfirm my theory that all Delphi software is malware… if you’ve ever looked at Brazilian banking Trojans, you’ll know what I’m talking about. [LAUGHS]


PD. [LAUGHS] Yes, that used to be the malware writers’ tool of choice, didn’t it?

Seems it’s probably C-Sharp these days.


CW. Well, of course, that’s exactly what happened in the SolarWinds attack as well, right?


PD. Yes.


CW. SolarWinds, in their attack, were inadvertently signing their own software that contained some of this malicious code that was uncovered in December of 2020.


PD. Yes.

My understanding is that the crooks would inject the malicious file just at the point that the build happened, and then remove it afterwards.

I presume, if you had to do a test build or an out-of-tree build by just copying the files, it would all come out absolutely fine.

But the one that was officially built, and had the imprimatur of the company on it, and was therefore accepted by everyone downstream, was the one that had the malware in it.


CW. Of course, there’s a fifth way to pull this off as well, which was used in the Kaseya attack.

That is using a signed legitimate executable that has a vulnerability in it, and then using that vulnerability in order to inject your own malicious code.

In the case of the Kaseya ransomware from REvil, Microsoft Defender had a vulnerability in it that allowed a rogue DLL to be loaded instead of a legitimate one – a trick called “sideloading”.

And the criminals just used that legitimate binary from Microsoft, with Microsoft’s signature on it, to then inject that malicious ransomware code into the otherwise legitimate Windows Defender process.

That was an older version that was vulnerable, but it still had a Microsoft stamp of approval on it.


PD. The jargon term for that is BYOB, isn’t it?

Short for “Bring Your Own Bug”.


CW. All of this to me, Duck, just demonstrates that this is not a simple problem, right?

This is a challenge for organizations that provide security tools, services, software services, any kind of programs, especially things that rely on being kept up to date because of their critical nature within an enterprise environment.

And there’s a lot of different places that practitioners need to look in order to secure their systems, and ensure that they aren’t exposed to this vulnerability.


PD. Yes, because it’s a rather crashing irony, isn’t it, that if the crooks can poison the components in the updating process that you’re inclined to trust, for example because of their digital signatures… then it’s very hard to put your finger on why you’re full of untrusted code afterwards.

Because, as far as you can see, nobody’s been downloading anything they shouldn’t.

So, Chester, “What to do?”

How do you reduce the risk of supply chain attacks, both as a supplier, let’s start with that, and as a consumer?

What can you do as an IT provider, as a software vendor, as a managed service provider, so that when you say to your customers and your prospects, “We take your cybersecurity seriously,” you really mean it?


CW. Well, certainly one place to start with as a software provider is understanding that the security of your software is only as good as the security of your entire environment that’s used to build and maintain that software.

And that includes the security of your developer’s desktops and how they authenticate, how they’re maintained and patched, that kind of thing, all the way on to the computers that actually compile the code and package that code up for distribution.

So, I think in a lot of cases we focus too singularly on product security itself, and ignore the process with which that software was born, if you will.

The security of all those things around the software that build it is just as important to that software’s security as the code in the software itself.


PD. So, for a company that makes its own software and then publishes it publicly for automatic updates with digital signatures…

That build environment, where the final trusted build is done and where the digital signatures are actually created, should really be at least as secure as any other part of your network.

It doesn’t matter how secure the code is that you put in if the process of constructing the final version can inject insecure code at the last minute.


CW. Yes!

It’s not an easy thing, because you need people to have a lot of flexibility when they’re developing code… it’s not like you want to airgap all your developers, right?

People need to be able to use the internet to search things, and look things up, and access manuals.

There are a million things that often end up with looser security for software engineers compared to the rest of the organization, not tighter.

So it is a very difficult balancing act.

I think it’s one of these things we have to look at similarly to defending our networks in general.

We’re going to do everything we can to prevent it from happening, so we’re going to continuously improve the security of our build environments, our engineering environments, and monitor, monitor, monitor, right?

But on top of that, we can’t spend all our time there.

We also need to think about how would we detect if it occurs, and then how could we respond?

And we can learn from what we saw in some of these other attacks and go, “Well, what would my company do?”

We saw that Kaseya was very quickly able to disable their entire cloud infrastructure while they were investigating what was going on.

So, that was operationally a really good thing – it only took them a few minutes to turn off that infrastructure.

So, we can take lessons from these examples and say, “Well, if this happened to my company, how would I detect it?”

Would I likely hear it because *I* found it because I’m looking, or would I likely hear it from a third party because I’m *not* looking?

And then when I do find out about it, what can I do to respond to that, to minimize the harm to the people that are downstream from me that might be impacted?


PD. I guess one example might be a thought experiment that you can conduct with respect to your entire development process…

Imagine that one of your developers, with the best will in the world, does some kind of update of a package that they use.

And that package uses five other packages, and those packages each use 10 other packages – you know how this goes, where you end up with this huge dependency tree that you don’t realize.

If one of those 274 packages that the package you’re using depends upon were found to be poisoned… how quickly could you replace it with one that wasn’t?

How quickly could you advise your customers on how to find whether they had the poison package in the distribution they downloaded?

And how quickly and how reliably could you fix it in a way that people would be inclined to trust you the second time?

That sounds as though I’m saying you should plan for failure, but really what I’m saying is that the time to practice what to do when something goes wrong is before it happens.

Don’t try and make it up as you go along, because you will not have time.
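As a concrete starting point for Duck’s thought experiment, here’s a minimal sketch (Python 3.8 or later, standard library only) that enumerates the declared dependencies of every package installed in the current environment, so you at least know what you would need to audit or replace if one of them were poisoned:

    from importlib import metadata

    # Walk every installed distribution and print its declared requirements
    for dist in metadata.distributions():
        name = dist.metadata['Name']
        deps = dist.requires or []      # None if no dependencies are declared
        if deps:
            # Drop environment markers (the part after ';') for readability
            print(name, '->', sorted({d.split(';')[0].strip() for d in deps}))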


CW. Absolutely.

I personally reviewed my earthquake kit this weekend, as one of those fun things to do on a Sunday, but there’s a reason for that, right?

Geologically, the likelihood of there being a severe earthquake here [in the Pacific Northwest] is one in 40,000, or something, in a given year.

And that sounds like it’s probably not going to happen… but you know what, I’m going to be pretty grateful for that fresh water and those batteries that aren’t dead that I replaced in my bag this week, if it does happen.

And it only took me a few minutes of thought to be prepared for a crisis event that might happen to my family.

I think we need to be similarly prepared for crisis events in the workplace.

Have we thought about it?

Do we know how we would do it?

Do we know who can approve it if it needs approval to turn something on or off or retract a software package?

And then, when it does happen, if it does happen, you’ll be able to respond in minutes, not days, and that’ll make all the difference to your reputation, to the safety of your customers, and anybody that’s been impacted.


PD. OK, Chester, let’s go to the other side of that coin.

Imagine that we’re not the IT supplier worried about how bad it might be, and how awful it might look if we allow untrusted stuff to float downstream to our paying customers, with our checkmark of approval on it.

But before we go to the final consumers of the stuff coming downstream, what can the people traditionally in the middle, let’s call them service providers, managed service providers…

What can those MSPs do to make sure that they don’t become what you might call an “attack magnifier”, which is I think pretty much what happened in the Kaseya incident, isn’t it?


CW. It is.

Of course, there wasn’t a lot of negligence on the part of the suppliers or the service providers in that case, because it was a zero-day vulnerability being exploited.

But it certainly is a great example of how widely an attack can spread by manipulating service providers and their trusted access to so many people’s computers.

This is something that’s not new, but I think it’s a great example to inform us of how service providers can do a better job of protecting their customers.

This reminds me, about 10 years ago on the Chet Chat [Podcast] that you and I used to do… we were talking a lot about credit card theft, and there was service provider after service provider in that space that managed those little machines you swipe your card through when you’re at restaurants, fast food stores, chemists, and this kind of thing – a lot of that is outsourced to service providers.

Many of those service providers had one password on all 40,000 terminals that they remotely managed.

And we saw how credit card theft after credit card theft was happening by abusing that shared password.


PD. “Password123”.


CW. [LAUGHS] Exactly.

We do still see that in managed service provider environments, even if we’re not talking about credit card machines.

You may have any of six different technicians who are going to provide services to a given customer.

And so it’s much easier to have one password for all the customers – or maybe even one password for each customer, but shared amongst 5, 10, 20 people – which of course means that if those people are dismissed or decide to leave the organization, you don’t change the password, because it’s too hard when 20 different people are using it.

There’s a lot of this behavior still going on.

And certainly that has been abused to distribute ransomware, not just through zero-days like in the Kaseya incident…

In the past 18 months, we saw service providers that specialize in serving dental offices end up deploying ransomware to all their customers.

We saw a similar thing with real estate agents.

There have been many different examples where specialized service providers, who manage large numbers of people in a given space on their behalf, were sharing passwords, were not using multifactor authentication, and had all of the remote access tools directly exposed to the internet.

And so I think those are the three things that come to mind for me specifically when we’re talking about service providers.

Don’t provide access to all of your employees: limit it to the employees that actually need the access.

Make sure they all have unique access, and make sure that access is protected by multifactor authentication.

If you’re a professional technician providing services, you should have no objection to using a security key or an app in order to log into a customer’s environment where full administrative trust has been granted to you.
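For the curious, the six-digit codes from the sort of authenticator app Chester mentions are typically computed with the TOTP algorithm from RFC 6238 – a minimal sketch, using an example base32 secret rather than a real one:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # HMAC-SHA1 over the current 30-second time step (RFC 6238)
        key = base64.b32decode(secret_b32, casefold=True)
        msg = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): pick 31 bits from the digest
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real one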


PD. Exactly.

And if you do work for an MSP and you work with say three out of the 10 customers, the fact that you’re deliberately locked out of helping the other seven customers is not a sign that your employer doesn’t trust you.

It protects you, as much as it protects your employer, as much as it protects the person further down the line.

And I guess that’s an example of what the jargon calls “zero trust”, isn’t it, or “need to know”?

If there are things you do not need to be able to do in order to complete your job, then it’s actually better to be locked out of them, because then nothing can go wrong, whether by accident or by design.


CW. Absolutely.

And a few of our partners that I’ve talked to actually have teams that provide services to different groups of customers.

So if you’re this restaurant customer, you have team A assigned to you, and it’s five or six people so that you can cover shifts, you can cover vacations, you can cover maternity leave, whatever you need to cover.

But it’s not all 75 technicians that can access the team A customer accounts.
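That team model boils down to a simple access check: map each customer to a team, map each team to its technicians, and deny everything else. A toy sketch, with all the names invented:

    # Toy model of per-team customer access (all names invented)
    teams = {
        "team-a": {"alice", "bob", "carol"},
        "team-b": {"dan", "erin"},
    }
    customers = {"restaurant-co": "team-a", "dental-llc": "team-b"}

    def can_access(technician, customer):
        # A technician may touch a customer only via the assigned team
        return technician in teams.get(customers.get(customer), set())

    print(can_access("alice", "restaurant-co"))   # True
    print(can_access("alice", "dental-llc"))      # False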


PD. Right, Chester!

Let’s go to what you might call the mouth of the estuary – the IT consumer.

And I don’t mean consumer as in a home user, necessarily… I mean somebody who accepts things like updates, security advice, security configuration changes, operational configurations from somebody upstream.

What about the person at the end of it all?


CW. Well, I would hope that most organizations have some sort of onboarding process for purchasing software from vendors, and deciding how to evaluate those vendors, and what criteria they must meet in order to qualify to be a vendor to their organization.

That may not occur in really small organizations, although I would still encourage them to do so.

Most organizations do have some sort of process for this.

And what you need to ensure is that security is part of that onboarding process – that the approvals process for them to be onboarded as a vendor ensures they’re up to the quality you expect.

This is a complicated thing to do from the outside, because you’re unlikely to send in your own team of auditors to audit how they do security.

So, it does get rather complicated.


PD. In fact, Chester, somebody sent in a question and his comment was along the lines of:

“Everybody tells me they take my cybersecurity seriously when I sign up for their service. But then they use exactly the same words when they send me one of those, ‘Oh, sorry. We had a data breach’ emails.

“So how on earth am I supposed to tell whether they really do take my cybersecurity seriously or not?”

And that’s the $64,000 question, isn’t it?


CW. Yes.

Because, in essence, what you’re trying to do is judge the maturity level of their security program.

And that sounds like weasel words, but you’re not really trying to assess any given one thing.

You’re trying to look at the whole picture of how seriously they take security, and how far along they are in adopting the latest and best practices.

And sometimes that can be a lot harder for a big, older company than it can be for a young, nimble one, right?

If you look at securing software supply chains, it’s often much easier to do when you’re using modern tooling.

And if you’ve been around for 20 years, you might have old tooling that’s incredibly difficult to swap out for more modern tooling.

So there’s really no hard and fast rule.


PD. On the other hand, you could be dead modern, and you do everything by just saying, “Well, NPM will look after all the dependencies. I only use one module. It will figure out the other 1,879 that I need. And let’s hope none of them got hacked lately.”

So that can cut both ways, can’t it?
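PD’s number isn’t far-fetched, and you can check your own: assuming the same sort of lockfileVersion 2 or 3 package-lock.json as before, counting your transitive dependencies takes a couple of lines:

    import json

    with open("package-lock.json") as f:
        lock = json.load(f)

    # Every entry under node_modules/... is a package npm pulled in for you.
    deps = [p for p in lock.get("packages", {}) if p.startswith("node_modules/")]
    print(len(deps), "packages installed beyond your own code")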


CW. It can.

And so, one of the things that I’ve been telling people to do, and it’s certainly something I do, even as a consumer looking at software… is I like to look at the release notes.

I’m the kind of nerd that reads the terms of service before I install an app on my phone.

So maybe I’m not really fitting the normal mould here, but those release notes, to me, are a key component to say, “All software has vulnerabilities and bugs, and we’re fixing them on a regular basis.”


PD. I agree very strongly with you on that, Chester.

And I think you don’t necessarily need to be technically savvy and understand all the jargon that’s in the release notes.

I think the tone of the company really comes out quite strongly, if you know what to listen for in among the words.


CW. Yes.

I think an organization that’s being open and continually improving their security typically will tell you about it.

They won’t hide it – they’ll give you some sort of detail.

Depending on the size of the company, they may list CVE numbers – vulnerabilities that have been officially registered.
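CVE identifiers follow a fixed CVE-YYYY-NNNN format, so if you want to skim a vendor’s release notes the way Chester describes, even a simple pattern match will pull them out – a sketch, with the filename purely illustrative:

    import re

    with open("release-notes.txt") as f:   # hypothetical filename
        notes = f.read()

    # CVE IDs: "CVE-", a 4-digit year, then 4 or more digits
    for cve in sorted(set(re.findall(r"CVE-\d{4}-\d{4,}", notes))):
        print(cve)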


PD. Yes.


CW. If they’re a smaller company they might not, but they might still make a note saying, “This software has been updated to improve the security. You should apply it now. There were things reported to us by these bug bounty people,” or whatever.

That’s another thing you can look to.

Organizations that run a bug bounty are typically higher up on that security maturity spectrum, because they’re inviting people to scrutinize their software and help them improve the security.

These are all very positive signs that a company feels confident in its ability to defend that software – or website, for that matter, if it’s some sort of cloud service you’re subscribing to.

And if they have had any security incidents, gosh… those root cause analyses that are published very commonly now by a lot of vendors when they have a public security incident are another one of those things that you can get that tone from.

What is their confidence in what they’re telling you?

Are they being open and honest about the details?

If they are, they’re probably learning from that incident and improving, and it’s not necessarily a bad sign, because we all have incidents in the end.


PD. How does that saying go?

“Once is misfortune, twice is carelessness.”


CW. And I think another sign of this stuff, often, is how well the team at that organization is working.

Are all parts of the company involved?

Because, when you have an incident, you want to make sure that your legal team is involved, your communication team is involved, certainly the software developers that may be responsible for the bug, or whatever.

Those groups need to be groups that are comfortable working together – they need to have trust.

You can read the tea leaves on the confidence of that organization in its statements, and on the accuracy of the statements it makes.

Because, when those people are working together, they give you the truth accurately, and they continually provide you updates during the incident, and during the crisis.

Those are all really positive signs that a company takes these things very seriously.

I’d add to that: are there warning signs in their management that they don’t have a tight-knit team?

I go on LinkedIn and if they’re continually rotating in CISOs or CTOs, where they’re there six months and another one comes in, and then they’re there nine months, and then somebody is there three months…

You’re going, “Well, that doesn’t sound like a program that’s well-integrated and mature.”

It sounds like they’re constantly going in different directions… and of course that usually ends poorly.


PD. Chester, I think that’s a great point on which to end…

The idea that, although we’ll never stop all supply chain attacks, collectively – if we all lift our game a little bit, and we lift our game all the time – we can actually do an awful lot to keep ourselves much safer than perhaps we have been in the past.

So, Chester, thank you so much for your time.

Thanks to everybody for listening.

Until next time…


CW. Stay secure.


PD. Stay secure.

[FX: MORSE CODE SIGNOFF]


