
Firefox out-of-band update to 100.0.1 – just in time for Pwn2Own?

Late last week, our Slackware Linux distro announced an update to follow the scheduled-and-expected Firefox 100 release, which came out at the start of the month.

The new version is 100.0.1, and we’re running it happily…

…but when we clicked on What’s new two days later, to see what was new, we were still being told [2022-05-15T19:00Z] to “check back later”.

Similarly, checking for updates via the About dialog in a Firefox version that we had installed directly from Firefox.com informed us that we were currently up-to-date at version 100.0.

Visiting Firefox.com directly didn’t help either, with the 100.0 version shown there as the latest-and-greatest download, too.

Nevertheless, 100.0.1 is available officially from Mozilla’s FTP archive server (though you don’t access it via FTP any more, of course).

According to Ghacks.com, the most significant change in 100.0.1 is that the point release “improves Firefox’s security sandbox on Windows devices.”

A look at Mozilla’s change log and a recent Mozilla Hacks blog post suggests that Ghacks.com has indeed identified the big deal in this released-but-not-yet-released release.

The blog article, entitled Improved Process Isolation in Firefox 100, actually came out the day before the 100.0.1 release was uploaded to the FTP server, as though the changes were already accomplished in the 100.0 release.

As far as we can tell, however, this long-in-gestation security code was ultimately not enabled (or at least wasn’t fully enabled) in 100.0, because the Mozilla change logs include a fix for Bug 1767999, dated shortly after the 100.0 release came out, entitled Re-enable Win32k Lockdown by Default.

(We’ll explain below how this security sandbox code came to be called Win32k Lockdown.)

What’s new in the sandbox?

The Improved Process Isolation report describes a long-running series of changes in Firefox that aim to take advantage of a Windows security setting known long-windedly as PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY.

This isn’t a new security feature – it arrived in Windows 8 – but it’s not a mitigation that you can trivially apply to visual, interactive, graphics-rendering products such as browsers.

Greatly simplified, the SYSTEM_CALL_DISABLE setting allows a process to relinquish the right to make certain risky system calls, notably those Windows API functions that are implemented directly in the kernel for performance reasons.
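
To make that concrete, here’s a minimal sketch, in Python via ctypes and assuming a Windows 8-or-later system, of how a process can voluntarily opt in to the mitigation for itself. (The enum value and flag bit below come from the Windows SDK’s winnt.h; real browsers typically apply the policy to their sandboxed child processes at creation time, not from Python.)

import ctypes

# From the Windows SDK (winnt.h): in the PROCESS_MITIGATION_POLICY enum,
# ProcessSystemCallDisablePolicy == 4.
ProcessSystemCallDisablePolicy = 4

# PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY is a DWORD of flag bits;
# bit 0 is DisallowWin32kSystemCalls.
policy = ctypes.c_uint32(1)

ok = ctypes.windll.kernel32.SetProcessMitigationPolicy(
    ProcessSystemCallDisablePolicy,
    ctypes.byref(policy),
    ctypes.sizeof(policy),
)
print("win32k lockdown:", "enabled" if ok else "failed")

# From here on, any call in this process that enters win32k.sys
# (creating a window, drawing to the screen, and so on) will fail.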

Firefox already splits itself into many separate processes, so that if the browser goes haywire in one tab, the compromised code doesn’t immediately have access to the same memory space as all the other tabs.

Early browsers often ran as a single, monolithic process that dealt with making network connections, managing downloads, rendering remotely-supplied content, executing untrusted JavaScript code, and displaying as many windows or tabs as you had open.

Generally speaking, this boosted performance, because moving data around inside one big process is much easier and faster (albeit more error-prone) than transmitting it between separate processes.

But it meant that exploit code triggered in one browser tab could lead directly to attackers sniffing out passwords, cookies and other confidential content from any other browser tab or window open at the time.

Divide and conquer

Splitting up the browser into multiple co-operating but separate processes means that each has its own memory area that the others can’t see.

Separate processes also allow different parts of the browser to run with different access rights, in accordance with the principle of least privilege (never give yourself more power than you really need, if only to protect you from yourself).

You’d think, therefore, that implementing SYSTEM_CALL_DISABLE would be an obvious and easy change to make to a browser’s web content rendering processes, given that their job is to decode, wrangle, process and display content based on untrusted data received from outside.

That untrusted data could include deliberately booby-trapped images, cunningly crafted font files, malevolent and misbehaving JavaScript code and much more, so voluntarily preventing those processes from making risky in-kernel Windows function calls seems like a must-have security setting.

After all, a bug or a crash in the kernel is much more dangerous than a crash in userland, given that it’s the kernel itself that is supposed to clamp down on misbehaviour in userland code.

If you are looking for a dramatic metaphor, you can think of an exploit in userland as tampering with a witness in a court case, but you can think of an exploit in kernel-land as bypassing the witnesses and subverting the judge and jury instead.

Unfortunately, as the Mozilla coders have had a long time to reflect, the Windows API functions that Microsoft decided to shift from userland to kernel-land …

…were those functions that affected real-time performance the most, and were therefore the most obvious to (and the most complained-about by) users, such as writing to the screen, displaying graphics, and even, as Mozilla discovered, deciding on where to add line breaks into formatted text in languages with complex text-formatting rules.

In other words, any browser rendering process that wants to wrap itself in the added safety of SYSTEM_CALL_DISABLE needs to be able to call on yet another special-purpose process that is itself allowed to call Windows kernel-level API functions in a well-controlled way.

If the helper processes that act as “insulators” between the rendering processes and the kernel miss out any functions that the browser ultimately relies upon (even if they’re only needed occasionally, like those special-case line-break rules), then some websites will simply stop working, or will work incorrectly.
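
Loosely speaking, the resulting architecture looks like the toy sketch below, written in Python with an entirely hypothetical “line_breaks” request type. (A real browser broker speaks a carefully audited IPC protocol and validates every request; this is just the shape of the idea.)

from multiprocessing import Pipe, Process

def broker(conn):
    # The broker is the only process still allowed to make the risky
    # kernel calls; it vets each request before acting on it.
    op, text = conn.recv()
    if op == "line_breaks":           # hypothetical request type
        conn.send([0, len(text)])     # stand-in for the real kernel-assisted answer
    conn.close()

def renderer(conn):
    # A locked-down rendering process can't enter win32k.sys itself,
    # so it asks the broker to do the work on its behalf.
    conn.send(("line_breaks", "text in a complex script"))
    print("renderer received:", conn.recv())
    conn.close()

if __name__ == "__main__":
    ours, theirs = Pipe()
    helper = Process(target=broker, args=(theirs,))
    helper.start()
    renderer(ours)
    helper.join()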

O! What a tangled web we weave, when first we practise to relinquish certain access rights on purpose, and to separate our complex applications into lots of co-operating parts such that each gives up as many risky privileges as it can!

Why now?

Mozilla refers to its use of the SYSTEM_CALL_DISABLE option as Win32k Lockdown, after the name of the Windows driver (win32k.sys) that implements the various kernel-accelerated Windows API calls.

Given that the code was a long time in the making, and apparently nearly-but-not quite ready to be turned on by default in Firefox 100…

…why rush to enable it in an out-of-band 100.0.1 update instead of simply waiting for a future scheduled release?

One guess is hinted at in the summary of the latest Mozilla Channels Meeting, which says, “Reminder: pwn2own is next week (Wed-Fri) and we expect to ship chemspills [Mozilla’s curious metaphor for security-driven rapid release updates] in response… We’ll know the exact schedule closer to the start of the event.”

Pwn2Own, of course, is a famous big-money hacking contest in which products such as browsers, teleconferencing apps and automotive software (where this year’s biggest individual prizes are on offer, topping out at $500,000) are deliberately attacked.

Competitors each get a 30-minute slot on a freshly-imaged computer with the latest operating system and application updates installed to demonstrate a working exploit live in front of the judges.

Lots are drawn to determine the order in which the entrants compete, and the first to “pwn” a product wins the prize.

This means, of course, that only the first exploit that works properly gets disclosed.

The other competitors don’t get the money, but they do get to keep their attacks under their hats, so no one knows whether they found a different type of exploit, or whether it would have worked if they’d drawn an earlier hacking slot.

Was the urgency to get 100.0.1 out because of the proximity of Pwn2Own, in the hope that at least some of the exploits that competitors might bring along would be thwarted by the new Win32k Lockdown protection?

What to do?

You don’t need to do anything, though we sympathise if you were confused by seeing reports that Firefox 100.0.1 was officially available, only to find that it won’t show up as an official update until Monday 2022-05-16 at the earliest.

If you want to update ahead of the majority, you can download 100.0.1 from Mozilla’s FTP server and deploy it yourself, instead of waiting until Firefox’s internal update mechanism decides it’s time.


He cracked passwords for a living – now he’s serving 4 years in prison

What does the word Glib mean to you?

Does it make you think of a popular programming library from the GNOME project?

Do you see it as a typo for glibc, a low-level C runtime library used in many Linux distros?

Do you picture someone with the gift of the gab trying to sell you a product of a type you don’t need with a quality you wouldn’t accept anyway?

In this article, it turns out to be the first name (in Latin script, anyway) of a convicted cybercriminal called Glib Oleksandr Ivanov-Tolpintsev.

Originally from Ukraine, Tolpintsev, who is now 28, was arrested in Poland late in 2020.

He was extradited to the US the following year, first appearing in a Florida court on 07 September 2021, charged with “trafficking in unauthorized access devices, and trafficking in computer passwords.”

In plain English, Tolpintsev was accused of operating what’s known as a botnet (short for robot network), which refers to a collection of other people’s computers that a cybercriminal can control remotely at will.

A botnet acts as a network of zombie computers ready to download instructions and carry them out without the permission, or even the knowledge, of their legitimate owners.

Tolpintsev was also accused of using that botnet to crack passwords that he then sold on the dark web.


The trouble with zombies

Zombie networks can typically be ordered around by their so-called botherder in many different ways.

Co-opted computers can be controlled individually, so each can be set to a different task; groups of zombies can each be assigned one of a set of tasks; or all the zombies can be harnessed simultaneously.

(Don’t forget that the tasks that crooks can and do launch on infected computers include spying on their owners to log keystrokes, take screenshots and identify interesting files, followed by uploading any and all interesting information collected during the data gathering phase.)

When all the bots in a botnet co-operate on the same task, the botherder ends up with what is essentially a massively distributed “cloud supercomputer” that can split up one time-consuming project, such as trying to crack a million different passwords, into hundreds, thousands or even millions of subtasks.

Password cracking is a computer science problem that is sometimes referred to in the jargon as embarrassingly parallel, because the algorithmic process involved in cracking the password hash 499a5cb2 7ca65c36 d239ebce 7af641e5 is entirely independent of cracking, say, 800e8536 0c6997fa 909bb9f5 d0fabe46.

In contrast, in applications such as modelling river flows or making weather forecasts, each computer or node in the network needs to share intermediate results with its neighbours, and they with theirs, and so on, to model the highly dynamic nature of fluids and gases.

This makes the processor interconnections in most supercomputer applications at least as important as the raw computing power of each processor node in the system.

But password cracking in its simplest form can trivially be sliced up into as many sub-tasks as you have processor cores available.

Each processing node needs to communicate with the botherder just twice – once at the start to receive its part of the password list to work on, and once at the end to send back a list of any successful cracks.

Quite literally, the problem scales linearly, so that if it would take you 100 years to crack 1,000,000 passwords on your own computer, then it would take only one year using 100 computers; just over a month with 1000; and under an hour if you had 1,000,000 computers at your disposal.
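
Here’s a toy illustration in Python of why the work splits so cleanly. (The wordlist, the “stolen” hashes and the use of fast, unsalted MD5 are all invented for the example; real password databases are, or at least should be, salted and hashed with something far slower.)

import hashlib
from multiprocessing import Pool

# A made-up "stolen" database of fast, unsalted MD5 password hashes.
targets = {hashlib.md5(p.encode()).hexdigest() for p in ("jemima-1985", "letmein")}

def crack_chunk(words):
    # Each worker grinds through its own slice of candidates; it needs
    # no contact with the other workers until it reports back at the end.
    return [(w, h) for w in words
            if (h := hashlib.md5(w.encode()).hexdigest()) in targets]

if __name__ == "__main__":
    wordlist = ["123456", "password", "letmein", "qwerty", "jemima-1985", "dexndb-8793"]
    chunks = [wordlist[i::4] for i in range(4)]   # deal the candidates out to 4 workers
    with Pool(4) as pool:
        for hits in pool.map(crack_chunk, chunks):
            for word, digest in hits:
                print(digest, "->", word)

Add more workers (or more zombies) and each one simply gets a smaller slice of the list; nothing else changes.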

How big is your botnet?

The US Department of Justice (DOJ) doesn’t say how big Tolpintsev’s botnet was, but does say that he ran a dark web password forum known simply as The Marketplace, and claimed to add about 2000 newly-cracked usernames and passwords to his “sales stock” every week.

If we assume that many, if not most, of Tolpintsev’s illegally-acquired passwords were cracked from password databases stolen from various cloud services, then it’s reasonable to assume that many of the new passwords added to his online catalogue each week came from a randomly chosen pool of users.

In other words, we’re assuming that those 2000 new passwords probably weren’t the logins of 2000 users who all happened to work for the same organisation.

Instead, he probably gave potential password purchasers the chance to buy access to accounts associated with large numbers of different companies. (A cybercriminal doesn’t need a password for every user in your network to break in – one password on its own might be enough for a beachhead inside your business.)

We’re also guessing that Tolpintsev had sources beyond his botnet, because the DOJ’s press release claims that he had a total of 700,000 compromised accounts for sale, including 8000 in the US state of Florida alone, which is presumably why Florida was chosen for his trial.

The DOJ says that the servers for which Tolpintsev claimed to have access credentials…

…spanned the globe and industries, including local, state, and federal government infrastructure, hospitals, 911 and emergency services, call centers, major metropolitan transit authorities, accounting and law firms, pension funds, and universities.

Tolpintsev pleaded guilty in February 2022.

He’s now been sentenced to four years in prison, and ordered to pay back the $82,648 that the DOJ could show he’d “earned” by selling on the passwords he’d cracked.

What to do?

Tolpintsev’s ill-gotten gains, at just over $80,000, may sound modest compared to the multi-million dollar ransoms demanded by some ransomware criminals.

But the figure of $82,648 is just what the DOJ was able to show he’d earned from his online password sales, and ransomware criminals were probably amongst his customers anyway.

So, don’t forget the following:

  • Pick proper passwords. For accounts that require a conventional username and password, choose wisely, or get a password manager to do it for you. Most password crackers use password lists that put the most likely and the easiest-to-type passwords at the top. These list generators use a variety of password construction rules in an effort to generate human-like “random” choices such as jemima-1985 (name and year of birth) ahead of passwords that a computer might have selected, such as dexndb-8793. Stolen password hashes that were stored with a slow-to-test algorithm such as PBKDF2 or bcrypt can slow an attacker down to trying just a few passwords a second, even with a large botnet of cracking computers (see the sketch just after this list). But if your password is one of the first few that gets tried, you’ll be one of the first few to get compromised.
  • Use 2FA if you can. 2FA, short for two-factor authentication, usually requires you to provide a one-time code when you login, as well as your password. The code is typically generated by an app on your phone, or sent in a text message, and is different every time. Other forms of 2FA include biometric, for example requiring you to scan a fingerprint, or cryptographic, such as requiring you to sign a random message with a private cryptographic key (a key that might be securely stored in a USB device or a smartcard, itself protected by a PIN). 2FA doesn’t eliminate the risk of crooks breaking into your network, but it makes individual cracked or stolen passwords much less useful on their own.
  • Never re-use passwords. A good password manager will not only generate wacky, random passwords for you, it will prevent you from using the same password twice. Remember that the crooks don’t have to crack your Windows password or your FileVault password if it’s the same as (or similar to) the password you used on your local sports club website that just got hacked-and-cracked.
  • Never ignore malware, even on computers you don’t care about yourself. This story is a clear reminder that, when it comes to malware, an injury to one really is an injury to all. As Glib Oleksandr Ivanov-Tolpintsev showed, not all cybercriminals will use zombie malware on your computer directly against you – instead, they use your infected computer to help them attack other people.
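
As mentioned in the first item above, a slow, salted password hash makes every individual guess expensive for an attacker. Here’s what that looks like with Python’s built-in PBKDF2 (the iteration count is purely illustrative; pick one your own login hardware can tolerate):

import hashlib, os

salt = os.urandom(16)       # a unique random salt for every stored password
iterations = 200_000        # every guess costs the attacker this much work too
digest = hashlib.pbkdf2_hmac("sha256", b"jemima-1985", salt, iterations)
print(salt.hex(), digest.hex())

# Anyone who steals the hash must repeat all 200,000 HMAC rounds for
# every candidate password, for every user, salt by salt.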

When it comes to cybersecurity, you can’t sit around on the sidelines taking a shrug-your-shoulders-and-see-what-happens approach.

As we’ve said before many times, if you aren’t part of the solution, then you are part of the problem.

Don’t be that person!



Serious Security: Learning from curl’s latest bug update

You may not have heard of Curl (or curl, as it is more properly written), but it’s one of those open source toolkits that you’ve almost certainly used anyway, probably very often, without knowing.

The open source world provides numerous tools of this sort – ubiquitous, widely used in software projects all over the globe, but often invisible or hidden under the covers, and therefore not perhaps as well-appreciated as they ought to be.

SQLite, OpenSSL, zlib, FFmpeg, Minix…

…the list of supply-chain components that are built into hardware and software that you use all the time, often under completely different names, is long.

Curl is one of those tools, and as its own website explains, it’s a “command line tool and library for transferring data with URLs (since 1998).”

It’s part of almost every Linux distribution on the planet, and of many if not most embedded IoT devices, which use it to script things like updates and data uploads; it’s shipped with Apple’s macOS; and it’s handily included with Windows 10 and Windows 11.

You can also build and use curl as a shared library (look for files named libcurl.*.so or CURL*.DLL), so that you can call curl’s code without running a separate process and collecting the output from that, but that still counts as “using curl”.
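
In Python, for instance, you’d typically reach libcurl through the third-party pycurl binding (pip install pycurl), rather than by running the command-line tool and scraping its output. A minimal sketch:

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
c.setopt(pycurl.WRITEDATA, buf)      # collect the response body in memory
c.perform()
c.close()

print("libcurl behind the binding:", pycurl.version)
print(len(buf.getvalue()), "bytes fetched")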

Latest update

The project just pushed out its latest update, fixing six medium-level CVE-numbered bugs, and bringing curl to version 7.83.1.

You can check what version you’ve got with the command curl --version, like this:

$ curl --version
curl 7.83.1 (x86_64-pc-linux-gnu) libcurl/7.83.1 OpenSSL/1.1.1o zlib/1.2.12 brotli/1.0.9 zstd/1.5.2 c-ares/1.18.1 libidn2/2.3.2 libpsl/0.21.1 (+libidn2/2.3.0) libssh2/1.10.0 nghttp2/1.47.0 OpenLDAP/2.6.2
Release-Date: 2022-05-11
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd

(Details in your build may vary depending on what was compiled in.)

The bugs were:

  • CVE-2022-30115. HSTS bypass via trailing dot. HSTS is short for HTTP Strict Transport Security. It’s a cookie-like system by means of which a website that you visit using HTTPS can tell you and the software you use, “Always do this in future! Never use plain old HTTP again, even if the user has an old http:// link buried in a web page or a script somewhere and keeps on using it.” The idea is that by having your HTTP server redirect visitors to the equivalent HTTPS pages, anyone who uses HTTP by mistake will only do so once. After the first redirect, their browser will remember the HSTS flag set by the page they were redirected to, and start off with HTTPS in future. Unfortunately, given that server names can be written with or without a dot at the end (strictly speaking, the domain example DOT com is actually example DOT com DOT), curl could be tricked into treating the two variants as if they were different websites, potentially giving attackers a way of luring curl into unexpectedly using an insecure connection that could therefore be redirected, spied upon or modified in transit. The updated code now recognises these “name pairs” as referring to the same server.
  • CVE-2022-27782. TLS and SSH connection too eager reuse. Curl tries to keep existing web connections open for re-use, given that it’s very common to make repeated requests to the same site, and even to the same directory, when downloading bunches of new files such as updates, image galleries, and so on. Generally speaking, there aren’t any security problems in doing this, as long as re-used connections involve connecting in exactly the same way to the same site. But curl didn’t always check that all the connection settings were the same, so that some security details could have been swapped out in the interim. Those details could include information such as username and password, thus theoretically allowing a sneaky user to piggy-back on a previous connection, even though the authentication credentials they supplied in their own request wouldn’t pass muster if the connection were established from scratch. (This is another slip twixt authentication cup and activation lip bug, like the RubyGems security bypass we wrote about earlier this week.) The updated code now forces a fresh connection unless all the connection settings exactly match those of its previous use.
  • CVE-2022-27781. CERTINFO never-ending busy-loop. This bug could be triggered if you asked curl to extract full details of the so-called chain-of-trust of web certificates in a TLS-protected connection. Curl could get stuck in an infinite loop while chasing its way through the list. Given that you can instruct curl to validate the chain-of-trust without fetching all the certificates for yourself (in fact, you can argue that it’s safer to let curl do the verification for you, in the same way that it’s safer to use a trusted cryptographic library than to knit your own), this bug is unlikely to affect much real-life code. Security logging tools are often interested in examining and recording chain-of-certificate details, but we’re hoping that tools of this sort, developed as they are to look for and record potential anomalies, would include their own infinite-loop protection to detect and kill off security spelunking attempts that never finish.
  • CVE-2022-27780. Percent-encoded path separator in URL host. This is a fascinating bug that reminds you just how hard it is to deal with all the variations and vagaries of how data gets presented in web requests. For example, URLs use the SLASH character in a special way, as a separator, so if you want to encode a slash character into a search term, for instance, you need to encode it in a special way too, using a percent sign followed by its hexadecimal code: %2F. Unfortunately, you could sneak a %2F into the server name part of a curl URL, for example by writing dodgy.invalid%2Fmy.nice.example, which looks to a URL filter as though it’s a server hosted under a domain called nice.example. But when curl actually went to make the connection, the string would be converted into dodgy.invalid/my.nice.example, where the decoded SLASH character would suddenly act as a separator between the server name and the rest of the URL, so the connection would actually be made to the host dodgy.invalid, under the top-level domain invalid, instead (see the sketch just after this list).
  • CVE-2022-27779. Cookie for trailing dot TLD. This was a similar sort of string mismatch error to the first item. Browsers and downloaders are supposed to ignore cookies set for so-called public suffixes, such as .com or .co.uk, so that unscrupulous operators can’t trick you into setting cookies at a domain level under which each subdomain is likely to be under the control of a different owner. As you can imagine, this would mess up the “same origin” policy, and could leak cookie data between two sites that were owned and operated by completely different people. A dot at the end of a domain name could confuse curl’s check, and allow cookies to be set for dangerously high-level domains.
  • CVE-2022-27778. Curl removes wrong file on error. This is a reminder of how different product options may end up competing with one another in ways the programmer never expected. Curl has an option --no-clobber, which prevents a download from overwriting an existing file by mistake. The second and subsequent downloads of the same file have a number appended to create a new and unique name that won’t clobber any existing files. But curl also has --remove-on-error, which says to delete the file you just downloaded if anything goes wrong, to avoid leaving partial or damaged files behind. If you used both options, then a failed download of a file that already existed would leave behind incomplete data in the new-and-unique file (the one with a number at the end), and wrongly delete the original file that --no-clobber was supposed to protect.
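
To see why the percent-encoded slash in CVE-2022-27780 was so sneaky, consider this Python snippet, using the made-up hostnames from the description above:

from urllib.parse import unquote, urlsplit

url = "https://dodgy.invalid%2Fmy.nice.example/robots.txt"

# A naive filter that looks at the text before the first real slash
# sees what appears to be a host under nice.example:
print(url.split("://", 1)[1].split("/", 1)[0])   # dodgy.invalid%2Fmy.nice.example

# But decode the URL before connecting, and the %2F becomes a real
# path separator, silently changing the host:
print(urlsplit(unquote(url)).hostname)           # dodgy.invalid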

What to do?

  • Update to curl 7.83.1 if you can on your own systems. If curl comes as part of your operating system distro, as it typically does on Windows, Macs and Linux, you can probably expect a patch in the next scheduled update. (The fact that none of these bugs are critical, and that none of them have been seen “in the wild”, means that emergency or out-of-band updates are unlikely for closed-source products.)
  • If you have an appliance or other network device, check with your vendor to see if they use curl (it’s a good bet they do) and, if so, when you can expect an update.
  • If you look after software projects of your own, take a look at the curl security advisories page. It’s an excellent example of how to document security updates clearly, cleanly and correctly, with plain-English explanations and a helpful list showing the version number when each bug first entered the codebase, and when it was fixed.

The curl project makes it easy to find out how to report bugs; tells you what to expect when you report them; and even includes a Security item in its drop-down Documentation menu, thus making it clear that security reports are first-class citizens in its software development ecosystem.

One little thing you can do that the curl team hasn’t done yet: add a security.txt file, in a standard format, at a standard well-known place on your website. That way, there’s a canonical place, in a canonical format, where security researchers can find your official bug-reporting channels. You can use ours as an example by looking at sophos.com/security.txt and at sophos.com/.well-known/security.txt.
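
A minimal security.txt in the standardised RFC 9116 format looks something like this (the contact address and URLs below are placeholders):

Contact: mailto:security@example.com
Expires: 2023-05-01T00:00:00.000Z
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy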

Colonial Pipeline facing $1,000,000 fine for poor recovery plans

If you were in the US this time last year, you won’t have forgotten, and you may even have been affected by, the ransomware attack on fuel-pumping company Colonial Pipeline.

The organisation was hit by ransomware injected into its network by so-called affiliates of a cybercrime crew known as DarkSide.

DarkSide is an example of what’s known as RaaS, short for ransomware-as-a-service, where a small core team of criminals create the malware and handle any extortion payments from victims, but don’t perform the actual network attacks where the malware gets unleashed.

Teams of “affiliates” (field technicians, you might say) sign up to carry out the attacks, usually in return for the lion’s share of any blackmail money extracted from victims.

The core criminals lurk less visibly in the background, running what is effectively a franchise operation in which they typically pocket 30% (or so they say) of every payment, almost as though they looked to legitimate online services such as Apple’s iTunes or Google Play for a percentage that the market was familiar with.

The front-line attack teams typically:

  • Perform reconnaissance to find targets they think they can breach.
  • Break into selected companies with vulnerabilities they know how to exploit.
  • Wrangle their way to administrative powers so they are level with the official sysadmins.
  • Map out the network to find every desktop and server system they can.
  • Locate and often neutralise existing backups.
  • Exfiltrate confidential corporate data for extra blackmail leverage.
  • Open up network backdoors so they can sneak back quickly if they’re spotted this time.
  • Gently probe existing malware defences looking for weak or unprotected spots.
  • Pick a particularly troublesome time of day or night…

…and then they automatically unleash the ransomware code they were supplied with by the core gang members, sometimes scrambling all (or almost all) computers on the network within just a few minutes.

Now it’s time to pay up

The idea behind this sort of attack, as you know, is that the computers aren’t wiped out completely.

Indeed, after most ransomware attacks, the Windows operating system still boots up, and the primary applications on each computer will still load, almost as a taunt to remind you just how close you are to, yet how far away from, normal operation.

But all the files that you need to keep your business running – databases, documents, spreadsheets, system logs, calendar entries, customer lists, invoices, bank transactions, tax records, shift assignments, delivery schedules, support cases, and so on – end up encrypted.

You can boot your laptop, load up Word, see all your documents, and even try desperately to open them, only to find the digital equivalent of shredded cabbage everywhere.

Only one copy of the decryption key exists – and the ransomware attackers have it!

That’s when “negotiations” start, with the criminals hoping that your IT infrastructure will be so hamstrung by the scrambled data as to be dysfunctional.

“Pay us a ‘recovery fee’,” say the crooks, “and we’ll quietly provide you with the decryption tools you need to unscramble all your computers, thus saving you the time needed to restore all your backups. If you even have any working backups.”

Of course, they don’t put it quite that politely, as one chilling recording supplied to the Sophos Rapid Response team revealed.

That’s the sort of wall against which Colonial Pipeline found itself about 12 months ago.

Even though law enforcement groups around the world urge ransomware victims not to pay up (as we know only too well, today’s ransomware payments directly fund tomorrow’s ransomware attacks), Colonial apparently decided to hand over what was then $4.4 million in Bitcoin anyway.

Sadly, as you’ll no doubt remember if you followed the story at the time, Colonial ended up in the same sorry state as 4% of the ransomware victims in the Sophos Ransomware Survey 2021: they paid the crooks in full, but were unable to recover the lost data with the decryption tool anyway.

Apparently, the decryptor was so slow as to be just about useless, and Colonial ended up restoring its systems in the same way it would have if it had turned its back on the crooks altogether and paid nothing.

In a fascinating “afterlude” to Colonial’s ransomware payment, the US FBI managed, surprisingly quickly, to infiltrate the criminal operation, to acquire the private key or keys for some of the bitcoins paid over to the criminals, to obtain a court warrant, and to “transfer back” about 85% of the criminals’ ill-gotten gains into the safe keeping of the US courts. If you are a ransomware victim yourself, however, remember that this sort of dramatic claw-back is the exception, not the rule.

More woes for Colonial Pipeline

Now, Colonial looks set to be hit by a further demand for money, this time in the form of a $986,400 civil penalty proposed by the US Department of Transportation.

Ironically, perhaps, it looks as though Colonial would have been in some trouble even without the ransomware attack, given that the proposed fine comes about as the result of an investigation by the Pipeline and Hazardous Materials Safety Administration (PHMSA).

That investigation actually took place from January 2020 to November 2020, the year before the ransomware attack occurred, so the problems that the PHMSA identified existed anyway.

As the PHMSA points out, the primary operational flaw, which accounts for more than 85% of the fine ($846,300 out of $986,400), was “a probable failure to adequately plan and prepare for manual shutdown and restart of its pipeline system.”

However, as the PHMSA alleges, these failures “contributed to the national impacts when the pipeline remained out of service after the May 2021 cyber-attack.”

What about the rest of us?

This may seem like a very special case, given that few of us operate pipelines at all, let alone pipelines of the size and scale of Colonial.

Nevertheless, the official Notice of Probable Violation lists several related problems from which we can all learn.

In Colonial Pipeline’s case, these problems were found in the so-called SCADA, ICS or OT part of the company, where those acronyms stand for supervisory control and data acquisition, industrial control systems, and operational technology.

You can think of OT as the industrial counterpart to IT, but the SecOps (security operations) challenges to both types of network are, unsurprisingly, very similar.

Indeed, as the PHMSA report suggests, even if your OT and IT functions look after two almost entirely separate networks, the potential consequence of SecOps flaws in one side of the business can directly, and even dangerously, affect the other.

Even more important, especially for many smaller businesses, is that even if you don’t operate a pipeline, or an electricity supply network, or a power plant…

…you probably have an OT network of sorts anyway, made up of IoT (Internet of Things) devices such as security cameras, door locks, motion sensors, and perhaps even a restful-looking computer-controlled aquarium in the reception area.

And if you do have IoT devices in use in your business, those devices are almost certainly sitting on exactly the same network as all your IT systems, so the cybersecurity postures of both types of device are inextricably intertwined.

(There is indeed, as we alluded to above, a famous anecdote about a US casino that suffered a cyberintrusion via a “connected thermometer” in a fishtank in the lobby.)

The PHMSA report lists seven problems, all falling under the broad heading of Control Room Management, which you can think of as the OT equivalent of an IT department’s Network Operations Centre (or just “the IT team” in a small business).

These problems distill, loosely speaking, into the following five items:

  • Failure to keep a proper record of operational tests that passed.
  • Failure to test and verify the operation of alarm and anomaly detectors.
  • No advance plan for manual recovery and operation in case of system failure.
  • Failure to test backup processes and procedures.
  • Poor reporting of missing or temporarily suppressed security checks.

What to do?

Any (or all) of the problem behaviours listed above are easy to fall into by mistake.

For example, in the Sophos Ransomware Survey 2022, about 2/3 of respondents admitted they’d been hit by ransomware attackers in the previous year.

About 2/3 of those ended up with their files actually scrambled (1/3 happily managed to head off the denouement of the attack), and about 1/2 of those ended up doing a deal with the crooks in an attempt to recover.

This suggests that a significant proportion (at least 2/3 × 2/3 × 1/2, or just over one-in-five) of IT or SecOps teams dropped the ball in one or more of the categories above.

Those include items 1 and 2 (are you sure the backup actually worked? did you formally record whether it did?); item 3 (what’s your Plan B if the crooks wipe out your primary backup?); item 4 (have you practised restoring as carefully as you’ve bothered backing up?); and item 5 (are you sure you haven’t missed anything that you should have drawn attention to at the time?).

Likewise, when our Managed Threat Response (MTR) team get called in to mop up after a ransomware attack, part of their job is to find out how the crooks got in to start with, and how they kept their foothold in the network, lest they simply come back later and repeat the attack.

It’s not unusual for the MTR investigation to reveal numerous loopholes that aided the crooks, including item 5 (anti-malware products that would have stopped the attack turned off “as a temporary workaround” and then forgotten), item 2 (plentiful advance warnings of an impending attack either not recorded at all or simply ignored), and item 1 (accounts or servers that were supposed to be shut down, but with no records to reveal that the work didn’t get done).

We never tire of saying this on Naked Security, even though it’s become a bit of a cliché: Cybersecurity is a journey, not a destination.

Unfortunately for many IT and SecOps teams these days, or for small businesses where a dedicated SecOps team is a luxury that they simply can’t afford, it’s easy to take a “set-and-forget” approach to cybersecurity, with new settings or policies considered and implemented only occasionally.

If you’re stuck in a world of that sort, don’t be afraid to reach out for help.

Bringing in third-party MTR experts is not an admission of failure – think of it as a wise preparation for the future.

After all, if you do get attacked, but then remove only the end of the attack chain while leaving the entry point in place, then the crooks who broke in before will simply sell you out to the next cybergang that’s willing to pay their asking price for instructions on how to break in next time.


Not enough time or staff? Learn more about Sophos Managed Threat Response:
Sophos MTR – Expert Led Response  ▶
24/7 threat hunting, detection, and response  ▶

