Web vendor CafePress fined $500,000 for giving cybersecurity a low value

CafePress is a web service that lets artists, shops, businesses, fan clubs – anyone who signs up, in fact – turn designs, corporate slogans, logos and the like into fun merchandise they can give away or sell on to others.

The days when you had to put in an order for several hundred coffee mugs (or golf balls, or mousemats, or T-shirts, or hoodies) just to get one with the company name on them are long gone: thanks to online ordering, even one-off merch orders are possible.

Unfortunately, as the US Federal Trade Commission explained last week in a case report bluntly entitled CafePress, In the Matter of, the company wasn’t up to scratch when it came to looking after the personal data of its customers and signed-up sellers.

According to the FTC, the CafePress service experienced a data breach, discovered and reported in early 2019, that was not acted on promptly or effectively, making the ultimate side-effects of the breach much worse than they ought to have been.

In other words, even though the company was itself the victim of a cybercrime, it has nevertheless been censured and fined for what it did (and didn’t do), both before and after this cybercrime took place.

The breach, says the FTC, saw hackers make off with more than 20,000,000 plaintext email addresses and weakly-hashed passwords; millions of unencrypted names, physical addresses, and security questions-and-answers; more than 180,000 unencrypted SSNs (social security numbers); and, for tens of thousands of payment cards, the last four digits of the card plus the expiry date.

The sloppiness of the company’s followup to this sloppiness led to a plain-talking headline on the government’s own press release: FTC Takes Action Against CafePress for Data Breach Cover Up.

Consent order issued

As part of the FTC’s settlement, known in US parlance as a consent order, the owner of CafePress at the time – a company with the quizzical name of Residual Pumpkin – will pay a penalty of $500,000.

Both Residual Pumpkin and the website’s new holding company, Planet Art, will be subject to numerous other conditions, including submitting to security assessments every two years for the next 20 years.

Importantly for any businesses out there that still pay little more than lip service to cybersecurity, the FTC wasn’t unsympathetic to CafePress-the-cybercrime-victim.

But the FTC was deeply critical of CafePress-as-a-21st-century-holder-and-processor-of-personal-information.

In particular, the FTC censured CafePress for the following:

  • Misrepresenting the measures it took to protect personal information.
  • Misrepresenting the steps it took to secure consumer accounts following security incidents.
  • Failing to employ reasonable data security practices.
  • Misrepresenting how it would use email addresses.
  • Misrepresenting the company’s adherence to privacy regulations in the US and the EU.
  • Misrepresenting its intention to honour data deletion requests by customers and sellers.

Cybersecurity no-nos

The FTC picked up explicitly on cybersecurity and data protection no-nos such as:

  • Storing password hashes without salting or stretching, making passwords much easier to crack if a password database gets stolen, as happened in this case.
  • Storing password recovery questions and answers in plaintext, making password resets easier for criminals after a breach.
  • Continuing to allow those stolen recovery answers to be used for password resets for at least six months after claiming to have fixed that problem.
  • Failing to notify users of the breach for several months after it was first reported, and even for several weeks after knowing that stolen customer data was up for sale on the dark web.
  • Failing to follow up on malware infection incidents with any sort of threat analysis to see what security holes might have been opened up via that malware.
  • Failing to notice the takeover of the email account of an employee for several months after that staffer had experienced multiple malware incidents.
  • Failing to investigate efforts to divert employees’ payroll deposits until the third time this criminal activity was reported.
  • Not having any reliable way of receiving and acting on security alerts from bona fide security researchers, customers, or third parties including public sector cybersecurity responders.
  • Neglecting to patch against known vulnerabilities, and continuing to use obsolete software that no longer received patches at all.
  • Charging users a $25 fee for closing down their accounts in the aftermath of the breach.
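By way of contrast with the first two items in that list, here’s a minimal Python sketch of salted-and-stretched password storage using PBKDF2 from the standard library (the salt size and round count here are purely illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=200_000):
    # A fresh random salt per user defeats precomputed "rainbow table" attacks;
    # many PBKDF2 rounds ("stretching") slow down offline password cracking.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, rounds)
    return salt, digest

def check_password(password, salt, expected, rounds=200_000):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, rounds)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password('correct horse battery staple')
assert check_password('correct horse battery staple', salt, stored)
assert not check_password('wrong guess', salt, stored)
```

Note that hashing the same password twice produces different stored digests, because each call picks a fresh salt; that alone stops attackers cracking one stolen hash and unlocking every account that shares the password.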

What to do?

1. Treat cybersecurity as a value to be maximised, not merely as a cost to be minimised. Not only your customers but also the regulators expect you to pay more than lip service to cybersecurity these days.

2. Don’t just remove malware and move on. Cleaning up malware files is a necessary part of your recovery process, but you need to look for other side-effects that the malware could have caused while it was active.

3. Always investigate anomalies. Don’t wait until the third time that cybercriminals try to steal from your staff before you take action to figure out what’s going on.

4. Help security researchers to get hold of you easily. The easiest way is simply to add a text file called security.txt that is visible via your main URL, as you will see if you visit https://sophos.com/security.txt.
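A minimal security.txt file, as defined by RFC 9116, only needs a contact method and an expiry date (the addresses below are placeholders, not real endpoints):

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
```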



If you don’t have the experience or the time to maintain ongoing threat response by yourself, consider partnering with a service like Sophos Managed Threat Response. We help you take care of the activities you’re struggling to keep up with because of all the other daily demands that IT dumps on your plate.

Not enough time or staff? Learn more about Sophos Managed Threat Response:
Sophos MTR – Expert Led Response  ▶
24/7 threat hunting, detection, and response  ▶


OpenSSL patches infinite-loop DoS bug in certificate verification

OpenSSL published a security update this week.

The new versions are 3.0.2 and 1.1.1n, corresponding to the two currently-supported flavours of OpenSSL (3.0 and 1.1.1).

The patch includes a few general fixes, such as error reporting that’s been tidied up, along with an update for CVE-2022-0778, found by well-known bug eliminator Tavis Ormandy of Google’s Project Zero team.

Ormandy himself described the bug as “a fun one to work on”.

The flaw ultimately came down to a program loop that almost always worked correctly, but sometimes didn’t, causing it to iterate infinitely, thus hanging up the program using the offending code and causing what’s known as a DoS, or denial-of-service attack.

Sloppy security code not affected

Amusingly, if we’re allowed to say that, the bug apparently only gets triggered if a program decides to do the right thing when making or accepting a secure connection (e.g. an HTTPS browsing request), and verifies the cryptographic certificate supplied by the other end.

A browser (or an updater, or a login portal, or whatever it might be) that simply accepted the cryptographic credentials of the other end, and didn’t bother to check whether some moderately trustworthy authority had issued them in the first place…

…would, ironically, be unaffected.

In other words, a crook who wasn’t able to get hold of a working forged-or-stolen certificate to bypass your security checks might nevertheless be able to construct a bogus non-working certificate that your computer would choke on while trying to reject it.

Obviously, that’s much less serious than a hole through which an attacker could deceive you cryptographically, causing you willingly to trust something you shouldn’t.

And it’s much less serious than an exploitable vulnerability that could let an attacker implant unwanted software without permission.

But CVE-2022-0778 is still worth knowing about, and the nature of the bug makes a good “teachable moment” for all programmers out there.

Ubiquitous code

As you probably know, OpenSSL is one of the most popular and widely-used cryptographic libraries.

The library ships as a core component of many Unix and Linux distributions, where it’s automatically used by a wide range of other software you may have installed.

OpenSSL is also bundled into numerous applications, even on operating systems such as Windows which provide their own built-in cryptographic library that the app could have used instead.

In other words, your computer or mobile device might have zero, one, some or many copies of OpenSSL, and they don’t all necessarily get updated at the same time.

That, in turn, means that OpenSSL security updates always make a bit of a news splash:

  • Cryptographic bugs get a lot of attention, because we rely so extensively on encryption algorithms these days for both privacy (to avoid being snooped on) and integrity (to avoid being fed fake data). Browsing, email, software updates and online commerce and many more applications can all sneakily be undermined by exploitable holes in cryptographic code.
  • Bugs in widely-used programming toolkits get a lot of attention, because we’re not always sure how widely used these are in our own networks. As the infamous Log4Shell bug reminded us at the end of 2021, even specific holes in software components that we don’t even realise we were using, and that we have never thought to audit before, may expose general holes in our overall IT setup.

Looping the loop

We’re not going to dig into the mathematical algorithm that the buggy code was trying to compute.

All we’ll say is that the OpenSSL function is one used when verifying Elliptic Curve (EC) digital signatures, which are widely used these days because EC cryptography is faster, uses less memory, and requires shorter cryptographic keys than the old favourite known as RSA, for the same level of security.

This reduces the amount of data that needs to be shuffled back and forth when setting up encrypted network connections, and reduces the load on busy servers that may be handling hundreds, thousands or even hundreds of thousands of secure connections a second.

The algorithm involves a mildly esoteric function called BN_mod_sqrt(), short for “modular square root of a Big Number”.

As you probably know, modular arithmetic, sometimes casually called “clock arithmetic”, involves keeping all your intermediate results to a fixed number of digits by taking the remainder after dividing each result by a fixed number known as the modulus.

This is similar to what happens if you add 25 hours to the present time.

If it’s 15:00, then 25 hours in the future comes out at 40 o’clock, but there are only 24 hours in a day, so you observe that 40 / 24 is 1 remainder 16.

Because you are calculating a new time, and not a new date, you simply discard the 1, and keep the 16, which tells you that in 25 hours’ time, it will be 16 o’clock, or 16:00.

If you are from one of those countries that prefers AM and PM to the frankly far superior 24-hour notation, you can just work modulo 12 instead.

So, you take 15:00 and say “15 divided by 12 and keep only the remainder” to get 3; then you take 3 + 25 to get 28, then do “28 divided by 12 and take the remainder” to get 4pm.

In fact, you can “remainderise” 15 down to 3 and “remainderise” 25 down to 1 up front, and then add 3+1 = 4, so that you’re only ever calculating with inputs limited to the range 0…11.
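In Python, the % operator does the “keep only the remainder” step, so the whole clock example above boils down to a couple of lines (the times are the ones from the text):

```python
# 24-hour clock: 25 hours after 15:00 is 16:00
assert (15 + 25) % 24 == 16

# 12-hour clock: "remainderise" each input first, then add, then reduce again
hour = ((15 % 12) + (25 % 12)) % 12   # 3 + 1 = 4, i.e. 4pm
assert hour == 4
```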

This sort of calculation is super-handy when working with the sort of numbers that you typically need in cryptography, which may have hundreds of digits each, not merely two digits as in the hour hand of a clock.

For example, if you square an N-digit number conventionally, you get a 2N digit number; square it again to get the fourth power, and you now have 4N digits.

So, if you want to compute a P-digit power of an N-digit number, where P and N are anything but tiny values, you quickly run out of time and memory to compute or store the result.

With modular arithmetic, however, using an M-digit modulus limits every intermediate result to M digits, making complex (and therefore hard-to-reverse) iterative calculations feasible, even for huge values with hundreds of digits – all your numbers are Big Numbers, but they don’t get bigger and bigger as you proceed.

Each Big Number calculation along the way requires much more work than a conventional computer ADD, MUL or DIV would need, but the calculations never get out of control because the maximum size of each intermediate result is constrained in advance by the repeated modulo operations.
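Python’s arbitrary-precision integers make this easy to see for yourself: the three-argument form of pow() applies the modulus at every step, so even astronomically large powers stay feasible. (The prime below, 2^255 - 19, really is used in elliptic-curve cryptography; the value of x is just an arbitrary test number.)

```python
p = 2**255 - 19              # a 77-digit prime from real-world EC crypto
x = 1234567890123456789

# Computed conventionally, x**(p - 2) would have roughly 10^77 digits,
# far too big ever to store; computed modulo p, every intermediate
# result stays below p, so the answer pops out almost instantly.
y = pow(x, p - 2, p)         # by Fermat's little theorem, y is x's inverse mod p

assert (x * y) % p == 1
assert 0 < y < p
```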

However, some algorithms in modular arithmetic require rather special treatment, and to compute modular square roots you typically use a process called Tonelli-Shanks, named after the two mathematicians who invented it independently (Tonelli in the 1890s, and Shanks in the 1970s).

Once implemented and tested, this sort of code often gets buried in programming libraries, as happened in OpenSSL, and rarely, if ever, gets revisited to look for unlikely (and as-yet-unknown) problems that programmers sometimes refer to quaintly as corner cases.

(Tiling a floor is easy, if repetitious… until you get to the corners of the room, where the usual shapes and sizes just don’t fit.)
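For the curious, here’s a compact (and deliberately unoptimised) Python sketch of the Tonelli-Shanks algorithm for an odd prime modulus, just to show the iterative “boil down” structure at the heart of it; it is not the OpenSSL code itself:

```python
def tonelli_shanks(n, p):
    """Return r with r*r % p == n for odd prime p, or None if no root exists."""
    if n % p == 0:
        return 0
    if pow(n, (p - 1) // 2, p) != 1:
        return None                       # n has no square root mod p
    # Write p - 1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q, s = q // 2, s + 1
    if s == 1:                            # easy case: p % 4 == 3
        return pow(n, (p + 1) // 4, p)
    z = 2                                 # find any quadratic non-residue z
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c = s, pow(z, q, p)
    t, r = pow(n, q, p), pow(n, (q + 1) // 2, p)
    while t != 1:                         # the loop that "boils t down" to 1
        i, t2 = 0, t
        while t2 != 1:                    # find the least i with t^(2^i) == 1
            t2, i = t2 * t2 % p, i + 1
        b = pow(c, 1 << (m - i - 1), p)
        r, c = r * b % p, b * b % p
        t, m = t * c % p, i
    return r
```

For example, tonelli_shanks(10, 13) returns 7, and 7×7 mod 13 is indeed 10; asking for the square root of a non-residue such as 5 mod 13 returns None, which is exactly the “no answer possible” case that the OpenSSL loop was supposed to guard against.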

Don’t sit on the fence

Unfortunately, the OpenSSL implementation of the Tonelli-Shanks algorithm had a bug that was unlikely to show up in normal use, but could be triggered on purpose by feeding in data that would force the code to misbehave.

See if you can spot the flaw in this pseudocode, an iterative computation that is itself computed repeatedly inside another loop that contains it:

 i = 1
 t = bignumtostartfrom()

 while t <> 1 do
    i = i + 1
    if i == maxloops then
       error('no answer possible')
    end
    t = boildowntbysimplifying(t)
 end

Loosely put, this loop counts how many iterations it takes for the number t to “boil down” to the special value 1, based on the function boildowntbysimplifying()…

…but with a maximum number of iterations set so that the loop won’t run forever if no answer can be found.

In modular arithmetic, not all whole numbers have square roots that are also whole numbers, just as in regular arithmetic 36 is a “perfect square” that comes out exactly as 6×6, but 37 can’t be obtained by multiplying any whole number by itself. The loop above is designed to detect this situation by noticing that t has not reduced nicely to 1 after being given a suitable number of iterations to get there.

The problem is that the loop termination only checks whether the ever-increasing loop counter has exactly hit the maximum number of loops allowed.

The first time the code above runs, as you will see in the OpenSSL source file crypto/bn/bn_sqrt.c, the value of maxloops will always be 3 or more, so that the value of i will inevitably approach it upwards from below, until i == maxloops.

But if the code is ever run when maxloops starts out at 0 or 1, implying that the loop should never run at all, then the test if i == maxloops will never become true, because i will already be greater than maxloops the first time round, and i will then keep on running away from maxloops for ever more.
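Here’s a tiny Python illustration of the runaway counter, with a safety cap added for this demo that the real code didn’t have (the function name is invented):

```python
def count_with_equality_check(maxloops, cap=100):
    # Mimics the buggy termination test: bail out only when i == maxloops.
    i = 1
    while True:
        i = i + 1
        if i == maxloops:      # never true once i has already passed maxloops
            return 'stopped in time'
        if i > cap:            # demo-only safety valve; OpenSSL had no such cap
            return 'ran away'

assert count_with_equality_check(3) == 'stopped in time'
assert count_with_equality_check(0) == 'ran away'   # i can never equal 0
```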

Check which side of the fence you’re on

The solution was not to check whether i was exactly “on the fence” that denoted the stopping point, but simply to check which side of the stopping point i was on, and react accordingly:

 i = 1

 while i < maxloops do
    t = nextbignuminsequence(t)
    if t == 1 then
       break
    end
    i = i + 1
 end

 if i >= maxloops then
    error('no answer possible')
 end

This way, if i starts out greater than or equal to maxloops, the while statement (it’s now implemented using a for loop in OpenSSL’s C code, but the nature of the loop is the same) won’t be entered at all, so no infinite loop will occur.

On the other hand, if the “solution reached” outcome t == 1 happens before the loop expires, and the loop exits early, the code will know a result was found in time and the error will not be triggered.
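A tiny Python sketch of the corrected counter logic shows why checking “at or past the line”, rather than “exactly on the line”, can’t run away (the function name is invented for this demo):

```python
def count_with_boundary_check(maxloops):
    # Fixed termination test: stop as soon as i is at or past maxloops.
    i = 1
    while i < maxloops:        # not entered at all if maxloops is 0 or 1
        i = i + 1
    return i                   # i >= maxloops, or maxloops was <= 1 to start

assert count_with_boundary_check(3) == 3
assert count_with_boundary_check(0) == 1   # degenerate input: returns at once
```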

What to do?

  • If you’re an OpenSSL user, consider upgrading to the latest version to shut off this bug. If you have an app that uses OpenSSL but you’re not sure whether it relies on the library code managed by your operating system, or if it contains a copy of its own and needs updating separately, look at the documentation, ask the community, or check with the vendor.
  • If you’re a programmer, use the clearest and most general condition you can for terminating loops. For example, if you need to report an error when a specific count is exceeded, write your code so that it checks whether it has reached or already gone past the finishing line, rather than relying on spotting it only when it’s exactly on the line. This makes the nature of the algorithm and the stopping condition much clearer, and can help save you from errors caused by the code kicking off from an unexpected starting point that’s already on the wrong side of the line.

S3 Ep74: Cybercrime busts, Apple patches, Pi Day, and disconnect effects [Podcast]

You can listen directly on Soundcloud.

With Paul Ducklin and Chester Wisniewski.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


Beware bogus Betas – cryptocoin scammers abuse Apple’s TestFlight system

Last year, we wrote about a research paper from SophosLabs that investigated malware known as CryptoRom, an intriguing, albeit disheartening, nexus in the cybercrime underworld.

This “confluence of criminality” saw cybercrooks adopting the same techniques as romance scammers to peddle fake cryptocurrency apps instead of false love, and fleece victims out of millions.

As you probably know, many romance scammers use online dating sites as a starting point for meeting new “friends”, with the aim of luring trusting victims into bogus relationships – often for months, sometimes for years – in which the victims are manipulated into handing over money on a regular basis.

But dating sites, it turns out, are also a handy way of using fake personas and “chance” meetings to charm people into a very different sort of relationship: one based on cryptocurrency.

Trust without romance

Even if there’s no obvious romantic spark with the imposter, and the imposter makes no attempt to construct one…

…victims of this type of scam nevertheless find themselves connected with someone likeable, and are thus willing to listen to what they say, including their chatter and advice about cryptocurrencies.

And before they know it, victims are taking their “friend’s” advice to access and install a brand new app.

Not an app that’s open to everyone, you understand: this is a dedicated app, a special app, an app for insiders only, that isn’t available on Google Play or the App Store.

Going off-market

As you probably know, going off-market on an Android phone is possible, though not by default (you need to enable off-store apps via a special setting), but on an iPhone, it’s effectively impossible.

Short of jailbreaking your phone (which we don’t recommend: it essentially means hacking your own device on purpose to evade Apple’s security sandbox), you’re stuck with the App Store, which is the one-and-only source of iPhone and iPad apps.

As SophosLabs reported last year, however, cybercriminals were nevertheless able to draw iPhone users into their cryptocoin app scams by using Enterprise Provisioning.

That’s a business-centric iPhone feature that allows private, in-house apps developed by a company for its own use to be deployed directly to company devices.

And if that sounds like a dangerous way to access an app suggested by someone you met on a dating site, make no mistake – it is!

As we explained last time:

The technological basis for these scam apps is surprisingly simple: the crooks persuade you, for example on the basis of a friendship carefully cultivated via a dating site, into giving them the same sort of administrative power over your iPhone that is usually reserved for companies managing corporate-owned devices […]

Typically, [this means] they can remotely wipe them, unilaterally or on request, block access to company data, enforce specific security settings such as lock codes and lock timeouts.

[These scammers] exploit this Enterprise Provisioning feature by tricking you into treating them as if they were your employer, and as if they had a reasonable need or right to exercise almost complete control over your device.

The app you’re told to install in a CryptoRom-style scam is utterly bogus.

You’ll be able to invest; the app will show that you’re getting excellent returns; you may even be able to withdraw some of your “earnings” (which means, in reality, that the crooks are merely letting you take back some of your own money that you already paid in).

This may well boost your confidence, and persuade you to put in more and more money, but when you want to withdraw your “funds”…

…you’ll find you can’t.

The criminals behind the scam will either encourage you not to withdraw, persuading you the next big thing is coming and you can’t afford to miss out; or they’ll claim they have to withhold a substantial “tax” from your withdrawal, to discourage you from taking money out; or they’ll simply run off with everything you’ve invested anyway.

Well, SophosLabs has now revisited the cryptocurrency app-scamming scene to examine the latest incarnations of the CryptoRom scam.

Stay off the chopping block

These scams have spread around the world, but are particularly prevalent in South East Asia, from where they get the name 杀猪盘, an unpleasant metaphor that reflects the attitude of the gangs behind this cybercriminality – the words translate roughly as “chopping block”.

Unfortunately, the scammers have introduced numerous new tricks and techniques for seducing users into installing their “this-software-is-by-invitation-only-and-you-are-lucky-to-get-this-chance” apps, including abusing Apple’s Beta-testing service known as TestFlight:

TestFlight makes it easy to invite users to test your apps and App Clips and collect valuable feedback before releasing your apps on the App Store. You can invite up to 10,000 testers using just their email address or by sharing a public link.

Interestingly, you can only join a TestFlight app’s Beta phase if you first install Apple’s TestFlight app, which is used to collect and collate telemetry from and feedback about the new app. (TestFlight builds only work for 90 days after they’re published, on the grounds that Beta releases are expected to be updated regularly with new versions as bugs are fixed.)

Ironically, however, we suspect that some users will end up being more enthusiastic about the scam if they have to jump through various Apple-centric hoops first, and to agree to be monitored while using the app.

After all, to someone who’s already interested in getting into cryptocurrency, but is worried they’ve left it too late to be part of the vanguard, the TestFlight process may well:

  • Reinforce the idea that the app really is “new” and “novel”, so they’re getting in on the ground floor.
  • Mislead victims into thinking they’re getting privileged access, not offered to everyone.
  • Encourage victims to believe that the TestFlight process means added trustworthiness and safety in the app itself.

Of course, long before the TestFlight 90-day limit is up, the crooks will either have updated the app as a way of “proving” their commitment, or completed what’s known in the jargon as a rug-pull, a metaphor that rather obviously means that the criminals run off with everything.

Flowchart of a typical CryptoRom scam.
Click on the image for the full SophosLabs report.

What to do?

As SophosLabs researcher Jagadeesh Chandraiah warns in the new report:

CryptoRom scams continue to flourish through the combination of social engineering, cryptocurrency, and fake applications. These scams are well-organised, and skilled in identifying and exploiting vulnerable users based on their situation, interests, and level of technical ability. Those who get pulled into the scam have lost tens of thousands of dollars.

To stay clear of online scammers who lure you into trusting relationships with the express purpose of defrauding you, typically over weeks or months, here are our Top Tips:

  • Take your time when “dating site” talk turns from friendship to money. Don’t be swayed by the fact that your new “friend” happens to have a lot in common with you. That needn’t be down to serendipity or because you have a genuine match. The other person could simply have read your various online profiles carefully in advance.
  • Never give administrative control over your phone to someone with no genuine reason to have it. Never click [Trust] on a dialog that asks you to enrol in remote management unless it’s from your employer, and your employer looks after or owns your device.
  • Don’t be fooled by circumstances that imply approval from Apple. The fact that an app is registered with TestFlight doesn’t mean it’s officially vetted and approved by Apple. In fact, it’s the opposite: TestFlight apps aren’t in the App Store yet, because they’re still being developed and could contain bugs, accidentally or deliberately. If anything, you need to trust the developers of a TestFlight app even more than vendors of regular apps, because you’re letting them run experimental code on your device.
  • Don’t be deceived by messaging inside the app itself. Don’t let icons, names and text messages inside an app trick you into assuming it has the credibility it claims. (If I show you a picture of a pot of gold, that doesn’t mean I own a pot of gold!)
  • Listen openly to your friends and family if they try to warn you. Criminals who use dating apps and friendships as a lure think nothing of deliberately setting you against your family as part of their scams. They may even proactively “warn” you not to let potentially “jealous” friends and family in on your investment “secret”. Don’t let the scammers drive a wedge between you and your family as well as between you and your money.



“Russian actors bypass 2FA” warning – what happened and how to avoid it

The US Cybersecurity and Infrastructure Security Agency (CISA) has just put out a bulletin numbered AA22-074A, with the dramatic title Russian State-Sponsored Cyber Actors Gain Network Access by Exploiting Default Multifactor Authentication Protocols and “PrintNightmare” Vulnerability.

To sidestep rumours based on the title alone (which some readers might interpret as an attack that is going on right now), and instead to reinforce the lessons that CISA hopes this incident can teach us, here’s what you need to know.

Fortunately, the overall story is simply and quickly told.

The attack dates back to May 2021, and the victim was a non-government organisation, or NGO, unnamed by CISA.

As far as we can tell, and briefly summarised, the attackers:

  • Got an initial foothold due to a poorly-chosen password.
  • Found an account that had been left inactive for ages, instead of being removed.
  • Re-enrolled the account into the 2FA system, as though the original user were reactivating it.
  • Logged in as this user, sailing past the 2FA part thanks to re-enrolling the account with their own device.
  • Exploited the PrintNightmare vulnerability to get Domain Administrator access.
  • Deliberately broke the 2FA system by messing with its configuration, so it no longer demanded 2FA responses from anyone.

At this point, as you can imagine, the attackers were able to add new accounts without worrying about 2FA; wander around the network; riffle through organisational data stored in the cloud; and snoop on email accounts.

CISA didn’t give any information about how much data was accessed, how long the attackers stayed inside the network, or what, if anything, was exfiltrated.

Those details would have been interesting to read about, to be sure, but they’re not critical to the story.

What’s important is how the attackers got in, and how the infiltration could have been prevented.

What to do?

Our recommendations are:

  • 1. Pick proper passwords. If your users find good passwords hard to invent and remember (and most people do, leading them to fall back on obvious words or phrases instead), consider investing in a password manager for everyone, and showing your staff how to use it.
  • 2. Fully disable or remove dud accounts as soon as you can. Make sure that you have a clear and complete process for removing users and their accounts if they leave the company, or if they switch to a different part of the organisation with a different network. Review unused accounts regularly and get rid of any that are no longer needed.
  • 3. Don’t set up your 2FA to “fail open”. Failing open means that if the system breaks, it starts letting everyone bypass that part of the authentication process, instead of keeping everyone out until the problem is fixed. If you have an authentication system that you consider so unimportant that your policy allows you to skip it for convenience, why have it at all? Build the system robustly enough that it can fail closed, and devise a reliable procedure for recovering it on the rare occasions that it does go wrong.
  • 4. React quickly if key system security features stop working. If you notice that security checks you’d expect to face suddenly stop showing up, don’t treat that as a time-saving treat. Report the anomaly, and investigate why it’s happening as soon as you can.
  • 5. Give staff a single point for reporting problems. The sooner you know something is wrong, the sooner you can investigate. Turn your whole staff into the eyes and ears of your security team by providing an easy-to-remember email address and internal phone number. Encourage reports, investigate promptly, and thank users who do their best to help you, even if what they report turns out to be harmless.
  • 6. Monitor your system logs regularly for risky behaviours such as new account creation. These days, cybercriminals do their best to blend in by choosing account names, computer names and program filenames that match the nomenclature you use yourself. (They typically wander round your network first and make notes, so they can learn how to fit the mould.) Don’t rely on attackers being obvious.

And, of course:

  • 7. Patch early, patch often. We don’t know, given that this incident apparently started in May 2021, whether these attackers knew about the PrintNightmare bug just before it was patched, or first adopted it shortly after it was widely reported. Nevertheless, make prompt patching your watchword: get ahead of the cybercrooks whenever you can, and catch up with patches for zero-day attacks as soon as possible when required.
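The “fail closed” advice in point 3 above can be made concrete with a small Python sketch (the backend classes here are invented stand-ins for a real 2FA service):

```python
class WorkingBackend:
    def __init__(self, secret):
        self.secret = secret
    def check(self, user, code):
        return code == self.secret

class BrokenBackend:
    def check(self, user, code):
        raise ConnectionError("2FA service unreachable")

def verify_2fa(user, code, backend):
    """Fail closed: if the 2FA backend errors out, deny rather than skip."""
    try:
        return bool(backend.check(user, code))
    except Exception:
        return False    # a fail-open design would return True here

assert verify_2fa("alice", "123456", WorkingBackend("123456")) is True
assert verify_2fa("alice", "000000", WorkingBackend("123456")) is False
assert verify_2fa("alice", "123456", BrokenBackend()) is False   # fail closed
```

The whole design decision lives in that except clause: a broken backend denies everyone until it is fixed, which is inconvenient, rather than silently waving everyone through, which is how the attackers in this incident got free rein.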

Things to remember

The title of this CISA bulletin may sound dramatic, but this was not a new type of attack; it did not rely on any previously unknown flaws in 2FA; and it did not rely on hard-to-spot exploits or brand new hacking tools.

(Although the attackers did indeed use the PrintNightmare exploit in this case, they were still able to get inside the network without it.)

Remember that Proactive SecOps + Strong monitoring + Fast response + Safe configuration choices = A better prospect of stopping attackers in time.

If you don’t have the experience or the time to maintain ongoing threat response by yourself, consider partnering with a service like Sophos Managed Threat Response.

We help you take care of the activities you’re struggling to keep up with because of all the other daily demands that IT dumps on your plate.



