Category Archives: News

Mystery iPhone update patches against iOS 16 mail crash-attack

We use Apple’s Mail app all day, every day for handling work and personal email, including a plentiful supply of very welcome Naked Security comments, questions, article ideas, typo reports, podcast suggestions and much more.

(Keep ’em coming – we get far more positive and useful messages than we get trolls, and we’d love to keep it that way: tips@sophos.com is how to reach us.)

We’ve always found the Mail app to be a very useful workhorse that suits us well: it’s not especially fancy; it’s not full of features we never use; it’s visually simple; and (so far anyway), it’s been doggedly reliable.

But there must have been a serious problem brewing in the latest version of the app, because Apple just pushed out a one-bug security patch for iOS 16, taking the version number to iOS 16.0.3, and fixing a vulnerability specific to Mail:

One and only one bug is listed:

Impact: Processing a maliciously crafted email message may lead to a denial-of-service
Description: An input validation issue was addressed with improved input validation.
CVE-2022-22658

“One-bug” bulletins

In our experience, “one-bug” security bulletins from Apple, or at least N-bug bulletins for small N, are the exception rather than the rule, and often seem to arrive when there’s a clear and present danger such as a jailbreakable zero-day exploit or exploit sequence.

Perhaps the best known recent emergency update of this sort was a double zero-day fix in August 2022 that patched against a two-barrelled attack consisting of a remote code execution hole in WebKit (a way in) followed by a local code execution hole in the kernel itself (a way to take over completely):

Those bugs were officially listed not only as known to outsiders, but also as being under active abuse, presumably for implanting some sort of malware that could keep tabs on everything you did, such as snooping on all your data, taking secret screenshots, listening in to phone calls, and snapping images with your camera.

About two weeks later, Apple even slipped out an unexpected update for iOS 12, an old version that most of us assumed was effectively “abandonware”, having been conspicuously absent from Apple’s official security updates for almost a year before that:

(Apparently, iOS 12 was affected by the WebKit bug, but not by the follow-on kernel hole that made the attack chain much worse on more recent Apple products.)

This time, however, there’s no mention that the bug patched in the update to iOS 16.0.3 was reported by anyone outside Apple, or else we’d expect to see the finder named in the bulletin, even if only as “an anonymous researcher”.

There’s also no suggestion that the bug might already be known to attackers and therefore already being used for mischief or worse…

…but Apple nevertheless seems to think that it’s a vulnerability worth issuing a security bulletin about.

You’ve got mail, got mail, got mail…

So-called denial-of-service (DoS) or crash-me-at-will bugs are often regarded as the lightweights of the vulnerability scene, because they generally don’t provide a pathway for attackers to retrieve data they’re not supposed to see, or to acquire access privileges they shouldn’t have, or to run malicious code of their own choosing.

But any DoS bug can quickly turn into a serious problem, especially if it keeps happening over and over again once it’s triggered for the first time.

That situation can easily arise in messaging apps if simply accessing a booby-trapped message crashes the app, because you typically need to use the app to delete the troublesome message…

…and if the crash happens quickly enough, you never quite get enough time to click on the trash-can icon or to swipe-delete the offending message before the app crashes again, and again, and again.

Numerous stories have appeared over the years about iPhone “text-of-death” scenarios of this sort, including:

Of course, the other problem with what we jokingly refer to as CRASH: GOTO CRASH bugs in messaging apps is that other people get to choose when to message you, and what to put in the message…

…and even if you use some kind of automated filtering rule in the app to block messages from unknown or untrusted senders, the app will typically need to process your messages to decide which ones to get rid of.

(Note that this bug report explicitly refers to a crash due to “processing a maliciously crafted email message”.)

Therefore the app may crash anyway, and may keep crashing every time it restarts as it tries to handle the messages it didn’t manage to deal with last time.

What to do?

Whether you’ve got automatic updates turned on or not, go to Settings > General > Software Update to check for (and, if needed, to install) the fix.

The version you want to see after the update is iOS 16.0.3 or later.

Given that Apple has pushed out a security patch for this one DoS bug alone, we’re guessing that something disruptive might be at stake if an attacker were to figure this one out.

For example, you could end up with a barely usable device that you would need to wipe completely and reflash in order to restore it to healthy operation…


LEARN MORE ABOUT VULNERABILITIES

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.


Serious Security: OAuth 2 and why Microsoft is finally forcing you into it

Naked Security meets Sophos X-Ops! (Read or listen according to your preference.)

We dig into OAuth 2.0, a well-known protocol for authorization.

Microsoft calls it “Modern Auth”, though it’s a decade old, and is finally forcing Exchange Online customers to switch to it.

We look at the what, the why and the how of the switch.

With Paul Ducklin and Chester Wisniewski


Find us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.

(Intro and outro music by Edith Mudge.)


READ THE TRANSCRIPT

[MUSICAL MODEM]

DUCK.  Hello everybody.

Welcome to another Naked Security Podcast minisode!

I’m Paul Ducklin, joined as usual by my friend and colleague Chester Wisniewski from Vancouver.

Hello, Chet.


CHET.  Hey Duck, good to be back!


DUCK.  Now, I chose this topic because it just happened to coincide, inadvertently if you like, with the ProxyNotShell/ExchangeDoubleZeroDay problem that Microsoft ran into at the beginning of October 2022…

…and because it involves a thing called OAuth 2, which I know that you are [A] well-informed about, and [B] keen on.

So I figured, “What better confluence of issues than that?”

Exchange Online is finally forcing people to switch from what Microsoft referred to as Basic Auth to a thing called Modern Auth.

So, run us through what this change is all about, and why it is important.


CHET.  Well, I like the word Modern, despite the fact that the RFC that we’re discussing is now ten years old… doesn’t feel incredibly modern! [LAUGHS]

But compared to HTTP Authentication, which was invented in the 1990s in the early browser days, I guess it *does* feel modern in comparison.

As you say, in OAuth, the “Auth” is not authentication, rather it’s authorization.

There’s a lot of complexity, but a lot of benefits that come along with that.

And so if we’re looking at HTTP Authentication, all we’re really talking about is asking you to present a credential, which is, for most of us, a username and password, in order to gain access to something.


DUCK.  And, literally, you just take the username, then put a colon (so you’d better not have a colon in your username), then you put your actual password, then you base64 it…

…and you send it along with the HTTP request and jolly well hope that it’s using TLS and that it’s encrypted, because your password is actually in the request every time.
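(For readers who’d like to see just how basic that is, here’s a minimal Python sketch of the header construction Duck describes, following RFC 7617. The function name is ours, invented for illustration, not part of any real library.)

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # RFC 7617: join "username:password" with a colon, then base64-encode.
    # Base64 is an encoding, not encryption -- without TLS, the password
    # travels with every single request and is trivially recoverable.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"
```

Decoding it back out is just as trivial as encoding it, which is exactly Duck’s point about hoping the connection is using TLS.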


CHET.  Exactly.

And that’s problematic for all kinds of reasons, not to mention, like you say, that if somebody is able to decrypt the traffic then they in essence have access to your password.

The other problem, of course, is that the same password probably authenticates to many other things in your environment, especially if we’re talking about Microsoft Exchange, because that password is definitely my Active Directory password, which I also use to authenticate to every other service in the environment in most cases.

So it’s a very high risk operation to be transmitting [the password] that way.

OAuth decouples all of this a little bit, and says, “We’re not going to tell you how to do authentication, but you should probably do something more rigorous than just asking for a username and password. We’ll leave that up to the implementer.”

Because, as we’ve talked about in many other podcasts, there’s lots of different types of multifactor authentication – text messages, apps that show you six-digit codes, push apps, pull apps, tokens…

…there’s a lot of different things to do.

“We’re not going to tell you how to do it. We’re going to say you should do one of these strong authentication methods, and then, once you know who you’re talking to, we’ll use OAuth to grant you a token that’s independent of your proof of identity, that says what type of access you should have, and how long you should have it.”

And I think that’s the really key part here.

Your password hopefully never expires when you authenticate normally, whereas in this case you can have some expirations involved, you can set limits, and you can also not just grant access to everything a user has access to.

Rather, you can say, “I only want to grant access to a subset or a specific set of permissions.”

And that’s really where the authorization is different than authentication.
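(Conceptually, a grant of this sort pairs a set of permissions with an expiry time, independent of how the user proved who they were. Here’s a toy Python sketch of that idea – the scope names and the dictionary layout are invented for illustration, not taken from any real OAuth implementation.)

```python
import time

def make_grant(scopes: set, lifetime_seconds: float) -> dict:
    # A grant records *what* the holder may do and *until when* --
    # decoupled from the authentication step that preceded it
    return {"scopes": set(scopes), "expires": time.time() + lifetime_seconds}

def allowed(grant: dict, scope: str) -> bool:
    # Access requires both an unexpired grant and the specific scope;
    # anything not explicitly granted is denied
    return time.time() < grant["expires"] and scope in grant["scopes"]
```

The point is that denying `mail:send` doesn’t require a second account or a second password – it’s just a scope the grant never contained.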


DUCK.  If you were trying to do the same thing with Basic Auth…

…if you wanted to have two ways of accessing the email system: one where you could just read the messages, and one where you could read and send messages, or maybe a third mode where you could read, write, and go and delete old messages.

With Basic Auth, you’d essentially need three separate usernames and passwords, wouldn’t you?

You’d need a duck-read, duck-readwrite, and a duck-dothelot.


CHET.  Precisely.

And many of us have experienced this using social media apps or services like Google or Yahoo or other things, where you may authenticate using OAuth, and you’ll get a popup in your browser that says, “This application would like access to read your tweets, but not write your tweets.”

Or, “This application wants to be able to send tweets as you and access your address book.”

It’s basically, literally, listing or enumerating all the different permissions that you’re agreeing that you want this third party to be able to do on your behalf.

And that’s really what all this is about: being able to grant different programs different access to things, in a time-limited fashion as well.

“I only want them to have access for seven days, or 1 hour, or forever, as long as I don’t tell you to revoke it.”


DUCK.  So it’s almost as though the authorization is designed to work bidirectionally, isn’t it?

Which is very different from Basic Auth, where you log in and the other end says, “You need to prove who you are, put in your username and password”, and then you’re in.

Here, with OAuth, the idea is that the server is giving you, the client, the chance to decide whether you agree with the kind of access that you would like that server to grant, possibly to somebody else.

So, that could be a Facebook app run on another server, or it could be authorizing some third party to do some stuff with your data, but not “all or nothing”.

You don’t have to grant somebody access to *everything* in order to grant them access to *something*.


CHET.  Absolutely.

That “division of permission” is really critical.

A lot of listeners to the podcast are probably administrators, so they’re familiar with having to log into their Domain Admin account in order to do administrative stuff, and then log out and log back in as their regular user to do other things, so that they’re not being over-privileged.

And I think there’s a real issue with overprivilege, and when we’re only using usernames and passwords, you’re sort of over-privileged by default.

And OAuth is meant to resolve this, so I think it’s really important when you’re thinking about something like Exchange as well.

Clearly, when you’re logging in from Outlook as a user, you want to be able to read mail, send mail, etc.

But in a forensic investigation, say the lawyers subpoena someone’s email, you could grant an account access to read people’s mail but not tamper with it.

Or you could do different things like that that allow you a lot more granularity.


DUCK.  And I guess another particular benefit is, because the authorization is granted via this access token, that means that whoever’s got that access token doesn’t need to know your password.

It also means that the access token could be revoked, or have an expiry time.

And when it expires, it doesn’t forcibly reset your password at the same time… which would really be the only way to do that with Basic Auth, wouldn’t it?


CHET.  Yes, and it works the exact opposite direction as well.

You may have granted the app on your phone access to something like your email or your Twitter, but you need to change your Twitter password for some reason…

…now you can change your password independently of those tokens being expired, so you don’t automatically necessarily get logged out of everything just because you changed your password.

So that knife can cut both ways.


DUCK.  And another feature, Chester, that OAuth 2 has is the idea of a thing called a “refresh token”, where you can have access tokens that are only valid for a limited time, just in case something goes wrong.

But to renew them, possibly even on a regular basis, the user doesn’t have to deal with a password pop-up or, “Hey, stick your Yubikey in all over again” prompt.

There’s a secure way of dealing with that as well, isn’t there?


CHET.  Yes.

You can, in essence, say, “Every half an hour, I want to expire the token you have, and you can request a new one.”

But it also means that if something fishy is going on and you suspect something may be wrong, you can invalidate those tokens and intentionally force somebody to reauthenticate, just in case.
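(A toy Python sketch of that pattern: short-lived access tokens renewed via a refresh token, with revocation forcing reauthentication. The class and its names are invented for illustration; real OAuth servers expose this via the `refresh_token` grant type rather than in-process calls.)

```python
import secrets
import time

class TokenIssuer:
    """Illustrative only: short-lived access tokens plus a refresh token
    that renews them without re-prompting the user for credentials."""

    def __init__(self, access_ttl: float = 1800.0):  # e.g. half an hour
        self.access_ttl = access_ttl
        self._access = {}       # access token -> expiry timestamp
        self._refresh = set()   # currently valid refresh tokens

    def grant(self):
        # Issued once the user has actually authenticated, by whatever
        # means the implementer chose -- OAuth doesn't prescribe it
        access = secrets.token_urlsafe(16)
        refresh_token = secrets.token_urlsafe(16)
        self._access[access] = time.time() + self.access_ttl
        self._refresh.add(refresh_token)
        return access, refresh_token

    def is_valid(self, access: str) -> bool:
        return time.time() < self._access.get(access, 0)

    def refresh(self, refresh_token: str) -> str:
        # Frictionless renewal: no password prompt needed, but the server
        # can revoke the refresh token at any time to force a fresh login
        if refresh_token not in self._refresh:
            raise PermissionError("refresh token revoked or unknown")
        access = secrets.token_urlsafe(16)
        self._access[access] = time.time() + self.access_ttl
        return access

    def revoke(self, refresh_token: str):
        self._refresh.discard(refresh_token)
```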


DUCK.  So you have a mechanism for making long or medium term access what I guess you would call “frictionless”, but not to the point that you decide that, “Well, once I’ve seen the person’s password, it will remain valid until they decide to log out, at some possibly distant future time.”


CHET.  Yes, that’s what the protocol calls for.

Now, it’s important to remember that some of these details are up to the implementer… so sometimes these tokens are signed, sometimes they’re not.

It really depends on how it’s implemented.

There are some new standards that they’re moving toward, which I believe is going to be called OAuth 2.1, and the goal of that is to take more of these “implementer details” out, and put more of them into the specification to make it more uniform.

Not all the things we’re talking about are necessarily used in every OAuth transaction: some will have refresh tokens, some may not; some may digitally sign tokens, others may not.

And, obviously, those things all lead to different levels of security and flexibility.

But all of this is within the specification, and much of this is implemented in the examples we’ve used today, especially with regard to Microsoft, and social media networks, and Google, etc.
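(One common – though, as Chester says, not mandated – token format is the JSON Web Token: three base64url-encoded parts separated by dots. This Python sketch decodes one for inspection without verifying its signature, which is fine for looking but never for trusting.)

```python
import base64
import json

def decode_jwt_unverified(token: str):
    # A JWT is header.payload.signature; header and payload are
    # base64url-encoded JSON. This does NOT check the signature,
    # so it must only ever be used for inspection, never for auth.
    def b64url(part: str) -> bytes:
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))
    header_b64, payload_b64, _signature = token.split(".")
    return json.loads(b64url(header_b64)), json.loads(b64url(payload_b64))
```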


DUCK.  I guess part of the reason that changes like this do take a long time, and can be controversial, is that Basic Auth *really is* basic; it really is easy.

It’s one RFC – once you’ve read it, you know how to do it; once you’ve implemented it, it’ll work everywhere.

Whereas OAuth 2 is indeed quite complicated, isn’t it?

I’m looking at the oauth.net site now, at the page to do with access tokens…

…and I’ve got a page about one RFC, references to four other RFCs, and then three other articles I can read that are, “These are up to you, we’re not telling you how to do it”.

So it is a lot more complicated!


CHET.  I think the good news is, because OAuth 2 is now ten years old, cloud providers have been using this for some time.

They’ve made mistakes, they’ve found vulnerabilities, they’ve determined ways they thought were good that aren’t so good, and all of those things have gone into those RFCs that you’re referencing that solidify the best practice that’s been learned through this very flexible protocol.

I think the other issue for Microsoft here is that not all of Microsoft’s clients behave well with Modern Auth, depending on how old they are, and depending on your configuration.

And that can be challenging for a lot of environments as well.

Office 2010 did not support Modern Auth at all.

Office 2013 does support Modern Auth, but it’s turned off, so you need to use group policy or some other way to push registry changes to all the computers to enable it.

Office 2016 has it on, but it doesn’t use it by default, so I’m not quite sure what the thought process there was. [LAUGHTER]

So you still have to push another registry key that says, “Use this first”, or “Use it by default”, rather than failing over to it.

And finally, in Office 2019 and Office 365, we see it enabled and on by default.

If you have to push out these registry keys, this might be a good time to review other Microsoft Office policies that you might want to modify.

We haven’t had a podcast on this yet, Duck, but maybe this will be the next minisode: talking about things like managing macros, and how and when they might be executed in Office as well.

So this could be a good time to review those policies if you need to push out some registry keys, if you’re still on Office 2016 or earlier.


DUCK.  That’s a very good point and a very good idea, Chester! (So I think I’ve got a good idea for what’s coming in the near future.)

I’d just like to mention quickly a thing called OATH, O-A-T-H, that’s all capitals.

OAuth is capital O, capital A, little u, little t, little h.

Don’t confuse the two!

My understanding is that OATH… it deals with a little bit more than this, but basically it is a specification that defines the authentication procedure that we know as TOTP [Time-based One Time Password].

That’s the six-digit hashed-secret-mixed-in-with-the-time.

So don’t confuse OATH with OAuth.

You might use TOTP two-factor authentication as part of your authentication when you are implementing open authorization.

But they’re two completely different bodies, two completely different groups, and covered by completely different RFCs.
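(TOTP itself is compact enough to show in full. Here’s a minimal Python implementation of the RFC 6238 algorithm Duck mentions: HMAC-SHA1 over a 30-second counter, dynamically truncated to six digits. Real authenticator apps add niceties such as clock-drift windows, which this sketch omits.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int = None, step: int = 30, digits: int = 6) -> str:
    # Counter = number of time steps since the Unix epoch (RFC 6238)
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step
    # HOTP core (RFC 4226): HMAC-SHA1 over the big-endian 64-bit counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Fed the RFC 6238 test secret (the ASCII digits 1–9 repeated) at T=59, this produces a six-digit code consistent with the published eight-digit test vector 94287082.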


CHET.  One other thing to consider about Exchange Online, if you move to it…

…*when* you move (I shouldn’t say “if”), because you don’t have much choice – you *are* moving to Modern Auth. [LAUGHTER]

The move will likely cut off third-party email programs that only support Basic Authentication.

So there are several apps for Linux, Mac and Windows that allow people to access their Outlook mailboxes without using Microsoft Outlook, but most of those do not support OAuth.

Most of them only do HTTP Basic Authentication.

So those apps will likely break when you move.

You also have the challenge, if you’re still enabling IMAP or POP, that you’ve really made no progress at all.

As much of a fan of IMAP as I am (I’m an old school nerd of IMAP), it is time to move on, especially if you’re in an Exchange Online environment.

And I think you should embrace Modern Auth!


DUCK.  I guess the kind of person who likes to stick to those time-honoured Linux and Unix tools – those amongst us who may still have elms and pines and mutts [LAUGHS], and software like that…

…unfortunately they’re the people who are probably most passionate about retaining those apps.

But it just isn’t going to be possible.

It simply doesn’t bring you the cybersecurity flexibility, the authorization flexibility, that you really need in a zero-trust era.


CHET.  I hear you talking about me… because I was one of those people.

And when Sophos moved to Modern Authentication a few years ago, it broke my cobbled-together solution I had for accessing my mail the way I wanted to access my mail within the Exchange environment.

While I was sad that I lost access using my preferred method of reading my email, I was completely supportive of our team’s move because I knew how much more security it was going to provide to us as users of the product.

And that outweighs any convenience factor I got from reading my Outlook mail in Thunderbird.


DUCK.  [LAUGHS] Thunderbird?! That’s new-fangled, isn’t it, Chester?

Compared to elm [LAUGHTER], or mailx… or mail, even.

So, Chester, it may be Modern to Microsoft; it’s probably middle-aged to most IT departments…

…but, whatever you do, don’t get left behind, because this flexibility in authorization is really the key to the so-called zero-trust world that we pretty much have to move towards, given that absolutely everything is online these days.

Would you agree with that?


CHET.  Absolutely!

Flexibility in how we manage people’s permissions, and flexibility in how we authenticate them, which of course is decoupled from OAuth, as we talked about…

…those things are really important so that we can continue with the best practice that’s going to keep our data safe.


DUCK.  So this is kind of like a bigger version of the old argument that we eventually won, back in the XP days, of “Don’t make all your users administrators.”

It’s really convenient, because it means they can always do everything…

…but it means *they can always do everything*, and that is very rarely what you actually want.

So, Chester, I think that’s a great point on which to end.

Thank you so much for sharing your expertise, and perhaps, more importantly, your passion for this whole issue of online authorization, as distinct from authentication.

Thanks to everybody for listening.

And until next time…


CHET.  Stay secure!

[MUSICAL MODEM]


WhatsApp goes after Chinese password scammers via US court

If you can’t beat ’em, sue ’em!

Actually, the original quote doesn’t quite go like that, but you get the idea: if you can’t stop people downloading bogus, malware-tainted apps that pretend to be backed by your powerful, global brand…

…why not use your powerful, global brand to sue the creators of these rogue malware-spreading apps instead?

This isn’t a new technique (legal action by IT industry giants has helped to take down malicious websites and malware distribution services before), and it won’t stop the next wave of perpetrators from taking up where the last lot left off.

But anything that makes it more difficult for malware peddlers to operate in plain sight is worth a try.

WhatsApp on the offensive

WhatsApp, together with its parent company Meta, has started legal action against three companies that it claims “misled over one million WhatsApp users into self-compromising their accounts as part of an account takeover attack.”

Loosely speaking, self-compromise in this context refers to app-based phishing: create a bogus login dialog that keeps an unauthorised copy of anything you enter, including personal data such as passwords.

As you can probably imagine, and as WhatsApp claims in its court filing, the primary value of these compromised accounts to the alleged infringers was that they could be used for “sending commercial spam messages”.

Unlike the email ecosystem, where anybody can email anybody (or, in the case of bulk message senders, where somebody can email everybody), messaging and social media apps such as WhatsApp are based on closed groups.

This sort of online world isn’t anywhere near as easy for spammers and scammers to infiltrate.

Indeed, we know plenty of people who hardly use email at all any more, preferring to communicate with friends and family via exactly this sort of closed group, mainly because it sidesteps the flood of intrusive and unwanted garbage they face via email.

Of course, the flip-side of a closed-group messaging ecosystem is that you’re more likely to believe, or at least to take a look at, stuff you receive from people you know.

You’re unlikely to open documents or click on links that clearly came from an email sender you’ve never met before, don’t want to meet, and never will…

…but even if you know that your cousin Chazza is prone to sharing groanworthy memes and eyebrow-lifting videos, you probably still take a look at them, because you know what to expect already, and, hey, it’s your cousin, not some totally random online sender.

In other words, if scammers can get into your social media accounts, they not only get access to your people-I’m-happy-to-chat-to list, but also acquire the ability to spam that list of people-who-are-happy-to-hear-from-you with messages that were apparently sent with your blessing.

Unfortunately, it’s not enough just to trust the sender, because you have to trust the sender’s device and their account as well.

Social network spamming and scamming based on compromised accounts is a bit like Business Email Compromise (BEC), where crooks go to the trouble of getting access to an official email account inside a company.

This means they’re in a position to trick the employees of that company much more convincingly than they could as outside senders:

Named and shamed

WhatsApp named three companies in the lawsuit, operating in South East Asia under three different brand names.

The companies are Rockey Tech HK Ltd (Hong Kong), Beijing Luokai Technology Co. Ltd (PRC), and Chitchat Technology Ltd (Taiwan).

The brand names under which WhatsApp alleges they peddled fake apps and addons are HeyMods, Highlight Mobi, and HeyWhatsApp.

Very simply put, WhatsApp is arguing that the defendants knew perfectly well that their behaviour did not comply with Meta’s various terms and conditions, and that the purpose of violating those terms and conditions was to get access to and abuse legitimate users’ accounts.

The court document filed by WhatsApp includes a screenshot of the allegedly rogue app called HeyWhatsApp Android that ended up on alternative Android download market Malavida, where the app description quite openly warns users:

“WhatsApp does not authorise the use of these [modification tools] at all, so downloading HeyWhatsApp […] can lead to being banned from the service […] Neither does it guarantee correct functioning, meaning that we often encounter a lack of stability.”

Other rogue apps in the lawsuit, says Meta, were available in the Google Play Store itself, meaning not only that they received Google’s official imprimatur, but also potentially reached a much wider audience (and probably an audience with more cautious attitudes to cybersecurity).

One of these apps was downloaded more than 1,000,000 times, say the plaintiffs, and a second app exceeded 100,000 downloads.

As WhatsApp wryly states, “Defendants did not disclose on the Google Play Store or in its Privacy Policies that this application contained malware designed to collect the user’s WhatsApp authentication information.”

(As an equally wry aside, we can’t help but wonder how many people would have installed the app anyway, even if the defendants had admitted in advance that “this software steals your password”.)

What to do?

  • Avoid going off-market if you can. As this case reminds us, plenty of malware makes it past Google Play’s automated “software vetting” process, but there are at least some basic cybersecurity checks and balances applied by Google. In contrast, many off-market Android download sites quite deliberately take an “anything goes” approach, and some even pride themselves on accepting apps that Google rejected.
  • Consider a third-party cybersecurity app for your Android. Apps from cybersecurity specialists help you detect and block a wide range of rogue websites and malicious apps, even if Google’s Play Store lets them through. (Yes, Sophos has one, and it’s free.)
  • If it sounds too good to be true, it is too good to be true. Do you really need to change the WhatsApp colours? If the official app won’t let you do so, why would you trust one that claims to have discovered a workaround? In particular, don’t pay much, or even any, attention to the crowd-sourced ratings on app download sites, including Google Play itself. Those reviews could have been left by anyone.
  • Regularly remove apps that you don’t really need or aren’t using much. Loosely speaking, the more apps you have on your phone, the bigger your attack surface area, and the more likely you’ll end up giving away personal data you didn’t mean to. Why give house room to apps that aren’t serving a clear and useful purpose?

Be especially wary of apps that claim they’re only available on alternative download sites for intriguing-sounding reasons such as “Google doesn’t want you to have this app because it reduces their ad revenue”, or “this investment app is by invitation only, so don’t share this special link with anyone”.

There are many legitimate and useful apps that don’t align with Google’s business and commercial rules, and that will therefore never make it into the competitive world of Google Play…

…but there are many, many more apps that get rejected by Google because they clearly contain cybersecurity flaws, either due to programmers who were lazy, incompetent or both, or because the creators of the app were unreconstructed cybercriminals.

As we like to say: If in doubt/Leave it out.


S3 Ep103: Scammers in the Slammer (and other stories) [Audio + Text]

SCAMMERS IN THE SLAMMER (AND OTHER STORIES)

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Microsoft’s double zero-day, prison for scammers, and bogus phone calls.

All that, and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody. I am Doug Aamoth.

He is Paul Ducklin…


DUCK.  It’s a great pleasure, Douglas.


DOUG.  I have some Tech History for you and it goes way back, way, way, way back, and it has to do with calculators.

This week, on 7 October 1954, IBM demonstrated the first-of-its-kind all-transistor calculator.

The IBM Electronic Calculating Punch, as it was called, swapped its 1250 vacuum tubes for 2000 transistors, which halved its volume and used just 5% as much power.


DUCK.  Wow!

I hadn’t heard of that “604”, so I went and looked it up, and I couldn’t find a picture.

Apparently, that was just the experimental model, and it was a few months later that they brought out the one you could buy, which was called the 608, and they’d upped it to 3000 transistors.

But remember, Doug, this is not transistors as in integrated circuits [ICs] because there were no ICs yet.

Where you would have had a valve, a thermionic valve (or a “toob” [vacuum tube], as you guys would call it), there’d be a transistor wired in instead.

So although it was much smaller, it was still discrete components.

When I think “calculator”, I think “pocket calculator”…


DOUG.  Oh, no, no, no!


DUCK.  “No”, as you say…

…it’s the size of a very large refrigerator!

And then you need a very large refrigerator next to it, in the photo that I saw, that I think is for input.

And then there was some other control circuitry which looked like a very large chest freezer, next to the two very large refrigerators.

I didn’t realise this, but apparently Thomas Watson [CEO of IBM] at that time made this decree for all of IBM: “No new products are allowed to use valves, vacuum tubes. We’re absolutely embracing, endorsing and only using transistors.”

And so that was where everything went thereafter.

So, although this was in the vanguard of the transistor revolution, apparently it was soon superseded… it was only on the market for about 18 months.


DOUG.  Well, let’s stay on the subject of very large things, and update our listeners about this Microsoft Exchange double zero-day.

We’ve covered it on a minisode; we’ve covered it on the site… but anything new we should know about?


DUCK.  Not really, Douglas.

It does seem not to have taken over the cybersecurity world or security operations [SecOps] like ProxyShell and Log4Shell did:

I’m guessing there are two reasons for that.

First is that the actual details of the vulnerability are still secret.

They’re known to the Vietnamese company that discovered it, to the Zero Day Initiative [ZDI] where it was responsibly disclosed, and to Microsoft.

And everyone seems to be keeping it under their hat.

So, as far as I know, there aren’t 250 proof-of-concept “try this now!” GitHub repositories where you can do it for yourself.

Secondly, it does require authenticated access.

And my gut feeling is that all of the wannabe “cybersecurity researchers” (giant air quotes inserted here) who jumped on the bandwagon of running attacks across the internet with ProxyShell or Log4Shell, claiming that they were doing the world a service: “Hey, if your web service is vulnerable, I’ll find out, and I’ll tell you”…

…I suspect that a lot of those people will think twice about trying to pull off the same attack where they have to actually guess passwords.

That feels like it’s the other side of a rather important line in the sand, doesn’t it?


DOUG.  Uh-huh.


DUCK.  If you’ve got an open web server that’s designed to accept requests, that’s very different from sending a request to a server that you know you are not supposed to be accessing, and trying to provide a password that you know you’re not supposed to know, if that makes sense.


DOUG.  Yes.


DUCK.  So the good news is it doesn’t seem to be getting widely exploited…

…but there still isn’t a patch out.

And I think, as soon as a patch does appear, you need to get it quickly.

Don’t delay, because I imagine that there will be a bit of a feeding frenzy trying to reverse-engineer the patches to find out how you actually exploit this thing reliably.

Because, as far as we know, it does work pretty well – if you’ve got a password, then you can use the first exploit to open the door to the second exploit, which lets you run PowerShell on an Exchange server.

And that can never end well.

I did take a look at Microsoft’s Guideline document this very morning (we’re recording on the Wednesday of the week), but I did not see any information about a patch or when one will be available.

Next Tuesday is Patch Tuesday, so maybe we’re going to be made to wait until then?


DOUG.  OK, we’ll keep an eye on that, and please update and patch when you see it… it’s important.

I’m going to circle back to our calculator and give you a little equation.

It goes like this: 2 years of scamming + $10 million scammed = 25 years in prison:


DUCK.  This is a criminal – we can now call him that because he’s not only been convicted, but sentenced – with a dramatic sounding name: Elvis Eghosa Ogiekpolor.

And he ran what you might call an artisan cybergang in Atlanta, Georgia in the United States a couple of years ago.

In just under two years, they feasted, if you like, on unfortunate companies who were the victims of what’s known as Business Email Compromise [BEC], and unfortunate individuals whom they lured into romance scams… and made $10 million.

Elvis (I’ll just call him that)… in this case, he had got a team together who created a whole web of fraudulently opened US bank accounts where he could deposit and then launder the money.

And he was not only convicted, he’s just been sentenced.

The judge obviously decided that the nature of this crime, and the nature of the victimisation, was sufficiently serious that he got 25 years in a federal prison.


DOUG.  Let’s dig into Business Email Compromise.

I think it’s fascinating – you’re either impersonating someone’s email address, or you’ve gotten a hold of their actual email address.

And with that, once you can get someone on the hook, you can do a whole bunch of things.

You list them out in the article here – I’ll go through them real quick.

You can learn when large payments are due…


DUCK.  Indeed.

Obviously, if you’re mailing from outside, and you’re just spoofing the email headers to pretend that the email is coming from the CFO, then you have to guess what the CFO knows.

But if you can log into the CFO’s email account every morning early on, before they do, then you can have a peek around all the big stuff that’s going on and you can make notes.

And so, when you come to impersonate them, not only are you sending an email that actually comes from their account, you’re doing so with an amazing amount of insider knowledge.
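(That difference between forged headers and a genuinely compromised account often shows up in the raw message itself. As a purely illustrative sketch, with made-up domains, here’s how Python’s standard-library email module can compare the advertised From: domain with the Return-Path: domain the message actually came back through – a crude heuristic that catches naive header spoofing, though a crook sending from inside the real account sails straight past it:)

```python
from email.parser import Parser
from email.utils import parseaddr

# Hypothetical raw headers: the From: line claims to be the CFO's
# domain, but the message actually bounced through an unrelated server.
raw = """\
Return-Path: <bulk@mailblast.example.net>
From: "The CFO" <cfo@bigcorp.example.com>
Subject: Urgent wire transfer
"""

msg = Parser().parsestr(raw, headersonly=True)

def domain_of(header_value):
    # parseaddr() splits 'Name <addr>' into (name, addr)
    addr = parseaddr(header_value or "")[1]
    return addr.rpartition("@")[2].lower()

from_dom = domain_of(msg["From"])            # bigcorp.example.com
bounce_dom = domain_of(msg["Return-Path"])   # mailblast.example.net

if from_dom != bounce_dom:
    print(f"Suspicious: From={from_dom} but Return-Path={bounce_dom}")
```

(In real life, mail servers do a stronger version of this check via SPF, DKIM and DMARC – none of which helps once the attacker owns the genuine mailbox.)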


DOUG.  And then, of course, when you get an email where you ask some unknowing employee to wire a bunch of money to this vendor and they say, “Is this for real?”…

…if you’ve gotten access to the actual email system, you can reply back. “Of course it’s real. Look at the email address – it’s me, the CFO.”


DUCK.  And of course, even more, you can say, “By the way, this is an acquisition, this is a deal that will steal a march on our competitors. So it’s company confidential. Make sure you don’t tell anybody else in the company.”


DOUG.  Yes – double whammy!

You can say, “It’s me, it’s real, but this is a big deal, it’s a secret, don’t tell anyone else. No IT! Don’t report this as a suspicious message.”

You can then go into the Sent folder and delete the fake emails that you’ve sent on behalf of the CFO, so no one can see that you’ve been in there rummaging around.

And if you’re a “good” BEC scammer, you will go and dig around in the real employee’s former emails, and match the style of that user by copying and pasting common phrases that person has used.


DUCK.  Absolutely, Doug.

I think we’ve spoken before, when we’ve talked about phishing emails… about readers who’ve reported, “Yes, I got one like this, but I rumbled it immediately because the person used a greeting in their email that is just so out of character.”

Or there were some emojis in the sign-off, like a smiley face [LAUGHTER], which I know this person just would never do.

Of course, if you just copy-and-paste the standard intro and outro from previous emails, then you avoid that kind of problem.

And the other thing, Doug, is that if you send the email from the real account, it gets the person’s real, genuine email signature, doesn’t it?

Which is added by the company server, and just makes it look like exactly what you’re expecting.


DOUG.  And then I love this dismount…

…as a top notch criminal, not only are you going to rip the company off, you’re also going to go after *customers* of the company saying, “Hey, can you pay this invoice now, and send it to this new bank account?”

You can defraud not just the company, but the companies that the company works with.


DUCK.  Absolutely.


DOUG.  And lest you think that Elvis was just defrauding companies… he was also romance scamming as well.


DUCK.  The Department of Justice reports that some of the businesses they scammed were taken for hundreds of thousands of dollars at a time.

And the flip side of their fraud was going after individuals in what’s called romance scams.

Apparently there were 13 people who came forward as witnesses in the case, and two of the examples that the DOJ (the Department of Justice) mentioned went for, I think, $32,000 and $70,000 respectively.


DOUG.  OK, so we’ve got some advice how to protect your business from Business Email Compromise, and how to protect yourself from romance scams.

Let’s start with Business Email Compromise.

I like this first point because it’s easy and it’s very low hanging fruit: Create a central email account for staff to report suspicious emails.


DUCK.  Yes, if you have security@example.com, then presumably you’ll look after that email account really carefully, and you could argue that it’s much less likely that a Business Email Compromise crook would be able to compromise the SecOps account than the account of any other random employee in the company.

And presumably also, if you’ve got at least a few people who can keep their eye on what’s going on there, you’ve got a much better chance of getting useful and well-intentioned responses out of that email address than just asking the individual concerned.

Even if the CFO’s email hasn’t been compromised… if you’ve got a phishing email, and then you ask the CFO, “Hey, is this legit or not?”, you’re putting the CFO in a very difficult position.

You’re saying, “Can you act as though you’re an IT expert, a cybersecurity researcher, or a security operations person?”

Much better to centralise that, so there’s an easy way for people to report something that looks a little bit off.

It also means that if what you would do normally is just to go, “Well, that’s obviously phishing. I’ll just delete it”…

…by sending it in, even though *you* think it’s obvious, you allow the SecOps team or the IT team to warn the rest of the company.


DOUG.  All right.

And the next piece of advice: If in doubt, check with the sender of the email directly.

And, not to spoil the punchline, probably not via email, but by some other means…


DUCK.  Whatever the mechanism used to send you a message that you don’t trust, don’t message them back via the same system!

If the account hasn’t been hacked, you’ll get a reply saying, “No, don’t worry, all is well.”

And if the account *has* been hacked, you’ll get back a message saying, “Oh, no, don’t worry, all’s well!” [LAUGHS]


DOUG.  All right.

And then last, but certainly not least: Require secondary authorisation for changes in account payment details.


DUCK.  If you have a second set of eyes on the problem – secondary authorisation – that [A] makes it harder for a crooked insider to get away with the scam if they’re helping out, and [B] means that no one person, who’s obviously trying to be helpful to customers, has to bear the entire responsibility and pressure for deciding, “Is this legit or not?”

Two eyes are often better than one.

Or maybe I mean four eyes are often better than two…


DOUG.  Yes. [LAUGHS].

Let’s turn our attention to romance scams.

The first piece of advice is: Slow down when dating talk turns from friendship, love or romance to money.


DUCK.  Yes.

It’s October, isn’t it, Doug?

So it’s Cybersecurity Awareness Month once again… #cybermonth, if you want to keep track of what people are doing and saying.

There’s that great little motto (is that the right word?) that we have said many times on the podcast, because I know you and I like it, Doug.

This comes from the US Public Service…


BOTH.  Stop. (Period.)

Think. (Period.)

Connect. (Period.)


DUCK.  Don’t be in too much of a hurry!

It really is a question of “transact in haste, repent at leisure” when it comes to online matters.


DOUG.  And another piece of advice that’s going to be tough for some people… but look inside yourself and try to follow it: Listen openly to your friends and family if they try to warn you.


DUCK.  Yes.

I have been at cybersecurity events that have dealt with the issue of romance scamming in the past, when I was working at Sophos Australia.

It was wrenching to hear tales from people in the police service whose job is to try and intervene in scams at this point…

…and just to see how glum some of these cops were when they’d come back from visiting.

In some cases, whole families had been lured into scams.

These are more of the “financial investment” type, obviously, than the romance sort, but *everybody* was onside with the scammer, so when law enforcement went there, the family had “all the answers” that had been carefully provided by the crook.

And in romance scams, they will think nothing of courting your romantic interest *and* driving a wedge between you and your family, so you stop listening to their advice.

So, just be careful that you don’t end up estranged from your family as well as from your bank account.


DOUG.  All right.

And then there’s a final piece of advice: There’s a great video embedded inside the article.

The article is called Romance Scammer and BEC Fraudster sent to prison for 25 years:

So watch that video – it’s got a lot of great tips in it.

And let’s stay on the subject of scams, and talk about scammers and rogue callers.

Is it even possible to stop scam calls?

That’s the big question of the day right now:


DUCK.  Well, there are scam calls and there are nuisance calls.

Sometimes, the nuisance calls seem to come very close to scam calls.

These are people who represent legitimate businesses, [ANNOYED] but they just won’t stop calling you, [GETTING MORE AGITATED] no matter that you tell them “I’m on the Do Not Call list [ANGRY] so DO NOT CALL AGAIN.”

So I wrote an article on Naked Security saying to people… if you can bring yourself to do it (I’m not suggesting you should do this every time, it’s a real hassle), it turns out that if you *do* complain, sometimes it does have a result.

And what moved me to write this up is that four companies selling “environmental” products were busted by the Information Commissioner’s Office [ICO, the UK data privacy regulator] and fined between tens and hundreds of thousands of pounds for making calls to people who had put themselves on what is rather strangely called the Telephone Preference Service in the UK…

…it’s as though they’re admitting that some people actually want to opt into these garbage calls. [LAUGHTER]


DOUG.  “Prefer”?! [LAUGHS]


DUCK.  I do like the way it is in the US.

The place you go to register and complain is: donotcall DOT gov.


DOUG.  Yes! “Do Not Call!”


DUCK.  Sadly, when it comes to telephony, we still do live in an opt-out world… they’re allowed to call you until you say they can’t.

But my experience has been that, although it does not solve the problem, putting yourself on the Do Not Call register is almost certain not to *increase* the number of calls you get.

It has made a difference to me, both when I was living in Australia and now that I’m living in the UK…

…and reporting calls from time to time at least gives the regulator in your country a fighting chance of taking some sort of action at some time in the future.

Because if nobody says anything, then it is as though nothing had happened.


DOUG.  That dovetails nicely into our reader comment on this article.

Naked Security reader Phil comments:

Voicemail has changed everything for me.

If the caller is unwilling to leave a message and most aren’t, then I have no reason to return the call.

What’s more, in order to report a scam phone call, I’d have to waste the time necessary to answer the phone from an unidentified caller and interact with someone solely for the purpose of reporting them.

Even if I do answer the call, I’ll be talking to a robot anyway… no thanks!

So, is that the answer: just never pick up the phone calls, and never deal with these scammers?

Or is there a better way, Paul?


DUCK.  What I’ve found is, if I think that the number is a scammy number…

Some of the scammers or nuisance callers will use a different number every time – it will always look local, so it’s hard to tell, although I’ve been plagued by one recently where it’s been the same number over and over, so I can just block that.

…typically what I do is I just answer the phone, and I don’t say anything.

They’re calling me; if it’s that important, they’ll say, “Hello? Hello? Is that…?”, and use my name.

I find that a lot of these nuisance callers and scammers are using automated systems that, when they hear you answering the call, only then will they try and connect you to an operator at their side.

They don’t have their telephone operators actually placing the calls.

They call you, and while you’re identifying yourself, they quickly find somebody in the queue who can pretend to have made the call.

And I find that is a dead good giveaway, because if nothing happens, if nobody even goes, “Hello? Hello? Anybody there?”, then you know you’re dealing with an automated system.

However, there’s an annoying problem, though I think this is specific to the United Kingdom.

The bureaucracy for reporting what is called a “silent call”, like a heavy-breathing stalker type where no words are spoken…

…the mechanism for reporting that is completely different from the mechanism for reporting a call where someone says, “Hey, I’m John and I want to sell you this product you don’t need and isn’t any good”, which is really annoying.

Silent call reports go through the telephone regulator, and it’s treated as if it were a more serious criminal offence, I presume for historical reasons.

You have to identify yourself – you can’t report those anonymously.

So I find that annoying, and I do hope that they change that!

Where it’s just a robotic system that’s called you, and it doesn’t know you’re on the line yet so it hasn’t assigned anyone to talk to you…

…if you could report those more easily and anonymously, to be honest, I would be much more inclined to do it.


DOUG.  All right.

We have some links in the article for reporting rogue calls in a selection of countries.

And thank you, Phil, for sending in that comment.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Former Uber CSO convicted of covering up megabreach back in 2016

Joe Sullivan, who was Chief Security Officer at Uber from 2015 to 2017, has been convicted in a US federal court of covering up a data breach at the company in 2016.

Sullivan was charged with obstructing proceedings conducted by the FTC (the Federal Trade Commission, the US consumer rights body), and concealing a crime, an offence known in legal terminology by the peculiar name of misprision.

The jury found him guilty of both these offences.

We first wrote about the breach behind this widely-watched court case back in November 2017, when news about it originally emerged.

Apparently, the breach followed a disappointingly familiar “attack chain”:

  • Someone at Uber uploaded a bunch of source code to GitHub, but accidentally included a directory that contained access credentials.
  • Hackers stumbled upon the leaked credentials, and used them to access and poke around in Uber data hosted in Amazon’s cloud.
  • The Amazon servers thus breached revealed personal information on more than 50,000,000 Uber riders and 7,000,000 drivers, including driving licence numbers for about 600,000 drivers and social security numbers (SSNs) for 60,000.
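The first link in that chain – credentials accidentally swept up into a commit – is exactly what secret-scanning tools try to catch before code ever leaves your machine. As a minimal, purely illustrative sketch (using AWS’s own documented example key, not a real one): ordinary AWS access key IDs are 20 characters long and start with the literal prefix AKIA, which makes even a simple regex scan surprisingly effective:

```python
import re

# AWS access key IDs are 20 characters and (for ordinary user keys)
# begin with the literal prefix "AKIA" -- an easy pattern to scan for
# before source code gets pushed to a public repository.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text):
    """Return any AWS-style access key IDs found in the given text."""
    return AWS_KEY_ID.findall(text)

# Hypothetical config file accidentally swept up in a commit
# (this key is AWS's published documentation example, not a live one):
sample = """
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = ...redacted...
"""

print(find_leaked_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Real secret scanners cover many more credential formats and check entire commit histories, but even a pre-commit check this crude would have flagged the sort of leak described above.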

Ironically, this breach happened while Uber was in the throes of an FTC investigation into a breach it had suffered in 2014.

As you can imagine, having to report a massive data breach while you are in the middle of answering to the regulator about an earlier breach, and while you’re trying to reassure the authorities that it won’t happen again…

…has got to be a hard pill to swallow.

Indeed, the 2016 breach was kept quiet until 2017, when new management at Uber uncovered the story and admitted to the incident.

That’s when it emerged that the hackers who exfiltrated all those customer records and driver data the year before were paid $100,000 to delete the data and keep quiet about it:

From a regulatory point of view, of course, Uber ought to have reported this breach right away in many jurisdictions around the world, rather than hushing it up for more than a year.

In the UK, for example, the Information Commissioner’s Office variously commented at the time:

Uber’s announcement about a concealed data breach last October raises huge concerns around its data protection policies and ethics. [2017-11-22T10:00Z]

It’s always the company’s responsibility to identify when UK citizens have been affected as part of a data breach and take steps to reduce any harm to consumers. Deliberately concealing breaches from regulators and citizens could attract higher fines for companies. [2017-11-22T17:35Z]

Uber has confirmed its data breach in October 2016 affected approximately 2.7 million user accounts in the UK. Uber has said the breach involved names, mobile phone numbers and email addresses. [2017-11-29]

Naked Security readers wondered how that $100,000 hacker payment could have been made without making matters look even worse, and we speculated:

It’ll be interesting to see how the story unfolds – if the current Uber leadership can unfold it at this stage, that is. I suppose you could wrap the $100,000 up as a “bug bounty payout”, but that still leaves the issue of very conveniently deciding for yourself that it wasn’t necessary to report it.

It seems that’s exactly what did happen: the breach-that-came-at-exactly-the-wrong-time-in-the-middle-of-a-breach-investigation was written up as a “bug bounty”, something that usually depends on the initial disclosure being made responsibly, and not in the form of a blackmail demand.

Typically, an ethical bug bounty hunter wouldn’t steal the data first and demand hush money not to publish it, as ransomware crooks often do these days.

Instead, an ethical bounty hunter would document the path that led them to the data and the security weaknesses that allowed them to access it, and perhaps download a very small but representative sample to satisfy themselves that it was indeed remotely retrievable.

Thus they would not acquire the data in the first place to use as an extortion tool, and any potential public disclosure agreed as part of the bug bounty process would reveal the nature of the security hole, not the actual data that had been at risk. (Pre-arranged “disclose by” dates exist to give companies enough time to fix the problems of their own accord, while setting a deadline to ensure that they don’t try to sweep the issue under the carpet instead.)

Right or wrong?

The fuss over Uber’s breach-and-cover-up eventually led to accusations against the CSO himself, and he was charged with the abovementioned crimes.

Sullivan’s trial, which lasted just under a month, concluded at the end of last week.

The case attracted plenty of interest in the cybersecurity community, not least because numerous cryptocurrency companies, faced with situations where hackers have made off with millions or hundreds of millions of dollars, seem increasingly (and publicly) willing to follow a very similar sort of “let’s rewrite breach history” path.

“Give the money back that you stole,” they beg, often in an exchange of comments via the blockchain of the plundered cryptocurrency, “and we’ll let you keep a sizeable quantity of the money as a bug bounty payment, and we’ll do our best to keep law enforcement off your back.”

If the final outcome of rewriting breach history in this fashion is that stolen data gets deleted, thus sidestepping any immediate harm to the victims, or that stolen cryptocoins that would otherwise be lost forever get returned, does the end justify the means?

In Sullivan’s case, the jury apparently decided, after four days of deliberation, that the answer was “No”, and found him guilty.

No date has yet been set for sentencing, and we’re guessing that Sullivan, who himself used to be a federal prosecutor, will appeal.

Watch this space, because this saga seems sure to get yet more interesting…

