
S3 Ep100: Imagine you went to the moon – how would you prove it? [Audio + Text]

LISTEN NOW

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Deadbolt – it’s back!

Patches galore!

And timezones… yes, timezones.

All that, and more, on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everyone.

I’m Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, a very happy 100th episode to you, my friend!


DUCK.  Wow, Doug!

You know, when I started my directory structure for Series 3, I boldly used -001 for the first episode.


DOUG.  I did not. [LAUGHS]


DUCK.  Not -1 or -01.


DOUG.  Smart…


DUCK.  I had great faith!

And when I save today’s file, I’m going to be rejoicing in it.


DOUG.  Yes, and I will be dreading it because it’ll pop up to the top.

Well, I’m going to have to deal with that later…


DUCK.  [LAUGHS] You could rename all the other stuff.


DOUG.  I know, I know.

[MUTTERING] Not looking forward to that… there goes my Wednesday.

Anyway, let’s start the show with some Tech History.

This week, on 12 September 1959, Luna 2, also known as the Second Soviet Cosmic Rocket, became the first spacecraft to reach the surface of the Moon, and the first human-made object to make contact with another celestial body.

Very cool.


DUCK.  What was that long name?

“The Second Soviet Cosmic Rocket”?


DOUG.  Yes.


DUCK.  Luna Two is much better.


DOUG.  Yes, much better!


DUCK.  Apparently, as you can imagine, given that it was the space-race era, there was some concern of, “How will we know they’ve actually done it? They could just say they’ve landed on the Moon, and maybe they’re making it up.”

Apparently, they devised a protocol that would allow independent observation.

They predicted the time at which it would arrive on the Moon – crash into the Moon, that is – and they sent the exact time they expected this to happen to an astronomer in the UK.

And he observed independently, to see whether what they said *would* happen at that time *did* happen.

So they even thought about, “How do you verify something like this?”


DOUG.  Well, on the subject of complicated things, we have patches from Microsoft and Apple.

So what’s notable here in this latest round?


DUCK.  We certainly do – it’s Patch Tuesday this week, the second Tuesday of the month.

There are two vulnerabilities in Patch Tuesday that were notable to me.

One is notable because it is apparently in the wild – in other words, it was a zero-day.

And although it’s not remote code execution, it is a little worrying because it’s a [COUGHS APOLOGETICALLY] log file vulnerability, Doug!

It’s not quite as bad as Log4J, where you could not only get the logger to misbehave, you could also get it to run arbitrary code for you.

But it seems that if you send some kind of malformed data into the Windows Common Log File System driver, the CLFS, then you can trick the system into promoting you to system privileges.

Always bad if you’ve got in as a guest user, and you are then able to turn yourself into a sysadmin…


DOUG.  [LAUGHS] Yes!


DUCK.  That is CVE-2022-37969.

And the other one that I found interesting…

…fortunately not in the wild, but this is the one that you really need to patch, because I bet you it’s the one that cybercriminals will be focusing on reverse engineering:

“Windows TCP/IP remote code execution vulnerability”, CVE-2022-34718.

If you remember Code Red, and SQL Slammer, and those naughty worms of the past, where they just arrived in a network packet, and jammed their way into the system…

This is an even lower level than that.

Apparently, the bug’s in the handling of certain IPv6 packets.

So anything where IPv6 is listening, which is pretty much any Windows computer, could be at risk from this.

Like I said, that one is not in the wild, so the crooks haven’t found it yet, but I don’t doubt that they will be taking the patch and trying to figure out if they can reverse engineer an exploit from it, to catch out people who haven’t patched yet.

Because if anything says, “Whoa! What if someone wrote a worm that used this?”… that is the one I would be worried about.


DOUG.  OK.

And then to Apple…


DUCK.  We’ve written two stories about Apple patches recently, where, out of the blue, suddenly, there were patches for iPhones and iPads and Macs against two in-the-wild zero-days.

One was a browser bug, or a browsing-related bug, so that you could wander into an innocent-looking website and malware could land on your computer, plus another one that gave you kernel-level control…

…which, as I said in the last podcast, smells like spyware to me – something that a spyware vendor or a really serious “surveillance cybercrook” would be interested in.

Then there was a second update, to our surprise, for iOS 12, which we all thought had been long abandoned.

There, one of those bugs (the browser related one that allowed crooks to break in) got a patch.

And then, just when I was expecting iOS 16, all these emails suddenly started landing in my inbox – right after I checked, “Is iOS 16 out yet? Can I update to it?”

It wasn’t there, but then I got all these emails saying, “We’ve just updated iOS 15, and macOS Monterey, and Big Sur, and iPadOS 15”…

… and it turned out there were a whole bunch of updates, plus a brand new kernel zero-day this time as well.

And the fascinating thing is that, after I got the notifications, I thought, “Well, let me check again…”

(So you can remember, it’s Settings > General > Software Update on your iPhone or iPad.)

Lo and behold, I was being offered an update to iOS 15, which I already had, *or* I could jump all the way to iOS 16.

And iOS 16 also had this zero-day fix in it (even though iOS 16 theoretically wasn’t out yet), so I guess the bug also existed in the beta.

It wasn’t listed as officially being a zero-day in Apple’s bulletin for iOS 16, but we can’t tell whether that’s because the exploit Apple saw didn’t quite work properly on iOS 16, or whether it’s not considered a zero-day because iOS 16 was only just coming out.


DOUG.  Yes, I was going to say: no one has it yet. [LAUGHTER]


DUCK.  That was the big news from Apple.

And the important thing is that when you go to your phone, and you say, “Oh, iOS 16 is available”… if you’re not interested in iOS 16 yet, you still need to make sure you’ve got that iOS 15 update, because of the kernel zero-day.

Kernel zero-days are always a problem because it means somebody out there knows how to bypass the much-vaunted security settings on your iPhone.

The bug also applies to macOS Monterey and macOS Big Sur – that’s the previous version, macOS 11.

In fact, not to be outdone, Big Sur actually has *two* kernel zero-day bugs in the wild.

No news about iOS 12, which is kind of what I expected, and nothing so far for macOS Catalina.

Catalina is macOS 10, the pre-previous version, and once again, we don’t know whether that update will come later, or whether it’s fallen off the edge of the world and won’t be getting updates anyway.

Sadly, Apple doesn’t say, so we don’t know.

Now, most Apple users will have automatic updates turned on, but, as we always say, do go and check (whether you’ve got a Mac or an iPhone or an iPad), because the worst thing is just to assume that your automatic updates worked and kept you safe…

…when in fact, something went wrong.


DOUG.  OK, very good.

Now, something I’ve been looking forward to, moving right along, is: “What do timezones have to do with IT security?”


DUCK.  Well, quite a lot, it turns out, Doug.


DOUG.  [LAUGHING] Yessir!


DUCK.  Timezones are very simple in concept.

They’re very convenient for running our lives so that our clocks roughly match what’s happening in the sky – so it’s dark at night and light in the day. (Let’s ignore daylight saving, and let’s just assume that we only have one-hour timezones all around the world so that everything is really simple.)

The problem comes when you’re actually keeping system logs in an organisation where some of your servers, some of your users, some parts of your network, some of your customers, are in other parts of the world.

When you write to the log file, do you write the time with the timezone factored in?

When you’re writing your log, Doug, do you subtract the 5 hours (or 4 hours at the moment) that you need because you’re in Boston, whereas I add one hour because I’m on London time, but it’s summer?

Do I write that in the log so that it makes sense to *me* when I read the log back?

Or do I write a more canonical, unambiguous time using the same timezone for *everybody*, so when I compare logs that come from different computers, different users, different parts of the world on my network, I can actually line up events?

It’s really important to line events up, Doug, particularly if you’re doing threat response in a cyberattack.

You really need to know what came first.

And if you say, “Oh, it didn’t happen until 3pm”, that doesn’t help me if I’m in Sydney, because my 3pm happened yesterday compared to your 3pm.

So, I wrote an article on Naked Security about some ways that you can deal with this problem when you log data.

My personal recommendation is to use a simplified timestamp format called RFC 3339, where you put a four digit year, dash [hyphen character, ASCII 0x2D], two digit month, dash, two digit day, and so on, so that your timestamps actually sort alphabetically nicely.

And that you record all your timestamps in the timezone known as Z (zed or zee), short for Zulu time.

That means basically UTC or Coordinated Universal Time.

That’s nearly-but-not-quite Greenwich Mean Time, and it’s the time that almost every computer’s or phone’s clock is actually set to internally these days.
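Here’s a minimal sketch of that idea (ours, in Python, not something from the audio), using only the standard library to emit a Zulu-time timestamp:

from datetime import datetime, timezone

# An RFC 3339 "Zulu time" timestamp: fixed-width, UTC, and it sorts
# alphabetically into chronological order.
def zulu_now() -> str:
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

print(zulu_now())   # e.g. 2022-09-16T09:30:00.123456Z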

Don’t try and compensate for timezones when you’re writing to the log, because then someone will have to decompensate when they’re trying to line up your log with everybody else’s – and there’s many a slip twixt the cup and the lip, Doug.

Keep it simple.

Use a canonical, simple text format that delineates exactly the date and time, right down to the second – or, these days, timestamps can even go down to the nanosecond if you want.

And get rid of timezones from your logs; get rid of daylight saving from your logs; and just record everything, in my opinion, in Coordinated Universal Time…

…confusingly abbreviated UTC, because the name’s in English but the abbreviation’s in French – something of an irony.


DOUG.  Yes.


DUCK.  I’m tempted to say, “Not that I feel strongly about it, again”, as I usually do, laughingly…

…but it really is important to get things in the right order, particularly when you’re trying to track down cyber criminals.


DOUG.  All right, that’s good – great advice.

And if we stick on the subject of cybercriminals, you’ve heard of Manipulator-in-the-Middle attacks; you’ve heard of Manipulator-in-the-Browser attacks…

…now get ready for Browser-in-the-Browser attacks.


DUCK.  Yes, this is a new term that we’re seeing.

I wanted to write this up because researchers at a threat intelligence company called Group-IB recently wrote an article about this, and the media started talking about, “Hey, Browser-in-the-Browser attacks, be very afraid”, or whatever…

You’re thinking, “Well, I wonder how many people actually know what is meant by a Browser-in-the-Browser attack?”

And the annoying thing about these attacks, Doug, is that technologically, they’re terribly simple.

It’s such a simple idea.


DOUG.  They’re almost artistic.


DUCK.  Yes!

It’s not really science and technology, it’s art and design, isn’t it?

Basically, if you’ve ever done any JavaScript programming (for good or for evil), you’ll know that one of the things about stuff that you stick into a web page is that it’s meant to be constrained to that web page.

So, if you pop up a brand new window, then you’d expect it to get a brand new browser context.

And if it loads its page from a brand new site, say a phishing site, then it won’t have access to all the JavaScript variables, context, cookies and everything that the main window had.

So, if you open a separate window, you’re kind of limiting your hacking abilities if you’re a crook.

Yet if you open something in the current window, then you’re significantly limited as to how exciting and “system-like” you can make it look, aren’t you?

Because you can’t overwrite the address bar… that’s by design.

You can’t write anything outside the browser window, so you can’t sneakily put a window that looks like wallpaper on the desktop, like it’s been there all along.

In other words, you’re corralled inside the browser window that you started with.

So the idea of a Browser-in-the-Browser attack is that you start with a regular website, and then you create, inside the browser window you’ve already got, a web page that itself looks exactly like an operating system browser window.

Basically, you show someone a *picture* of the real thing, and convince them it *is* the real thing.

It’s that simple at heart, Doug!

But the problem is that with a little bit of careful work, particularly if you’ve got good CSS skills, you *can* actually make something that’s inside an existing browser window look like a browser window of its own.

And with a bit of JavaScript, you can even make it so that it can resize, and so that it can move around on the screen, and you can populate it with HTML that you fetch from a third party website.

Now, you may wonder… if the crooks get it dead right, how on earth can you ever tell?

And the good news is that there’s an absolutely simple thing you can do.

If you see what looks like an operating system window and you are suspicious of it in any way (it would essentially appear to pop up over your browser window, because it has to be inside it)…

…try moving it *off the real browser window*, and if it’s “imprisoned” inside the browser, you know it’s not the real deal!

The interesting thing about the report from the Group-IB researchers is that when they came across this, the crooks were actually using it against players of Steam games.

And, of course, it wants you to log into your Steam account…

…and if you were fooled by the first page, then it would even follow up with Steam’s two-factor authentication verification.

And the trick was that if those truly *were* separate windows, you could have dragged them to one side of your main browser window, but they weren’t.

In this case, fortunately, the crooks had not done their CSS very well.

Their artwork was shoddy.

But, as you and I have spoken about many times on the podcast, Doug, sometimes there are crooks who will put in the effort to make things look pixel-perfect.

With CSS, you literally can position individual pixels, can’t you?


DOUG.  CSS is interesting.

It’s Cascading Style Sheets… a language you use to style HTML documents, and it’s really easy to learn, but much harder to master.


DUCK.  [LAUGHS] Sounds like IT, for sure.


DOUG.  [LAUGHS] Yes, it’s like many things!

But it’s one of the first things you learn once you learn HTML.

If you’re thinking, “I want to make this web page look better”, you learn CSS.

So, looking at some of these examples of the source document that you linked to from the article, you can tell it’s going to be really hard to do a really good fake, unless you’re really good at CSS.

But if you do it right, it’s going to be really hard to figure out that it’s a fake document…

…unless you do as you say: try to pull it out of a window and move it around your desktop, stuff like that.

That leads into your second point here: examine suspect windows carefully.

A lot of them are probably not going to pass the eye test, but if they do, it’s going to be really tough to spot.

Which leads us to the third thing…

“If in doubt/Don’t give it out.”

If it just doesn’t quite look right, and you’re not able to definitively tell that something strange is afoot, just follow the rhyme!


DUCK.  And it’s worth being suspicious of unknown websites, websites you haven’t used before, that suddenly say, “OK, we’re going to ask you to log in with your Google account in a Google window, or Facebook in a Facebook window.”

Or Steam in a Steam window.


DOUG.  Yes.

I hate to use the B-word here, but this is almost brilliant in its simplicity.

But again, it’s going to be really hard to pull off a pixel-perfect match using CSS and stuff like that.


DUCK.  I think the important thing to remember is that, because part of the simulation is the “chrome” [jargon for the browser’s user interface components] of the browser, the address bar will look right.

It may even look perfect.

But the thing is, it isn’t an address bar…

…it’s a *picture* of an address bar.


DOUG.  Exactly!

All right, careful out there, everyone!

And, speaking of things that are not what they seem, I’m reading about DEADBOLT ransomware, and QNAP NAS devices, and it feels to me like we just discussed this exact story not long ago.


DUCK.  Yes, we’ve written about this several times on Naked Security so far this year, unfortunately.

It’s one of those cases where what worked for the crooks once turns out to have worked twice, thrice, four times, five times.

And NAS, or Network Attached Storage devices, are, if you like, black-box servers that you can go and buy – they typically run some kind of Linux kernel.

The idea is that instead of having to buy a Windows licence, or learn Linux, install Samba, set it up, learn how to do file sharing on your network…

…you just plug in this device and, “Bingo”, it starts working.

It’s a web-accessible file server and, unfortunately, if there’s a vulnerability in the file server and you have (by accident or design) made it accessible over the internet, then crooks may be able to exploit that vulnerability, if there is one in that NAS device, from a distance.

They may be able to scramble all the files on the key storage location for your network, whether it’s a home network or small business network, and basically hold you to ransom without ever having to worry about attacking individual other devices like laptops and phones on your network.

So, they don’t need to mess around with malware that infects your laptop, and they don’t need to break into your network and wander around like traditional ransomware criminals.

They basically scramble all your files, and then – to present the ransom note – they just change (I shouldn’t laugh, Doug)… they just change the login page on your NAS device.

So, when you find all your files are messed up and you think, “That’s funny”, and you jump in with your web browser and connect there, you don’t get a password prompt!

You get a warning: “Your files have been locked by DEADBOLT. What happened? All your files have been encrypted.”

And then come the instructions on how to pay up.


DOUG.  And they have also kindly offered that QNAP could put up a princely sum to unlock the files for everybody.


DUCK.  The screenshots I have in the latest article on nakedsecurity.­sophos.com show:

1. Individual decryptions at 0.03 bitcoins, originally about US$1200 when this thing first became widespread, now about US$600.

2. A BTC 5.00 option, where QNAP gets told about the vulnerability so they can fix it – which they’re clearly not going to pay for, because they already know about the vulnerability. (That’s why there’s a patch out in this particular case.)

3. As you say, there’s a BTC 50 option (that’s $1m now; it was $2m when this story first broke – see the quick arithmetic check below). Apparently, if QNAP pay the $1,000,000 on behalf of anybody who might have been infected, the crooks will provide a master decryption key, if you don’t mind.
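For the record, here’s the arithmetic behind those tiers, using the approximate exchange rates implied by the dollar figures above (our back-of-envelope numbers, not the crooks’ own):

# Ransom tiers at the rough BTC/USD rates implied above:
# ~US$40,000 when DEADBOLT first spread, ~US$20,000 at the time of writing.
for rate in (40_000, 20_000):
    print(f"at ${rate:,}/BTC: single user = ${0.03 * rate:,.0f}, "
          f"master key = ${50 * rate:,.0f}")
# at $40,000/BTC: single user = $1,200, master key = $2,000,000
# at $20,000/BTC: single user = $600, master key = $1,000,000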

And if you look at their JavaScript, it actually checks whether the password you put in matches one of *two* hashes.

One is unique to your infection – the crooks customise it every time, so the JavaScript has the hash in it, and doesn’t give away the password.

And there’s another hash that, if you can crack it, looks as though it would recover the master password for everyone in the world…

… I think that was just the crooks thumbing their noses at everybody.
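In outline, the check works something like the Python sketch below – a hypothetical reconstruction for illustration, assuming a SHA-256-style digest and placeholder hash values, not actual DEADBOLT code:

import hashlib

# The page embeds only hashes, so viewing its source never reveals a
# working key. The hash values here are placeholders, not real data.
PER_VICTIM_HASH = "00" * 32   # unique to this one infection
MASTER_HASH = "ff" * 32       # would unlock every victim, if ever cracked

def key_matches(candidate: str) -> bool:
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    return digest in (PER_VICTIM_HASH, MASTER_HASH)

print(key_matches("not-the-real-key"))   # False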


DOUG.  It’s interesting too that the $600 bitcoin ransom for each user is… I don’t want to say “not outrageous”, but if you look in the comments section of this article, there are several people who are not only talking about having paid the ransom…

…but let’s skip ahead to our reader question here.

Reader Michael shares his experience with this attack, and he’s not alone – there are other people in this comment section that are reporting similar things.

Across a couple of comments, he says (I’m going to kind of make a frankencomment out of that):

“I’ve been through this, and came out OK after paying the ransom. Finding the specific return code with my decryption key was the hardest part. Learned the most valuable lesson.”

In his next comment he goes through all the steps he had to take to actually get things to work again.

And he dismounts with:

“I’m embarrassed to say I work in IT, have been for 20+ years, and got bitten by this QNAP uPNP bug. Glad to be through it.”


DUCK.  Wow, yes, that’s quite a statement, isn’t it?

Almost as though he’s saying, “I would have backed myself against these crooks, but I lost the bet and it cost me $600 and a whole load of time.”

Aaargh!


DOUG.  What does he mean by “the specific return code with his decryption key”?


DUCK.  Ah, yes, that is a very interesting… very intriguing. (I’m trying not to say amazing-slash-brilliant here.) [LAUGHTER]

I don’t want to use the C-word, and say it’s “clever”, but kind-of it is.

How do you contact these crooks? Do they need an email address? Could that be traced? Do they need a darkweb site?

These crooks don’t.

Because, remember, there’s one device, and the malware is customised and packaged when it attacks that device, so that it has a unique Bitcoin address in it.

And, basically, you communicate with these crooks by paying the specified amount of bitcoin into their wallet.

I guess that’s why they’ve kept the amount comparatively modest…

…I don’t want to suggest that everyone’s got $600 to throw away on a ransom, but it’s not like you’re negotiating up front to decide whether you’re going to pay $100,000 or $80,000 or $42,000.

You pay them the amount… no negotiation, no chat, no email, no instant messaging, no support forum.

You just send the money to the designated bitcoin address, and they’ll obviously have a list of those bitcoin addresses they’re monitoring.

When the money arrives, and they see it’s arrived, they know that you (and you alone) paid up, because that wallet code is unique.

And they then do what is, effectively (I’m using the biggest air-quotes in the world) a “refund” on the blockchain, using a bitcoin transaction to the amount, Doug, of zero dollars.

And that reply, that transaction, actually includes a comment. (Remember the Poly Network hack? They were using Ethereum blockchain comments to try and say, “Dear Mr. White Hat, won’t you give us all the money back?”)

So you pay the crooks, thus giving the message that you want to engage with them, and they pay you back $0 plus a 32-hexadecimal character comment…

…which is 16 raw binary bytes, which is the 128 bit decryption key you need.

That’s how you talk to them.
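The byte-counting is easy to verify for yourself in Python (the comment value below is made up, not a real key):

# 32 hex characters decode to 16 raw bytes, i.e. a 128-bit key.
comment = "00112233445566778899aabbccddeeff"   # placeholder only
key = bytes.fromhex(comment)
print(len(key), len(key) * 8)   # prints: 16 128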

And, apparently, they’ve got this down to a T – like Michael said, the scam does work.

And the only problem Michael had was that he wasn’t used to buying bitcoins, or working with blockchain data and extracting that return code, which is basically the comment in the transaction “payment” that he gets back for $0.

So, they’re using technology in very devious ways.

Basically, they’re using the blockchain both as a payment vehicle and as a communications tool.


DOUG.  All right, a very interesting story indeed.

We will keep an eye on that.

And thank you very much, Michael, for sending in that comment.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Serious Security: Browser-in-the-browser attacks – watch out for windows that aren’t!

Researchers at threat intelligence company Group-IB just wrote an intriguing real-life story about an annoyingly simple but surprisingly effective phishing trick known as BitB, short for browser-in-the-browser.

You’ve probably heard of several types of X-in-the-Y attack before, notably MitM and MitB, short for manipulator-in-the-middle and manipulator-in-the-browser.

In a MitM attack, the attackers who want to trick you position themselves somewhere “in the middle” of the network, between your computer and the server you’re trying to reach.

(They might not literally be in the middle, either geographically or hop-wise, but MitM attackers are somewhere along the route, not right at either end.)

The idea is that instead of having to break into your computer, or into the server at the other end, they lure you into connecting to them instead (or deliberately manipulate your network path, which you can’t easily control once your packets exit from your own router), and then they pretend to be the other end – a malevolent proxy, if you like.

They pass your packets on to the official destination, snooping on them and perhaps fiddling with them on the way, then receive the official replies, which they can snoop on and tweak for a second time, and pass them back to you as though you’d connected end-to-end just as you expected.

If you’re not using end-to-end encryption such as HTTPS in order to protect both the confidentiality (no snooping!) and integrity (no tampering!) of the traffic, you are unlikely to notice, or even to be able to detect, that someone else has been steaming open your digital letters in transit, and then sealing them up again afterwards.

Attacking at one end

A MitB attack aims to work in a similar way, but to sidestep the problem caused by HTTPS, which makes a MitM attack much harder.

MitM attackers can’t readily interfere with traffic that’s encrypted with HTTPS: they can’t snoop on your data, because they don’t have the cryptographic keys used by each end to protect it; they can’t change the encrypted data, because the cryptographic verification at each end would then raise the alarm; and they can’t pretend to be the server you’re connecting to because they don’t have the cryptographic secret that the server uses to prove its identity.

A MitB attack therefore typically relies on sneaking malware onto your computer first.

That’s generally more difficult than simply tapping into the network at some point, but it gives the attackers a huge advantage if they can manage it.

That’s because, if they can insert themselves right inside your browser, they get to see and to modify your network traffic before your browser encrypts it for sending, which cancels out any outbound HTTPS protection, and after your browser decrypts it on the way back, thus nullifying the encryption applied by the server to protect its replies.

What about a BitB?

But what about a BitB attack?

Browser-in-the-browser is quite a mouthful, and the trickery involved doesn’t give cybercriminals anywhere near as much power as a MitM or a MitB hack, but the concept is forehead-slappingly simple, and if you’re in too much of a hurry, it’s surprisingly easy to fall for it.

The idea of a BitB attack is to create what looks like a popup browser window that was generated securely by the browser itself, but that is actually nothing more than a web page that was rendered in an existing browser window.

You might think that this sort of trickery would be doomed to fail, simply because any content in site X that pretends to be from site Y will show up in the browser itself as coming from a URL on site X.

One glance at the address bar will make it obvious that you’re being lied to, and that whatever you’re looking at is probably a phishing site.

For example, here’s a screenshot of the example.com website, taken in Firefox on a Mac:

Genuine browser window: screenshot of Firefox for Mac with example.com website open.

If attackers lured you to a fake site, you might fall for the visuals if they copied the content closely, but the address bar would give away that you weren’t on the site you were looking for.

In a Browser-in-the-Browser scam, therefore, the attacker’s aim is to create a regular web page that looks like the web site and content you’re expecting, complete with the window decorations and the address bar, simulated as realistically as possible.

In a way, a BitB attack is more about art than it is about science, and it’s more about web design and managing expectations than it is about network hacking.

For example, if we create two screen-scraped image files that look like this…

…then HTML as simple as what you see below…

<html>
   <body>
      <div>
         <div><img src='./fake-top.png'></div>
         <p>
         <div><img src='./fake-bot.png'></div>
      </div>
   </body>
</html>

…will create what looks like a browser window inside an existing browser window, like this:

This looks like a Firefox browser window, and that’s exactly what it is:
a webpage that LOOKS LIKE a browser window.

In this very basic example, the three macOS buttons (close, minimise, maximise) at the top left won’t do anything, because they aren’t operating system buttons, they’re just pictures of buttons, and the address bar in what looks like a Firefox window can’t be clicked in or edited, because it too is just a screenshot.

But if we now add an IFRAME into the HTML we showed above, to suck in bogus content from a site that has nothing to do with example.com, like this…

<html>
   <body>
      <div>
         <div><img src='./fake-top.png' /></div>
         <div><iframe src='https://dodgy.test/phish.html'
                      frameBorder=0 width=650 height=220></iframe></div>
         <div><img src='./fake-bot.png' /></div>
      </div>
   </body>
</html>

…you’d have to admit that the resulting visual content looks exactly like a standalone browser window, even though it’s actually a web page inside another browser window.

The text content and the clickable link you see below were downloaded from the dodgy.test HTTPS link in the HTML file above, which contained this HTML code:

<html>
   <body style='font-family:sans-serif'>
      <div style='width:530px;margin:2em;padding:0em 1em 1em 1em;'>
         <h1>Example Domain</h1>
         <p>This window is a simulacrum of the real website, but it did not
         come from the URL shown above. It looks as though it might have,
         though, doesn't it?
         <p><a href='https://dodgy.test/phish.click'>Bogus information...</a>
      </div>
   </body>
</html>

The graphical content topping and tailing the HTML text makes it look as though the HTML really did come from example.com, thanks to the screenshot of the address bar at the top:

Top. Fake window controls and address bar via image.
Middle. Fakery via IFRAME download.
Bottom. Image rounds off the fake window.

The artifice is obvious if you view the bogus window on a different operating system, such as Linux, because you get a Linux-like Firefox window with a Mac-like “window” inside it.

The fake “window dressing” components really do stand out as the images they really are:

The fake window shown clearly as web page,
with the actual window controls and address bar at the very top.

Would you fall for it?

If you’ve ever taken screenshots of apps, and then opened the screenshots later in your photo viewer, we’re willing to bet that at some point you’ve tricked yourself into treating the app’s picture as if it were a running copy of the app itself.

We’ll wager that you’ve clicked on or tapped in an app-in-an-app image at least once in your life, and found yourself wondering why the app wasn’t working. (OK, maybe you haven’t, but we certainly have, to the point of genuine confusion.)

Of course, if you click on an app screenshot inside a photo browser, you’re at very little risk, because the clicks or taps simply won’t do what you expect – indeed, you may end up editing or scribbling lines on the image instead.

But when it comes to a browser-in-the-browser “artwork attack” instead, misdirected clicks or taps in a simulated window can be dangerous, because you’re still in an active browser window, where JavaScript is in play, and where links still work…

…you’re just not in the browser window you thought, and you’re not on the website you thought, either.

Worse still, any JavaScript running in the active browser window (which came from the original imposter site you visited) can simulate some of the expected behaviour of a genuine browser popup window in order to add realism, such as dragging it, resizing it, and more.

As we said at the start, if you’re waiting for a real popup window, and you see something that looks like a popup window, complete with realistic browser buttons plus an address bar that matches what you were expecting, and you’re in a bit of a hurry…

…we can fully understand how you might misrecognise the fake window as a real one.

Steam Games targeted

In the Group-IB research we mentioned above, the real-world BitB attack that the researchers came across used Steam games as a lure.

A legitimate-looking site, albeit one you’d never heard of before, would offer you a chance to win places at an upcoming gaming tournament, for example…

…and when the site said it was popping up a separate browser window containing a Steam login page, it really presented a browser-in-the-browser bogus window instead.

The researchers noted that the attackers didn’t just use BitB trickery to go for usernames and passwords, but also tried to simulate Steam Guard popups asking for two-factor authentication codes, too.

Fortunately, the screenshots presented by Group-IB showed that the criminals they happened upon in this case weren’t terribly careful about the art-and-design aspects of their scammery, so most users probably spotted the fakery.

But even a well-informed user in a hurry, or someone using a browser or operating system they weren’t familiar with, such as at a friend’s house, might not have noticed the inaccuracies.

Also, more fastidious criminals would almost certainly come up with more realistic fake content, in the same way that not all email scammers make spelling mistakes in their messages, thus potentially leading more people into giving away their access credentials.

What to do?

Here are three tips:

  • Browser-in-the-Browser windows aren’t real browser windows. Although they may seem like operating system level windows, with buttons and icons that look just like the real deal, they don’t behave like operating system windows. They behave like web pages, because that’s what they are. If you’re suspicious, try dragging the suspect window outside the main browser window that contains it. A real browser window will behave independently, so you can move it outside and beyond the original browser window. A fake browser window will be “imprisoned” inside the real window it’s shown in, even if the attacker has used JavaScript to try to simulate as much genuine-looking behaviour as possible. This will quickly give away that it’s part of a web page, not a true window in its own right.
  • Examine suspect windows carefully. Realistically mocking up the look and feel of an operating system window inside a web page is easy to do badly, but difficult to do well. Take those extra few seconds to look for telltale signs of fakery and inconsistency.
  • If in doubt, don’t give it out. Be suspicious of sites you’ve never heard of, and that you have no reason to trust, that suddenly want you to log in via a third-party site.

Never be in a hurry, because taking your time will make you much less likely to see what you think is there instead of what actually is there.

In three words: Stop. Think. Connect.


Featured image of photo of app window containing image of photo of Magritte’s “La Trahison des Images” created via Wikipedia.


Apple patches a zero-day hole – even in the brand new iOS 16

We’ve been waiting for iOS 16, given Apple’s recent Event at which the iPhone 14 and other upgraded hardware products were launched to the public.

This morning, we did a Settings > General > Software Update, just in case…

…but nothing showed up.

But some time shortly before 8pm tonight UK time [2022-09-12T18:31Z], a raft of update notifications dropped into our inbox, announcing a curious mix of new and updated Apple products.

Even before we read through the bulletins, we tried Settings > General > Software Update again, and this time we were offered an upgrade to iOS 15.7, with an alternative upgrade that would take us straight to iOS 16:

An update and an upgrade available at the same time!

(We went for the upgrade to iOS 16 – the download was just under 3GB, but once downloaded the process went faster than we expected, and everything thus far seems to be working just fine.)

Be sure to update even if you don’t upgrade

Just to be clear, if you don’t want to upgrade to iOS 16 just yet, you still need to update, because the iOS 15.7 and iPadOS 15.7 updates include numerous security patches, including a fix for a bug dubbed CVE-2022-32917.

The bug, the discovery of which is credited simply to “an anonymous researcher”, is described as follows:

[Bug patched in:] Kernel

Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation)

Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited.

Description: The issue was addressed with improved bounds checks.

As we pointed out when Apple’s last emergency zero-day patches came out, a kernel code execution bug means that even innocent-looking apps (perhaps including apps that made it into the App Store because they raised no obvious red flags when examined) could burst free from Apple’s app-by-app security lockdown…

…and potentially take over the entire device, including grabbing the right to perform system operations such as using the camera or cameras, activating the microphone, acquiring location data, taking screenshots, snooping on network traffic before it gets encrypted (or after it’s been decrypted), accessing files belonging to other apps, and much more.

If, indeed, this “issue” (or security hole as you might prefer to call it) has been actively exploited in the wild, it’s reasonable to infer that there are apps out there that unsuspecting users have already installed, from what they thought was a trusted source, even though those apps contained code to activate and abuse this vulnerability.

Intriguingly, macOS 11 (Big Sur) gets its own update to macOS 11.7, which patches a second zero-day hole dubbed CVE-2022-32894, described in exactly the same words as the iOS zero-day bulletin quoted above.

However, CVE-2022-32894 is listed as a Big Sur bug only, with the more recent operating system versions macOS 12 (Monterey), iOS 15, iPadOS 15 and iOS 16 apparently unaffected.

Remember that a security hole that was only fixed after the Bad Guys had already figured out how to exploit it is known as a zero-day because there were zero days during which even the keenest user or sysadmin could have patched against it proactively.

The full story

The updates announced in this round of bulletins include the following.

We’ve listed them below in the order they arrived by email (reverse numeric order) so that iOS 16 appears at the bottom:

  • APPLE-SA-2022-09-12-5: Safari 16. This update applies to macOS Big Sur (version 11) and Monterey (version 12). No Safari update is listed for macOS 10 (Catalina). Two of the bugs fixed could lead to remote code execution, meaning that a booby-trapped website could implant malware on your computer (which could subsequently abuse CVE-2022-32917 to take over at kernel level), although neither of these bugs are listed as being zero-days. (See HT213442.)
  • APPLE-SA-2022-09-12-4: macOS Monterey 12.6. This update can be considered urgent, given that it includes a fix for CVE-2022-32917. (See HT213444.)
  • APPLE-SA-2022-09-12-3: macOS Big Sur 11.7. A similar tranche of patches to those listed above for macOS Monterey, including the CVE-2022-32917 zero-day. This Big Sur update also patches CVE-2022-32894, the second kernel zero-day described above. (See HT213443.)
  • APPLE-SA-2022-09-12-2: iOS 15.7 and iPadOS 15.7. As stated at the start of the article, these updates patch CVE-2022-32917. (See HT213445.)
  • APPLE-SA-2022-09-12-1: iOS 16. The big one! As well as a bunch of new features, this includes the Safari patches delivered separately for macOS (see the top of this list), and a fix for CVE-2022-32917. Intriguingly, the iOS 16 upgrade bulletin advises that “[a]dditional CVE entries [are] to be added soon”, but doesn’t denote CVE-2022-32917 as a zero-day. Whether that’s because iOS 16 wasn’t yet officially considered “in the wild” itself, or because the known exploit doesn’t yet work on an unpatched iOS 16 Beta, we can’t tell you. But the bug does indeed seem to have been carried forward from iOS 15 into the iOS 16 codebase. (See HT213446.)

What to do?

As always, Patch Early, Patch Often.

A full-blown upgrade from iOS 15 to iOS 16.0, as it reports itself after installation, will patch the known bugs in iOS 15. (We’ve not yet seen an announcement for iPadOS 16.)

If you’re not ready for the upgrade yet, be sure to update to iOS 15.7, because of the zero-day kernel hole.

On iPads, for which iOS 16 isn’t yet mentioned, grab iPadOS 15.7 right now – don’t hang back waiting for iPadOS 16 to come out, given that you’d be leaving yourself needlessly exposed to a known exploitable kernel flaw.

On Macs, Monterey and Big Sur get a double-update, one to patch Safari, which becomes Safari 16, and one for the operating system itself, which will take you to macOS 11.7 (Big Sur) or macOS 12.6 (Monterey).

No patch for iOS 12 this time, and no mention of macOS 10 (Catalina) – whether Catalina is now no longer supported, or simply too old to include any of these bugs, we can’t tell you.

Watch this space for any CVE updates!


How to deal with dates and times without any timezone tantrums…

Dates, times and timezones are troublesome things.

Daylight saving, or “summer time” as it’s also commonly known, makes matters even worse, because many countries tweak their clocks twice a year in order to shift the apparent time of sunrise and sunset in relation to the regular working day.

In the UK, for instance, our clocks are set to Greenwich Mean Time when New Year comes round, but we wind them forward to GMT+1 at the end of March, and back to GMT again at the end of October.

Much of North America does something very, very similar, yet annoyingly different, setting the dates on which the clocks change to the start of November and the start of March instead. (Our two countries used to be conveniently aligned, but drifted apart slightly earlier this century.)

So our colleagues in Boston are always five hours behind us here in Oxfordshire, except for the brief period each autumn (or fall, given that we can’t even agree on the names of the seasons in our common language, let alone the alignment of our clocks) and spring when they aren’t.
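You can watch that brief divergence happen with a few lines of Python (a quick sketch of our own using the standard zoneinfo module; the dates follow the 2022 rules):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # in the standard library since Python 3.9

# 2022-11-01 12:00 UTC falls after the UK clock change (end of October)
# but before the US one (start of November), so the usual 5-hour gap
# between Oxfordshire and Boston temporarily shrinks to 4 hours.
t = datetime(2022, 11, 1, 12, 0, tzinfo=timezone.utc)
print(t.astimezone(ZoneInfo("Europe/London")).isoformat())     # ...T12:00:00+00:00
print(t.astimezone(ZoneInfo("America/New_York")).isoformat())  # ...T08:00:00-04:00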

Opponents of daylight saving dismiss it as a pointless complexity that we simply don’t need in the internet era – which is mildly ironic, given that internet-era devices generally manage to adjust themselves automatically.

Proponents of the system note that, for many people, shifting their working day to suit the season is no longer possible, because their days are ruled by the clock and not by the diurnal position of the sun, so shifting the clock to suit the season is the simplest alternative.

DST shifts considered harmful

Indeed, daylight saving timing trouble reared its head this week when Chile decided to alter its customary clock-switch date temporarily (to add yet more complexity, the clock changes go the other way below the equator, because the seasons are reversed).

The temporary change was announced to avoid confusion on the day of the country’s recent constitutional referendum.

The referendum took place on 04 September 2022, the day when clocks would normally go forwards for summer.

The clock change was therefore postponed for a week, lest people who forgot to reset their clocks before going to bed on Saturday night might misread the time on Sunday, and innocently end up arriving at their local voting station after it had closed, not realising they were an hour late because their clocks were an hour slow.

Even Microsoft felt the need to warn its users that Windows clocks, operating system timekeeping, meeting schedules and more might be thrown out of whack, given that the Chilean government didn’t announce this temporary change-to-the-change until last month, thus requiring a last-minute update to the Windows timezone database.

What if your clock is locked?

At least users of traditional operating systems can make temporary timezone adjustments themselves if needed.

Some devices are entirely dependent on firmware updates to display local time correctly.

For example, we have a miniature bicycle “computer” that we use as a compass and distance tracker when taking a journey (it’s amazing how fluidly and naturally you can navigate without ever looking at a map if you can keep track of time, distance and direction)…

…and although you can fiddle with all sorts of settings stored in the device, such as your body mass (apparently used in guessing your power output), map settings, location reference format, font size and more, the one thing you can’t do is set the date and time manually.

There simply isn’t a way to do it, at least for the built-in apps, and you can’t even tweak or hack those yourself, because the whole thing runs a digitally-signed, firmware-locked version of Linux.

You can write your own add-on apps, and they can display the date and time however they like, but they have to be compiled to run in a dedicated virtual machine inside a somewhat limited sandbox.

The theory is that if the device is working properly, it knows the absolute time via its GPS receiver, with an accuracy significantly better than the finest, largest, most expensive and most complex mechanical chronometers ever made.

It also knows where you are on the planet with an accuracy of well under 10 metres (you can clearly tell which lane you were in, or see where you overtook buses, when you look back at a journey you recorded), so it can compute which physical timezone you’re in, and set the local time exactly, too.

Well, it can do that if its timezone database, showing the exact location of timezone boundaries, and the necessary displacements from UTC (the modern replacement for GMT on digital timepieces) are up-to-date.

Otherwise, you may have to add or subtract an hour in your head, if the device gets it wrong.

Or half an hour, because some regions use 30-minute timezones.

(It’s astonishing how many people refuse to accept that non-integer timezones exist, insisting that “all legal timezones go in hours”, which will be news to anyone in India or South Australia.)

Or 15 minutes.

(Try visiting Nepal or Eucla.)
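Those fractional offsets are right there in the standard timezone database, as this quick Python check shows:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Offsets from UTC on an ordinary September day: not all whole hours.
t = datetime(2022, 9, 8, tzinfo=timezone.utc)
for zone in ("Asia/Kolkata", "Australia/Adelaide",
             "Asia/Kathmandu", "Australia/Eucla"):
    print(zone, t.astimezone(ZoneInfo(zone)).utcoffset())
# Asia/Kolkata 5:30:00 / Australia/Adelaide 9:30:00
# Asia/Kathmandu 5:45:00 / Australia/Eucla 8:45:00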

Will banning daylight saving help?

Those who don’t like daylight savings, either because they think it’s an affront to the natural order of things, or because they can never remember to change the manually-operated clocks in their household, will assure us all that banning “summer time” will neatly solve all these issues.

But it won’t solve the problem of how to make sense of computer logs, and how to use them in IT troubleshooting, notably in cybersecurity threat response, where the sequence in which things happened can be very important indeed.

For example, if the logs show that crooks almost certainly got in at 03:30 in the early hours of Tuesday morning, based on an exploit that was first abused at that time…

…did the new account creation timestamped 03:00 really happen before the exploit triggered, or could it have been afterwards?

Are the configuration changes timestamped 04:00 ones that need rolling back, because they happened after the attack started, or are they changes you need to keep because the logs are denominated in different timezones?

What to do?

There’s one thing you can do to help, both as a logfile creator and a logfile consumer.

Always reduce timestamps to UTC (Coordinated Universal Time), thus factoring timezones out of your logfiles, and always record timestamps in a simple, unambiguous, alphabetically sortable format.

Simply put: consult RFC 3339, and stick to Zulu-time timestamps everywhere.

These look something like this:

 2022-09-08T17:30:00.00000Z

The date always has four-digit years, so there’s no risk of reinventing the millennium bug.

Times don’t need AM and PM (computers can count to 24 at least as easily as you can count to 12), which removes ambiguity.

And that Z at the end denotes that the date and time shown have no timezone adjustment applied, so that any two Zulu time log entries can directly be compared to determine the sequence in which they took place.
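Because the format runs from the biggest unit (the year) down to the smallest, plain string sorting puts Zulu-time entries into chronological order – here’s a quick Python sketch of our own with made-up log lines:

# RFC 3339 Zulu timestamps sort correctly as plain strings, so log
# entries from different sources line up with no timezone arithmetic.
entries = [
    "2022-09-08T03:30:00.00000Z exploit first triggered",
    "2022-09-08T03:00:00.00000Z new account created",
    "2022-09-08T04:00:00.00000Z configuration changed",
]
for line in sorted(entries):
    print(line)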

Threat response is much easier, and much safer, when your timestamps are unambiguous, so we recommend this approach to everyone.



S3 Ep99: TikTok “attack” – was there a data breach, or not? [Audio + Text]

LISTEN NOW

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.


You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Zero-days, more zero-days, TikTok, and a sad day for the security community.

All that and more, on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the Naked Security podcast, everybody.

I am Doug Aamoth.

With me, as always, is Paul Ducklin.

Paul, how are you doing today?


DUCK.  I’m doing very, very well, thank you, Douglas!


DOUG.  Well, let’s start off the show with our Tech History segment.

I’m pleased to tell you: this week on 09 September 1947, a real-life moth was found inside Harvard University’s Mark II computer.

And although using the term “bug” to denote engineering glitches is thought to have been in use for years and years beforehand, it is believed that this incident led to the now ubiquitous “debug”.

Why?

Because once the moth was removed from the Mark II, it was taped inside the engineering logbook and labelled “The first case of an actual bug being found.”

I love that story!


DUCK.  So do I!

I think the first evidence that I’ve seen of that term came from none other than Thomas Edison – I think he used the term “bugs”.

But of course, being 1947, this was the very early days of digital computing, and not all computers ran on valves or tubes yet, because tubes were still very expensive, and ran very hot, and required a lot of electricity.

So, this computer, even though it could do trigonometry and stuff, was actually based on relays – electromechanical switches, not pure electronic switches.

Quite amazing that even in the late 1940s, relay-based computers were still a thing… although they weren’t going to be a thing for very long.


DOUG.  Well, Paul, let’s stay on the topic of messy things and bugs.

A messy thing that is bugging people is the question of this TikTok thing.

There are breaches, and there are breaches… is this actually a breach?


DUCK.  As you say, Douglas, this has become a messy thing…

Because it was a huge story over the weekend, wasn’t it?

“TikTok breach – What was it really?”

At first blush, it sounds like, “Wow, 2 billion data records, 1 billion users compromised, hackers have got in”, and whatnot.

Now, several people who deal with data breaches regularly, notably including Troy Hunt of Have I Been Pwned, have taken sample snapshots of the data that’s supposed to have been “stolen” and gone looking for it.

And the consensus seems to support exactly what TikTok has said, namely that this data is public anyway.

So what it seems to be is a collection of data, say a giant list of videos… that I guess TikTok probably wouldn’t want you just to be able to download for yourself, because they’d want you to go through the platform, and use their links, and see their advertising so that they could monetise the stuff.

But none of the data, none of the stuff in the lists seems to have been confidential or private to the users affected.

When Troy Hunt went looking and picked some random video, for example, that video would show up under that user’s name as public.

And the data about the video in the “breach” didn’t also say, “Oh, and by the way, here’s the customer’s TikTok ID; here’s their password hash; here’s their home address; here’s a list of private videos that they haven’t published yet”, and so on.


DOUG.  OK, so if I’m a TikTok user, is there a cautionary tale here?

Do I need to do anything?

How does this affect me as a user?


DUCK.  That’s just the thing, Doug – I guess a lot of articles written about this have been desperate to find some kind of conclusion.

What can you do?

So, the burning question that people have been asking is, “Well, should I change my password? Should I turn on two-factor authentication?”… all of the usual stuff that you hear.

It looks, in this case, as though there’s no specific need to change your password.

There’s no suggestion that password hashes were stolen and could now be getting cracked by a zillion off-duty bitcoin miners [LAUGHS] or anything like that.

There’s no suggestion that user accounts may be easier to target as a result of this.

On the other hand, if you feel like changing your password… you might as well.

The general recommendation these days is routinely and regularly and frequently changing your password *on a schedule* (like, “Once a month change your password just in case”) is a bad idea because [ROBOTIC VOICE] it – just – gets – you – into – a – repetitious – habit that doesn’t really improve things.

Because we know what people do, they just go: -01, -02, -03 at the end of the password.

So, I don’t think you have to change your password, though if you decide that you’re going to do so, good on you.

My own opinion is that in this case, whether or not you had two-factor authentication turned on would have made no difference whatsoever.

On the other hand, if this is an incident that finally persuades you that 2FA has a place in your life somewhere…

…then perhaps, Douglas, that is a silver lining!


DOUG.  Great.

So we’ll keep an eye on that.

But it sounds like not a whole lot that regular users could have done about this…


DUCK.  Except there is maybe one thing that we can learn, or at least remind ourselves from it.


DOUG.  I think I know what’s coming. [LAUGHS]

Does it rhyme?


DUCK.  It might do, Douglas. [LAUGHS]

Darn, I’m so transparent. [LAUGHING]

Be aware/Before you share.

Once something is public, it *really is public*, and it’s as simple as that.


DOUG.  OK, very good.

Be aware before you share.

Moving right along, the security community lost a pioneer in Peter Eckersley, who passed away at 43.

He was the co-creator of Let’s Encrypt.

So, tell us a bit about Let’s Encrypt and Eckersley’s legacy, if you would.


DUCK.  Well, he did a whole load of stuff in his unfortunately short life, Doug.

We don’t often write obituaries on Naked Security, but this is one of the ones that we felt we had to.

Because, as you say, Peter Eckersley, amongst all the other things he did, was one of the co-founders of Let’s Encrypt, the project that set out to make it cheap (i.e. free!) but, most importantly, reliable and easy to get HTTPS certificates for your website.

And because we use Let’s Encrypt certificates on the Naked Security and the Sophos News blog sites, I felt we owe him at least a mention for that good work.

Because anyone who’s ever run a website will know that, if you go back a few years, getting an HTTPS certificate, a TLS certificate, that lets you put the padlock in your visitors’ web browsers not only cost money, which home users, hobbyists, charities, small businesses, sports clubs couldn’t easily afford… it was a *real hassle*.

There was this whole procedure you had to go through; it was very full of jargon and technical stuff; and every year you had to do it again, because obviously they expire… it’s like a safety check on a car.

You’ve got to go through the exercise, and prove that you’re still the person who’s able to modify the domain that you’re claiming to be in control of, and so on.

And Let’s Encrypt was not only able to do that for free, they were able to make it so that the process could be automated… and on a quarterly basis, so that also means certificates can expire faster in case something goes wrong.
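
(A quick aside for readers who run their own sites: the automation mentioned above happens via the ACME protocol, with a client such as certbot renewing the certificate before it expires. If you just want to check how long your own certificate has left, here’s a minimal sketch using only Python’s standard library – the hostname is merely a placeholder.)

```python
import socket
import ssl
import time

def cert_days_remaining(hostname: str, port: int = 443) -> float:
    """Fetch a site's TLS certificate and report days until it expires."""
    ctx = ssl.create_default_context()  # verifies the chain and the hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # details of the validated certificate
    # 'notAfter' looks like 'Dec  8 12:00:00 2022 GMT'; the ssl module
    # has a helper to convert it to seconds since the Unix epoch.
    expiry = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expiry - time.time()) / 86400

print(f"{cert_days_remaining('example.com'):.0f} days left")
```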

They were able to build up trust quickly enough that the major browsers were soon saying, “You know what, we’re going to trust Let’s Encrypt to vouch for other people’s web certificates” – making it what’s called a root CA, or certificate authority.

Then, your browser trusts Let’s Encrypt by default.

And really, it’s all of those things coming together which to me was the majesty of the project.

It wasn’t just that it was free; it wasn’t just that it was easy; it wasn’t just that the browser makers (who are notoriously hard to persuade to trust you in the first place) decided, “Yes, we trust them.”

It was all of those things put together that made a big difference, and helped get HTTPS almost everywhere on the internet.

It’s just a way to add that little bit of extra safety to the browsing we do…

…not so much for the encryption, as we keep reminding people, but for the fact that [A] you’ve got a fighting chance that you really have connected to a site that’s being manipulated by the person who’s supposed to be manipulating it, and that [B] when the content comes back, or when you send a request to it, it can’t be tampered with easily along the way.

Until Let’s Encrypt, with any HTTP-only website, pretty much anyone on the network path could spy on what you were looking at.

Worse, they could modify it – either what you were sending, or what you’re getting back – and you *simply could not tell* that you were downloading malware instead of the real deal, or that you were reading fake news instead of the real story.
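
(Another aside: point [A] above – making sure you really are talking to the right site – is exactly what a modern TLS stack enforces by default. Here’s a small sketch showing Python’s standard library refusing a certificate that doesn’t match the hostname; it assumes the deliberately-broken badssl.com test host is still online when you run it.)

```python
import socket
import ssl

ctx = ssl.create_default_context()
assert ctx.check_hostname                    # certificate must match the hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED  # chain must lead to a trusted root CA

# wrong.host.badssl.com deliberately serves a certificate for the
# wrong name, so the handshake below should fail verification.
host = "wrong.host.badssl.com"
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            print("Unexpectedly connected!")
except ssl.SSLCertVerificationError as err:
    print("Handshake refused, as it should be:", err.verify_message)
```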


DOUG.  All right, I think it’s fitting to wrap up with a great comment from one of our readers, Samantha, who seems to have known Mr Eckersley.

She says:

“If there’s one thing I always remember about my interactions with Pete, it was his dedication to science and the scientific method. Asking questions is the very essence of being a scientist. I’ll always cherish Pete and his questions. To me, Pete was a man who valued communication and the free and open exchange of ideas among inquisitive individuals.”

Well said, Samantha – thank you.


DUCK.  Yes!

And instead of saying RIP [abbreviation for Rest In Peace], I think I’ll say CIP: Code in Peace.


DOUG.  Very good!

All right, well, we talked last week about a slew of Chrome patches, and then one more popped up.

And this one was an important one…


DUCK.  It was indeed, Doug.

And because it applied to the Chromium core, it also applied to Microsoft Edge.

So, just last week, we were talking about those… what was it, 24 security holes.

One was critical, eight or nine were high.

There were all sorts of memory mismanagement bugs in there, but none of them were zero-days.

And so we were talking about that, saying, “Look, this is a small deal from a zero-day point of view, but it’s a big deal from a security patch point of view. Get ahead: don’t delay, do it today.”

(Sorry – I rhymed again, Doug.)

This time, it’s another update that came out just a couple of days later, both for Chrome and for Edge.

This time, there’s only one security hole fixed.

We don’t quite know whether it’s an elevation of privilege or a remote code execution, but it sounds serious, and it is a zero-day with a known exploit already in the wild.

I guess the great news is that both Google and Microsoft, and other browser makers, were able to apply this patch and get it out really, really quickly.

We’re not talking about months or weeks… just a couple of days for a known zero-day that obviously was found after the last update had come out, which was only last week.

So that’s the good news.

The bad news is, of course, this is a zero-day – the crooks are on it; they’re using it already.

Google has been a little bit coy about “how and why”… that suggests that there’s some investigation going on in the background that they might not want to jeopardise.

So, once again, this is a “Patch early, patch often” situation – you can’t just leave this one.

If you patched last week, then you do need to do it again.

The good news is that Chrome, Edge, and most of the browsers these days should update themselves.

But, as always, it pays to check, because what if you’re relying on auto-updating and, just this once, it didn’t work?

Wouldn’t that be 30 seconds of your time well spent to verify that you do indeed have the latest version?

We have all the relevant version numbers and the advice [on Naked Security] on where to click for Chrome and Edge to make sure that you absolutely do have the latest version of those browsers.


DOUG.  And breaking news for anyone keeping score…

I just checked my version of Microsoft Edge, and it’s the correct, up-to-date version, so it updated itself.

OK, last, but certainly not least, we have a rare but urgent Apple update for iOS 12, which we all thought was done and dusted.


DUCK.  Yes, as I wrote in the first five words of the article on Naked Security, “Well, we didn’t expect this!”

I allowed myself an exclamation point, Doug, [LAUGHTER] because I was surprised…

Regular listeners to the podcast will know that my beloved, if old-but-formerly-pristine iPhone 6 Plus suffered a bicycle crash.

The bicycle survived; I grew all the skin back that I needed [LAUGHTER]… but my iPhone screen is still in a hundred thousand million billion trillion pieces. (All the bits that are going to come out into my finger, I think have already done so.)

So I figured… iOS 12, it’s been a year since I had the last update, so obviously it’s completely off Apple’s radar.

It’s not going to get any other security fixes.

I figured, “Well, the screen can’t get smashed again, so it’s a great emergency phone to take when I’m on the road”… if I’m going somewhere, if I need to make a call or look at the map. (I’m not going to do email or any work-related stuff on it.)

And, lo and behold, it got an update, Doug!

Suddenly, almost a year to the day after the previous one… I think 23 September 2021 was the last update I had.

Suddenly, Apple has put out this update.

It relates to the previous patches that we spoke about, where they did the emergency update for contemporary iPhones and iPads, and all versions of macOS.

There, they were patching a WebKit bug and a kernel bug: both zero-days; both being used in the wild.

(Does that smell of spyware to you? It did to me!)

The WebKit bug means that you could visit a website or open a document, and it’ll take over the app.

Then, the kernel bug means you put your knitting needle right into the operating system, and basically punch a hole in Apple’s well-vaunted security system.

But there wasn’t an update for iOS 12, and, as we said last time, who knew whether that was because iOS 12 just happened to be invulnerable, or that Apple genuinely wasn’t going to do anything about it because it fell off the edge of the planet a year ago?

Well, it looks like it didn’t quite fall off the edge of the planet, or it’s been teetering on the brink… and it *was* vulnerable.

Good news… the kernel bug that we spoke about last time, the thing that would let somebody essentially take over the whole iPhone or iPad, does not apply to iOS 12.

But that WebKit bug – which remember, affects *any* browser, not just Safari, and any app that does any kind of web related rendering, even if it’s only in its About screen…

…that bug *did* exist in iOS 12, and obviously Apple felt strongly about it.

So, there you are: if you’ve got an older iPhone, and it’s still on iOS 12 because you can’t update it to iOS 15, then you do need to go and get this.

Because this is the WebKit bug we spoke about last time – it has been used in the wild.

And the fact that Apple has gone to these lengths to support what seemed to be a beyond-end-of-life operating system version suggests, or at least invites you to infer, that this has been discovered to have been used in nefarious ways for all sorts of naughty stuff.

So, maybe only a couple of people got targeted… but even if that’s the case, don’t let yourself be the third person!


DOUG.  And to borrow one of your rhyming phrases:

Don’t delay/Do it today.

[LAUGHS] How about that?


DUCK.  Doug, I knew you were going to say that.


DOUG.  I’m catching on!

And as the sun begins to slowly set on our show for today, we would like to hear from one of our readers on the Apple zero-day story.

Reader Bryan comments:

“Apple’s Settings icon has always resembled a bicycle sprocket in my mind. As an avid biker, and an Apple device user, I expect you like that?”

That’s directed at you, Paul.

Do you like that?

Do you think it looks like a bike sprocket?


DUCK.  I don’t mind it, because it’s very recognisable, say if I want to go to Settings > General > Software Update.

(Hint, hint: that’s how you check for updates on iOS.)

The icon is very distinctive, and it’s easy to hit so I know where I’m going.

But, no, I have never associated it with cycling, because if those were front chainrings on a geared bicycle, they’d be just all wrong.

They’re not connected properly.

There’s no way to put power into them.

There are two sprockets, but they have teeth of different sizes.

If you think about how jumpy-gear type bicycle gears (derailleurs, as they’re known) work, you only have one chain, and the chain has a specific spacing, or pitch as it’s called.

So all the cogs or sprockets (technically, they’re not cogs, because cogs drive cogs, and chains drive sprockets)… all the sprockets have to have teeth of the same size or pitch, otherwise the chain won’t fit!

And those teeth are very spiky, Doug.

Somebody in the comments said they thought it reminded them of something to do with clockwork, like an escapement or some kind of gearing inside a clock.

But I’m pretty sure that clockmakers would go, “No, we wouldn’t shape the teeth like that,” because they use very distinctive shapes to increase the reliability and precision.

So I’m quite happy with that Apple icon. But, no, it does not remind me of bicycling.

The Android icon, ironically…

…and I thought of you when I thought of this, Doug [LAUGHTER], and I thought, “Oh, golly, I’ll never hear the end of this if I mention it”…

…that does look like a rear cog on a bicycle (and I know it’s not a cog, it’s a sprocket, because cogs drive cogs, and chains drive sprockets, but for some reason you call them cogs when they’re small and at the back of a bicycle).

But it only has six teeth.

The smallest rear bicycle cog I can find mention of is nine teeth – that’s very tiny, a very tight curve, and only used in special situations.

BMX guys like them because the smaller the cog, the less likely it is to hit the ground when you’re doing tricks.

So… that has very little to do with cybersecurity, but it’s a fascinating insight into what I believe is known these days not as “the user interface”, but “the user experience”.


DOUG.  All right, thank you very much, Bryan, for commenting.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or you can hit us up on social: @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]

