Category Archives: News

Cryptocoin “token swapper” Nomad loses $200 million in coding blunder

Cryptocurrency protocol Nomad (not to be confused with Monad, which is what PowerShell was called when it first came out) describes itself as “an optimistic interoperability protocol that enables secure cross-chain communication,” and promises that it’s a “security-first cross-chain messaging protocol.”

In plain English, it’s supposed to let you swap cryptocurrency tokens of one sort for another, in a trade known in the jargon as bridging.

The service is operated by a company going by the name of Illusory Systems, Inc.

Unfortunately, when it comes to cybersecurity, the word illusory seems to fit rather well.

Indeed, if you visit the Nomad “app page” right now [2022-08-02T14:25Z], you’ll notice that the service is entirely suspended, with the button you’d usually use to trade one cryptotoken for another replaced with the words BRIDGING UNAVAILABLE.

The company’s Twitter feed notes as much.

Plainly told, it looks as though numerous persons unknown were able to trigger a series of transactions that paid out an enormous quantity of various cryptocoins, without first paying in an equivalent amount of any other cryptocurrency.

According to cryptocurrency researcher @samczsun, the attackers were able to grab the funds by using what’s known as a replay attack, which is exactly what it sounds like: you simply re-use the data from a previous transaction, but with the original recipient’s account details replaced with your own.

According to @samczsun, a recent update to the Nomad source code inadvertently bypassed the critical test at the point where the system asked itself, “Has this transaction been approved?”

As long as the transaction data was correctly structured, the transfer would go through…

…so that simply copying an existing transaction, but modifying just the “payee” field, turned out to be the simplest and easiest way to pass muster and drain out funds.
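
If you’re wondering how a missing approval check can be quite that catastrophic, here’s a deliberately minimal sketch in C of the failure mode described above. (Nomad itself is built from smart contracts, not C, and every name and type below is our own invention, so treat this as an analogy, not as the project’s code.) The trap: a message that was never proven maps to a default, all-zero approval value, and a slip-up leaves that very default marked as trusted:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ROOT_LEN 32

    /* Oops: never set, so it's all zeroes by default. */
    static uint8_t trusted_root[ROOT_LEN];

    static void lookup_root(const char *msg, uint8_t out[ROOT_LEN]) {
        /* A message that was never proven has no entry,
           so the lookup "succeeds" with an all-zero root. */
        (void)msg;
        memset(out, 0, ROOT_LEN);
    }

    static bool is_approved(const char *msg) {
        uint8_t root[ROOT_LEN];
        lookup_root(msg, root);
        /* Intended: is this message's root one we trust?
           Actual: the all-zero root matches the all-zero "trusted"
           value, so every well-formed message passes. */
        return memcmp(root, trusted_root, ROOT_LEN) == 0;
    }

    int main(void) {
        /* Even a replayed transaction with the payee swapped out sails through: */
        printf("approved: %s\n",
               is_approved("pay 100 tokens to ATTACKER") ? "yes" : "no");
        return 0;
    }

Once one person had figured that out, anyone watching the transactions could copy the trick, which fits the “frenzied free-for-all” described below.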

Hanlon’s Razor

As you can probably imagine, not everyone is ready to accept that this was “just a programming blunder”, albeit a dreadfully expensive one, with reports suggesting that about $200,000,000 in cryptotokens were leeched from the system in what @samczsun described as “a frenzied free-for-all”.

Some Twitterati are already using the word rugpull, a pejorative phrase in the cryptocoin world, used to imply that a cryptocurrency hack was some sort of inside job, enabled or carried out on purpose. (To be clear, there’s no evidence to support any of these suggestions.)

But, as a principle known as Hanlon’s Razor jocularly puts it, there is no need to assume malice when incompetence is an alternative explanation.

What to do?

We don’t really know what advice to offer, other than to urge two sorts of caution:

  • Don’t be in a hurry to join the so-called DeFi revolution. Decentralised finance, or Web 3.0, is a vehicle for online trading that aims to escape from the traditional world of highly regulated, centralised financial services. DeFi services aim to allow individuals to trade directly and almost immediately with one another through online payment instructions, often expressed in the form of specialised program code. But without the regulatory frameworks that surround traditional financial institutions, your chances of recovering any money following blunders (or, for that matter, after insider roguery) are slim. If the company genuinely has no money left because cybercriminals found a loophole and made off with all of it, then bankruptcy is almost inevitable. There is no government recovery fund to provide basic restitution, as there is with mainstream banks in many countries.
  • Watch out for self-styled recovery experts who contact you after a DeFi catastrophe. One of the most common types of comment scam we see on the Naked Security site (we moderate comments both automatically and manually in an effort to stop these getting through) is the “unsolicited funds recovery testimonial”. These comments, usually aimed at articles in which we discuss cryptocoin blunders, pretend that the commenter lost out badly in a cryptocurrency sting, yet recovered most or all of their funds by contacting company X, or individual Y, or social media account Z. These fake adverts for fraudulent money-back services may sound tempting, especially if they claim to offer some sort of “no-win-no-fee” service. The truth is, however, that cryptocoin funds siphoned off in pseudo-anonymous attacks of this sort are rarely recovered, even when law enforcement and the courts are actively involved. Don’t throw good money after bad.

Remember: if it sounds too good to be true, it IS too good to be true.

And that goes for cryptographic and data security promises, just as much as it goes for financial returns.


GnuTLS patches memory mismanagement bug – update now!

The best-known cryptographic library in the open-source world is almost certainly OpenSSL.

Firstly, it’s one of the most widely-used, to the point that most developers on most platforms have heard of it even if they haven’t used it directly.

Secondly, it’s probably the most widely-publicised, sadly because of a rather nasty bug known as Heartbleed that was discovered more than eight years ago.

Despite being patched promptly (and despite reliable workarounds existing for developers who couldn’t or wouldn’t update their vulnerable OpenSSL versions quickly), Heartbleed remains a sort of “showcase” bug, not least because it was one of the first bugs to be turned into an aggressive PR vehicle by its discoverers.

With an impressive name, a logo all of its own, and a dedicated website, Heartbleed quickly became a global cybersecurity superstory, and, for better or worse, became inextricably linked with mentions of the name OpenSSL, as though the danger of the bug lived on even after it had been excised from the code.

Life beyond OpenSSL

But there are several other open-source cryptographic libraries that are widely used as well as or instead of OpenSSL, notably including Mozilla’s NSS (short for Network Security Services) and the GNU project’s GnuTLS library.

As it happens, GnuTLS just patched a bug known as CVE-2022-2509, reported in the project’s security advisory GNUTLS-SA-2022-07-07.

This patch fixes a memory mismanagement error known as a double-free.

Double-free explained

Simply put, a double-free vulnerability is created when a programmer asks the operating system to allocate a block of memory to use temporarily…

…and hands it back so it can be deleted from the list of loaned-out blocks to be freed up for use by other parts of the program…

…and then accidentally asks the system to free up the very same memory block all over again.
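
In C, the blunder can be as simple as this (a minimal sketch of our own, purely for illustration):

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buff = malloc(64);         /* 1. borrow a 64-byte block   */
        if (buff == NULL) return 1;

        strcpy(buff, "temporary data");  /*    ...use it for a while... */

        free(buff);                      /* 2. hand the block back      */

        /* ...later on, perhaps far away in another function... */

        free(buff);                      /* 3. oops: hand it back AGAIN */
        return 0;                        /*    (undefined behaviour)    */
    }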

Ideally, the memory allocation software will detect that the block no longer belongs to the part of the program that’s “returning” it, will figure out that the offending block has already been recycled, and won’t deallocate it a second time, thus sidestepping the risks of “freeing” it again.

Dealing gently with a double-free that’s detected proactively is a tricky issue.

The C function that hands back memory is prototyped as void free(void *ptr); so you pass in the address of the block you want to free up, but don’t get back a return code. (A C function with a void return value is what other programming languages call a procedure: it does something for you, but it has no way of reporting a result.)

Thus even carefully-written C code has no standard way of detecting that something went wrong in free(), and therefore no way of handling the error by trying to shut down gracefully.

Terminating the offending program unilaterally is the only safe solution for the system.

But if the memory allocator doesn’t realise (perhaps because that very same block has since been handed out to another part of the same program, so it’s back in the “loaned-out” list in exactly the same form as it was before), then bad things are likely to happen.

Notably, the memory manager might inadvertently and unexpectedly “confiscate” the double-freed block from the code that’s now legitimately using it, and reassign it to yet another part of the program, perhaps even malicious code that an attacker has timed carefully to take advantage of the mismanagement.

So, you could end up with two parts of the same program manipulating the same chunk of memory.

One part of the program assumes it can trust the memory content implicitly, because it considers itself the legitimate “owner” of the block.

At the same time, another part of the program knows it can mess with the data (or can be tricked into messing with it) in order to trip up the first part deliberately.
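
As an aside, because free() can’t report failure, many C programmers guard against their own future mistakes with a simple idiom. (This is a general precaution of ours, not something suggested in the GnuTLS advisory.) The C standard says that free(NULL) does nothing at all, so if you wipe the pointer the moment you give the block back, an accidental second free becomes harmless:

    #include <stdlib.h>

    /* Hand back the block and immediately clear the pointer, so an
       accidental second "free" turns into a do-nothing free(NULL). */
    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    int main(void) {
        char *buff = malloc(64);
        FREE_AND_NULL(buff);   /* block returned; pointer wiped     */
        FREE_AND_NULL(buff);   /* free(NULL): defined to do nothing */
        return 0;
    }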

Doing the wrong thing does the right thing

Ironically, the CVE-2022-2509 bug exists in the certificate verification code in GnuTLS.

(The irony, in case you’re wondering, is that software that’s insecure in general because it doesn’t bother checking for trustworthy TLS connections is immune to this specific security bug.)

For example, when you visit a website (or other type of server) that’s secured with TLS, the other end will typically send you a web certificate that asserts that the server really is owned and operated by the organisation you expect.

Of course, given that anyone can create a certificate in any name they like, a raw certificate on its own doesn’t tell you much, so the certificate owner usually gets it digitally signed by a company that your browser already trusts.

In practice, certificates are usually signed by a certificate that is, in turn, signed by a certificate that your browser trusts, but the end result is what’s called a chain of trust that can be securely traced to a certificate that’s already installed in a list of so-called Trusted Authorities, also known as Roots, that’s managed by your browser or your operating system.

To simplify and speed up the process of validating the certificate chain, many servers don’t just send their own certificate and leave it to the browser to “chase the chain” to a trusted root.

The server typically includes the chain of trust it’s relying on, which it only needs to construct once, so that your browser, or whatever software is verifying the certificate, can simply check that the chain is digitally valid, and then verify that the last certificate in the chain matches one that’s already trusted.

In that case, GnuTLS will correctly and safely validate the supplied certificate, before freeing up the memory block just used to store it.

But if the other end doesn’t provide a pre-generated certificate chain, thus leaving GnuTLS to create and check the chain on its own, then the GnuTLS code accidentally frees up the memory used to store the supplied certificate before it starts the chain-checking process…

…and then frees it up again after the check is complete.

This causes a double-free mishap, which could lead to memory corruption, followed by a program crash.
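
In outline, and with every name invented (this is our reconstruction of the control flow described above, not the actual GnuTLS source), the buggy logic looks something like this:

    #include <stdbool.h>
    #include <stdlib.h>

    struct cert { int dummy; };    /* stand-in for a parsed certificate */

    static void check_chain(struct cert *c) { (void)c; /* ...chain checking... */ }

    static void verify_cert(bool have_supplied_chain) {
        struct cert *copy = malloc(sizeof *copy);
        if (copy == NULL) return;

        if (!have_supplied_chain) {
            /* Build our own chain of trust... and, by mistake,
               hand the certificate's memory back already. */
            free(copy);                  /* first free: the bug          */
        }

        check_chain(copy);               /* use-after-free on buggy path */

        free(copy);                      /* second free on buggy path    */
    }

    int main(void) {
        verify_cert(true);    /* server sent its own chain: safe path    */
        verify_cert(false);   /* no chain sent: double-free follows (UB) */
        return 0;
    }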

Shepherding a crash to implant malware

Usually, or at least often, crashes cause such wayward behaviour that the operating system detects the offending program has lost control of the flow of program execution – for example, if the program leaps off to a random memory address and tries to run code from a memory block that hasn’t been allocated at all.

In this case, the crash would provoke a system error, and although this sort of bug could be abused for what’s called a Denial of Service (DoS) attack, where the entire goal is simply to disrupt the program being attacked, it doesn’t lead to Remote Code Execution (RCE), where untrusted and unwanted software code gets triggered instead.

But whenever there’s a program crash that attackers can provoke at will, based on untrusted data that they supplied themselves, there’s always a risk that the crash could be shepherded in such a way as to misdirect the crashing program so that it jumps into executable code provided by the attackers.

As you can imagine, attackers can often exploit such vulnerabilities to implant malware, either temporarily or permanently, given that they get to inject untrusted code into your computer without producing any popup warnings asking for permission first.

What to do?

Update to the latest version of GnuTLS, which is 3.7.7 at the time of writing.

(This bug was apparently introduced in GnuTLS 3.6.0, and exists in every version from then, up to and including 3.7.6.)
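
If you aren’t sure which GnuTLS version your own code ends up linked against, the library’s public gnutls_check_version() function will tell you. Here’s a minimal checker (compile and link against GnuTLS, for example via pkg-config gnutls):

    #include <stdio.h>
    #include <gnutls/gnutls.h>

    int main(void) {
        /* Passing NULL simply reports the version that's linked in. */
        printf("linked against GnuTLS %s\n", gnutls_check_version(NULL));

        /* Passing a minimum version returns NULL if the library is older. */
        if (gnutls_check_version("3.7.7") == NULL) {
            printf("older than 3.7.7 - check your distro for updates\n");
        }
        return 0;
    }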

Note that many popular applications and programming toolkits either include or may be built to make use of GnuTLS, even though you may not be aware of it, including but by no means limited to: FFmpeg, GnuPG, Mplayer, QEMU, Rdesktop, Samba, Wget, Wireshark and Zlib.

Many Linux or *BSD packages that use GnuTLS will rely on a central version managed by your distro itself, so be sure to update as soon as your distro has this version available.

Happy patching!


How to celebrate SysAdmin Day!

If you’ve ever watched a professional plumber at work, or a plasterer, or a bricklayer, or the people who deftly use those improbably long sticks to craft paper-thin pancakes the size of a bicycle wheel…

…you’ve probably had the same thoughts that we have.

“I could do that. I really could. But there would be an AWFUL lot of cleaning up afterwards, and the final result would nevertheless still leak for evermore / be horribly uneven / wobble disconcertingly / taste terrible.” (Delete as inapplicable.)

Well, it’s much the same with computers, mobile phones and all the other digital devices that we rely on so much, and that we blithely assume will work perfectly tomorrow, on the grounds that they’re fine today.

Except that digital devices don’t break down tomorrow, do they?

They inevitably let you down RIGHT NOW, just when you need them most.

That’s how you know they’ve let you down, after all – when your presentation file goes blank live on air, or you get kicked out of a meeting and can’t get back in to explain why you’re no longer there.

What do you do?

Do you try to replace your own drainage pipe / re-render your own ceiling / rebuild the garden wall on your own / cook yourself a crepe / fix your own computer? (Delete as inapplicable.)

No!

You simply Summon A SysAdmin, and hand the problem over to them, carefully avoiding any first-person pronouns and using only the passive voice.

Don’t say: I couldn’t remember how to save the file so I clicked on a few of the icons randomly until a blue screen appeared, and then I panicked and yanked out the power plug.

Do say: While the computer was in use, it became subject to an error condition and got shut down.


Don’t say: In the middle of a Zoom meeting, I decided to wipe off the cake crumbs from the birthday celebration you weren’t invited to. With hindsight, I used far too much cleaning spray, because there was a loud BANG from under the keyboard, followed by the smell of magic smoke escaping.

Do say: What can be done? So much care has been lavished on this laptop! You can see how scrupulously neat and tidy it’s been kept!


Don’t say: To be honest, I lost my padded carry-case during lockdown so I’ve just been shoving the laptop carelessly into my backpack ever since we returned to the office, along with my bike chain, two padlocks, and a bunch of metalworking tools I keep meaning to return to my brother-in-law.

Do say: They’re not made like they used to be!

Folks, it’s the last Friday of July, and that means it’s SAAD, or SysAdmin Appreciation Day!

So why not pop round with a smile and something to help your sysadmins celebrate the fact that you do appreciate them after all?

Why not openly acknowledge all the hard and hidden work they put into keeping your computers, servers, cloud systems, laptops, phones and networking gear in working order, online and secure…

…even in the face of random icon clicking / cord yanking / fluid spilling / equipment bashing that gets inflicted on them? (Delete as inapplicable.)

If your mouse is out of batteries
Or your webcam light won't glow
If you can't recall your password
Or your email just won't show

If you've lost your USB drive
Or your meeting will not start
If you can't produce a histogram
Or draw a nice round chart

If you hit [Delete] by accident
Or formatted your disk
If you meant to make a backup
But instead just took a risk

If you know the culprit's obvious
And the blame points back to you
Don't give up hope or be downcast
There's one thing left to do!

Take chocolates, wine, some cheer, a smile
And mean it when you say:
"I've just popped in to wish you all
The best SysAdmin Day!"

Happy System Administrator Appreciation Day!


S3 Ep93: Office security, breach costs, and leisurely patches [Audio + Text]

You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Data breach fines.

Macros.

And leisurely bug fixes… all that, and more, on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth, and he is Paul Ducklin.

Paul, how do you do?


DUCK.  I’m very well, Douglas.

Not that you’re ever unchipper… but that was a super-upbeat introduction, Doug!

I’m guessing you’ve got a very excellent Fun Fact/Tech Tip coming up.


DOUG.  It’s true… thank you for the segue! [LAUGHTER]

Let’s talk about This Week in Tech History.

This week, in 1963, Syncom 2, which is short for Synchronous Communications Satellite, was launched into geosynchronous orbit, facilitating the first satellite-based phone call and one of the first satellite TV transmissions.

Syncom 2 was also used by NASA for voice, teletype and fax testing.

Syncom 1 launched a few months earlier and made it into orbit as well, but an electronics failure rendered it inoperable.

Can you imagine sending Syncom 1 up there and going, “Oh, someone forgot to seat the RAM properly?”


DUCK.  I believe that the payload was just 25kg!

I saw a picture of Syncom 2, and it looks like a giant space object out of a 1950s scifi movie…

…but apparently it was just 71cm in diameter.

It’s really, really tiny… what’s 71cm? Just over 2 feet?

And it could support one phone call – very low power – so it was just an experiment.


DOUG.  We talked about an Office macro security feature that people were asking for for the better part of 20 years.

Microsoft turned it on, and then people commented that they didn’t like it.

So Microsoft turned it off, but said, “It will be back sometime.”

And now it’s back – that was quick!


DUCK.  It was.

When we spoke about this last on the podcast, Doug, I was very upbeat about, “Yes, it’s coming back, but it’ll be a while.”

I was imagining maybe it would be a sort of Easter Egg for 2023 – a literal Easter Egg, you know, sometime in the Northern Hemisphere spring.

I was imagining, “It won’t be weeks; it’s probably going to be months.”

And how long was it? A couple of weeks!


DOUG.  
Yes.


DUCK.  So 20 years to turn it on, 20 weeks to turn it off and then just a couple of weeks to turn it back on.

So, good for Microsoft!

But if only, Doug, they had done it in 1998… that’s more than the better part of 20 years, that’s better than 20 years.

If they’d done it, say, the day before the Melissa virus came out, that would have been really handy, so that macros arriving over the internet would not have triggered unless you really wanted them to.

Although I imagine, in those days, it wouldn’t have been fully off.

There would have probably been a button [Allow anyway].

And the big deal here is that there is no more [Allow anyway] button.

So, it’s not that it warns you, “This is a bad idea. Do you want to hoist yourself by your own petard [Yes/Yes]?”

It’s just, “Sorry, macro came over the Internet. You can’t do that.”


DOUG.  Did Microsoft change anything meaningfully between now and 20 days ago when they had to turn it back off?


DUCK.  My understanding, Doug, is that the main thing they did – just reading this into what they wrote – is that they fulfilled their promise that they would document more clearly: how this worked, why it worked, and most importantly what you could do about it if you really wanted to have non-local or non-LAN servers that you treated as though they were local.

Because people go, “Oh, well, I’m a small biz, I use SharePoint, OneDrive, some cloud service, so I’ve got some random domain name that was issued to me… but to me that’s a local server, and that’s my trusted corporate repository for stuff.”

And so Microsoft now has some quite decent documentation saying, “Here’s how you can tell your users that a certain external server is to be treated as a trusted one.”

Although that *is* essentially an exclusion, and exclusions in cybersecurity can be dangerous, like people with their antivirus going, “Hey, it’s much faster if I exclude the C: drive. [LAUGHTER] Who knew?”

So you do need to be cautious, but it does mean that you then have a definitive list saying, “These are the servers that I actually trust, and I treat these as a place where people can go to get official work content.”

And that is very different from just relying on people not clicking the [Oh, go on then, she'll be right] button every time they get a macro from anywhere on the internet.

What Microsoft did is they went out and produced a document that is fairly easy to read and gives a number of ways of telling your company: “This is what we trust, and this is what we don’t.”

So, it’s a slightly more formal way of doing it than just relying on people not clicking the right button at the wrong time.


DOUG.  OK, we have links to those two documents in the article which you can find on Naked Security.

It’s called: Office macro security: on-again-off-again feature now BACK ON AGAIN.

Hooray!

And then, moving right along to something that’s not so fun: T-Mobile had a big data breach in 2021 and they are now being ordered to cough up $500 million, which, after lawyer fees, shakes out to about $25 per victim.


DUCK.  Yes, and it seems that half-a-billion dollars (wow, that’s a large amount!) is loosely split into two parts.

There’s $350,000,000 that is part of a class action lawsuit, which you have in the US… we don’t have those in the UK.

My understanding is a class action is where anybody can join in and say, “Oh, yes, I’m a customer.”

And the idea is… if you were to sue and you would only get $40 or $50 or $100, then it would be too risky to sue on your own, so you band together, “Power to the People”.

And the lawyers go after the big company on behalf of potentially millions of people.

So, it’s a $350,000,000 settlement for that.

Unfortunately, there are so many claimants that it’s only $25 per person, after you take out the (gulp!) 30% of that… 105 million of your US dollars go to the lawyers.

The rest goes to the actual people who were T-Mobile’s customers.

But it does show that there aren’t zero consequences to a data breach.

And whether you like class actions or not, there is this sense that people do get injured when their data is breached, even if there’s no obvious connection between the breach and then suffering identity theft.

And then there’s another $150,000,000.

I don’t fully understand how this works in the US legal system, but my understanding is this is essentially a commitment from T-Mobile USA that they will spend that money on cybersecurity, whereas they might not have done so otherwise.

And if only they had seen cybersecurity as a value, not as a cost, beforehand!

If they’d invested the $150,000,000 upfront, they could probably have saved the $350,000,000… because they’re spending both those sums of money now anyway.


DOUG.  So that’s probably the better part of the outcome here: that they’re being forced to spend on upgrading their security.

The $25 per person is great, whatever, but the earmarked money to upgrade their security is probably a good thing to come out of a bad situation.


DUCK.  I’d say so, because that’s always the problem when you get a big fine of this sort, isn’t it, for not doing cybersecurity properly?

That’s money that now cannot be spent on cybersecurity because it’s gone elsewhere.

I guess the flip side of that is that you can’t just say, “Well, wait till you have a data breach and then there’ll be a massive penalty, but you get to spend it on cybersecurity anyway”, because that’s almost inviting people to delay until they’re forced to do it.

So, I can see the point that there’s the carrot part and there’s the stick part.

Together, half-a-billion dollars!

And to all the people who like to say, “Oh, well, for a multi-billion dollar company, that’s chump change”…

Really?

Sounds like a lot of money to me!

I guess if you’re a shareholder, you probably have a different view of just how chump-changy $500 million is.

It’s a reminder that data breaches aren’t something that you suffer, and you report, and you get shouted at, and you get a nasty report sent to you, but that doesn’t cost you anything.

And like I said – and I know that working for a cyber security company, I would say this, but I’m saying it because I think it’s true, not just because I’ve got something to sell you…

You really need to think of cybersecurity as a *value*, because customers are increasingly expecting to find that as part of what they consider the package.

My take on this is I probably wouldn’t have joined the class action suit, but I would very strongly consider taking my business elsewhere, as a different way of proving the point.


DOUG.  Well, we’ll keep an eye on that.

That is: T-Mobile to cough up $500 million over 2021 data breach, on nakedsecurity.sophos.com.

And we move right along to Apple patching a zero-day browser bug that we talked about from the Pwn2Own contest.

So, a little bit laggy as far as the patch goes, but we don’t know how bad it actually was on Apple’s side of the fence.


DUCK.  In fact, there were two browser-related bugs fixed in the latest slew of Apple updates, which in Apple’s traditional way are kind of like Microsoft Patch Tuesday in that they cover all possible Apple devices: tvOS, watchOS, iOS, iPadOS, macOS, etc.

But, unlike Patch Tuesday, they come when they feel like it… and I think this one was actually on a Thursday, if I remember, so it wasn’t even on a Tuesday, it just arrived.

Now, Safari is patched by Apple in the operating system update for all supported operating systems except the previous and pre-previous versions of macOS, where you actually need to get *two* updates, one for the OS and one for Safari.

So, Safari goes to version 15.6.

And what’s interesting is it’s not just that Pwn2Own zero-day, where Mozilla famously patched the equivalent bug in Firefox within two days of finding out about it at Pwn2Own…

If you remember, the same chap, Manfred Paul, a German hacker, pwned Firefox in a sort of double pwnage for $100,000, and he pwned Safari for $50,000.

Mozilla patched their bug or bugs within two days, if you remember.

But Apple took a couple of months to get round to theirs!

It was disclosed responsibly, of course, so we don’t know how likely it was that anyone else would find it.

But the other bug that was fixed in Safari was apparently the same flaw that emerged as that zero-day in Chrome we talked about on the podcast not too long ago, I think it was a couple of weeks ago.

That bug that was found in the wild by a security company that was investigating some suspicious behaviour that a customer had reported to them.

As sometimes happens with Managed Threat Response… you’re looking around, and you can see all the symptoms and the side effects of what the crooks have been doing, and you think, “Where did it start?”

And sometimes it’s obvious, “Oh, they logged in because you had a silly password, or they logged in because you’d forgotten to patch this, that or the other server.”

And occasionally you can’t quite work it out, but you might get lucky and stumble across what looks like a weird web page: “Oh my golly, I found a zero-day in the browser!”

And then it’s a good guess that either a very niche group of cybercrooks has got it, or one of those so-called lawful spyware companies – the people who do the government interception stuff – has found it, and they’re using it in a targeted way.

That was the zero-day in Chrome, and Chrome fixed it.

Turns out that the same bug, it seems, was in WebKit – Apple’s code – and they took another two weeks to fix it, and didn’t say they were working on it.

So, go figure.

But that makes this patch for Apple at least as important as any other we’ve spoken about.

And I know we always say, “Don’t delay/Do it today.”

But in this case, there’s one bug that we know somebody already found because they demonstrated it working 100% at Pwn2Own, two months ago; and there’s another bug that’s related to code that was fixed by Google in Chrome because somebody found it being used for surveillance purposes in the wild.


DOUG.  It is interesting how you described the process by which Pwn2Own shows the actual contest, but they take steps to not actually show how the attacks work while the responsible disclosure process is going on.


DUCK.  Yes, it’s quite amusing, if you watch the video of Manfred Paul pwning Firefox.

He obviously was very confident that whatever he’d put together was going to work.

So, the camera is pointing at his face, and the adjudicator’s face, and then you see the commentator kind of stick his head in and say, “Here we go, folks.”

And there’s a little timer – he’s got 30 minutes.

“Everyone ready?”

Yes, they’re ready… and all you can see is the back of two screens, one for the server and the client.

And then you see the adjudicator say, “OK, Go!”

The timer starts counting down, and Manfred Paul clicks a button – obviously, he’s got a little [Do it now] button in his browser window…

…and then you see everybody nodding as the timer clicks over to just 7 seconds!

So you know that it worked – you can just see on their faces.

To be fair, in this case of Apple taking their time, you have to come to Pwn2Own prepared.

You have to come with full details, so we don’t know how long it took Manfred Paul to put the attack together.

He could have been working on it for months, in which case saying, “Apple should have fixed it in two days”…

…well, maybe they could have, but maybe they felt they didn’t need to, given the complexity.

And perhaps they wanted to make sure, in testing, that the fix was going to work well.

Anyway, although Pwn2Own has a live video feed, that should not give enough hints for somebody to figure out anything about the actual vulnerability.


DOUG.  We’ve got some instructions about how to update your iPhones, iPads and Macs over on the site.

And we round out the show with a two-pack of Firefox bugs.


DUCK.  Yes, and the good news is that for the latest version of Firefox, there’s a total of eight CVE numbers, but two of those are CVE numbers that cover all the bugs of which you can say, “These could probably be exploited and we’re fixing them in bulk anyway, without actually going into the detail of finding out how you might exploit them.”

So,those are things that are found automatically, for example through fuzzing or the automated tools that probe for vulnerabilities that you might have to wait years and years to find by accident.

The other six bugs… none of those are rated even High.

They’re all Medium or lower, which is kind of good news.

Two of them I thought were worth calling out individually, and we’ve written these up on Naked Security because it’s a fascinating part of understanding what kind of bug-related security risks can exist in browsers.

It’s not just, “Oh, the crooks can run arbitrary code and implant malware.”

There are two bugs that relate to potentially allowing attackers to trick you into clicking something that looks safer than it is.

And one of them is, I guess, good old clickjacking, which is where you click on object X, but actually you activate object Y.

The mouse position on the screen and where the browser *thinks* it is can be tricked into diverging.

So, you move the mouse, and you click… but actually the click registers somewhere else on the screen.

You can see how that could be quite dangerous!

It doesn’t guarantee remote code execution, but you can imagine: an ad fraudster would love that, wouldn’t they?

They get you to click on, “No, I definitely want to decline,” and in fact, you’d be racking up clicks saying, “Yes, I really want to view this ad.”

And it also means that for things like phishing attacks and fake downloads, you can make a download look legit when in fact the person is clicking on something they don’t realize.

And the other bug relates to good old LNK files on Windows, so that’s a Windows-only Firefox bug – it doesn’t affect other products.

And the idea is that if you open a local link that appears to go to a Windows link file…

…remember, a link file is a Windows shortcut, so it’s a security problem in its own right.

Because a link file is a tiny little file that says, when the person clicks on it, “Actually, don’t open the link. Open a file or a network location that’s listed inside the link. Oh, by the way, what icon would you like the link to display as?”

So you can have a link file with an icon that, say, looks like a PDF.

But when you click, it actually launches an EXE.

And in this case, you can take that even further.

You can have a link file which you “know” is local, so it’s going to open a local file.

But when you click the link, it actually triggers a network connection.

Of course, whenever there’s a network connection from a browser – even if nothing truly dangerous happens with what comes back, such as remote code execution – every outbound connection gives away information, possibly even including cookies, about the current session; about your browser; about you; about your network location.

And so you can see, with both of those bugs, it’s a great reminder that it’s really important that your browser presents you the unvarnished truth of what happens when you click on any point on the screen.

It’s vital that it gives you an accurate and useful rendition of what will happen next, such as, “You will go off site. You will go to this link that you wouldn’t have clicked if we’d made it obvious.”

So it’s important that the browser gives you at least a way of figuring out where you’re going next.

Anyway, these have been patched, so if you get the update, you will not be at risk!


DOUG.  Excellent.

All right, that is called: Mild monthly security update from Firefox, but update anyway.

I found that more than mildly interesting, especially the Mouse position spoofing with CSS transforms.


DUCK.  Yes, lots of potential for mischief and badness there!


DOUG.  OK, in that vein, we have a reader who’s written in.

Naked Security Podcast listener Nobody writes the following… I love this one:

Hi.

I like the show a lot and have heard almost every episode since the beginning. I work in security, but right now, in my private life, I’m cat-sitting for a family with a house alarm.


DUCK.  When I started reading that email, I thought, “Oh, I know what happens! Every time the cat walks around, the alarm goes off. And now he’s faced with this thing, ‘Do I turn the security off even though I was told not to?’ But it’s much worse than that!”


DOUG.  It’s even *better* than that. [LAUGHTER]

He writes:

The numbers that match their code are wearing off, while all the wrong numbers are clearly untouched.

So it’s easy to guess which numbers are in the code.

I considered telling them that it’s time to change their code, but then I noticed that the alarm code is also written on a piece of paper taped right next to the alarm.

So the security hole I found is clearly not worth mentioning to them.

[LAUGHTER]

You shouldn’t laugh!

Don’t write your security code next to your security alarm panel!

Joshua, thank you for writing that in.

I would advise you to advise them to change the code, and throw away the paper with the code written on it.


DUCK.  Yes.

And, in fact, if they do that, you could argue that then the keypad would be like a nice decoy.


DOUG.  Yes, exactly!


DUCK.  Because the crooks will keep trying all permutations of the wrong code.

And if there’s like a ten-trial lockout or something…


DOUG.  Well, if you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, and you can hit us up on social: @NakedSecurity.

That’s our show for today.

Thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time to…


BOTH.  Stay secure!

[MUSICAL MODEM]


Critical Samba bug could let anyone become Domain Admin – patch now!

Samba is a widely-used open source toolkit that not only makes it easy for Linux and Unix computers to talk to Windows networks, but also lets you host a Windows-style Active Directory domain without Windows servers at all.

The name, in case you’ve ever wondered, is a happy-sounding and easy-to-say derivation from SMB, short for Server Message Block, a proprietary file-sharing protocol that goes way back to the early 1980s.

Anyone with a long enough memory will recall, probably without a tremendous amount of affection, hooking up OS/2 computers to share files using SMB over NetBIOS.

Samba started life in the early 1990s thanks to the hard work of Australian open source pioneer Andrew Tridgell, who figured out from first principles how SMB worked so that he could implement a compatible version for Unix while he was busy with his PhD at the Australian National University.

(Tridge’s PhD, by the way, was rsync, another software toolkit that you’ve probably used in some guise, even if you don’t realise it.)

SMB turned into CIFS, the Common Internet File System, when it was made public by Microsoft in 1996, and has since spawned SMB 2 and SMB 3, which are still proprietary network protocols, but with specifications that are officially published so that tools such as Samba no longer have to rely on reverse engineering and guesswork to provide compatible implementations.

As you can imagine, Samba’s usefulness means that it’s widely used in the Linux and Unix worlds, including in-house, in the cloud, and even on network hardware such as home routers and NAS devices.

(NAS is short for network attached storage, typically a box full of hard disks that you plug into your LAN and that automatically shows up as a file server that all your other computers can access.)

Print Your Own Passport!

Samba just got updated to fix a number of security vulnerabilities, including a critical bug related to password resets.

As detailed in the latest Samba release notes, there are six CVE-numbered bugs patched.

The most serious of the lot is CVE-2022-32744, as you will see immediately from the bug report quoted below.

In theory, the CVE-2022-32744 bug could be exploited by any user on the network.

Loosely put, attackers could wrangle Samba’s password-changing service, known as kpasswd, through a series of failed password change attempts…

…until it finally accepted a password change request that was authorised by the attackers themselves.

In slang terms, this is what you might call a Print Your Own Passport (PYOP) attack, where you’re asked to prove your identity, but are able to do so by presenting an “official” document that you created yourself.

The holy trinity of cybersecurity

As the Samba bug report puts it (our emphasis):

Tickets received by the kpasswd service were decrypted without specifying that only that service’s own keys should be tried. By setting the ticket’s server name to a principal associated with their own account, or by exploiting a fallback where known keys would be tried until a suitable one was found, an attacker could have the server accept tickets encrypted with any key, including their own.

A user could thus change the password of the Administrator account and gain total control over the domain. Full loss of confidentiality and integrity would be possible, as well as of availability by denying users access to their accounts.
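
In other words, the decryption step was willing to try keys until one fitted, instead of insisting on the kpasswd service’s own key. Schematically, and with every name invented for illustration (this is emphatically not Samba’s actual code), the difference between the flaw and the fix looks like this:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy model: a "ticket" is tagged with the id of the key that sealed it. */
    struct key    { int id; };
    struct ticket { int enc_key_id; };

    static bool try_decrypt(const struct ticket *t, const struct key *k) {
        return t->enc_key_id == k->id;       /* stand-in for real decryption */
    }

    /* BUGGY: tries every known key until one works, so a ticket the
       attackers sealed with their own account's key gets accepted. */
    static bool accept_buggy(const struct ticket *t,
                             const struct key *keys, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (try_decrypt(t, &keys[i])) return true;
        }
        return false;
    }

    /* FIXED: only the kpasswd service's own key will do. */
    static bool accept_fixed(const struct ticket *t,
                             const struct key *kpasswd_key) {
        return try_decrypt(t, kpasswd_key);
    }

    int main(void) {
        struct key known[] = { {1}, {42} };  /* 1 = kpasswd, 42 = attacker   */
        struct ticket forged = { 42 };       /* sealed with the attacker key */

        printf("buggy server accepts forged ticket: %s\n",
               accept_buggy(&forged, known, 2) ? "yes" : "no");   /* yes */
        printf("fixed server accepts forged ticket: %s\n",
               accept_fixed(&forged, &known[0]) ? "yes" : "no");  /* no  */
        return 0;
    }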

As you’ll remember from almost any cybersecurity introduction you’ve ever seen, availability, confidentiality and integrity are the “holy trinity” of computer security.

Those three principles are meant to ensure: that you alone can view your private data (confidentiality); that no one else can mess with it, even if they can’t read it themselves, without making you aware that it’s been nobbled (integrity); and that unauthorised parties can’t prevent you accessing your own stuff (availability).

Clearly, if anyone can reset everyone’s password (or perhaps we mean if everyone can reset anyone’s password), none of those security properties apply, because attackers could get into your account, change your files, and lock you out.

What to do?

Samba comes in three supported flavours: current, previous and pre-previous.

The updates you want are as follows:

  • If using version 4.16, update from 4.16.3 or earlier to 4.16.4
  • If using version 4.15, update from 4.15.8 or earlier to 4.15.9
  • If using version 4.14, update from 4.14.13 or earlier to 4.14.14

If you can’t update, some of the bugs listed above can be mitigated with configuration changes, although some of those changes turn off functionality that your network might rely upon, which would prevent you from using those particular workarounds.

Therefore, as always: Patch Early, Patch Often!

If you use a Linux or BSD distro that provides Samba as an installable package, you should already have (or should soon receive) an update via your distro’s package manager; for network devices such as NAS boxes, check with your vendor for details.

