S3 Ep62: The S in IoT stands for security (and much more) [Podcast+Transcript]

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.

READ THE TRANSCRIPT

DOUG AAMOTH. Cryptographic bugs, sensible cybersecurity regulations, a cryptocurrency conundrum, and a new Firefox sandbox.

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug. He is Paul…


PAUL DUCKLIN. I wouldn’t have said “conundrum”, Doug.

I might have said “catastrophe” or “business as usual”… but let’s leave that until later, shall we?


DOUG. I was slightly diplomatic, but yes, “catastrophe” probably would have been better… stay tuned for that one.

Well, we like to start the show with a Fun Fact, and the Fun Fact for this week is that on its patent application, the name for the computer mouse was not-quite-as-succinct: “X-Y position indicator for a display system.”

When asked about the origin of the mouse name, its inventor, Douglas Engelbart, recalled, “It just looked like a mouse with a tail, and we all called it that.”


DUCK. The other name to remember there is, of course, Bill English, who is essentially the co-inventor.

Engelbart came up with the idea of the mouse, based on a device called a planimeter, which had fascinated him when he was a kid.

And he went to Bill English, his colleague, and said, “Can you build one of these?”

Apparently it was carved out of mahogany… you’ve seen the pics, Doug.


DOUG. It’s lovely, yes.


DUCK. It’s quite chunky!

And is it true – I think you’ve said this on a previous podcast – that they had the cable coming out of the wrong side at first?


DOUG. At first they did, coming out of the wrist end, yes.


DUCK. And when they flipped it round, obviously, it’s a tail… it can only be a mouse!


DOUG. Well, thank you for that, Mr. Engelbart.

Despite the instances of repetitive stress injury and carpal tunnel syndrome… other than that, the mouse has gone swimmingly.

It is an aptly named peripheral, and speaking of things that are aptly named: we have a Mozilla bug called “BigSig”.

So, I wonder what that could be about?


DUCK. Strictly speaking, it’s CVE-2021-43527.

It was found by the well-known serial bug-hunting expert from Google, Tavis Ormandy.

It was an old school buffer overflow that nobody had noticed for years and years and years, inside the cryptographic library called NSS, short for Network Security Services.

Mozilla has always used NSS in all of its products, instead of using something like OpenSSL, which many of our listeners will know about, and instead of using the native implementations on each operating system.

Microsoft has its Schannel, or Secure Channel; Apple has Secure Transport; but Mozilla, wherever it can, has said, “We’re going to stick with this one particular library.”

They’re not the only organisation to use it – it turns out there are quite a few other products that have included NSS.

There’s a point when it allocates an area in memory to store all the data it needs to do a signature verification, and one of the things you need when you’re verifying a signature is a public key.

The biggest key you’d *ever* need is *surely* going to be an RSA key of 16 kilobits, which nobody really uses because it’s way bigger than you need, even today, to be secure.

[IRONIC TONE]. It’s very time consuming to create 16 kilobit keys, so it’s *bound* to be big enough, Doug.


DOUG. So, essentially, there’s a size limit to the key.

The keys in the wild, even the biggest RSA ones that we’ve typically seen, are a quarter of the maximum size.


DUCK. Yes.


DOUG. But if you send over a key that’s bigger than the allotted size, there’s no size check to say this key is too big?


DUCK. There is now!


BOTH. [LAUGHTER]


DUCK. There’s a function added…

Sadly, as Tavis Ormandy pointed out, the data that immediately follows in memory – in other words, the stuff that’s going to get overwritten – does include what are called function pointers.

Function pointers are data objects that determine how the program behaves – where it goes in memory to execute code in the future – and when you get an overwrite like that, [A] a crash is almost guaranteed, and [B] there is always a possibility, because you can decide how to divert the program at the other end, that you could get remote code execution.


DOUG. That answers the “Who cares?” question that I was going to ask in a more tactful way, but…


DUCK. Let’s go back to that “who cares?”

Really, what we’ve answered is, “Why care?”

The “who cares?” is, obviously, anybody using Firefox, which is probably the best known and most widely used Mozilla product.

Except that, for reasons that I don’t fully understand and weren’t disclosed by Mozilla, the one product that just happens not to be vulnerable to this (maybe it does the size check somewhere else?) is Firefox – good news!


DOUG. Yes!


DUCK. However, even in their own security advisory, the Mozilla team members explicitly listed as vulnerable:

  • Thunderbird, which is Mozilla’s email client,
  • Evolution, which is an open source calendar app that I think a lot of Linux desktop users probably have, and
  • A document viewer widely used on Linux called Evince.

But perhaps the most concerning is LibreOffice, probably the most popular free and open source alternative to Microsoft Office, that not only uses NSS, but also, at least on Windows, includes its own version of the DLL where the bug exists.

So if you are using LibreOffice, then last week, when the bug notification came, you probably ignored it because you thought, “Mozilla doesn’t affect me. LibreOffice has got nothing to do with them.”

But it turns out that you do need to upgrade.

If you are using LibreOffice, they have now put out an update: 7.2.4 is what you want.


DOUG. [QUIET TYPING SOUNDS] Just searching my own system here.

Would you say the NSS3.DLL file that I found in my Tor browser that hasn’t been modified since 1999… would that be something I might want to look into?


DUCK. That’s worrying, because when I checked my Tor browser version, it didn’t have the latest NSS, but it had a more recent one than 1999, so that timestamp may be wrong.

Maybe re-download Tor, Doug, and see?


DOUG. Yes, maybe I’ll do that.

It’s been quite a while since I’ve used that or updated it.


DUCK. Yes, of all the browsers that you probably want to avoid having [LAUGHS] exploitable privacy violating holes in…


DOUG. Yesssss… [LAUGHS]


DUCK. …Tor may be the one that you start with.


DOUG. It will be right at the top of that list, actually.


DUCK. Depending on what you’re using it for.


DOUG. We’ll add that to my to-do list!

If you’d like to read more, and see some sample code you can use to check the NSS versions on your systems, that article is called: Mozilla patches critical BigSig cryptographic bug – here’s how to track it down and fix it.

And on the theme of fixing things, we move on to what seems like sensible legislation to protect consumers from lazy, lazy security on IoT devices.


DUCK. That’s correct, Doug.

The US was probably the first country to try and get serious about this, and the US can be very influential when it comes to telling device manufacturers, “Thou shalt do the right thing,” without having laws that are unpopular.

Because the US can just go, “OK you can do what you like. But if you wish to sell to the Federal Government, here are the standards that we’ve decided we want you to stick to.”

They can influence things without saying, “We’re going to have a law that applies to everyone.”

They’re saying you can sell, but you can’t sell where the real money is, into the Federal Government market.

This time it’s the UK, where the government doesn’t quite have that kind of purchasing power, particularly for IoT devices.

So they’ve been dancing around this for a couple of years, and they’ve got a parliamentary Bill.

Remember, a Bill is what it’s called before it actually gets enacted in parliament and then gets Royal Assent.

So, a Bill means it’s proposed legislation, like in the US, and it’s called “PSTI”, for Product Security and Telecommunications Infrastructure.

And I admit, when I first saw that, I thought, “Uh-oh, here we go. It’s going to be about backdooring encryption all over again. Telecoms!”


DOUG. Indeed.


DUCK. Quite the opposite.

It’s basically saying that we’re just going to set three minimum things: “Must be at least *this* tall to go on the ride if you want to sell IoT devices.”

It’s still a long way off – it still has to become an Act, get its Royal Assent, and then apparently they’re talking about having a 12-month sunrise period while you get your act in gear.

Tell us what you think of these, Doug… there are three simple things that they want you to bring to the party.


DOUG. They start out very simple and get slightly more complex, but not really that hard.

I mean, the first one is just a no-brainer.


DUCK. “Default passwords. Can’t have them!”


DOUG. The problem it solves: someone like me, back when I was getting interested in cybersecurity, shouldn’t have been able to sit in a coffee shop, find a Linksys router, and know that the username was admin and the password was admin.

Most people don’t change that because they don’t know anything about that when they’re setting up their router.


DUCK. Or they know perfectly well about it…


DOUG. And they don’t care.


DUCK. It warns them right at the end, and it says that at some future time you may want to change this…

…and users think, “That’s a true statement,” but it doesn’t make you do it, does it?


DOUG. No. [LAUGHS]


DUCK. But if you followed Douglas Aamoth’s advice and got a password manager?

10 seconds work to do it.


DOUG. Yes. Do it!


DUCK. And then when your ad device magically starts working, it is at least a bit different from everybody else’s.

So that’s a start, “No default passwords.”


DOUG. And the next one, slightly more complicated but still important: a reliable way to disclose vulnerabilities to you.

If you’re a company, you need to be able to take those, and act upon them.


DUCK. It’s not that difficult.

We spoke about it, didn’t we, on the podcast not long ago: yourwebsitename forward-slash security.


DOUG. Easy!


DUCK. And people go there and it says, “Here’s how you can tell us.”

I understand people’s frustration, in some cases, where they literally cannot send a bug report that they don’t even want money for – they just would love to tell somebody, and can’t!

How do you police that? I have no idea.

But at least they’re saying, “Come on, guys. How hard is it to have a standardised email address that actually works?”


DOUG. It’s also probably not a bad place to put… much like you’d find the ingredients on the side of a box of food, you put your security ingredients on the security page to tell people how you are securing your devices in the first place.

“Here’s what we’re doing. Here’s how to contact us. Here’s what to look for in a bug report.”


DUCK. Yes, Chester and I spoke about that in a recent podcast, I think when you were on vacation, Doug.

About moves in the US to require hardware and software manufacturers to provide, if you like, a Security Bill of Materials.

I think this Bill is a baby step that leads to the possibility of actually knowing what’s in your product.

Doesn’t seem too much to ask, does it?


DOUG. It does not!

OK, so, the third item on this list: we talked about no universal default passwords; a reasonable way to disclose vulnerabilities; the third thing, this might be the simplest.

It’s just probably a resourcing issue for most companies: you need to tell your buyers how long you’re going to provide security fixes for the products that they’re buying.


DUCK. I suspect that will be the most controversial with manufacturers, because they’ll go, [WHINY VOICE] “Well, we don’t know. It depends. We might not sell many of that device, and then we’ll make another one, and that sells brilliantly. And we don’t have to put the same amount of security effort into both of them.”

That’s where I can envisage manufacturers pushing back on the grounds of cheapness.

And I think this will become an ever increasing issue – or I hope it will – for environmental reasons, as well.

I think it was on that same podcast with Chester, where he was describing some IoT hacking research he did several years ago…

He went out and bought all these devices: light bulbs, this, that and the other.

Some of them were out of support *before he even opened the box*! [LAUGHS]

He has these Internet-enabled light bulbs, and he said, “They’re quite nice, but basically, they’re all stuck on purple…


DOUG. [LAUGHS]


DUCK. …from when I was playing around with controlling them.”

And there isn’t even a way that you could connect to them locally and reprogram them: they’re basically lost in space.

Of course, the critics of this law say, “You need more teeth than that,” because all that’s going to happen is that manufacturers will flood the market with a cheap device, and then they’ll dissolve that company and come back with a new one.

They’ll let their vendor say, “Sorry, we can’t help you with updates. The manufacturer’s out of business.”

Now, I’m sure that we already have laws that protect consumers from people deliberately folding their company in order to evade regulations… but policing this is obviously going to be the hard thing.

At least it’s waving some placards in the face of the IoT marketplace.

In the discussion around this Bill, the UK government has come up with some examples, and I think only one-in-five of the vendors they surveyed had any sort of vulnerability disclosure process.

And if you don’t have a vulnerability disclosure process, then you can’t have any commitment to upgrades!

Because you go, “I’ve done all the upgrades I think we need.”


DOUG. Right!


DUCK. But 50 people have been trying to tell you about 49 different vulnerabilities.

It’s amazing how complicated this simple thing gets when, or if, you are dealing with a part of the market that is determined not to comply.


DOUG. Yes, we will keep an eye on that.

Lots of great comments on the article, so head on over there if you want to read and reply.

The article is called IoT devices must protect consumers from cyber harm, says UK government, on nakedsecurity.sophos.com.

Now, time for “This Week in Tech History.”

We talked about the handy-dandy mouse earlier in the show, and this week, on 09 December 1968, the mouse’s inventor Douglas Engelbart gave the first public demo of the mouse to a crowd of about 1000 at a computing conference.

The mouse demo was part of a longer 90-minute presentation that also touched on subjects such as hypertext and video conferencing.

In fact, the mouse demo may have almost been something of an afterthought.

The main presentation was for a “Computer Based Interactive Multi-Console Display System for Investigating Principles by which Interactive Computer Aids can Augment Intellectual Capability.”

So it sounds like the early early days of AI…


DUCK. [WHISTLE OF APPRECIATION]. That’s when press releases were press releases, Doug.


DOUG. Oh, yes, sir!


DUCK. Wowee! Capital letters! That is quite a title!

Basically, it was, “In 50 years, I jolly well hope there’s an Internet. Try and make it happen, guys.” [LAUGHS]


DOUG. Yes!

I saw the flyer – there’s a photo of the flyer for this speech.

They said that there would be a demo room available, because they were basically streaming this presentation to a remote location.


DUCK. [AMAZEMENT] In 1968?!


DOUG. Yes, how about that!?


DUCK. “The Mother of all Demos,” it is now known as.

You can find the whole thing on YouTube… you think, “Oh, that was obvious,” but it jolly well wasn’t obvious in 1968!


DOUG. Exactly!

[IRONIC] And thanks to pioneering technologies such as that, we have things like cryptocurrency and the ability to sell some of it and buy some of it at the same time, while not actually selling any of it, and just making free money.

Right, Paul?

Is that how it works in this story?


DUCK. “Cryptocurrency Company Catastrophe,” who would have thought?

MonoX is the company in this case.

As recently as, I think, 23 November – they weren’t quite live as far as I know, but they have a blog article from that date – they were saying, “We’re not trading publicly yet, but we’re nearly there, and we’re going to revolutionise decentralised finance. We’re going to open up to everybody. We’ve had three software audits. We’ve been live testing for three months. We’re ready to go.”

And sadly, it already looks as though the roof has caved in.

Because like you said, they allowed you to trade the MonoX token, and it turned out that if you just withdrew the money from yourself and paid it back to yourself – and it really does seem to be as simple as this – they did the subtraction of the amount that was taken out of your balance, *but they didn’t commit that yet*.

And then they took the balance you had *before the subtraction*, and they added in the new amount and that’s what got finalised.

So you basically got the plus (less a fee, I suppose), *without the minus going through*.

So apparently somebody just wrote a contract that did a load of transactions with a script in a loop that sold their own tokens to themselves over and over again, accumulating value.

And then once they’d got all the value available, they went, “Let’s spend it.”

And they mopped up by buying a whole load of other cryptocoins and trying to cash them out.

$31 million later… oh, dear!


DOUG. Unreal.


DUCK. Yes. Blunders can be expensive!

Just because you’ve had a software audit, and you’ve done a bit of testing, doesn’t mean that someone isn’t ready for you.

[ORATORICALLY] “The price of not losing your $31 million is eternal vigilance.”


DOUG. [LAUGHS] That’s the problem: the $31 million mistake!

It’s good to catch it early like this, but not to the tune of $31 million.

So, they’re talking about either getting the authorities involved, and/or they’ve made a plea to the attacker saying, “Please give us our money back. Please.”


DUCK. I’m guessing that they’re remembering that Poly Networks hack that we spoke about a few weeks back, where somebody pinched $600 million, if you don’t mind, and then started bragging about it.

And then they ended up being nice to the person and calling him – what did they call him? – “Mr. White Hat.”

They said, “You can keep half a million. But please give us the rest back.”

Lo and behold, they got almost all of it back!

So I think that MonoX… they’re kind of hoping that the person will do the same thing.

But I suspect they’re dreaming, Doug, because by all accounts, from people who have been tracking this, at least some of the money that whoever it was made off with has already been shoved through what’s called a tumbler.

That’s one of those cryptocurrency exchanges that does a whole load of redundant loopy-bloopy transactions that mix cryptocoins together so they can’t easily be traced back.

So it’s a wait and see…


DOUG. They did say “please”, and the power of please got Poly Networks off the hook!

So we’ll keep an eye on this story.

But if you want to read up on the initial ramifications, that article is called: Cryptocurrency startup fails to subtract before adding – loses $31 million on nakedsecurity.sophos.com.

And our final story of the day: Firefox. A new update!


DUCK. Oh, yes!


DOUG. A lot of fixes, and a new fun sandbox.


DUCK. That’s correct, Doug.

There’s a whole lot of bugs fixed – security holes – as you would expect: Mozilla is pretty good at that.

So there are:

  • Possible remote code execution holes, though no one has yet worked out how to exploit them, as far as we know.
  • Components that didn’t uninstall correctly, leaving behind bits even after you’ve removed them.
  • Tricks that could allow a website to figure out which apps you had installed on your computer – information that was not supposed to leak out, because every little bit helps crooks who are mapping your network.

I understand there’s also an interesting bug where an attacker could create a web page that made your cursor appear in the wrong place.

That just sounds like an annoyance, doesn’t it?

Except that if the crooks can get you to think you’re clicking on “No! Cancel! DEFINITELY DO NOT do this,” when in fact you are clicking on “Like this very much indeed,” that could be a serious security hole!


DOUG. [LAUGHS]


DUCK. They fixed all that stuff, so go to Help > About and check you’ve got the latest Firefox.

If you’re on the bleeding-edge version, that should be “95.0” from Tuesday of this week.

The other thing they’ve done, as you say, they’ve introduced yet another sandboxing technology into Firefox.

It’s called “RLBox” – and I searched high and low, left and right, and I couldn’t find who or what RL was, so I’m assuming it just means runtime library.


DOUG. Yes, I was going to say, “runtime library”…


DUCK. It’s an interesting technology for the programmers amongst our listeners.

It allows you to separate an application from the shared libraries it loads: in Windows that’s something like a DLL; in Linux or Unix, it would be a .so, for “shared object file”; on macOS, they’re usually called .dylib, “dynamic library”.

The idea is that they are program fragments, if you like, that you suck into memory at runtime, so you don’t need to have them built into the program.

That way, if you don’t need a video player, for example, then it doesn’t have to be in memory with the program.

But the whole problem with a shared library is that, when you load it into memory, it interacts with the rest of your code as though it had been compiled right into the application in the first place.

So, they’re what’s called “in-process” libraries.

In other words, once you’re using a shared library, it’s very hard to say, “Oh, I want to load the shared library, but I want to run it in a completely separate operating system process, where it has its own memory space so that it can’t do whatever it wants; it can’t misbehave and start peeking at other web pages already in memory in the main app.”

So, a shared library essentially becomes part of the app.

If you want to have two processes that run separately, you have to design your app like that in the first place, or go and do an awful lot of retrofitting.

My understanding is what they’ve tried to do with RLBox is they’ve provided a way that you can load a shared library, but it gets put into a little safe space of its own, and then the RLBox sandbox manages the function calls, the subroutine calls, that go between the main program and the shared library.

Those calls are no longer quite as tightly coupled, memory and security wise, as they otherwise would have been.

You have to fiddle with your program a bit, but you don’t have to go and rip the whole thing apart and start again.

So it’s a way of retrofitting security where previously that would have been very difficult indeed.

So far, it’s only a few things that get dealt with in this way: they’ve got a part of the font rendering process separated; they have the spelling checker that’s built into Firefox separated; and anything to do with playing OGG-format files.

So that’s all they’ve done so far – it’s not a lot, but it’s a start.

And, apparently, in the next month they will add this separation for XML file parsing, which is another rich source of bugs in any applications that process XML files, and also more general protection for font rendering.

Many, if not most websites these days don’t rely on the fonts that you’ve set in your browser.

They actually say, “No, I want you to use this cool looking font that I chose,” and they package the font into the web page and send it across.

And the format is called WOFF: Web Open Font Format.

Of course, parsing fonts that come from an untrusted source is really, really complicated.

So if you have a bug in your font processing, it means somebody could use a booby-trapped font to take over a web page, and suck data out of it.

That RLBox protection is coming next.

So it’s a baby-steps start, but in my opinion, it is both an interesting and an important one.


DOUG. Very cool!

OK, so you can download the latest Firefox, or head over to Naked Security and read this article called: Firefox update brings a whole new sort of security sandbox.


DUCK. And if that doesn’t work for you, Doug…


DOUG. [LAUGHS] Download Lynx!


DUCK. Absolutely.

I did a check, actually, and the Firefox that I was running while I was writing that article…

I checked how many shared libraries were actually loaded: 205, and those things are all over-and-above what was compiled into the program itself.

Lynx? That has 14.

How times change!


DOUG. Still in development!

Well, it is time for our “Oh! No!”

This could almost be termed a “No! No!”…


DUCK. [LAUGHS]


DOUG. Reddit user CyberGuy writes:

I worked for an MSP, and the other day I had a client report that multiple computers couldn’t print.

I connected one of the devices and tried to ping the printer, and was unsuccessful; then tried to ping the print server, and was also unsuccessful.

I thought this was odd because the user wasn’t remote – they were sitting maybe 20 feet away from their wireless access point.

I decided to hit the gateway, and it almost immediately dawned on me what the problem was.

This client uses Ubiquiti access points, and upon accessing the web management portal, I was greeted by a login page for Netgear.

I called the client and asked if they possibly knew why this device was connected to a Netgear access point.

The client told me, “Ah, Sally, the receptionist, brought that in two weeks ago because her Internet was running slow.”

I was stunned that they decided to allow a low-level employee to bring in their own wireless access point from home, plug it in, and allow half of the users to connect to it.

So, as I said, a “No! No!”


DUCK. She actually plugged it into a socket?


DOUG. And then all the people around her connected to it for internet.


DUCK. Oh, because word got around, “Hey, Sally’s access point is really cool.”


DOUG. “It’s faster,” yes!


DUCK. The thing is, why would it be *faster*?

Probably, “Hey, it only has half the restrictions!”


DOUG. Exactly, yes.


DUCK. All the social media sites that are normally banned! Online gaming downloads!

So, 10/10 for initiative?


DOUG. Yes.


DUCK. But 3.5/10 for cybersecurity.


DOUG. And I can tell you, as a former MSP myself, without even looking up, the default username for a Netgear router is admin and the default password is password.

So, if those hadn’t been changed? Big trouble!

Well, if you have an “Oh! No!” – or a “No! No!” – you’d like to submit, we’d love to read it on the podcast.

Email tips@sophos.com; comment on any of our articles on nakedsecurity.sophos.com; or hit us up on social @NakedSecurity.

That is our show for today, thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time, to…


BOTH. Stay secure!

[MUSICAL MODEM]



Firefox update brings a whole new sort of security sandbox

Today’s a Firefox Tuesday, when the latest version of Mozilla’s browser comes out, complete with all the security updates that have been merged into the product since the previous release.

We used to call them Fortytwosdays, because Mozilla followed a six-weekly coding cycle, instead of monthly like Microsoft, or quarterly like Oracle, and seven days multiplied by six weeks gave you the vital number 42.

These days, Mozilla mostly goes for four-week cycles, so that updates shift around steadily in the monthly calendar in the same sort of way that lunar months slide gradually across the solar year.

This update brings the mainstream version to 95.0, and includes a bunch of security fixes, listed in Mozilla Foundation Security Advisory MFSA-2021-52, including vulnerabilities leading to:

  • Numerous crashes that could potentially be wrangled into exploitable holes.
  • WebExtensions that could leave behind unwanted components after official uninstallation.
  • Tricks to permit remote sites to find out some of the apps installed on your computer.
  • Sandbox bypasses that could allow untrusted scripts to do more than intended.
  • Tricks to put the cursor in the wrong place, potentially disguising risky clicks.

To make sure you have the latest version, go to Help > About and wait for the animated line Checking for updates... to tell you if there’s an update available.

Note that on Linux and some Unixen, Firefox might be delivered as part of your distro, so check there for the latest version if Firefox doesn’t offer to update itself.

A whole new sandbox

The big change in Firefox 95.0, however, is the introduction of a new sandboxing system, developed in academia and known as RLBox.

(We have to admit that we can’t find an official explanation of the letters RL in RLBox, so we’re assuming they stand for Runtime Library, rather than denoting the initials of the person who initiated the project.)

Strict sandboxing inside a browser is often achieved by splitting the browser into separate system processes for each tab, which end up isolated from each other by the operating system itself.

By default, processes can’t read or write each other’s memory, so that a remote code execution hole triggered by a criminally-minded site such as dodgy.example doesn’t automatically get the ability to snoop on the content of a tab that’s logged into your email server or hooked up to a social networking account.

But not all parts of a browser’s rendering functionality are easy to split into separate processes, notably if an existing process loads what’s known as a shared library – typically a .DLL file on Windows, .so on Unix and Linux, and .dylib on macOS.

Shared libraries, for example to render a specific sort of font or to play a specific sort of sound file, are designed to run “in-process”.

That means they’re loaded into the memory space of the current process, pretty much as if they’d been compiled into the application right from the start.

In other words, a web page that can be tricked into loading a booby-trapped font will typically end up processing the risky font file right inside the same process that’s handling the rest of the page.

You’d get better protection if the web renderer and the font handler could run separately, and didn’t have access to each other’s memory and data, but that’s tricky to do in a world in which you’re already using shared libraries to provide additional per-process features.

You’d need to go back to the drawing board and reimplement all the functions currently implemented via shared libraries (which, as the name suggests, share memory and other run-time resources with the parent process) in some other way.

Gallia est omnis divisa in partes tres

RLBox is a way to simplify the process of splitting your processes into separate parts, so that your code doesn’t need a complete rewrite.

Instead, RLBox arranges for calls into shared libraries to pass through a “separation layer” that keeps apart the inner workings of the main program and at least some of its libraries.

Your code still needs changing to let RLBox intervene in how data is passed back and forth between the main application and its shared-library subroutines, but the amount of upheaval in adding these security checks is, at least if the RLBox team and the Firefox developers are to be believed, comparatively modest and easy to get right.

Notably, according to the RLBox team:

Rather than migrating an application to use RLBox […] in a single shot, RLBox allows ‘incremental migration’ […] Migrating existing code to use RLBox APIs can be performed one [operation] at a time. After each such migration, you can continue to build, run [and] test the program with full functionality to make sure the migration step is correct.
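To give a flavour of the idea, here’s a tiny standalone C sketch of our own – emphatically not the real RLBox API, and all the names in it are invented – showing the principle that data coming back from a sandboxed library is treated as untrusted (“tainted”) until it has been copied out and checked:

#include <stdio.h>
#include <stdlib.h>

/* Conceptual sketch of the RLBox idea, NOT the real RLBox API.     */
/* Results from a sandboxed library are treated as untrusted until  */
/* they have been copied out and validated by the main program.     */

#define MAX_GLYPH_WIDTH 4096

/* Stand-in for a call that crosses into the sandboxed library. */
static int sandboxed_measure_glyph(int glyph)
{
   return (glyph * 7) % 5000;   /* pretend result; may be out of range */
}

/* The "separation layer": range-check the value before the main */
/* program is allowed to rely on it.                              */
static int untaint_width(int tainted)
{
   if (tainted < 0 || tainted > MAX_GLYPH_WIDTH) {
      fprintf(stderr,"suspicious value from sandboxed library\n");
      exit(1);
   }
   return tainted;
}

int main(void)
{
   int width = untaint_width(sandboxed_measure_glyph(65));
   printf("glyph width: %d\n",width);
   return 0;
}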

Unfortunately, not many of Firefox’s rendering functions have yet been switched to RLBox.

Apparently, only a few special font-shaping operations, the spelling checker, and the media-playing code for OGG files have been moved into this more secure mode.

OGG files are the ones you often find on Wikipedia and zealous free-and-open-source websites, because the OGG codecs have never been encumbered by patents, unlike many other audio and video formats. (Codec isn’t as high-tech a word as you might expect, by the way: it’s short simply for coder-and-decoder, in the same way that a modem is a signal modulator-and-demodulator.)

What next?

If all goes well, RLBoxed handling of XML files and WOFF fonts (the now-ubiquitous file format for embedded web fonts) will follow in Firefox 96.0.

Presumably, if that all goes well, the Mozilla team will continue to divide and conquer its browser code in order to create ever-smaller “zones of compromise” associated with each programming library (of which a typical browser session may require hundreds) that is needed to process untrusted content from outside.

Of course, if that doesn’t work, there’s always Lynx, as we discussed in a recent Naked Security Podcast.

Lynx is a browser so old-school and so stripped down that it doesn’t do fonts, JavaScript or even graphics: just 100% terminal-style text-mode browsing with a minimal reliance on shared libraries…

THE WORLD’S {COOLEST,OLDEST} BROWSER: LISTEN NOW

Click-and-drag on the soundwaves to move around. Lynx section starts at 2’10”.
You can also listen directly on Soundcloud.


Cryptocurrency startup fails to subtract before adding, loses $31m

Two weeks ago, after three software audits and three months of live testing, a cryptocurrency startup called MonoX introduced what it described as “the premier bootstrap decentralized exchange, Monoswap”.

In an announcement on 23 November 2021, the company declared:

MonoX will revolutionize the DeFi ecosystem by fixing the capital inefficiencies of current protocol models. With lower trading fees, capital efficiency, and zero-capital token launching — MonoX will expand the capabilities of DeFi.

DeFi, as you probably know, is an acronym for (or, for the linguistically strict amongst us, an ellipsis of) the term decentralised finance, and is typically used to refer to electronic trading that doesn’t rely on any individual company or government department for record keeping.

By using distributed ledgers known as blockchains, a sort of community-operated bookkeeping venture where transactions are agreed and recorded by consensus, cryptocurrencies and digital contracts don’t need to be managed by a single authority such as a central bank or a payment card company.

Blockchain technology therefore brings lots of opportunity, as you are no doubt aware from the number of Why Not Inve$t In Our Brand New Cryptocoin Deal$ Right Now emails that are getting caught up in your spam filter these days.

And plenty of risk, too, as MonoX discovered almost as soon as it went live last month.

Despite the audits and the testing, MonoX seems to have made an interesting blunder in how it handled balance changes during transactions.

This has apparently already cost the startup a massive $31,000,000 in lost funds, thanks to an automated series of rogue transactions that the company failed to think of, and therefore didn’t program against.

Paying yourself considered harmful

As far as we can see, the software flaw that MonoX overlooked was triggered if you transferred value from one of your own MonoX cryptocoins…

…back to yourself, a bit like doing a bank transfer from your own account directly back into your own account.

You’d imagine that your regular bank would prevent you doing such a thing, on the grounds that it would [a] be pointless and [b] probably be a mistake.

If you were absolutely determined to do it anyway, perhaps in a misguided attempt to get a bunch of deposits on the record to make your business look busier than it really was, you could always try doing it as two separate transactions.

For example, you could withdraw $100 in cash from a teller, then join the back of the queue and pay the $100 straight back in, assuming you were willing to accept a modest overall loss from any withdrawal and deposit fees that might apply.

These days, you’d expect your balance to go down by $100 as soon as you did the withdrawal, and you’d certainly expect, in the time it took to return to the teller to pay the $100 back in, that the previous transaction would have gone through already.

Even if that didn’t happen, you’d ultimately expect to see both transactions listed on your statement, in the same order you conducted them: $100-plus-fees out, and $100-less-fees back in.

What you wouldn’t expect, however (not least because your bank wouldn’t still be in business if it let people get away with this), is that if you could get the second transaction processed quickly enough then it would overwrite the first transaction altogether, leaving your account credited with a $100 deposit, but with no record of the immediately preceding withdrawal.

Holed below the waterline

Sadly, it seems that something along the lines described above is what holed MonoX’s ship below the waterline:

The exploit was caused by a smart contract bug that allows the sold and bought token to be the same. In the case of the attack, it was our native MONO token. When a swap was taking place and tokenIn was the same as tokenOut, the transaction was permitted by the contract.

Any price updates from swap from tokenIn and tokenOut were independently verified by the contract. With tokenOut being verified last, this caused a massive price appreciation of MONO. The attacker then used the highly priced MONO to purchase all the other assets in our pool and drained the funds.

The explanation isn’t entirely clear, perhaps because English isn’t the author’s first language, but it does indeed sound as though the “smart contract” code went something like this:
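 /* C-style sketch: the variable names below are our own invention,  */
 /* inferred from MonoX's description, not MonoX's actual code.      */
 /* Both new balances are computed from the values as they stood     */
 /* BEFORE the swap, and only written back afterwards:               */

 newInBalance  = balance[tokenIn]  - (amount + fee);
 newOutBalance = balance[tokenOut] + (amount - fee);

 balance[tokenIn]  = newInBalance;    /* debit the tokens sold    */
 balance[tokenOut] = newOutBalance;   /* credit the tokens bought */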

As you can see, the code above doesn’t work if tokenIn and tokenOut refer to the same account, because the last two lines then become equivalent to:
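 /* (oldBalance denotes the balance as it stood before the swap) */

 balance[token] = oldBalance - (amount + fee);   /* the debit...                 */
 balance[token] = oldBalance + (amount - fee);   /* ...overwritten by the credit */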

The deduction in the first line is immediately undone by the variable assignment used to effect payment in the second, so you’re up by (amount - fee) cryptocoins.

You’re supposed to end up with an overall outcome of (amount - amount - 2*fee), which simplifies to a debit of (2*fee) – one fee for the withdrawal; the other for the deposit – as you would expect.

According to MonoX, some of the funds acquired in this way have been pushed through a so-called tumbler or transaction mixer, presumably to attempt to disguise their source so they can be spent again without arousing suspicion.

What next?

Perhaps encouraged by the recent $600m Poly Networks hack, where the company somehow managed to woo the perpetrator sufficiently well that most of the funds were returned, MonoX says that it has “[t]ried to make contact with the attackers to open a dialogue through submitting a message via transaction on ETH Mainnet”.

In other words, the MonoX team have used the comment field in an Ethereum transaction as a way of asking for the appropriated funds back.

MonoX also stated that it “will file a formal police report”, though it’s not clear whether that has happened yet.

We’re guessing that it might complicate MonoX’s negotiations with the perpetrators if the matter is now in the hands of the police.

Indeed, the next question is, “Did the attacker actually break any laws?”

In some jurisdictions, knowingly exploiting software bugs to circumvent protection or to achieve results that are clearly at odds with expected behaviour can leave you open to criminal or civil action.

No less a company than Google found that out back in 2012, when it was fined for sneakily circumventing anti-tracking protection in Apple’s Safari browser.

Also, in many if not most countries, you’re expected to report and return any bank deposits that clearly weren’t intended for you, instead of being allowed to profit from the bank’s mistake.

But the whole point of DeFi is its decentralised, freewheeling, libertarian, not-regulated-by-the-man nature.

So, as non-lawyers, we have absolutely no idea what the regulatory situation is likely to be in this case, if indeed we ever find out which jurisdictions and which regulations would apply anyway.

What do you think? Let us know in the comments (you may remain anonymous if you wish)…


Mozilla patches critical “BigSig” cryptographic bug: Here’s how to track it down and fix it

Renowned bug-hunter Tavis Ormandy of Google’s Project Zero team recently found a critical security flaw in Mozilla’s cryptographic code.

Many software vendors rely on third-party open source cryptographic tools, such as OpenSSL, or simply hook up with the cryptographic libraries built into the operating system itself, such as Microsoft’s Secure Channel (Schannel) on Windows or Apple’s Secure Transport on macOS and iOS.

But Mozilla has always used its own cryptographic library, known as NSS, short for Network Security Services, instead of relying on third-party or system-level code.

Ironically, this bug is exposed when affected applications set out to test the cryptographic veracity of digital signatures provided by the senders of content such as emails, PDF documents or web pages.

In other words, the very act of protecting you, by checking up front whether a user or website you’re dealing with is an imposter…

…could, in theory, lead to you getting hacked by said user or website.

As Ormandy shows in his bug report, it’s trivial to crash an application outright by exploiting this bug, and not significantly more difficult to perform what you might call a “controlled crash”, which can typically be wrangled into an RCE, short for remote code execution.

The vulnerability is officially known as CVE-2021-43527, but Ormandy has jokingly dubbed it BigSig, because it involves a buffer overflow provoked by submitting a digital signature signed with a cryptographic key that is bigger than the largest key NSS is programmed to expect.

Buffer overflow

A buffer overflow is triggered when a memory area that only has space for X bytes is inadvertently stuffed with Y bytes of data, where Y > X.

Those superfluous extra (Y-X) bytes of “overflow” typically end up overwriting an adjacent block of memory that is already in use for something else, like a surfeit of ill-behaved guests at a hotel room party who end up spilling out into the corridor, barging into neighbouring rooms, and generally making a nuisance of themselves.

Typically, this sort of memory corruption causes the vulnerable application to veer off course into some uncharted and unknown memory region where the operating system has no choice but to shut it down right away, causing a simple crash.

But in an RCE, the attackers orchestrate the crash in such a way as to misdirect the application into code they’ve supplied themselves.

An RCE is like a rogue hotel partygoer who not only barges into your room and creates a disturbance that wakes you up, but also deliberately takes advantage of your temporary confusion by stealing your laptop and your wallet under cover of pretending to apologise while you chase them out.

The bad news is that any application that uses the NSS library could be affected by this bug, including most Mozilla apps and several other popular open source programs.

Mozilla explicitly lists the following as impacted:

  • Thunderbird, Mozilla’s own email client.
  • LibreOffice, a popular free alternative to Microsoft Office.
  • Evolution, an open source calendaring app.
  • Evince, a popular multi-format document viewer for PDFs and images.

The good news, if you like to think of it that way, is that this bug can’t be triggered in Firefox, so Mozilla’s popular browser is not affected.

Of course, there may be other apps that are vulnerable too – for example, we’re not sure whether the still-active SeaMonkey project, which is essentially a Firefox-like browser and a Thunderbird-like email client packaged into a single app, is at risk.

What happened?

The bug is down to code that made the infamous, and so often dangerous, assumption that “this is so unlikely that it is almost certain never to happen, therefore it will never happen, therefore there is no need to check if it has”.

When verifying a digital signature, NSS allocates a chunk of memory to store all the data relevant to the calculations, including the cryptographic public key required for the validation.

The space reserved for the public key is chosen by working out the size of the largest possible DSA key supported by NSS, the largest possible Elliptic Curve (EC) key supported by NSS, and the largest RSA key, and then using the largest of those values to ensure a buffer that is “always big enough”.

RSA keys are notoriously much larger than those of other cryptographic algorithms (this is one reason why EC cryptography is taking over from RSA), typically reaching 2048 or even 4096 bits, instead of the 256 or 512 bits typically required for EC keys.

But RSA keys bigger than 4096 bits are astonishingly rare, not only because they would be much larger than is strictly needed to resist today’s cracking tools, but also because they’re much slower to create and use than smaller keys, even on fast computers.

We’ve never seen, or even heard of, RSA keys of 16384 bits in real-life use, given that they’re typically between 500 and 1000 times slower to generate than 2048 bit keys, which are still currently considered acceptably large to resist attack.

Indeed, the public key buffer allocated for NSS signature verification is 16384 bits long, a size that ought to be more than enough for many years to come…

…and the code that copies an incoming public key into that buffer therefore assumes that no one would go to the trouble of generating a larger RSA key, so it doesn’t bother checking that the key it just received actually fits.

The bug fix was to add in the size-checking code that ought to have been there all along.
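Here’s a minimal standalone C sketch of the pattern (the 16384-bit buffer size comes from the description above; everything else, including the function names, is our own invention rather than actual NSS source code):

#include <stdio.h>
#include <string.h>

#define MAX_KEY_BYTES (16384/8)   /* room for a "surely big enough" 16-kilobit key */

static unsigned char keybuf[MAX_KEY_BYTES];

/* The vulnerable code did the memcpy() unconditionally; */
/* the fix is to refuse keys that don't fit, up front.   */
static int copy_key_checked(const unsigned char *key, size_t len)
{
   if (len > sizeof(keybuf)) {
      return -1;                  /* reject oversized keys instead of overflowing */
   }
   memcpy(keybuf,key,len);
   return 0;
}

int main(void)
{
   unsigned char huge_key[MAX_KEY_BYTES+100] = {0};   /* an "impossible" key */
   if (copy_key_checked(huge_key,sizeof(huge_key)) != 0) {
      puts("Key too big: rejected instead of overflowing");
   }
   return 0;
}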

What to do?

  • Update NSS. Many Linux distros will have a central copy of the NSS library, but some installed apps may include and use their own versions of the library. You can search for the file libnss3.so to find how many NSS instances are on your computer. Windows apps that use NSS will typically include their own versions; search for NSS3.DLL. You need version 3.73 or later, or 3.68.1 ESR if you are using the extended support release. For advice on how to locate any NSS library files on your computer, and how to check what version you have, see below.
  • Never skimp on error checking. Just because most people won’t generate huge cryptographic keys doesn’t mean that no one will, whether they do so by accident (which in this case would cause a Denial of Service attack by crashing your app) or by design (in order to hack into your computer on purpose).

TIPS FOR FINDING AND VERSIONING NSS FILES

On Linux, you can search for copies of the NSS library code with the find command. The output from our system is shown as an example.

We have Firefox, Tor and LibreOffice installed, so we conclude from this output that Firefox and Tor have their own NSS library copies, while LibreOffice is relying on the one provided by our distro in /usr/lib64:

 $ find / -type f -name 'libnss3.so' 2>/dev/null
 /usr/lib64/libnss3.so
 /opt/firefox/libnss3.so
 /opt/tor-browser_en-US/Browser/libnss3.so

On Windows, try the DIR command shown below, from a regular command prompt window (i.e. run CMD.EXE, not PowerShell).

We have installed Firefox and LibreOffice, both of which contain their own copy of the NSS3 library file, and will therefore need updating via their own download sources. Remember that Firefox is not affected by this bug, but LibreOffice is.

 C:\Users\duck> DIR C:\NSS3.DLL /S
 [. . .]
  Directory of c:\Program Files\LibreOffice\program
 19/11/2021  11:18         1,089,680 nss3.dll
                1 File(s)      1,089,680 bytes
  Directory of c:\Program Files\Mozilla Firefox
 19/11/2021  15:31         2,186,168 nss3.dll
                1 File(s)      2,186,168 bytes
      Total Files Listed:
                2 File(s)      3,275,848 bytes
 [. . .]

Identifying the internal version numbers of the NSS files that turn up in your search can be tricky, given that the only reliable way to do so is to load the library and ask it to report on itself.

On Linux

The code below worked for us on Linux. Save as nsschk.c, compile with gcc -o nsschk nsschk.c -ldl, and run ./nsschk with the NSS library file you wish to check on the command line:

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

void bail(char *msg)
{
   fprintf(stderr,"%s\n",msg);
   exit(1);
}

int main(int argc, char **argv)
{
   /* Use the command argument as the NSS library name,  */
   /* otherwise pick a sensible default for your distro. */
   char *libname = argc>1 ? argv[1] : "/usr/lib64/libnss3.so";
   printf("Using library file: %s\n",libname);

   void *nsslib = dlopen(libname,RTLD_LAZY);
   if (nsslib == NULL) { bail("Can't dlopen() that file"); }

   int   (*initfn)(char *dir) = dlsym(nsslib,"NSS_NoDB_Init");
   char *(*getvfn)(void)      = dlsym(nsslib,"NSS_GetVersion");
   if (initfn == NULL) { bail("Can't find NSS_NoDB_Init function"); }
   if (getvfn == NULL) { bail("Can't find NSS_GetVersion function"); }

   if ((*initfn)(".") != 0) { bail("Failed to initialise NSS"); }
   printf("NSS Version: %s\n",(*getvfn)());
   return 0;
}

Our NSS files (see above) showed up as follows:

$ ./nsschk
Using library file: /usr/lib64/libnss3.so
NSS Version: 3.73

$ ./nsschk /opt/firefox/libnss3.so
Using library file: /opt/firefox/libnss3.so
NSS Version: 3.71

$ ./nsschk /opt/tor-browser_en-US/Browser/libnss3.so
Using library file: /opt/tor-browser_en-US/Browser/libnss3.so
NSS Version: 3.68

Our distro-managed version, as used by the vulnerable LibreOffice, is up to date. Firefox and Tor will presumably be updated soon by Mozilla and the Tor Project respectively, but as they are both apparently immune to this bug, we consider them safe.

On macOS

On a Mac, you can use the same code, but you will explicitly need to tell macOS what directory to use for the NSS library files, or change the current directory to the location of the libnss3 file first. Also, search for both libnss3.so and libnss3.dylib, because both extensions are used in macOS builds.

On our test Mac, for example, we searched the /Applications folder for NSS libraries:

 $ find /Applications -type f -name 'libnss3.*'
 /Applications/Firefox.app/Contents/MacOS/libnss3.dylib
 /Applications/LibreOffice.app/Contents/Frameworks/libnss3.dylib
 /Applications/Thunderbird.app/Contents/MacOS/libnss3.dylib
 /Applications/TorBrowser.app/Contents/MacOS/libnss3.dylib

 $ DYLD_LIBRARY_PATH=/Applications/Firefox.app/Contents/MacOS ./nsschk libnss3.dylib
 Using library file: libnss3.dylib
 NSS Version: 3.71

 $ DYLD_LIBRARY_PATH=/Applications/Thunderbird.app/Contents/MacOS ./nsschk libnss3.dylib
 Using library file: libnss3.dylib
 NSS Version: 3.68

 $ DYLD_LIBRARY_PATH=/Applications/TorBrowser.app/Contents/MacOS ./nsschk libnss3.dylib
 Using library file: libnss3.dylib
 NSS Version: 3.53.1

 $ DYLD_LIBRARY_PATH=/Applications/LibreOffice.app/Contents/Frameworks ./nsschk libnss3.dylib
 Using library file: libnss3.dylib
 NSS Version: 3.55

On Windows

A few modifications produced code that worked for us on Windows. To ensure that Windows finds all the additional DLLs that the NSS3.DLL library needs, change directory to where the NSS3.DLL version resides, and run the NSSCHK.EXE command in that directory.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

void bail(char *msg)
{
   fprintf(stderr,"%s\n",msg);
   exit(1);
}

int main(int argc, char **argv)
{
   /* On Windows, we look for NSS3.DLL in the current    */
   /* directory only, to help ensure we find its friends */
   char *libname = "./NSS3.DLL";
   printf("Using library file: %s\n",libname);

   HMODULE nsslib = LoadLibrary(libname);
   if (nsslib == NULL) {
      fprintf(stderr,"Error: %d\n",GetLastError());
      bail("LoadLibrary() failed on that file");
   }

   int   (*initfn)(char *dir) = GetProcAddress(nsslib,"NSS_NoDB_Init");
   char *(*getvfn)(void)      = GetProcAddress(nsslib,"NSS_GetVersion");
   if (initfn == NULL) { bail("Can't find NSS_NoDB_Init() function"); }
   if (getvfn == NULL) { bail("Can't find NSS_GetVersion() function"); }

   if ((*initfn)(".") != 0) { bail("Failed to initialise NSS"); }
   printf("NSS Version: %s\n",(*getvfn)());
   return 0;
}

Our results were as follows:

C:\Users\duck> cd "\Program Files\Mozilla Firefox"

C:\Program Files\Mozilla Firefox> \Users\duck\NSSCHK.EXE
Using library file: ./NSS3.DLL
NSS Version: 3.71

C:\Program Files\Mozilla Firefox> cd "\Program Files\LibreOffice\program"

C:\Program Files\LibreOffice\program> \Users\duck\NSSCHK.EXE
Using library file: ./NSS3.DLL
NSS Version: 3.55

We infer from the output above that LibreOffice on Windows is currently vulnerable (we downloaded the latest version to do this test), so watch out for an update notification and grab the new version as soon as a patched build is available.

Go to the Options > LibreOffice > Online Update dialog and click [Check Now] to see if a new version is available.

You can also right-click on the NSS3.DLL file in Windows Explorer and choose Properties > Details, but the version string seems to depend on how the application package was built, so it may not reveal the actual NSS version number.

For example, on our Windows computer, the NSS3.DLL delivered as part of the Firefox app was labelled with the top-level Firefox version details; the LibreOffice DLL revealed the NSS-specific version string:

Left: NSS3.DLL properties in Firefox build.
Right: NSS3.DLL properties in LibreOffice build.


S3 Ep61: Call scammers, cloud insecurity, and facial recognition creepiness [Podcast]

[00’23”] Fun Fact: Ebooks reach their half-century.
[00’58”] Call scammers and cryptocoin treachery.
[07’34”] Cloud insecurity and yet more cryptocoin treachery.
[16’15”] Tech History: The interwoven story of Mary Shelley, Ada Lovelace and AI ethics.
[18’26”] Facial recognition creepiness.
[25’23”] Oh! No! The wannabe wizard that went to school with a trainee Sith.

With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.

