Wormable Windows HTTP hole – what you need to know

Yesterday was the first Patch Tuesday of 2022, with more than 100 security bugs fixed.

We wrote up an overview of the updates, as we do every month, over on our sister site news.sophos.com: First Patch Tuesday of 2022 repairs 102 bugs.

For better or for worse, one update has caught the media’s attention more than any other, namely CVE-2022-21907, more fully known as HTTP Protocol Stack Remote Code Execution Vulnerability.

This bug was one of seven of this month’s security holes that could lead to remote code execution (RCE), the sort of bug that means someone outside your network could trick a computer inside your network into running some sort of program without asking for permission first.

No need to log in up front; no pop-up warning at the other end; no Are you sure (Y/N)? questions.

Just give the order, and the malware runs.

That’s the theory, anyway.

RCE bugs considered wormable

One thing to remember about most RCE vulnerabilities is that if you can attack someone else’s computer from outside and instruct it to run a malicious program of your choice…

…then it’s possible, perhaps even probable, that you could tell it to run the very same program that you yourself just used to launch your own attack.

In other words, you might be able to use the vulnerability to locate and infect Victim 1 with malicious program W that instructs Victim 1 to locate and infect Victim 2 with malicious program W that instructs Victim 2 to locate and infect Victim 3… and so on, perhaps even ad infinitum.

In an attack like this, we give the program W a special name: we call it a worm.

Worms form a proper subset of a type of malicious software (or malware for short) known generally as computer viruses, the overarching term for self-replicating malware of any sort.

This means that most RCE bugs are, in theory at least, wormable, meaning that they could potentially be exploited to initiate a chain of automatic, self-spreading and self-sustaining malware infections.

The reasoning here is obvious: if an RCE bug allows you to run an arbitrary program of your own choice on someone else’s computer, such as popping up CALC.EXE or launching NOTEPAD, then it almost certainly allows you to run a specific program of your choice, such as a worm.

Some bugs are more wormable than others

As you can imagine, some classes of RCE bug are considered much more wormable than others, especially bugs that can be triggered directly via a simple network interaction.

That was a risk of considerable concern in the recent Log4Shell saga, where a single booby-trapped web request with some curious but otherwise unexceptionable ASCII text in it could trigger arbitrary remote code execution.

Unfortunately, CVE-2022-21907 is a bug in the same category, with Microsoft’s own security bulletin explicitly saying the following in its FAQ section:

*How could an attacker exploit this vulnerability?* In most situations, an unauthenticated attacker could send a specially crafted packet to a targeted server utilizing the HTTP Protocol Stack (HTTP.sys) to process packets. *Is this wormable?* Yes. Microsoft recommends prioritizing the patching of affected servers.

Does this have anything to do with IIS?

Where and how does the HTTP Protocol Stack get activated?

Is this an issue unique to Windows servers, as Microsoft’s bulletin implies when it talks about patching “affected servers”?

Does the attack depend on you having a known web server such as Microsoft IIS (Internet Information Services) already installed and activated?

The answers to these questions are as follows:

  • HTTP.sys is part of Windows and is available to any program that uses ASP.NET.
  • HTTP.sys works on Windows 7 clients and later.
  • HTTP.sys works on Windows 2008 R2 servers and later.
  • HTTP.sys is not part of IIS, and doesn’t require IIS to be installed.

The last point above makes it clear that you may have any number of apps in use – perhaps without realising it – that provide an HTTP-based interface via HTTP.sys, whether you have deployed IIS or not.

In fact, Microsoft’s own documentation notes that “HTTP.sys is useful […] where there’s a need to expose the server directly to the Internet without using IIS.”

Indeed, IIS is based on HTTP.sys, not the other way around, as Microsoft explains:

HTTP.sys is mature technology that protects against many types of attacks and provides the robustness, security, and scalability of a full-featured web server. IIS itself runs as an HTTP listener on top of HTTP.sys.

Simply put: you could, in theory, have apps installed, even on a desktop or laptop computer, that provide some sort of web-based interface that is serviced by the HTTP.sys driver code.

The silver lining, for some users at least, is that the part of HTTP.sys that contains the CVE-2022-21907 bug:

  • Affects only Windows 10 and later desktop versions.
  • Affects only Windows Server 2019 and later server versions.
  • Is not enabled by default on Windows Server 2019.
  • Can be immunised against this bug simply by installing the January 2022 Patch Tuesday updates.

As far as we can tell, the reason that this vulnerability isn’t present in earlier versions of Windows and Windows Server is that the bug was found in the code that deals with HTTP Trailers (these are like HTTP Headers, except that they are sent after the HTTP data instead of before it); HTTP Trailer support was only added after support for HTTP/2; and HTTP/2 support only arrived in the Windows 10 era.
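If you haven’t bumped into HTTP Trailers before, here’s a sketch of our own showing the idea in old-school HTTP/1.1 notation, where trailing “headers” can be tacked onto the end of a chunked response. (The vulnerable code path involves HTTP/2, which wraps the same concept up in binary frames; the X-Checksum name below is simply made up for illustration.)

   HTTP/1.1 200 OK
   Transfer-Encoding: chunked
   Trailer: X-Checksum

   5
   hello
   0
   X-Checksum: 1a2b3c4d     <-- a "header" that arrives after the data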

What to do?

If you are truly unable to patch right away, and if you know that you are not running (or at least do not intend to run) any web-based software that uses HTTP.sys, you can temporarily block HTTP.sys on your computer by setting the following registry entry:

HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Start = DWORD(4)

The usual value of this registry entry is 3, denoting “start on demand”; changing the value to 4 marks the driver as “service disabled”.
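If you prefer the command line to the Registry Editor, you can make the change from an elevated command prompt with Microsoft’s stock REG utility, like this:

   C:\Windows\system32> reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP /v Start /t REG_DWORD /d 4 /f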

After a reboot, you can check the status of HTTP.sys from a regular command prompt with the SC (Service Control) command:

C:\Users\duck> sc query HTTP
SERVICE_NAME: HTTP
        TYPE               : 1  KERNEL_DRIVER
        STATE              : 1  STOPPED   <-- before applying the registry
                                              hack above, this line said:
                                              "4 RUNNING"
        WIN32_EXIT_CODE    : 1077 (0x435)
        SERVICE_EXIT_CODE  : 0 (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
C:\Users\duck>

Note that we have tested this workaround in only the most cursory fashion: we installed Server 2022, enabled IIS, created a home page, and verified from another computer that it worked.

We changed the service start value for HTTP to 4, as suggested above, and rebooted; our IIS server was no longer accessible. We then reverted the registry entry to 3, rebooted once more, and verified that IIS came back to life automatically.

From this we infer that disabling the HTTP service does indeed block HTTP-based network access to higher-level software that might otherwise be exposed to this bug, and we assume that this renders the vulnerability temporarily “untriggerable”.

Our primary recommendations are:

  • Assume that all RCE vulnerabilities are wormable. As mentioned, bugs that can be triggered directly via routine network connections pose by far the greatest risk of “getting wormed”, but in theory any bug that allows arbitrary remote code execution could allow worm code execution.
  • Assume that cybercriminals are already actively digging into this and all the other RCE vulns announced this Patch Tuesday. You have probably heard the joke about Patch Tuesday being followed by Exploit Wednesday. There’s more than a touch of truth to that, given that even closed-source patches can often be wrangled backwards – reverse engineered, in the jargon – to reveal the inner details of the bug that they prevent. (And see point 1.)
  • Patch early, patch often. Don’t use workarounds as a routine part of your patching process to buy extra time every time. Patch out of preference, and keep workarounds for situations where you genuinely need to delay patching for a while. (And see points 1 and 2.)

Don’t delay… do it today!

LEARN MORE ABOUT THE JANUARY 2022 PATCH TUESDAY


Home routers with NetUSB support could have critical kernel hole

Now that a patch has been circulated to vendors, researchers at SentinelOne have released details of a worrying bug in an IoT software driver called NetUSB.

The product comes from a Taiwanese hardware and software maker called Kcodes, which describes itself as follows:

[A] leading supplier and developer of USB over IP technology products. Today, over 20% of worldwide networking devices are embedded with KCodes solution.

The idea is a neat one: NetUSB is a virtual connector for USB hardware, so that you can plug a range of different USB devices directly into your router, and then access them remotely from some, many or all of the other devices on your network.

Instead of sharing a USB device such as a disk drive, a printer or a TV tuner by plugging it into your various laptops, desktops and mobile phones in turn, you hook up the USB device permanently at the virtual centre of your network by connecting it to your router.

Then you share it out using a “virtual USB cable” that shuffles the USB data over your wireless network instead of over a physical cable – in much the same way that Windows lets you redirect drive letters, directories and files across the network with the NET USE command.

Kernel driver open to internet traffic

SentinelOne researcher Max van Amerongen figured there might be code worth digging into when he examined a NetGear router during 2021 and found a kernel driver listening for network connections on TCP port 20005.

Significantly, the driver was listening on the network interface 0.0.0.0, which is shorthand for “all interfaces”, thus covering localhost, the internal LAN, and the externally connected WAN interface.

The networking interface used for localhost is accessible only to programs running directly on the router itself – indeed, the “network card” for this interface is implemented entirely in software, and typically gets the IP number 127.0.0.1 on IPv4 networks.

The internal LAN typically has a so-called “private” IP number, usually 192.168.x.x or 10.x.x.x, that is valid only on the LAN itself and is therefore inaccessible by default from the outside world.

But your WAN interface, where WAN is short for wide-area network, and loosely means “the internet in its entirety”, typically has a public IP number, often issued automatically by your ISP every time your router starts up.

You should assume that your WAN interface is both visible to and accessible from anywhere in the world.

In other words, TCP network services that listen explicitly on the WAN port, or implicitly by specifying a catchall IP number of 0.0.0.0, are generally exposed to, can be probed by, and (if buggy) could be exploited by, almost anyone.
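To see how easy it is to get this wrong, here’s a tiny Python sketch of our own (not Kcodes’s actual code, obviously) showing that “listen to the whole world” and “listen locally only” differ by just one line:

   import socket

   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

   # s.bind(('0.0.0.0', 20005))   # all interfaces: localhost, LAN and WAN alike
   s.bind(('127.0.0.1', 20005))   # localhost only: unreachable from outside

   s.listen()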

Worse still, most computers that are listening out for connections from the outside world – whether they are listening by accident or design – will get found and poked at automatically, regularly and repeatedly.

Even if you’re not openly advertising for visitors, as you might if you ran your own web server or blogging site, researchers and crooks alike will find you, without really trying, typically within minutes of your router booting up.

The IPv4 network can support approximately 4 billion different simultaneously connected and uniquely identifiable devices (that’s because a 32-bit network number can take on a maximum of 2³² different values, and 2³² = 4,294,967,296)…

…but at contemporary network speeds, even a comparatively modest commercial internet connection can try out all possible IP numbers – billions of them! – in hours or even minutes.

Simply put, someone who wants to find your vulnerable router can and will do so, without needing to target you in particular, because it’s surprisingly easy to target anyone by quite literally trying everyone.
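If you’re sceptical that “trying everyone” is practical, a few lines of Python make the arithmetic obvious (the probe rates below are illustrative guesses, not measurements):

   total = 2**32                        # every possible IPv4 number
   for rate in (10**5, 10**6, 10**7):   # probes per second (illustrative)
       print(f'{rate:>10,}/sec -> {total/rate/3600:6.2f} hours')

Even at the slowest of those rates, the whole of IPv4 goes by in about half a day; at the fastest, it takes a matter of minutes.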

Buffer overflow

It didn’t take long for van Amerongen to find a problem in the code that processed incoming network data in the Kcodes NetUSB driver.

As with many TCP-based protocols, the first step is to read in and identify the command that the user wants to perform.

If you’ve ever worked with HTTP, you’ll know that incoming commands are specified in the first few bytes of the first network packet, using human-readable byte sequences such as GET, HEAD, POST and OPTIONS.

In NetUSB, commands are specified with numeric codes, not text strings, and van Amerongen discovered one numbered 0x805F (32,863), which was processed by a C function called SoftwareBus_dispatchNormalEPMsgOut.

(We don’t know what an EP message means in this context, but that doesn’t matter, because it’s not the command itself that creates the hole, it’s the preparation for processing the command.)

After being selected by number, this function then reads in a 32-bit value that denotes the size of the message that the user wants to send, and allocates enough kernel memory to hold what’s coming next.

Except that isn’t quite how the code works, because it actually asks for “as much memory as the user requested, plus an extra 17 bytes that we’ll use during processing”.

Those extra 17 bytes are what introduces the security hole.

In pseudocode, the driver does this:

   U32   size = read(socket,4);         // get 32-bit size from network
   void* buff = kernel_alloc(size+17);  // allocate the needed memory, plus 17 additional bytes
   if (buff == NULL) { error(...); }    // make sure there was enough memory
   [... accept data into buff ...]

Later on, the code reads in data from the other end – up to, but not necessarily, size bytes’ worth – and then copies all the data it received into the allocated memory area buff.

You’ve probably spotted the problem.

If the user asks the driver to allocate a huge amount of RAM by setting size to, say, a value of 3 billion (which will fit into 32 bits – see above), the kernel_alloc() will almost certainly fail and the function will fail gracefully.

But if the user asks for almost, but not quite, 2³² bytes of RAM, then the amount actually requested will end up, for instance, as (0xFFFFFFFF + 17).

Except that with just 32 bits to play with, the sum shown above exhibits a “millennium bug” problem, because (0xFFFFFFFF + 17) = 0x100000010, which is 33 bits long.

So the sum overflows, and gets “squashed” back into 32 bits as just 0x10 (16), in just the same way that AD1999+1 would wrap back to the year AD1900 due to the Y2K bug if you only had two digits available to represent the year.

In other words, an attacker could ask for 2³²−1 bytes of data (0xFFFFFFFF); would incorrectly receive a buffer of just 16 bytes; and could then send as much data as they wanted, whether that was 100 bytes, 1000 bytes, or, indeed, any amount up to 0xFFFFFFFF bytes…

…but any bytes from the 17th onwards would cause a buffer overflow.
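You can verify the wraparound for yourself with a few lines of Python that mimic the driver’s 32-bit size+17 calculation (our sketch, not the actual driver code):

   def u32(n):
       return n % 2**32          # keep only the bottom 32 bits

   size = 0xFFFFFFFF             # attacker asks for 2**32 - 1 bytes
   buff_len = u32(size + 17)     # the driver's size+17, squashed into 32 bits
   print(buff_len)               # prints 16: a dangerously undersized buffer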

SentinelOne didn’t take this attack any further, considering it “difficult to write an exploit for this vulnerability,” although van Amerongen noted wisely that:

We believe that it isn’t impossible and so those with Wi-Fi routers may need to look for firmware updates for their router.

What to do?

  • If you have a router that offers NetUSB for mounting devices over the network, check for an update. Note that just checking whether your router is listening for TCP connections on port 20005 (e.g. using Nmap) is not enough on its own, but it’s a useful hint that you might have a problem if you know how to do port scanning.
  • Don’t listen on all network interfaces by default unless you really need to. If you’re writing code to accept and process incoming network connections, take the least privilege you can, and open up your connectivity as little as possible.
  • If you are writing code that allocates memory on the say-so of an untrusted outsider, always check for sensible limits. Even if this buffer overflow were not possible, we can’t imagine why the SoftwareBus_dispatchNormalEPMsgOut function would ever need to allocate anywhere near 4GBytes of RAM. (According to SentinelOne, NetGear’s patch was to restrict the limit on size to just 16MBytes.)
  • Always check for integer overflow and underflow when calculating with untrusted inputs. Overflow is where positive numbers get too big and wrap around to the start of the range, as in this case; and underflow is where numbers end up below zero, and effectively wrap around to the end of the range instead.

Underflow may sound confusing at first, but if you mix up signed and unsigned numbers, underflow typically leads to enormous overflow.

You can visualise this by imagining an old-school car odometer that reads 00001 being reversed for 2km: after the first kilometre, it would correctly wind back to 00000, but after the second kilometre, it would apparently leap forwards to read 99999.

The odometer doesn’t have any way to denote negative values, which puts 00000 and 99999 next to each other in its numeric cycle, with the result that it has its own “clock” style of arithmetic in which 99999 + 2 gives 1, and 1 – 2 gives 99999.
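In Python, you can model the odometer’s “clock” arithmetic directly with the modulo operator:

   MOD = 100_000                 # a five-digit odometer wraps at 100000
   print((99_999 + 2) % MOD)     # 1     -- overflow wraps forwards
   print((1 - 2) % MOD)          # 99999 -- underflow wraps backwards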


JavaScript developer destroys own projects in supply chain “lesson”

You’ve probably seen the news, even if you’re not sure what happened.

Unless you’re a JavaScript programmer and you relied on either or both of a pair of modules called faker.js and colors.js.

If you were a user of either of those projects, and if you are (or were!) inclined to accept any and all updates to your source code automatically without any sort of code review or testing…

…you’re probably well aware of exactly what happened, and how it affected you.

Supply chain attacks

Long term readers of Naked Security will be familiar with the problem of so-called supply-chain attacks in open source software libraries, because we’ve written about this sort of problem in programming ecosystems before.

We’ve written about security holes suddenly showing up in numerous coding communities, including PHP programmers, Pythonistas, Ruby users, and NPM fans.

Last year, we even had reason to debate the morality of self-styled academic researchers who deliberately used the Linux kernel source code repository as a testing ground for what they unashamedly referred to as hypocrite commits.

Software supply chain attacks typically involve poisonous, dangerous or otherwise deliberately modified content that infects your network or your development team indirectly, unlike a direct hack where attackers break into your network and mount a head-on assault.

Supply chain attacks are often passed on entirely unwittingly by one of your suppliers of products and services, who may themselves have ingested the unauthorised modifications from someone upstream of them, and so on.

LEARN MORE ABOUT SUPPLY CHAIN ATTACKS



Unethical, perhaps, but sometimes not criminal

As we mentioned above, however, supply chain problems of this sort don’t always arise from criminal intent, even though they may ultimately be judged unethical (or infantile, or ill-thought-out, or any combination of those).

We already mentioned hypocrite commits, which were intended to remind us all that it’s possible to inject malicious backdoor code under cover of two or more changes that don’t introduce security holes on their own, but do create a vulnerability when they’re combined.

And we linked to the story of a “researcher” who was so keen to remind us how easy it is to create treacherous software packages that he deliberately uploaded close to 4000 of them in a sustained burst of “helpfulness”.

As we suggested at the time, both those “experts” – the hypocrites and the overloader – seem to have adopted the selfish motto that a job worth doing is worth overdoing…

…thereby creating huge amounts of unnecessary work for other innocent volunteers in the Linux and Python communities respectively.

Colors and Faker go rogue

This time, the founder of two popular JavaScript coding modules known as colors.js and faker.js has thrown two slightly different spanners into the works.

Colors is a small and simple toolkit that helps you add coloured text in your console output, often in order to make the information more interesting to look at, and easier to read.

For example, when we made our Log4Shell – The Movie video recently, we added a dash of colour to the output of our mocked-up LDAP server to make it easier to track incoming requests, using ANSI control sequences in the terminal output to add green and red marks to denote successes and failures:

Sparing use of green and red terminal markers for visual appeal and clarity.
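If you’ve never played with ANSI colour sequences, here’s a minimal Python example of the underlying terminal trick (nothing to do with colors.js itself, just the same idea):

   GREEN, RED, RESET = '\x1b[32m', '\x1b[31m', '\x1b[0m'

   print(GREEN + 'request succeeded' + RESET)   # green text
   print(RED   + 'request failed'    + RESET)   # red text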

Unfortunately for colors.js users, the project’s founder, after not publishing any updates since 2019, suddenly added new code to take the release number from 1.4.0 to the somewhat unusual version identifier of 1.4.4-liberty-2.

Fed up, apparently, with never getting the financial recognition he felt he deserved from the many people that were using his work, the founder trashed his own code by adding an infinite loop like this:

/* remove this line after testing */
let am = require('../lib/custom/american');
am();
for (let i = 666; i < Infinity; i++) {
  if (i % 333) {
    // console.log('testing'.zalgo.rainbow)
  }
  console.log('testing testing testing testing testing testing testing'.zalgo)
}

The loop at the end of this code prints the text testing testing ... testing over and over again, after applying a function called zalgo to it.

Zalgoification

Zalgoification, if you’ve never heard of it, is a way of making regular Roman characters look weird and meaningless by littering them with accents, cedillas, umlauts and other so-called diacritical marks – a bit like naming your band Motörhead instead of Motorhead, but without the restraint of just adding a single extra symbol.

Zalgoed text is not only meaningless, but also often puts a heavy load on the underlying text rendering software that’s trying to compose it and lay it out for display.

A human calligrapher would baulk at being asked to add every possible accent to every letter in a word, knowing that it would make no sense at all.

But a computerised compositor will simply try to oblige by combining all the markings that you request, giving your band Zalgometal a stylised name something like this:

Diacritical marks added randomly and meaninglessly throughout text
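Zalgoification is surprisingly easy to reproduce; here’s a homegrown Python sketch of our own (nothing to do with the colors.js code) that piles random combining marks onto every character:

   import random

   def zalgo(text, marks=8):
       out = []
       for ch in text:
           out.append(ch)
           # Unicode combining diacritical marks live at U+0300..U+036F
           out.extend(chr(random.randint(0x0300, 0x036F)) for _ in range(marks))
       return ''.join(out)

   print(zalgo('Zalgometal'))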

A memorial to Aaron Swartz

Faker users experienced a different sort of update, with the project essentially wiped out and replaced with a README file asking “What really happened with Aaron Swartz?”

Swartz, a “hacktivist” charged with federal offences relating to unauthorised access to academic papers that he thought should not be kept behind a paywall, sadly killed himself while under the stress of waiting for his trial.

The Faker project meets its end. Note the comment “endgame”, the lack of any source code files,
and the README remembering Aaron Swartz.

Faker was a handy toolkit for developers that made it easy to generate large quantities of realistic but made-up data for quality assurance, such as creating 100,000 names and addresses you could add to your user database during development.

Fake data is a vital aspect of avoiding a privacy disaster while you are still working with untested, incomplete code because it means you aren’t exposing genuine, sensitive data in thoughtless (and possibly illegal) ways.
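As a trivial illustration of the idea (nothing like as rich or as realistic as Faker itself), fake test records can be as simple as this Python sketch, in which all the names and street details are invented:

   import random

   FIRST = ['Alice', 'Bob', 'Carol', 'Dai']
   LAST  = ['Jones', 'Nguyen', 'Patel', 'Smith']

   def fake_user():
       return {
           'name':    f'{random.choice(FIRST)} {random.choice(LAST)}',
           'address': f'{random.randint(1, 999)} Example Street',
       }

   print([fake_user() for _ in range(3)])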

The author of Faker apparently tried to commercialise the project during 2021, but the plan went nowhere, with little funding coming in from corporate users, so it looks as though he’s now given the code its coup de grâce.

Given that the code has been released for many years under the MIT licence – which basically means that anyone can use it for free, even in commercial, closed-source products, as long as they don’t claim to have created it themselves – there’s nothing to stop existing users continuing with the previous version, or indeed any version before that.

They can even make their own modifications and improvements as they wish…

…so it’s not clear what the ultimate outcome of trashing the project so spectacularly is likely to be for the author, given that he can’t retrospectively rewrite the licences of users who have already downloaded and deployed it.

Does anyone win, or do we all lose?

As one aggrieved commenter said (someone who presumably did grab the update into production without reviewing what had changed, and who suffered a temporary outage as a result), it hasn’t really ended well for anyone:

Isn’t it interesting that its the people with no reputation that seem to think reputation has no value?? To all the people in here saying “we have been taught a valuable lesson about trusting free software”; understand this…

To cause me 15 min of grief all Marak had to do was irreversibly destroy his own reputation.

Whose side are you on in a case like this? Let us know in the comments below…


Honda cars in flashback to 2002 – “Can’t Get You Out Of My Head”

Owners of Honda cars of a certain age – apparently somewhere between 10 and 16 years old – have spent the first few days of the New Year reporting a weird “millennium bug style” problem.

Apparently, for many cars that are a decade or so old, New Year’s Day 2022 was ushered in with their in-car clocks…

…showing 01 January 2002, exactly twenty years in the past.

In case you’re wondering what life was like back then, it probably won’t help to be reminded that one of the top songs of the year was the unforgettable Can’t Get You Out of My Head, by Aussie superpopstar Kylie Minogue.

(As Kylie said at the time, “La la la, la la la-la la/La la la, la-la la la-la” – a refrain that still ranks, according to some studies, as one of the top earworms in history.)

But why?

The burning question is, “Why?”

In the infamous millennium bug, the error jump was 100 years, and the reason was obvious: programmers often used just two digits for the year (i.e. storing AD1999 as 99) as a simple shortcut to save on RAM and disk space.

Don’t forget that even by 1999, most computers had just a few megabytes of RAM, and 20 years before that, they had at most a few kilobytes, amounts that are, respectively, a jaw-dropping one thousand and one million times smaller than we take for granted today.

But every shortcut comes with a cost, and the cost of the Y2K shortcut was that, because 99+1 = 100, and because 100 crammed into two digits comes out as 00…

…people were afraid that the date 31 December 1999 (Baby One More Time by B. Spears) might confusingly be followed by 01 January 1900 (I Can’t Tell Why I Love You But I Do by H. Macdonough).

But why just 20 years in Honda cars? Why only in certain older-but-not-too-old models? And why two decades exactly?

Even more weirdly, why would Honda say, as some journalists are alleging:

We have escalated the issue of the navigation clock in our team of engineers and they informed us that you will face the problem from January 2022 to August 2022 and then it will be fixed automatically.

Let the guesswork commence

One good guess, backed up by a commenter on UK IT news site El Reg (The Register) who goes by VRocker, is that this particular glitch is GPS-related.

Until recently, GPS time data – based on ultra-precise time signals beamed out by an array of orbiting satellites designed in the 1970s, when every individual bit of bandwidth really counted, let alone every decimal digit – was limited to a date window 1024 weeks wide.

Where Y2K-constrained dates had a maximum of two decimal digits for the year number, limiting them to a decimal century…

…GPS timecodes originally had just 10 bits (bit is short for binary digit, by the way) for the week number.

And in 10 bits you can represent numbers from 0 to 1023, giving you a 1024-week period (a “Kiloweekary”, we’ll call it) that covers nearly, but not quite, 20 years.

We’re currently in the Third Kiloweekary of the GPS era:

    First Kiloweekary: 1980-01-06T00:00:00Z - 1999-08-21T23:59:59Z
   Second Kiloweekary: 1999-08-22T00:00:00Z - 2019-04-06T23:59:59Z
    Third Kiloweekary: 2019-04-07T00:00:00Z - 2038-11-20T23:59:59Z
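A few lines of Python show why a bare 10-bit week number is ambiguous, forcing the receiver to guess which Kiloweekary it’s living in (this is our own sketch, not Honda’s code):

   from datetime import datetime, timedelta

   GPS_EPOCH = datetime(1980, 1, 6)   # the start of GPS time

   def week_start(week10, rollovers):
       # week10: the 10-bit week number as broadcast (0-1023)
       # rollovers: how many 1024-week eras the receiver assumes have elapsed
       return GPS_EPOCH + timedelta(weeks=week10 + 1024*rollovers)

   print(week_start(0, rollovers=0))   # 1980-01-06: First Kiloweekary
   print(week_start(0, rollovers=2))   # 2019-04-07: Third Kiloweekary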

Glitches may be relative

Of course, if your software – like the automatic time-setting software in many navigational devices – relies on GPS dates of this sort, you won’t necessarily suffer wraparound glitches at exactly the changeover points between the Kiloweekaries listed above.

You aren’t locked to precisely the date ranges shown, but rather to the maximum length (7168 days, or 1024 weeks, or about 19 years 7½ months) of each range.

That’s because you can add or subtract any offset you like to or from the starting point of each Kiloweekary, and do your own 19.6 year relative calculations from your new starting point.

This is the same trick that many old-school programs did with two-digit calendar years: some software, for instance, allowed you to treat 00-49 as representing AD2000 to AD2049, with 50-99 representing AD1950 to AD1999, thus shifting that software’s “millennium bug event” along by 50 years.
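In Python, a two-digit “pivot window” of that sort is a one-liner:

   def expand_year(yy):
       # 00-49 -> AD2000-AD2049; 50-99 -> AD1950-AD1999
       return 2000 + yy if yy < 50 else 1900 + yy

   print(expand_year(22))   # 2022
   print(expand_year(75))   # 1975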

The Reg commenter called VRocker whom we mentioned above claims that the GPS itself (not the clock) in his Honda CR-V is reporting that it’s currently May 2002 (which is, as he points out, 1024 weeks ago), while the clock is essentially stuck at 01 January 2002.

(According to reports, it seems that the clock resets to a time of midnight, modulo your timezone, on a date 01 January 2002, every time you restart the vehicle, regardless of the actual time of day; after this, the time can’t be adjusted manually.)

Rollover in the picture

That GPS detail is what led VRocker to infer that this behaviour does indeed relate to a recent rollover of the Kiloweekary date range inside Honda’s software.

But why did the clock roll back in January 2022?

And why the auto-correction implied by Honda in August 2022?

VRocker’s suggestion is that, by his calculation, on 2022-08-17 (this coming August) his rolled-over GPS date (which currently sits somewhere in May 2002) will think that 2003 just started.

And if the clock software is set so that it assumes it should disregard the time and date offered by the GPS unit if the year comes out earlier than 2003, in one of those “something must have gone wrong” error situations, then at least the time displayed – but not the date! – may well correct itself as suggested by Honda when the car thinks it has once again reached what it thinks is a valid date range.

We’re guessing liberally now, but if we assume that whoever created the software used in the affected range of vehicles knew that the first version definitely wouldn’t ship until 2003, then they’d know that using a Second Kiloweekary that represented its regular date range (from 1999 to 2019, as listed above) would waste the first few years of available dates.

But if they simply shifted the starting offset of their date range by 1000 days, they could then use the first 1000 days of the official GPS Kiloweekary (which would already be in the past when the unit shipped) to denote an additional 1000 days at the end of the range.

In other words, if they decided to use the dates from 1999-08-22 to 2002-05-17 (a 1000-day period) to represent the first 1000 days of the THIRD Kiloweekary instead of the first 1000 days of the SECOND Kiloweekary (much like a Y2K coder electing to use 00-19 to represent AD2000 to AD2019 instead of AD1900 to AD1919), they’d be able to represent dates in the following ranges:

   2002-05-18 - 2019-04-06   and   2019-04-07 - 2021-12-31

In simple terms, they’d be able to cover all years in full from 2003 to 2021 inclusive, with their “sideslipped” Kiloweekary ending conveniently on the last day of 2021 instead of in early April 2019.
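If you want to check the 1000-day arithmetic for yourself, Python’s datetime module makes short work of it:

   from datetime import datetime, timedelta

   second = datetime(1999, 8, 22)          # start of the Second Kiloweekary
   third  = datetime(2019, 4, 7)           # start of the Third Kiloweekary

   print(second + timedelta(days=1000))    # 2002-05-18: start of shifted range
   print(third  + timedelta(days=999))     # 2021-12-31: end of shifted range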

If, therefore, as VRocker suggests, the clock software was coded simply to discard the year 2002 (which is incompletely covered, and could in any case never be correct if the first units didn’t sell until at least 2003), and to default back to 01 January of that year if it ever faced dates outside the supported range…

…then his GPS date would indeed have flipped back from 2021-12-31 to 2002-05-18 when New Year arrived, which is a symptom he says he observed.

And then his clock – assuming that the sudden “reappearance” of 2002 would imply rollover or some other error – might thus repeatedly default back to 2002-01-01, in the same way that many digital oven clocks decide it’s exactly noon o’clock every time there’s a power failure.

What next?

Of course, if VRocker is right about this – and we have no way of telling until Honda makes an official statement – then when his GPS thinks it’s 2003, his clock will start accepting the “sideslipped” data provided by the GPS once again, and the clock will start working, but although the time will be correct, the date won’t.

(Indeed, if VRocker is right, his clock will start keeping time, and not resetting to 01 January every day, starting from 17 August 2022, but it will show the date commencing at 01 January 2003 from that point.)

And if that part of his guesswork is correct, the code in the clock that figures out daylight saving will be confused – presumably thinking it’s the Greenwich Mean Time period of the year (November to March in the UK), when in fact it should be the British Summer Time period (April to October), with the clocks turned forward by an hour.

What’s your explanation? How will this pan out? Let us know in the comments below…

(How time flies when you’re having fun!)


Log4Shell-like security hole found in popular Java SQL database engine H2

“It’s Log4Shell, Jim,” as Commander Spock never actually said, “But not as we know it.”

That’s the briefest summary we can come up with of the bug CVE-2021-42392, a security hole recently reported by researchers at software supply chain management company JFrog.

This time, the bug isn’t in Apache’s beleaguered Log4j toolkit, but can be found in a popular Java SQL server called the H2 Database Engine.

H2 isn’t like a traditional SQL system such as MySQL or Microsoft SQL Server.

Although you can run H2 as a standalone server for other apps to connect into, its main claim to fame is its modest size and self-contained nature.

As a result, you can bundle the H2 SQL database code right into your own Java apps, and run your databases entirely in memory, with no need for separate server processes.

As with Log4j, of course, this means that you may have running instances of the H2 Database Engine code inside your organisation without realising it, if you use any apps or development components that themselves quietly include it.

JNDI back in the spotlight

Like the Log4Shell vulnerability, this one depends on abuse of the Java Naming and Directory Interface, better known as JNDI, which is an integral part of every standard Java installation.

JNDI is supposed to make it easier to identify and access useful resources across your network, including finding and fetching remotely stored software components using well-known search-and-discovery protocols such as LDAP (the Lightweight Directory Access Protocol).

As dangerous as this sounds, it’s important to remember that similar functionality can be coded into any software (compiled or interpreted, script or binary) that has network access, can download arbitrary data, and is able to turn that data into executable code of some sort. JNDI merely makes it very much easier to build distributed apps that find and load remote components.

This sort of programmatic convenience sometimes improves security, because it’s often easier to audit and review code when it follows a well-documented path.

But sometimes it reduces security, because it makes it easier to introduce unexpected side-effects by mistake.

We saw this in Log4j, where “write out a text string to keep a record of data submitted by a remote user” could inadvertently turn into “download and run an untrusted program specified by a remote user”.

Fortunately, unlike Log4Shell, the CVE-2021-42392 bug can’t be triggered simply by embedding booby-trapped text into queries that get sent to the H2 database engine.

Although JFrog has documented several ways that cybercriminals could, in theory, trick H2 into running arbitrary remote code, the most likely attack path involves:

  • An active H2 web-based console. This is a built-in web server that usually listens on TCP port 8082, and allows developers to interact with the H2 SQL backend while it’s running. If this port is blocked or the console is inactive then this avenue of attack won’t work. (See the sketch after this list for a quick way to check.)
  • An H2 console listening on an external network interface. By default, the console only accepts connections from the computer it’s running on (localhost, usually IP number 127.0.0.1 in an IPv4 network). Unless this default is changed, attackers would need local access anyway before they could get at the H2 console.
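Here’s a minimal Python sketch of our own that checks whether anything is listening on the console’s usual port; bear in mind that the port can be changed, so a negative result isn’t proof of absence, and a positive result only tells you that something answered, not that it’s H2:

   import socket

   def port_open(host='127.0.0.1', port=8082, timeout=2):
       try:
           with socket.create_connection((host, port), timeout=timeout):
               return True
       except OSError:
           return False

   print(port_open())   # True means something is listening on port 8082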

According to H2, apps that embed the H2 engine directly into their code “are not externally exposed”, but as far as we can see this note refers only to the database engine itself when it’s not running as a SQL server, and not to the web console component.

Unfortunately, JFrog notes:

We’ve observed that some third-party tools relying on the H2 database will run the H2 console exposed to remote clients by default. For example, the JHipster framework also exposes the H2 console, and by default sets the webAllowOthers property to true.

(The setting webAllowOthers is the Java property used by H2 to decide whether to accept connections from external network interfaces.)

The default web console login page includes a form that allows users to specify how they want to connect to the database.

A malevolent user could use this form to request a JNDI lookup via LDAP, just like in a Log4Shell attack, in order to trick H2 into fetching and running a remote Java .class file (a compiled Java program).

Although a treacherous URL used to launch an attack would be submitted in the same login form that requests a username and password, JFrog discovered that the JNDI lookup happens before the username and password are verified.

This means an attacker doesn’t need working credentials to exploit the vulnerability, so that the bug opens up what’s known as an unauthenticated remote code execution (RCE) hole, the most dangerous sort.

LEARN HOW JNDI AND LDAP COMBINE FOR REMOTE CODE EXECUTION

For a live demonstration of how JNDI can be maliciously combined with LDAP lookups to download and run untrusted remote code, watch this video:

[embedded content]

If you can’t read the text in the video clearly here, try using Full Screen mode, or watch directly on YouTube. Click on the cog in the video player to speed up playback or to turn on subtitles.

What to do?

  • If you have apps that use the H2 Database Engine, upgrade H2 to version 2.0.206.

At the time of writing, 2.0.206 (released 2022-01-04) is listed as the latest version, although the H2 changelog still lists 2.0.206 as “unreleased”, and doesn’t document CVE-2021-42392 as one of the issues fixed.

JFrog, however, states that 2.0.206 includes a similar code change to the one that Apache used in the Log4j 2.17.0 update: H2 no longer allows JNDI to be used with any remote references.

This means, in theory, that attackers can no longer pull off the trick of saying “do a lookup, but use a network request that takes you to an untrusted external location so that we can manipulate the results”.

As far as we can see, the updated H2 Database Engine now only uses JNDI for what are essentially local Java function calls, so that remote code execution as an unexpected side-effect of using JNDI is no longer possible, whether by accident or by design.

  • To find instances of the H2 code on your network, you can search for files called h2-*.jar.

The wildcard text denoted by * should be of the form X.Y.Z, representing the version number of H2 that’s in use – anything below 2.0.206 should be replaced with the latest version.
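A short Python crawl will do the searching for you (point the top variable at whatever directory tree you want to inspect; /opt below is merely an example):

   import fnmatch, os

   top = '/opt'   # hypothetical starting point: choose your own

   for dirpath, dirs, files in os.walk(top):
       for name in fnmatch.filter(files, 'h2-*.jar'):
           print(os.path.join(dirpath, name))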

