At last! Office macros from the internet to be blocked by default

Yesterday, we wrote that Microsoft had decided to turn off a handy software deployment feature, even though the company described itself as “thrilled” by the feature, and described its functionality as “popular”.

#ICYMI, that was about the use of so-called App Bundles to make software available for download via your browser.

By clicking on an App Bundle link (which starts with ms-appinstaller:// instead of the more usual https://), you would kickstart an installation process that looked much more official and trustworthy than just downloading an EXE file.

Criminals learned how to abuse App Bundles in order to give the impression that their malware installers were somehow approved or vetted by Microsoft, almost as though the download had come from the Microsoft Store itself (it hadn’t).

So, for the greater good of all, Microsoft essentially told its own software to block a useful feature of its own software, until further notice at any rate.

This sort of thing doesn’t happen often, especially at Microsoft.

But no sooner had we written up the App Installer changes than we received an even bigger – and, if we’re honest, a better – surprise…

… when Microsoft announced a change to the security settings in Office.

Macro code from the internet will at last be turned off by default!

Fin de siècle

If you’ve been in cybersecurity since the last millennium, you will certainly remember, and may still have occasional nightmares about, Microsoft Office macro viruses.

In fact, macro viruses were already a problem even before the Office apps merged into a suite of tools with a common macro coding language known as VBA, short for Visual Basic for Applications.

Before 1997, for example, Microsoft Word had its own scripting language called WordBasic – similar to VBA, but not compatible with it – that was widely abused by malware writers for programming self-spreading computer viruses.

But VBA was more powerful, more standardised, and once Office appeared, the malware writers took to it like… well, like a duck to water.

Simply put, if an Office document contained an embedded macro with a name that matched one of the Office menu options, then that macro would be triggered automatically whenever the user clicked on that menu item.

This made it easy for companies to adapt the behaviour of their Office apps to match their own workflow, which was enormously handy if you needed or wanted such a feature, for example to prevent documents labelled ‘confidential’ from being printed out by mistake.

Even more dramatically, some special event-based macros, such as Auto_Open, were automatically triggered even if all you did was look at the document.

As a result, a malware writer who wanted to booby-trap a document file, getting it to run an embedded virus every single time the document was viewed, didn’t have to learn any special hacking or low-level coding skills at all.

As you probably know, the family of languages known as BASIC are meant to live up to their name. The word has even been wrangled backwards into the acronym Beginners’ All-purpose Symbolic Instruction Code, to remind you that it’s easy to learn because it was designed to be easy to learn.

Virus writing for everyone

Suddenly, anyone and everyone could be a virus writer.

Given that people typically exchanged Office documents many times a day (hundreds or thousands of times more frequently than they ever exchanged programs, or EXE files), macro viruses quickly became an ever-present, ever-troublesome problem.

Part of the problem was that the vast majority of users, who didn’t really need VBA at all, were forced to have it installed and enabled by default.

Even those who didn’t want it and knew they didn’t want it couldn’t choose to skip the VBA part at installation time, or reliably turn it off afterwards.

For years, the cybersecurity industry urged Microsoft to change the Office defaults to allow installs where VBA functionality could be turned off (at the least), omitted entirely if desired (better still), or not installed by default at all (best of all).

The answer was always a resounding “No”.

Microsoft’s annoying, but understandable, argument was that endpoint software products generally get judged, by users and reviewers alike, based on what they do “out of the box”.

Redmond suggested that full-power-Office-with-non-default-VBA would quickly become known in the market as a low-end-document-suite-with-no-macro-support-at-all, and thus the company would effectively be undermining its own product to give more aggressive competitors an unfair advantage.

(We’ve simplified enormously, but you get the idea, and anyone who has ever worked in a product management department or in product marketing will probably sympathise with Microsoft’s position. If your regular product already has features that influential customers expect, you don’t do yourself any favours by pretending that it doesn’t.)

Gradual restrictions and improvements

Ultimately, Microsoft did come to the cybersecurity party, and made steady changes to the VBA ecosystem that definitely helped to curb the virus writers’ “free-for-all” that existed in the late 1990s.

Sample security-related changes include:

  • Making it easier and quicker to detect whether a file was a pure document, thus swiftly differentiating between document objects containing no macros at all, and template files with macro code inside. In the early days of macro viruses, back when computers were much slower than today, significant malware-like scanning was needed on every document file just to figure out if it needed scanning for malware.
  • Making it harder for templates containing malware macros to copy them outwards into an as-yet uninfected file. Unfortunately, although this helped to kill off self-spreading macro malware (what we call true computer viruses), it didn’t prevent macro malware in general. Criminals could still create their own booby-trapped files up front and send them individually to each potential victim, just as they do today, without relying on self-replication to spread further.
  • Popping up a ‘dangerous content’ warning so that macros can’t easily run by mistake. As useful as this feature is, because macros don’t run until you choose to allow them, crooks have learned how to defeat it. They typically add content to the document that helpfully “explains” which button to press, often providing a handy arrow pointing at it, and giving a believable reason that disguises the security risk.
  • Adding Group Policy settings for stricter macro controls on company networks. For example, administrators can block macros altogether in Office files that came from outside the network, so that users can’t click to allow macros to run in files received via email or downloaded from the web, even if they want to. But this setting is currently off by default.

More change coming soon

We still haven’t reached the point where an informed user with a local Office installation can remove VBA entirely and open Office files in a world in which VBA cannot work, rather than simply not working (which is nearly, but ultimately not at all, the same thing).

But Microsoft claims that this year’s Office 2203 releases will have one significantly different default – different by Redmond’s standards, anyway:

VBA macros obtained from the internet will now be blocked by default.

For macros in files obtained from the internet, users will no longer be able to enable content with a click of a button. A message bar will appear for users notifying them with a button to learn more. The default is more secure and is expected to keep more users safe including home users and information workers in managed organizations.

It took us ages to figure out the version number 2203. Having lived through Y2K – and, indeed, having been on duty in SophosLabs at that magical midnight hour – we’d naively assumed that no one in the world, let alone anyone in IT or computer science, would now knowingly write the year as YY instead of YYYY. But 2203 apparently refers to “March 2022”, which means official releases starting in early April 2022.

According to Microsoft, any document tagged as having come from the internet, e.g. an email attachment or a web download, will be treated as though it contains no macros.

By default, you won’t be able to enable the macros from inside Office, even if you’re convinced (or are certain) that the macros are both expected and trustworthy.

Instead, says the report, you’ll simply see a message bar with a button to learn more.

Not exactly a victory over VBA malware

We’re delighted to see this change coming, but it’s nevertheless only a small security step for Office users, because:

  • VBA will still be fully supported, and you will still be able to save documents from email or your browser and then open them locally in such a way that embedded macros will be permitted. We can therefore expect cybercriminals to come up with circumlocutions, helpful diagrams and even what sound like “security conscious” reasons for opening documents in a more convoluted but insecure way.
  • The changes won’t reach older versions of Office for months, or perhaps years. Even the current version won’t include this change in all update channels until January 2023 at the earliest. Change dates for Office 2021 and earlier haven’t even been announced yet.
  • Mobile and Mac users won’t be getting this change.
  • Not all Office components are included. Apparently, only Access, Excel, PowerPoint, Visio, and Word will be getting this new setting. Although those file types cover the majority of attacks, it would be better if this macro-blocking feature applied to all Microsoft products without fear or favour.

For further information, consult Microsoft’s official article, Macros from the internet will be blocked by default in Office.


Microsoft blocks web installation of its own App Installer files

Late last year (November 2021), we reported on an unusual campaign of scammy emails warning recipients that they were in big trouble at work.

If you saw one of these, you’ll probably remember it: a customer had made a formal complaint and the company was scrambling to hold a meeting to investigate your alleged poor conduct…

…so you were expected to follow a link to download and read the complaint against you.

Here at Sophos, many of us were on the spamming list and received emails of this sort.


You’re fired!

Some of us subsequently received follow-up messages telling us that we were no longer technically in trouble, for the rather dramatic reason that we’d been fired; our “termination letters” were attached, once again as a document download link.

The downloads looked like PDF files:

But the download links weren’t conventional http:// or https:// downloads; instead, they relied on an unusual link starting ms-appinstaller://, which (on Windows, at least), triggers Microsoft’s App Installer system to orchestrate the download process.

This ms-appinstaller protocol not only takes you down a very different visual path than a traditional web download, but also mirrors the sort of experience that you would only ever have seen before, if you had seen it at all, when using the Microsoft Store.

Notably, the process insists on a digitally-signed application bundle (for bundle, think Android APK or Linux package), and therefore starts with a reassuring, if unfamiliar, popup assuring you, with a confident-looking green tick (check mark), that this is a Trusted App, apparently coded by a vendor you know:

Note that in the screenshot above, the publisher’s name (fraudulently given as “Adobe Inc.”) is just a text string in the app bundle itself; to “verify” the signer, you need to click on the blue text Trusted App.

Unfortunately, the signer’s name doesn’t tell you much at all, or at least it didn’t last year when we first saw this trick used for distributing “trusted” malware.

Rogue signing made easy

In the example above, the app signer claimed to be an accounting firm and was a registered UK business.

But if you had chased it further than that, you would have found a company that had never really done any business, was located at an unlikely address, and was about to be deregistered anyway.

Simply put, based on an online-and-apparently-never-used-for-real company registration that probably cost just a few pounds (and who knows whether the attackers actually paid anything for it, or simply acquired access to the company data via an earlier hack or data breach?), cybercriminals were able to:

  • Acquire a “trusted” signing key and use it to sign App Installer bundles.
  • Distribute an App Installer bundle that presented itself as a Trusted App, much like an app from the curated Microsoft Store.
  • Trigger the installation via the Appinstaller.exe process, rather than more suspiciously via a browser.
  • Install and activate numerous unsigned apps under the imprimatur of the signed, and allegedly trusted, top-level bundle file.

In the you’ve-been-fired email example we encountered here at Sophos, the purveyors of the “Trusted App” turned out to be the BazarLoader malware gang.

So, if the legitimate-looking, Microsoft Store-like installation process was enough to ensnare your trust, you’d have ended up with persistent backdoor malware – what’s often referred to as a bot or zombie.

The backdoor bot started off by leaking system configuration information to the crooks, and then waited for remote instructions on what new malware to download and run next.

Cybercriminals with generic remote code execution (RCE) access like this typically use your computer as a pawn in the underground economy, “renting” it out – possibly repeatedly – to other crooks to conduct further cybercrimes, either against your computer, or via your computer, or both.

Sometimes, sadly, this sort of zombification only gets detected by the victim when the bot operators “rent out” the infected computer (or use it themselves) one last time for a final round of malware that you can’t help but notice…

…typically, a ransomware attack.

More security implied than an HTTPS certificate

Note that the level of cybersecurity belief you are invited to adopt in the case of an ms-appinstaller:// download is significantly greater than the cybersecurity inference you’re expected to make from a regular https:// web certificate.

Web certificates, which use the TLS (transport layer security) protocol to encrypt and to integrity-check the data exchanged in an HTTP session between a client and the server, don’t say anything about how trustworthy the site at the other end actually is.

Indeed, browser makers have gone out of their way, over the past decade or so, to adjust the words, icons and colours used in the browser itself to describe an HTTPS-protected site.

After all, TLS, by design and by definition, provides transport-level security, thus putting the S (for secure) in HTTP (short for hypertext transfer protocol), but doesn’t aim or claim to perform any verification or trust assessment for the content that’s transmitted.

Firefox, for example, still uses a padlock icon to denote a “secure” site, but annotates the padlock simply with the words “Connection secure” and “You are securely connected”, without making any claims about the site itself:

The Edge browser does something similar when you click on a website’s padlock, mentioning the confidentiality of the connection, but not suggesting that you can therefore ultimately trust the contents of the site:

This site has a valid certificate, issued by a trusted authority. This means information (such as passwords or credit cards) will be securely sent to this site and cannot be intercepted. Always be sure you’re on the intended site before entering any information.

In contrast, the App Installer popup that verifies the digital signature of the App Bundle you’re downloading explicitly identifies the software itself as a Trusted App, even though it allows the signer of the app to include entirely bogus vendor data in the app bundle, and then helpfully displays that fraudulent “identification” directly beneath the “Trusted App” designator.

This implies, in our minds, anyway, that a much higher cybersecurity bar has been reached: it acts as a type of content-level assertion that lives on after installation, rather than merely denoting some degree of transport-level security that protects the network part of the download only.

Recommended workarounds

We recommended, and still recommend, various security workarounds, including:

  • Use a web filter, if you have one, to block the download of likely App Installer bundles. File extensions to block include: .msix, .appx, .msixbundle and .appxbundle.
  • Use a web filter, if you have one, to prevent users from clicking on URLs that start with ms-appinstaller://. This is the special protocol (referred to in URL terminology as a scheme) used by Windows to fire up the App Installer to take over from your browser.
  • Use Microsoft Group Policy settings, if possible, to prevent non-admin users from installing App Bundles at all. If that’s a step too far, lock down users so they can install App Bundles from the Microsoft Store only.

For the Group Policy tweaks that help with this issue (which was given the vulnerability identifier CVE-2021-43890), you can consult Microsoft’s published guidelines on which settings to use.

A step too far?

Our middle recommendation above might seem rather drastic, either for your internal users if your company relies on vendors that ship their software via App Bundles, or for external customers if you have gone down the App Bundle path for software delivery.

After all, App Bundles are supposed to have several advantages, notably for vendors with endpoint products that support a range of different Windows versions running on various computer types (e.g. Intel, AMD, ARM):

  • One signed bundle to rule them all, digitally signed at the top level.
  • User doesn’t need to figure out which of numerous distinct builds to use on their computer, so can’t end up with the wrong version.
  • Web downloads via the App Installer save bandwidth by omitting the parts that aren’t required.

In Microsoft’s own words:

The ms-appinstaller protocol handler was introduced to enable users to seamlessly install an application by simply clicking a link on a website. What this protocol handler provides is a way for users to install an app without needing to download the entire MSIX package. This experience is popular, and we are thrilled that it has been adopted by so many people today.

What to do?

Despite the upbeat paragraph at the end of the previous section, Microsoft isn’t so thrilled that cybercriminals have adopted this “seamless” process that works “by simply clicking a link”, as we documented for the first time late last year.

For the time being, at any rate:

We are actively working to address this vulnerability. For now, we have disabled the ms-appinstaller scheme (protocol). This means that App Installer will not be able to install an app directly from a web server. Instead, users will need to first download the app to their device, and then install the package with App Installer. This may increase the download size for some packages.

In other words, Microsoft itself has given up entirely on supporting its own ms-appinstaller:// URL type via the web, because it thinks the process is still too easily abused.

Therefore:

  • If you use App Bundles to distribute your own software, you will need to change either your software packaging process, or your installation instructions, or both. Otherwise, potential customers may assume that your software is no longer compatible with their computer or their network, and shop elsewhere.
  • If you rely on vendors who distribute programs via App Bundles, you will need to change your deployment and updating procedures. Otherwise you might end up out of date, or with unhappy internal users.

S3 Ep68: Bugs, scams, privacy …and fonts?! [Podcast + Transcript]

LISTEN NOW

You can also listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG. Bugs, scams, privacy and… *fonts*?

All that and more on the Naked Security podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody: I am Doug; he is Paul…


DUCK. Hello, everybody.


DOUG. You have been *busy*.

We’ve got six stories of yours to talk about today… what have you been *doing*?


DUCK. I didn’t make the bugs that I felt compelled to write about!


BOTH. [LAUGHTER]


DUCK. That’s all I’m saying.


DOUG. Yes, that’s fair.

So we’ll jump right into it, because we’re going to do a lightning round and then we’ll dive a little deeper into some privacy issues.

But we like to start the show with a Fun Fact: I found out today that the North American elk can reach 700lb, which is about 320kg, yet it can also reach running speeds of 40 mph or 65 km/hr, and is often able to outrun even horses.

So, a very large animal that can run very fast.


DUCK. Did you say “elk”, Doug?


DOUG. Yes.

And we will talk about elk later in the show.


DUCK. Whenever I hear that word – because we don’t get elk here [in the UK] – it means one particular thing to me, and I bet you it’s the same thing that you’re thinking about.


DOUG. Yep! Wink, wink.

Let’s talk about these two Linux bugs: a big one that happened a week ago but has since been patched, and a maybe-not-as-big one that is happening as we speak.


DUCK. That’s right.

Let’s start with PwnKit, shall we?


DOUG. We shall.


DUCK. Whether it was a big one or not, I don’t know; that depends on your outlook.

But it’s an interesting reminder that sometimes – and the other bug proves this as well – when you introduce tools that are designed to make security easier, they sometimes make security *too* easy, such that they introduce a bypass.

And this is CVE-2021-4034, also known as PwnKit. Apparently, that is meant to be a play on words, Doug, because the bug was in a part of Linux called “Polkit”, formerly known as the Policy Kit.

[LAUGHS] I don’t think it’s quite as much of a joke as the researchers at Qualys who found it thought, but I get where they’re coming from.

Polkit is meant to be a way in which unprivileged apps can securely interact with the operating system in order to say, “Engage some kind of password prompt that will authorise the user temporarily to do something they wouldn’t normally be allowed to do.”

And you can imagine that there are lots of cases in every operating system where you might need to do that.

The classic example is when you plug in a USB stick: maybe you’re allowed to read it and access the files on it, but when it comes to wiping it, and reformatting and zapping everything, maybe it’s time to pop up a password prompt to make sure that you are authorised.

However, there is a command line tool that goes with Polkit, and it’s like the Linux or Unix sudo tool, which is “Set UID and do”, which means “Run a command as another user”, exactly like Windows Run As....

You usually use sudo for running things as root, but you can in fact use it to run as anybody else, depending on how it’s configured.

And it turns out that Polkit has a very similar program, imaginatively called pkexec, the “Polkit execute” command.

Anyway, it turned out that if you deliberately ran this pkexec app in a way that you could not normally do from the command line – in other words, if you ran it and said, “I want to give you absolutely no command line arguments at all”, two things happen.

One is that pkexec goes, “OK, you probably just want to run a command shell.”

And the other thing is that it turns out that you could actually trick the program into doing something naughty: loading an external module or program that it wasn’t supposed to.

And, bingo!, you would convert yourself, if you already had access to the computer, from dear old doug to bad old root.

Just like that, just literally by running one command – ironically, a command that was supposed to be there to improve security and to control your ability to get access to root commands.

You could abuse the command to let you take over: one of those “elevation of privilege” bugs that turns a remote code execution bug that wouldn’t otherwise be harmful into a total disaster.


DOUG. So that’s been patched?


DUCK. It has.


DOUG. OK, very good.

And then we have a bug in the video driver…


DUCK. Well, yes, but I don’t think it’s a new bug, actually.


DOUG. Yes, it looks like they had it fixed in October.


DUCK. Yes: the patch that was documented is originally dated October 2021.

I think that what happened is someone found that this was something that probably shouldn’t be in the code, but I presume they figured, “Well, we don’t really see a way that this can be exploited. And when we implement this patch, it might reduce performance slightly. So, because there’s no clear and present danger, we’ll just put it in the basket of things to do when the time comes.”

And then suddenly the time came…


DOUG. [LAUGHS]


DUCK. …and the fix got rolled out.

This one was a bug in the Intel video driver.

The thing is that you might want to give a user access to run raw code on the graphics card for performance reasons, because graphics cards aren’t just used by gamers.

They’re also used for things like [IRONIC CHUCKLE] cryptomining, video rendering, machine learning – high-performance computing, because there’s a certain class of problem that graphics cards can attack really, really quickly.

And it turns out that, deeply hidden in this driver, the i915 driver, was a possibility that somebody who had the right to run GPU graphics card code could run some code, and then later could come back and say, “Dear kernel, I’d like to run some more GPU code”, and, inadvertently, they would get access – via their graphics code – *to the memory that they had last time*.


DOUG. [WORRIED] Hmmmmmmmm.


DUCK. Even though that memory might now have been allocated to another process.

So, if you could, for example, collide your memory buffer with one that you know gets allocated, say, to some cryptographic processing subsequently…

…you might be able to read out passwords or private keys.

You might even be able to write back to somebody else’s data.

And that was the bug, basically, caused by a component inside the chip itself that aims to speed up memory access when you access memory a second, third, fourth time: a thing in the chip called the TLB, the translation look-aside buffer.


DOUG. OK, that has been patched as well.


DUCK. It has.


DOUG. Check that out: both those stories are on nakedsecurity.sophos.com.

And those of you that tuned into last week’s show will know that we talked about an Apple Safari bug – a “supercookie” situation – that has now been patched.

And they kind of slipped a zero-day fix in there at the same time…


DUCK. The zero-day is not related to the Safari patch, but the Safari bug is maybe the thing that caused this fix to come out sooner than we thought it might have done.

Like you said, in there with the Safari bug fix – which now gets a CVE – is one where Apple just says (and we’ve read these words before), [FAST, QUIET ROBOTIC VOICE] “The company is aware of a report that this issue may have been actively exploited.”

Sounds like nothing, doesn’t it?

My translation is [DANGEROUS DALEK VOICE]: “This is an 0-day. An in-the-wild exploit is already doing the rounds.”

I’m not going to say, “Be very afraid”, but certainly Patch Now!

I guess that’s good: zero-day closed off, and that Safari data leak fixed.

If you listened to us – I think it was last week, wasn’t it? – that bug was a special feature in a local database cache (again, caching data locally can be problematic!).

And while you couldn’t read other people’s databases, you could read other people’s database *names*.

Of course, to make your database name unique, as a programmer, you have two choices.

Either you pick a weird string that is specific to your website, which means that anyone else can see which website you’ve been visiting, because of the name of the database, without having to look inside it – it’s like having a phone number showing up.

Or you pick a completely random number for each user, and then it doesn’t identify the website, but it does uniquely identify the user.

Apple fixed that: they made the list of names as private as the data concealed behind the names.


DOUG. And they fixed it quickly… after fixing it slowly.


DUCK. Yes. [LAUGHS] That’s a lovely way of putting it, Doug!

I forget when it was reported, but it was sometime in the middle to end of last year, wasn’t it?

The bug finders reported it and Apple, as usual… basically, when they don’t say anything, I think that means you infer, “Thank you.”

And they sort of sat and waited and waited and waited.

Suddenly Apple started working on it in WebKit; then they mentioned how it worked, and that kind of forced Apple’s hand.

So, I guess that’s why, these days, we do have responsible disclosure: give the vendor a break and let them fix it first.

But then there has to be some payback, doesn’t there?

If the vendor goes, “Thanks for telling us. Please hold the carpet while we sweep it underneath”…


DOUG. [LAUGHS]


DUCK. …so the idea is there’s a deadline. “Please do it by then.”


DOUG. All right, so those updates are available wherever you get your Apple updates.

We will move on to a COVID scam that promises an at-home PCR testing device… what’s the catch?


DUCK. Well, the good news is that if you click the link…

(It was reported to us by a Naked Security reader who got it on… I think it was Friday afternoon last week. The domain it was using wasn’t completely unbelievable – it was omicron DOT testing-and-a-few-funny-characters DOT com – and that domain had been set up *that morning*, and the Let’s Encrypt HTTPS certificate had been issued *that morning*.)

…they haven’t got the site ready, and the site is still not working; everyone’s blocking it now.

So, we don’t actually know whether it was crooks just testing how many people would click, or whether they were just looking for IP numbers.

I’m suspecting, from the files that we could see on that website that weren’t protected – very few of them – that it was just an attempt to set up a believable scam where they didn’t quite get the website right in time.

It’s not that unbelievable: I can see why there would be people who go, “I’m not surprised. Who would have thought the modern computer would have 16 processor cores in an affordable laptop? Who would have thought miniaturisation would get to where it is today? Maybe you *can* get a PCR testing device at home.”

It’s not a laughable idea, and you can see why people would click through.

So: beware, folks!


DOUG. OK, good.

And then our final quick story to cover is this “Google Font” brouhaha.

The existential question for any web developer: to link or not to link to a font library? Download it and put it on your own server? Is it OK to link out?


DUCK. Well, to be fair to Google Fonts, they actually say, “You can do this how you like. They’re open source fonts. Here’s the licensing.”

They’re trying to do the right thing because fonts have been one of the most ripped off bits of intellectual property in history, haven’t they, online and for printing.


DOUG. Yes.


DUCK. Google is trying to do the right thing, in my opinion, by having correctly licensed typefaces from lots of people, including reputable designers who want to make their fonts available free.

And they’re saying: “You can download them; you can use them on your own website; you can share them with other people because they are open source, but we will host them for you as well, if you like.”

You and I were chatting about this earlier, weren’t we, Doug?

And you said that you would never have thought, in your web admin days, to copy the font, because they do surprisingly regularly get updated, don’t they?


DOUG. Yes. I don’t want to have to worry… t’s one more thing to look after.


DUCK. Absolutely!

Anyway, Doug, a court in Bavaria, in Munich – a District Court in Munich – heard a case where the plaintiff said, “I went to this website that fetched the font from Google so it could display the rest of their content, which was stored locally. They could have stored the font locally. They jolly well *should* have, because they violated my privacy by giving my IP number to Google.”

And the court found in the plaintiff’s favour and fined the website €100 [$110], I do believe, and said, “No, you have to store it locally.”


DOUG. What’s the German phrase for “slippery slope”? Because that’s what I’m thinking this is.


DUCK. Or the German for “very deep hole”.

It’s interesting that although – because it’s somewhat esoteric – this has not been the most viewed article of the week on Naked Security, it’s *by far* the most commented on.


DOUG. It is!


DUCK. But, like you say, “slippery slope/great deep hole”.

Like, “What next?”

As one commenter said, perhaps going a little bit over the top, “Well, then, you shouldn’t even be allowed an ISP!”


BOTH. [LAUGHTER]


DUCK. “Dial-up modem into your own basement. 386. Do it yourself!”

Where do you draw the line?

So, I don’t quite understand this.

I see where they’re coming from: IP numbers are personally identifiable information; GDPR says so; I don’t think that’s unreasonable.

But the idea that if you *can* host it locally, you *must* host it locally?

Good luck with that in the cloud era.

And good luck defining where self-hosting ends and “somebody else hosting it for you” starts.


DOUG. Well, 25 comments and counting!

So if you want to opine, get over to that article, that’s: Website operator fined for using Google Fonts the cloudy way on nakedsecurity.sophos.com – lots of discussion!


DUCK. We shall see how it ends up – I’m sure we haven’t heard the end of that.


DOUG. All right, it is now time for This Week in Tech History.

We talked about elk earlier in the show, and this week in 1982, we were introduced to the Elk Cloner virus, one of the first viruses…


DUCK. [TRIUMPHANT] I got it right, Doug!


DOUG. …if not the first to spread in the wild.

Cloner was a boot sector virus written by then-15-year-old Rich Skrenta, and distributed on Apple ][ floppy disks.

The virus was hidden inside a game and wouldn’t spring into action until the 50th time the game was loaded.

At that point, the virus, which had been loaded into memory, would spread to uninfected disks when they were inserted into the drive.

So, it spread, and I think Skrenta came out and said, “Look, man, this is a joke. A prank. I used it to prank my friends. What’s the big deal?”

And, back then, what was the big deal?


DUCK. Well, I’m not sure that there was one then, although if only we had all learned a lesson from it before boot sector viruses became a huge problem on the IBM PC four years later.

Those of our listeners who don’t remember floppy disks will also probably not realise that the big hassle with boot sector viruses is that *every floppy disk had a boot sector*.

It didn’t have to be a bootable operating system disk, or a bootable game disk.

It could be a blank diskette: when you formatted a disk, it would get a boot sector on it.

But when you booted, it just said, “This is not a bootable disk.”

And by the time you saw that message, you could already have run the boot sector virus.

In those days, if you left a floppy in, it would *always* try to boot off the diskette, so the chance that you would contract a virus from an otherwise blank diskette by mistake was huge.

“Elk Cloner – the program with a personality”, Doug.

[RECITES POEM FROM VIRUS] “It will get on all your disks/It will infiltrate your chips/Yes, it’s Cloner!/It will stick to you like glue/It will modify RAM, too/Send in the Cloner!”


BOTH. [LAUGHTER]


DUCK. Well, I believe that Rich Skrenta went on to have a good career as a computer scientist, still does.


DOUG. He did!.


DUCK. So, it didn’t end badly for him.

I can’t imagine that anyone could easily have had him prosecuted back then.

I guess the first time you do it, it *is* a joke.

Once people have realised that the joke isn’t funny, and you’ve realised it yourself, *that’s* when it starts becoming naughty.


DOUG. Anyhoo, let’s talk about privacy.


DUCK. [IRONIC] Malware won’t last, Doug! It’ll die out!


DOUG. [LAUGHING] No, it’s a fad!

Last week, it was Data Privacy Day.

And, Paul, I thought you had a great article with some no-nonsense tips for keeping your data private.

So, let’s talk a little bit about those.

The first thing you say is, “Get to know your privacy controls”, which I’m guessing not a lot of people do.


DUCK. Or perhaps they *think* they do.

Because they’ve looked at… say if they’ve got a Mac, they’ve gone into System Preferences and they’ve clicked through to “Firewall”, “Security”, “Privacy”, and they’ve fiddled with the settings there.

Maybe they’ve gone into Safari and they’ve changed some settings there…

And then they forget, unfortunately, that if you then install Firefox, well, that’s got its own privacy settings!

They’re in a “Settings” menu, but they don’t have quite the same names, and they’re not arranged in quite the same menu hierarchy.

And then maybe they install Edge, or Chrome, or Chromium and they all have their own menu systems as well.

And then maybe you think, “I know! Tonight I’m going to spend 38 minutes digging through all the Facebook privacy options and security settings.”

Whether you love or hate Facebook, you actually might be pleasantly surprised at how much control you do have; the problem is that you have so much control that there are so many different settings that you need to take into account, under so many different headings.

And then every other social network; every other website; every other online service: they’ll have some settings that are the same; some overlap; some don’t; some turn on 2FA *here*; some turn it on *there*…

And unfortunately, you don’t really have much choice other than to get yourself a plentiful supply of soft drink, maybe even some popcorn, if you don’t mind getting popcorn detritus on your keyboard…


DOUG. [LAUGHS]


DUCK. …and take the time to go through the privacy settings in all the apps and online services you use.

It *is* a bit of a pain in the behind, but you may find it’s well worth it.

Because even though social networking companies are getting a bit better about their defaults – both because they recognise that it makes users happier, and because there are regulations they now have to comply with – their opinion may not coincide with yours.

After all, you are the product, and they do have different expectations of what they can collect…


DOUG. That is a great segue to another great tip: “Decide what your data is really worth.”

The ultimate question, with everything being free online.


DUCK. It is, isn’t it?

Sadly, that’s one of the shortest tips that I put out, because the amount of advice or discussion or explanation I can give you is quite low.

I don’t know what your home address feels like it’s worth to you, or your home phone number; I don’t know whether you think it’s worthwhile to share this photo or that photo…

But the point is that you *can* set some limits on what you’re willing to hand over – and then back yourself and stick to them, if you do see an app or a website that is asking for more than you think it’s worth, or more than you think it needs.

So, if you’re getting free WiFi for 35 minutes, for instance, at a shopping mall that you’ve never been to before, and they say, “We need your date of birth”, then just say, “You know what, maybe you do, maybe you don’t. But I don’t need your service.”

Find somewhere that isn’t so nosy!

To use old language: “Vote with your chequebook!”


DOUG. Very good.

And this next tip – I am absolutely delighted that this is the second week in a row we are talking about FOMO and JOMO!

This tip is: “Be fair to yourself and to others.”

What did you mean by that, Paul?


DUCK. I meant that it is sometimes easy, particularly if you’re out on the town. or you’re having fun with friends, or everyone else is talking about this fantastic new social network service that they love…

It’s really easy to go, “OK, you know what? I’ve decided how much my data is worth. I’ve decided how much I want to share. This service is asking for too much. But FOMO! I don’t want to miss out! I want to be in it. I want to be there with all my buddies. I’m going to let them push me into sharing stuff that I’m not really comfortable with.”

Maybe remember that, for every FOMO there is, as you said last week, a JOMO: the *joy* of missing out.

You don’t have to feel smug about it, but sometimes – particularly if there’s a security breach down the line – you’re going to be the one with a smile on your face, while everyone else is running around thinking, “Oh, golly!”

So, don’t let your friends talk you into sharing more about your digital life than you want to.

And the flip side of that is that if you’re more liberal with your data than one of your friends, and they say, “You know what? I was happy to be in that selfie, but I didn’t realize you planned to post it on XYZ service. Please don’t”…

…then let them enjoy their JOMO moment.

So don’t… I nearly said a rude word there… don’t be a naughty person!

If they say, “Please don’t post it”, let them have their way.

Life’s too short to wind up your friends over something as simple as that.


DOUG. OK, and then a very practical tip: “Don’t let scammers into your life.”


DUCK. Yes, that’s once again FOMO and JOMO on the opposite sides of the coin.

Meeting new people online can be fun; in theory, there’s nothing wrong with it.

But it’s when you’re in a little bit of a hurry, or when you let yourself get pushed along, then it’s not just that you might leak data that you later regret – for example, where some crook comes along and figures out your birthday and your dog’s name and your cat’s name, and puts them all together and guesses your password.

It might be that you are simply befriending someone that, if you had kept your eyes and ears a bit wider open, you would have realised was up to no good from the start.

Stop. Think. Connect!

When you let someone trick you, squeeze you, press you into doing things online faster than you would naturally do them yourself, you could end up in trouble.


DOUG. Great!

We’ve got some additional advice that you can share with your friends and family, so we invite you to check that out.

That article is called: Happy Data Privacy Day, and we really do mean happy on nakedsecurity.sophos.com.


And it’s that time of the show: the Oh! No! of the week.

Reddit user Computer1313 writes…

“An old, short story from a previous co-worker.

He was working at an automotive manufacturing plant, many years ago, and he was reprogramming the paint robotic arms for the incoming new truck model.”

(What could possibly go wrong?)

“He uploaded the changes and started the automated painting system with a test truck frame to see how the paint job is done.

He had his hand over the emergency stop button in case anything went wrong.

All he remembered from the immediately ensuing chaos was that one of the robotic arms struck a steel beam and broke off its nozzle, so now a solid jet of paint was spraying everywhere.

Another arm repeatedly smashed the frame like a hammer, caving in the truck’s roof.

He said he was so shocked that he didn’t press the emergency stop button until he heard yelling.

It took a long time for the paint fumes to be vented out so they could go in, clean up the paint mess, and repair the damages.

Oh, and it was the day when the plant management was giving corporate executives a tour of the place.

I asked what their facial expressions looked like when they saw the ruined paint station and he said, ‘Pure horror.’

So, just a cautionary tale that computer programming can sometimes be destructive and dangerous.”


DUCK. I don’t like that story, Doug, because it is grist to the mill of anyone who stands firm against our advice to Patch early, patch often


DOUG. [LAUGHS]. Yes!


DUCK. …because *that* is what I call a bug.


DOUG. Yes, Sir!


DUCK. Can you imagine a full “Fire Brigade-type spraying tube” of paint?


DOUG. [LAUGHS] Instead of a beautiful little spritz.

I like to imagine this thing looks just like an octopus too – just a bunch of arms flailing around.


DUCK. I assume that the next update he tried, he had an artificial hand on a long stick, held over the button at a long distance.


DOUG. Yes!


DUCK. Terrifying.


DOUG. Everyone be careful out there!

If you have an Oh! No! you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com, you can comment on any one of our articles, or hit us up on social @NakedSecurity.

That’s our show for today – thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


DOUG. Stay secure!
DUCK. Patch early, patch often, and STAND BACK!


BOTH. [LAUGHTER]

[MUSICAL MODEM]


Elementor WordPress plugin has a gaping security hole – update now

If you run a WordPress site and you use the Elementor website creation toolkit, you could be at risk of a security hole that combines data leakage and remote code execution.

That’s if you use a plugin called Essential Addons for Elementor, which is a popular tool for adding visual features such as timelines, image galleries, ecommerce forms and price lists.

An independent threat researcher called Wai Yan Myo Thet recently discovered what’s known as a file inclusion vulnerability in the product.

This security hole made it possible for attackers to trick the plugin into accessing and including a server-side file…

…using a filename supplied in the incoming web request.

Simply put, a malicious visitor could trick an unpatched server into serving up a file it’s not supposed to, such as the server’s own username database, or coerce the server into running a script it shouldn’t, thus creating a remote code execution (RCE) hole.

As you probably know, web server RCE bugs are typically abused to implant malware that allows the attackers to do something to your immediate, and often costly, detriment.

Typical examples of how cybercriminals exploit RCE bugs include:

  • Opening up a backdoor, so they can sell access to your server on to other crooks.
  • Launching a cryptominer to steal your electricity or cloud services to generate money for themselves.
  • Setting up network surveillance tools to snoop on and steal your own or your customers’ data.

Server-side includes

Web server file inclusions, often referred to in the jargon as server-side includes, are used in dynamic website content software such as WordPress so that you don’t need to store pre-generated HTML for every page on your website.

For example, if your website includes a page laid out like this…
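(Here’s a simplified, made-up sketch of the sort of page we mean; the exact markup doesn’t matter.)

<html>
<head>
   <style> /* site-wide styles, identical on every page */ </style>
</head>
<body>
   <div class="header"><!-- site-wide header, identical on every page --></div>

   <p>Here is the primary content that is unique to this page.</p>

   <div class="footer"><!-- site-wide footer, identical on every page --></div>
</body>
</html>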

…then only the primary content – the part your reader is actually supposed to see, which is the middle paragraph in the sketch above – is unique to the page.

If you have a completely static, pre-rendered website and want to change the style settings, or to alter the wording of the header and footer, you’ll need to edit or regenerate every web page on the site, even those that might end up never getting visited.

But with a website builder that allows server-side includes, you might be able to rewrite your page something like this:
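(Once again, this is just an invented sketch to illustrate the idea; the #include notation isn’t the syntax of any particular product.)

<html>
<head>
   #include 'content/styles.css'
</head>
<body>
   #include 'content/header.html'

   <p>Here is the primary content that is unique to this page.</p>

   #include 'content/footer.html'
</body>
</html>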

The idea is that the server will read in the specified #include files at run-time and add them into the HTML page that actually gets served up, thus generating the web page automatically when needed, using the latest versions of the styles, header and footer files.

Often, you will want to customise some aspect of the files you include, such as adapting the style to suit your users, for example based on a cookie that their browser supplies when they visit.

Your server-side include system might therefore allow you to “tweak” the names of the files included, for example like this:
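(Still in our invented notation, the ${cookie:...} construct below is meant to substitute in the value of a cookie sent by the visitor’s browser.)

#include 'content/theme/${cookie:usr_theme}'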

If you’re wondering why we chose the “magic characters” ${...} in our invented server-side scripting system above, it’s a nod to the infamous Log4Shell vulnerability, where those very characters were used with untrusted, user-supplied data to trick the Log4j Java programming system into running unwanted commands.

Untrusted input can’t be trusted

You can see the obvious problem here, namely that if the special text string ${cookie:usr_theme} blindly extracts the text in the usr_theme cookie supplied by the user, and uses it to build a filename, then there’s nothing to stop a malicious user from asking for a theme called, say, ../../../../etc/passwd.

This would trick the server into #including the file content/theme/../../../../etc/passwd, which wouldn’t read in a file from the content/theme/ directory, but would navigate up to the root directory, and then descend back down into the system’s /etc/ directory to read in the contents of the passwd file instead.

Even if the resulting HTML file wouldn’t display properly because of the unexpected content dumped into it, the visitor would still end up with a copy of your passwd file, and thus a list of all accounts and usernames on your server.
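In real code, the underlying blunder is nothing more than gluing untrusted text onto a fixed path prefix. Here’s a minimal PHP sketch of our own (not code from any actual product) that makes the same mistake:

$theme = $_COOKIE['usr_theme'];       # attacker sets this cookie to: ../../../../etc/passwd
$file  = 'content/theme/' . $theme;   # becomes: content/theme/../../../../etc/passwd
readfile($file);                      # dutifully serves up /etc/passwd instead of a theme file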

Worse still, many web servers and content management systems treat some filenames specially when they’re included.

Microsoft IIS, for example, considers files with the extension .aspx special; many Linux-based web services do something similar if the file ends in .php.

Instead of including the raw contents of the file, the system will run the file as a program (typically written in Visual Basic on Windows servers, and in PHP on Linux servers), and include the output from the program instead.

This makes content such as customised pages and one-off search results easy to generate on demand, because the code needed to generate the content is embedded in a logical place in the directory tree that represents the structure of the website.

Of course, this also means that an uncontrolled #include directive, like the theme-based one we envisioned above to steal the password file, could be used for remote code execution as well as data leakage.

For example, imagine that we replaced the malicious “theme cookie” above with text such as ../../scripts/listusers.php, because we knew or could guess that the server in use contained a PHP utility script of that name to list all the website logins.

We’d then be able to trick the server into running that script, even if it was never intended for running from inside web pages, and wasn’t supposed to be accessible to outsiders at all.

Even worse, we might find that we could use the ../.. (“move upwards in the directory tree”) trick to execute a script file called, say, ../../uploads/pending/img000067.php.

Usually, there wouldn’t be such a file and the #include would therefore obviously fail, but if we knew (or suspected) that the server had an uploads/pending/ directory where user-contributed objects such as comments, images, videos and so on were stored temporarily until a moderator decided whether to approve them…

…and if we could upload a “pending” file using a name we could subsequently predict, then we’d not only have a remote code execution hole, we’d have a totally arbitrary remote code execution hole.

We could first upload a rogue script, so that the file appeared temporarily in the uploads/pending/ directory, and immediately afterwards trick the server into executing it by setting a special cookie to trigger the attack.

Unfortunately, the Essential Addons for Elementor plugin included a bug of this sort, based on PHP code that constructed a filename for server-side inclusion like this:

$sentbyuser = $_REQUEST['userinfo'];
# ...
# ... filename built directly from untrusted request data
$filetoinclude = sprintf( '%s/Template/%s/%s', $systemfilepath, $sentbyuser['name'], $sentbyuser['file_name'] );
# ...
# ... no safety checks done on constructed filename
# ...
include $filetoinclude;

This is totally unacceptable code, because it constructs the variable $filetoinclude, and then includes it, without doing any checks for dangerous characters such as ../ sequences in the untrusted variables $sentbyuser['name'] and $sentbyuser['file_name'].

The creators of the plugin were informed of the hole by original bug-finder Wai Yan Myo Thet; unfortunately, their first attempt to safety-check and sanitise the filename was insufficient to keep determined attackers out.

Following further prodding from WordPress security company Patchstack, the plugin was updated twice more in quick succession to stave off attacks caused by malicious incoming user data.

According to Patchstack, the buggy code is only used if certain gallery-related web widgets are enabled, so that not all unpatched Essential Addons for Elementor sites are vulnerable. Nevertheless, we recommend patching promptly anyway, rather than leaving an easy-to-exploit RCE hole that could come to life at any time based on a server configuration change that would otherwise be uncontroversial.

What to do?

  • For Essential Addons for Elementor users. Check that you have version 5.0.6 [released on the day this article was written] or later. The bug was discovered in version 5.0.3, but patch 5.0.4 was quickly superseded by the updated patch 5.0.5, which was in turn quickly superseded by 5.0.6.
  • For web developers. We shouldn’t need to say this as often as we do (or even, perhaps at all) in 2022, but we shall say it anyway: validate your inputs.

Don’t just check programmatic input when you know for sure that it came from an untrusted source such as an HTTP request.

Even if you think you can trust the upstream process or user who provided your input, check it anyway, in case that trusted process itself contains a bug, or relied in some way on tainted content that started further up in the data supply chain.
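As a rough sketch of the sort of checking we mean – our own illustration, reusing the variable names from the buggy code above, and not the actual fix that the plugin authors shipped – you could both restrict the untrusted values to a safe character set and confirm that the final path still lands inside the directory you expect:

$sentbyuser = $_REQUEST['userinfo'];

# Allow only simple names: letters, digits, dots, dashes and underscores (no path separators)
if ( ! preg_match( '/^[A-Za-z0-9._-]+$/', $sentbyuser['name'] ) ||
     ! preg_match( '/^[A-Za-z0-9._-]+$/', $sentbyuser['file_name'] ) ) {
    die( 'Invalid template name' );
}

$filetoinclude = sprintf( '%s/Template/%s/%s', $systemfilepath, $sentbyuser['name'], $sentbyuser['file_name'] );

# Belt-and-braces: resolve the path and make sure it still lives under the Template directory
$basedir  = realpath( $systemfilepath . '/Template' );
$resolved = realpath( $filetoinclude );
if ( $basedir === false || $resolved === false || strpos( $resolved, $basedir ) !== 0 ) {
    die( 'Invalid template path' );
}

include $resolved;

Even checks like these are easy to get subtly wrong (which is presumably why the plugin needed several attempts at a patch), so treat this as a starting point, not a guarantee.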


Linux kernel patches “performance can be harmful” bug in video driver

Remember all those funkily named bugs of recent memory, such as Spectre, Meltdown, F**CKWIT and RAMbleed?

Very loosely speaking, these types of bug – perhaps they’re better described as “performance costs” – are a side effect of the ever-increasing demand for ever-faster CPUs, especially now that the average computer or mobile phone has multiple processor chips, typically with multiple cores, or processing subunits, built into each chip.

Back in the olden days (by which I mean the era of chips like the Inmos Transputer), received wisdom said that the best way to do what is known in the jargon as “parallel computing”, where you split one big job into lots of smaller ones and work on them at the same time, was to have a large number of small and cheap processors that didn’t share any resources.

They each had their own memory chips, which means that they didn’t need to worry about hardware synchronisation when trying to dip into each others’ memory or to peek into the state of each others’ processor, because they couldn’t.

If job 1 wanted to hand over an intermediate result to job 2, some sort of dedicated communications channel was needed, and accidental interference by one CPU in the behaviour of another was therefore sidestepped entirely.

Transputer chips each had four serial data lines that allowed them to be wired up into a chain, mesh or web, and jobs had to be coded to fit the interconnection topology available.

Share-nothing versus share-everything

This model was called share-nothing, and it was predicated on the idea that allowing multiple CPUs to share the same memory chips, especially if each CPU had its own local storage for cached copies of recently-used data, was such a complex problem in its own right that it would dominate the cost – and crush the performance – of share-everything parallel computing.

But share-everything computers turned out to be much easier to program than share-nothing systems, and although they generally gave you a smaller number of processors, your computing power was just as good, or better, overall.

So share-everything was the direction in which price/performance, and thus the market, ultimately went.

After all, if you really wanted to, you could always stitch together several share-everything parallel computers using share-nothing techniques – by exchanging data over an inexpensive LAN, for example – and get the best of both worlds.

The hidden costs of sharing

However, as Spectre, Meltdown and friends keep reminding us, system hardware that allows separate programs on separate processor cores to share the same physical CPU and memory chips, yet without treading on each others’ toes…

…may leave behind ghostly remains or telltales of how other programs recently behaved.

These spectral remnants can sometimes be used to figure out what other programs were actually doing, perhaps even revealing some of the data values they were working with, including secret information such as passwords or decryption keys.

And that’s the sort of glitch behind CVE-2022-0330, a Linux kernel bug in the Intel i915 graphics card driver that was patched last week.

Intel graphics cards are extremely common, either alone or alongside more specialised, higher-performance “gamer-style” graphics cards, and many business computers running Linux will have the i915 driver loaded.

We can’t, and don’t really want to, think of a funky name for the CVE-2022-0330 vulnerability, so we’ll just refer to it as the drm/i915 bug, because that’s the search string recommended for finding the patch in the latest Linux kernel changelogs.

To be honest, this probably isn’t a bug that will cause many people a big concern, given that an attacker who wanted to exploit it would already need:

  • Local access to the system. Of course, in a scientific computing environment, or an IT department, that could include a large number of people.
  • Permission to load and run code on the GPU. Once again, in some environments, users might have graphics processing unit (GPU) “coding powers” not because they are avid gamers, but in order to take advantage of the GPU’s huge performance for specialised programming – everything from image and video rendering, through cryptomining, to cryptographic research.

Simply put, the bug involves a processor component known as the TLB, short for Translation Lookaside Buffer.

TLBs have been built into processors for decades, and they are there to improve performance.

Once the processor has worked out which physical memory chip is currently assigned to hold the contents of the data that a user’s program enumerates as, say, “address #42”, the TLB lets the processor side-step the many repeated memory address calculations that might otherwise be needed while a program was running in a loop, for example.

(The reason regular programs refer to so-called virtual addresses, such as “42”, and aren’t allowed to stuff data directly into specific storage cells on specific chips is to prevent security disasters. Anyone who coded in the glory days of 1970s home computers with versions of BASIC that allowed you to sidestep any memory controls in the system will know how catastrophic an aptly named but ineptly supplied POKE command could be.)

The drm/i915 bug

Apparently, if we have understood the drm/i915 bug correctly, it can be “tickled” in the following way:

  • User X says, “Do this calculation in the GPU, and use the shared memory buffer Y for the calculations.”
  • Processor builds up a list of TLB entries to help the GPU driver and the user access buffer Y quickly.
  • Kernel finishes the GPU calculations, and returns buffer Y to the system for someone else to use.
  • Kernel doesn’t flush the TLB data that gives user X a “fast track” to some or all parts of buffer Y.
  • User X says, “Run some more code on the GPU,” this time without specifying a buffer of its own.

At this point, even if the kernel maps User X’s second lot of GPU code onto a completely new, system-selected, chunk of memory, User X’s GPU code will still be accessing memory via the old TLB entries.

So some of User X’s memory accesses will inadvertently (or deliberately, if X is malevolent) read out data from a stale physical address that no longer belongs to User X.

That data could contain confidential data stored there by User Z, the new “owner” of buffer Y.

So, User X might be able to sneak a peek at fragments of someone else’s data in real-time, and perhaps even write to some of that data behind the other person’s back.

Exploitation considered complicated

Clearly, exploiting this bug for cyberattack purposes would be enormously complex.

But it is nevertheless a timely reminder that whenever security shortcuts are brought into play, such as having a TLB to sidestep the need to re-evaluate memory accesses and thus speed things up, security may be dangerously eroded.

The solution is simple: always invalidate, or flush, the TLB whenever a user finishes running a chunk of code on the GPU. (The previous code waited until someone else wanted to run new GPU code, but didn’t always check in time to suppress the possible access control bypass.)

This ensures that the GPU can’t be used as a “spy probe” to PEEK unlawfully at data that some other program has confidently POKEd into what it assumes is its own, exclusive memory area.

Ironically, it looks as though the patch was originally coded back in October 2021, but not added to the Linux source code because of concerns that it might reduce performance, whilst fixing what felt at the time like a “misfeature” rather than an outright bug.

What to do?

  • Upgrade to the latest kernel version. Supported versions with the patch are: 4.4.301, 4.9.299, 4.14.264, 4.19.227, 5.4.175, 5.10.95, 5.15.18 and 5.16.4.
  • If your Linux doesn’t have the latest kernel version, check with your distro maintainer to see if this patch has been “backported” anyway.

By the way, if you don’t need and haven’t loaded the i915 driver (and it isn’t compiled into your kernel), then you aren’t affected by this bug because it’s specific to that code module.

To see if the driver is compiled in, try this:

$ gunzip -c /proc/config.gz | grep CONFIG_DRM_I915=
CONFIG_DRM_I915=m                 <-- driver is a module (so only loaded on demand)

To see if the modular driver is loaded, try:

$ lsmod | grep i915
i915                 3014656  19  <-- driver is loaded (and used by 19 other drivers)
ttm                    77824   1  i915
cec                    69632   1  i915
[. . .]
video                  49152   2  acpi,i915

To check your Linux kernel version:

$ uname -srv
Linux 5.15.18 #1 SMP PREEMPT Sat Jan 29 12:16:47 CST 2022
