Twitter apologizes for leaking businesses’ financial data

Twitter apologized on Tuesday for sticking business clients’ billing information into browser cache – a spot where uninvited parties could have taken a peek, even though they had no right to see it.

In an email to its clients, Twitter said it was “possible” that others could have accessed the sensitive information, which included email addresses, phone numbers and the last four digits of clients’ credit card numbers. Any and all of that data could leave businesses vulnerable to phishing campaigns and business email compromise (BEC) – a crime that the FBI says is getting pulled off by increasingly sophisticated operators who’ve grown fond of vacuuming out payrolls.

Mind you, Twitter hasn’t come across evidence that billing information was, in fact, compromised.

On 20 May, Twitter updated the caching instructions that it sends to browsers, thereby putting a stopper in the leak. The two affected platforms are ads.twitter.com and analytics.twitter.com. If you viewed your billing information on either platform before 20 May, that information may have gotten stuck in your browser cache.
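Those caching instructions are ordinary HTTP response headers. As a purely illustrative sketch (not Twitter’s actual response), a server that doesn’t want sensitive pages kept in the browser cache typically sends back something like this:

Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0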

Browser-sharers take heed

Twitter said that if you used a shared computer during that time, someone who used the computer after you may have seen the billing information stored in the browser’s cache. The company notes that most browsers generally store data in their cache by default for a short period of time – say, 30 days.

What to do?

Twitter recommends that those who use a shared computer to access Twitter Ads or Analytics billing information should clear the browser cache when they log out.

Twitter’s mea culpa

Whoops, Twitter said:

We’re very sorry this happened. We recognize and appreciate the trust you place in us, and are committed to earning that trust every day.

The company didn’t say how many accounts were affected.

If you’ve got questions, Twitter says you can write to its Office of Data Protection.

Not the first flub

This isn’t the first time that Twitter’s stumbled with account security.

In May 2018, we got a warning from Twitter admitting that the company had made a serious security blunder: it had been storing unencrypted copies of passwords. That’s right: plaintext passwords, saved to disk.

You’re reading Naked Security, so there’s a good chance you already know that plaintext passwords are an acutely bad idea.

A few years prior to that, in June 2016, Twitter locked out some users after nearly 33 million logins went up for sale. The thievery was credited to a well-known hacker and dark-web seller: a Russian actor known by the handle Tessa88. Twitter said at the time that its systems hadn’t been breached and that the logins may have come from other password leaks.

That’s a whole lot of leaked passwords and about 33 million reasons to repeat the “use a unique, strong password” mantra. Need a real bruiser of a password? Here’s how to pick a strong password.

Ixnay on the password reuse, too, of course. That’s where a password manager comes in handy.

Do all that to protect your credentials, wipe your browser cache if you’re potentially affected by this caching glitch, and stay safe!

Glupteba – the malware that gets secret messages from the Bitcoin blockchain

Here’s a SophosLabs technical paper that should tick all your jargon boxes!

Our experts have deconstructed a strain of malware called Glupteba that uses just about every cybercrime trick you’ve heard of, and probably several more besides.

Like a lot of malware these days, Glupteba is what’s known as a zombie or bot (short for software robot) that can be controlled from afar by the crooks who wrote it.

But it’s more than just a remote control tool for criminals, because Glupteba also includes a range of components that let it serve as all of the following:

  • A rootkit. Glupteba includes a variety of Windows kernel drivers that can hide the existence of specific files and processes. Kernel rootkits are unusual these days because they’re complex to write and often draw unnecessary attention to themselves. However, if loaded successfully, rootkits can help cybersecurity threats lie low by keeping malware files off the radar of security tools and stopping them from showing up in security logs.
  • A security suppressor. Glupteba has a module that does its best to turn Windows Defender off, and then regularly checks to make sure it hasn’t turned itself back on. It also looks for a laundry list of other security tools, including anti-virus software and system monitoring programs, killing them off so they can no longer search for and report anomalies.
  • A virus. Glupteba uses two different variants of the ETERNALBLUE exploit to distribute itself automatically across your own network, and anyone else’s it can find by reaching out from your computer. That makes it an old-school, self-spreading computer virus (or more specifically a worm) rather than just a standalone piece of malware.
  • A router attack tool. Glupteba bundles in various exploits against popular home and small business routers, using your computer as a jumping off point to attack other people. It uses one of these attacks to open up unpatched routers to act as network proxies that the crooks can use as “jumping off” points for future attacks. This leaves the unfortunate victim looking like an attacker themselves and showing up as an apparent source of cybercriminal activity.
  • A browser stealer. Glupteba goes after local data files from four different browsers – Chrome, Firefox, Yandex and Opera – and uploads them to the crooks. Browser files often contain sensitive information such as URL history, authentication cookies, login details and even passwords that can’t be accessed by code such as JavaScript running inside the browser. So crooks love to attack your browser from outside, where the browser isn’t in control.
  • A cryptojacker. Along with everything else it does, Glupteba can act as a secretive management tool for two different cryptomining tools. Cryptominers are legal if you use them with the explicit permission of the person paying the electricity bills to run the computers you’re using (and cryptomining can consume a lot of power). Here, the crooks get you to pay their power bills and take the cryptocoins for themselves.

There’s more – much more

But that’s not all.

The most interesting feature that we learned about in the report (and we think you’ll be fascinated too) is how Glupteba uses the Bitcoin blockchain as a communication channel for receiving updated configuration information.

As you probably know, zombies or bots aren’t much use to the crooks if they can’t call home to get their next wave of instructions.

Glupteba has a long list of built-in malicious commands that the crooks can trigger, including the self-explanatory update-data and upload-file commands that are detailed in the report. But it also includes, as with most bots, generic commands to download and run new malware, meaning that even if you know everything about Glupteba itself, you can’t predict what it might morph into next because the crooks can update the running malware at will.

The current command-and-control servers used by the crooks, known as C2 servers or C&Cs, might get found out and blocked or killed off at any moment, so zombie malware often includes a method for using an otherwise innocent source of data for updates.

After all, to tell a bot to switch from one C&C server to another, you typically don’t need to send out much more than a new domain name or IP number, and there are lots of public messaging systems that make it easy to share short snippets of data like that.

For example, bots have used services such as Twitter, Reddit, Pastebin and other public websites as temporary storage for secret messages, in the same way that spies from the Cold War era might have communicated using the “Personals” section in a print newspaper.

Bring on the blockchain

Glupteba uses the fact that Bitcoin transactions are recorded on the Bitcoin blockchain, which is a public record of transactions available from a multitude of sources that are unexceptionably accessible from most networks.

Bitcoin “transactions” don’t actually have to be about money – they can include a field called RETURN, also known as OP_RETURN, that is effectively a comment of up to 80 characters.

Let’s start with a list of all the Bitcoin transaction hashes (lightly redacted) associated with one of the Bitcoin wallets used as a covert source of messages by Glupteba.

The wallet ID shown here was extracted from the malware by SophosLabs.

The command line program bx below is a popular and useful Bitcoin blockchain explorer tool:

$ bx fetch-history 15y7......qNHXRtu5wzBpXdY5mT4RZNC6 | awk '$1 == "hash" { print $2 }'
dfef43552fc953ff14ca7b7bb........b79e8409b5638d4f83b1c5cec0abc3d
98987c05277c97b06edfc030c........07e74334c203075ec27b44b3cc458bf
717da8bea87d02ef62b1806cf........7e01f0267718f0351f9ae1592e02703
20b37b655133491b94a8021ab........0266d15331a14caf10570b6623a86e4
fa9cd0622535cf6c9ff449510........c5d526d5794d9d98ba5d6469a97be2c
0d83cbc74a12a9f130fcead23........d5d56cf769c6c0a4cf1cebbf9e97e4a
a7fb3bb04b82922923e8359f8........3db69bd2863ec88b98f9c69a37212ad
52ee10617c1fc3e25922b146a........7daefdc3c3d5421b0387a737e46b396
f29cbbb96de80dbc7e5236c98........3da6f8118bb356f537ce0317f7ab10c
6a3a720ab97511528309fbf64........f37bc25d95d45d3408540174daad786
8bf7acc56aab4b87d73a85b46........1486f0a764fd0a5f13e2d79e0a14625
3bd54c0832cc411f5299064e4........c11ab05c1a4aff62fa323c068e88945
1e1c0249bb22d1fcfb596e4fb........df7ab3bf627e25a2fe9530eb3dce476
51899ffeadf5d0d605d5122c3........5b82baa15a4fa6b203abf59731c158f
8a7c43d0bbf01cdf3bb28de48........6e339a063251fce30cb83ae50c2096a
55e8fe62bcc41ec465c3f1f28........f5d82443a15a30d88fefc3f55ad2f29

If we fetch the details of each of these transactions, we can see which ones include OP_RETURN data.

Here’s a transaction dump for one that does, truncated to save space:

$ bx fetch-tx 55e8fe62bcc41ec465c3f1f28........f5d82443a15a30d88fefc3f55ad2f29
{ hash 98987c05277c97b06......1ce207e74334c203075ec27b44b3cc458bf inputs { input {
[ . . . . . . . . . ] output { script "return [18fe788a52d7aa57808d801d0f8f7cd39e1a......9f986b877befce0c2f558f0c1a9844833ac702cb3eba6e]"
[ . . . . . . . . . ] value 0 } }
[ . . . . . . . . . ] 

The bytes in the OP_RETURN data shown above are the secret message.

To decrypt it, you need a 256-bit AES decryption key that’s coded into the Glupteba malware program (you can find the keys in the SophosLabs paper), and you need to know that the data returned in the blockchain consists of:

First 12 bytes = AES-256-GCM initialisation vector
Last 16 bytes = AES-256-GCM authentication tag
Bytes in between = Encrypted message (bytes from 0f8f7cd3... to ...877befce)

Decrypt the data from the blockchain to reverse the AES-256-GCM encryption, and you’ll reveal the hidden message.

This sort of “hiding in plain sight” is often referred to as steganography.

Here’s some pseudocode to give you the idea:

> cipher = newcipher('AES-256-GCM')
> cipher.key = d8727a0e...d66503cf                // extracted by SophosLabs
> cipher.iv  = 18fe788a52d7aa57808d801d           // GCM mode needs an IV
> cipher.tag = 0c2f558f0c1a9844833ac702cb3eba6e   // GCM mode needs a message hash
> plain = cipher:decrypt(0f8f7cd39e1a......9f986b877befce)
> print('secret message is: ',plain)
secret message is: venoco___ol.com                // see report for full IoC list
                                                  // this is a new C&C server to move to

And that’s how Glupteba hides its command-and-control server names in plain sight!
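If you’d like to experiment with that decryption step yourself, here’s a minimal Python sketch of the same logic, using the third-party cryptography library. The byte layout matches the description above; the key and transaction bytes in the usage comments are placeholders, not the real values (those are in the SophosLabs paper):

# Minimal sketch of the OP_RETURN decryption described above - not SophosLabs' tooling.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_op_return(op_return: bytes, key: bytes) -> bytes:
    iv = op_return[:12]              # first 12 bytes = AES-256-GCM initialisation vector
    tag = op_return[-16:]            # last 16 bytes  = GCM authentication tag
    ciphertext = op_return[12:-16]   # bytes in between = the encrypted message
    # AESGCM.decrypt() expects the authentication tag appended to the ciphertext
    return AESGCM(key).decrypt(iv, ciphertext + tag, None)

# Hypothetical usage (placeholder values, not the real key or transaction data):
#   key  = bytes.fromhex('<32-byte AES key from the SophosLabs paper>')
#   blob = bytes.fromhex('<raw OP_RETURN bytes from the transaction>')
#   print(decrypt_op_return(blob, key))   # e.g. a new C&C domain name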

How bad is it?

The bad news about Glupteba is that its many self-protection components mean that it has many tricks available to stop itself showing up in your security logs.

The good news is that this complexity makes the malware less reliable, and ironically more prone to triggering security alarms at some point.

Indeed, some of the low-level programming tricks it uses, including the kernel-level rootkits, not only don’t work on recent versions of Windows, but also often draw attention to themselves by the way they misbehave, up to and including crashing your computer with a giveaway blue screen of death.

Also, Glupteba relies on numerous exploits that were patched many months or years ago – including the attacks it uses against routers – so a patched system is much less likely to get infected in the first place.

Lastly, the main delivery mechanism we’re aware of so far that brings Glupteba infections into a network (assuming you’re patched against ETERNALBLUE and can’t get infected by its viral component) seems to be via “software cracks” on well-known piracy sites.

Like this one – a crack that didn’t lead to Adobe Illustrator, but to a Glupteba infection.

What to do?

  • Patch early, patch often. That includes your operating system, the apps you use, and any devices such as routers and file storage servers on your own network.
  • Use a decent anti-virus with built-in web filtering. Most malware, including zombie malware, arrives as a series of downloads. Even if you get hit by the first stage of a malware attack, you can still defeat the crooks if you stop the final payload from arriving.
  • Stay away from hooky software. Assume that the sort of person who’s willing to steal software such as Adobe Illustrator and give away tools to crack it “for free” is also willing to accept money from crooks to implant malware in their fraudulent downloads.

LEARN MORE ABOUT STEGANOGRAPHY

If you enjoyed this article, why not watch one of our Naked Security Live videos in which we discuss the weird and wonderful world of steganography?


You can watch the video directly on YouTube.

iOS 14, macOS Big Sur, Safari to give us ‘No, thanks!’ option for ad tracking

As is typical for Apple’s developer conferences, on Monday it started hyping the privacy and security goodies it’s got in store for us in a few months.

During the pre-taped keynote at Apple’s Worldwide Developers Conference (WWDC), the company promised to pump up data protection even more with gobs of new features in its upcoming iOS 14, macOS Big Sur, and Safari releases.

(Here’s the complete keynote transcript, courtesy of Mac Rumors, if you don’t have a spare 1:48:51 to listen to the opening for Apple’s first-ever, all-online WWDC.)

Pretty please stop the ad tracking

The big ones include the option for users to decline apps’ ad tracking. More specifically, we’ll be given the option to “Allow Tracking” or “Ask App Not to Track.” As Wired’s Lily Hay Newman points out, “asking” sounds a lot more dubious than “blocking.” But Apple makes it decisive in its notes to developers, where it says that the permission is a must-have, not a nice-to-have.

Developer notes on apps’ permission to track. IMAGE: Apple

Katie Skinner, a user privacy software manager at Apple, said during the keynote that this year, the company wants to help users to control ad tracking:

We believe tracking should always be transparent and under your control. So moving forward, App Store policy will require apps to ask before tracking you across apps and websites owned by other companies.

Developers will also be required to cough up data on exactly what third-party software development kits and other modules they’ve incorporated into their apps, what those components do, what data they collect, who they share it with and how it will be used. Think of the charts like nutrition labels, Apple said on Monday: they’re a way for developers to transparently share security and privacy details.

Apple isn’t the first to think about labels that could give us a heads-up about what a chunk of code is up to. Last month, Carnegie Mellon University presented a prototype security and privacy label based on interviews and surveys, the focus of which was the shabby state of security in the Internet of Things (IoT).

IoT devices, App Store apps, fill in the blank: why not label them all? One caveat is that we actually have to trust developers to a) be candid about what they’re up to, rather than b) lying through their teeth. Unfortunately, developers all too often choose option B. For example, sometimes they try to manipulate Google’s security by removing suspicious code before adding it back in to see what trips detection systems, and then we wind up with ad fraud apps hiding in the Play Store.

Another of many examples: in March, Google and Apple had to hose down their app stores to cleanse them of apps that secretly install root certificates on mobile devices – certificates that enable a popular analytics platform to suck up users’ data from ad-blocker and virtual private network (VPN) mobile apps.

The long privacy road

Just like iOS 13 last year, Apple’s upcoming iOS 14 mobile update – expected in the autumn with the release of new iPhones and iPads – is yet another step in the company’s long privacy march.

Since at least 2015, Apple CEO Tim Cook has drawn a distinction between how the company handles privacy versus the tech companies that “are gobbling up everything they can learn about you and trying to monetize it.” Apple, which makes its money selling hardware, has “elected not to do that,” he’s said.

Apple was already working on taking control of ad trackers when it released iOS 13 last year, bringing with it the ability to see what apps track you in the background and offering the option of switching them off. Ditto for iPadOS. The new feature came in the form of a map that displayed how a given app tracks you in the background, as in, when you’re not actually using the app. Giving us the ability to ask that we not be tracked in iOS 14 is a logical next step.

In other security-positive news, the Safari upgrade will also start checking any passwords you store in the browser and can alert you if any have been compromised in a data breach. It won’t share those passwords with Apple.

Happy talk

Of course, it’s worth noting that Apple’s much-vaunted privacy technologies sometimes fall flat on their faces. Case in point: in January, Google researchers published a proof-of-concept analysis of how the Intelligent Tracking Prevention (ITP) in Safari could actually leave users exposed to a slew of privacy issues, including, ironically, being tracked.

But even if we have to take Apple’s privacy and security news with a grain of salt, there’s plenty of meat in Apple’s upcoming privacy and security enhancements.

United States wants HTTPS for all government sites, all the time

The US government just announced its plans for HTTPS on all dot-gov sites.

HTTPS, of course, is short for “secure HTTP”, and it’s the system that puts the padlock in your browser’s address bar.

Actually, the government is going one step further than that.

As well as saying all dot-gov sites should be available over HTTPS, the government wants to get to the point that all of its web servers are publicly committed to use HTTPS by default.

That paves the way to retiring HTTP altogether and preventing web users from making unencrypted connections to government sites at all.

HTTPS relies on an internet protocol called Transport Layer Security, or TLS, which uses a combination of strong encryption and digital signatures to help to keep your browsing private.

(You may still hear TLS referred to by the name SSL, short for Secure Sockets Layer, which is its less-secure precursor. Ironically, three of the most popular programming tools used for TLS support have clung to old-school names: OpenSSL, LibreSSL and BoringSSL.)

As recently as 10 years ago, HTTPS was thought of as something you only needed occasionally, either for browsing to super-sensitive content, or when performing a security-specific action such as changing your password or logging in.

Even mainstream sites used HTTPS only when you were putting in a password or a credit card number, but happily reverted to plain old HTTP for all your other interactions.

Small business and hobby sites often ignored HTTPS altogether, because getting the necessary web certificates to make TLS work correctly took both time and money.

Worse still, web certificates typically expired every year and cost anywhere from $10 to $100 each to renew, making them an ongoing expense that many website owners couldn’t afford.

Why all the fuss?

Until fairly recently, website operators who published information that they wanted to make public anyway, such as news stories or price lists, simply couldn’t see the need for HTTPS at all.

Why encrypt data that wasn’t confidential?

More importantly, why pay a fee every year to a digital certificate signing company, known as a CA or Certificate Authority, just to encrypt something you wanted to tell the world about anyway?

But there are two compelling reasons for using TLS while you are browsing, even if you are looking at information that is already in the public domain, or downloading software that’s 100% free anyway:

  1. TLS encryption makes surveillance much harder. Your detailed browsing history – exactly what you choose to read, as well as when and where you read it, along with every file you download – gives away an awful lot to cybercriminals, stalkers, spies, unethical competitors and even just your nosy neighbours. Without TLS, pretty much anyone on the network path between you and the server can sniff out pretty much every detail of your online life.
  2. TLS encryption provides strong tamper protection. You probably don’t want crooks to be able to snoop on your bank balance, but you very definitely don’t want them to be able to alter your transactions. Likewise, when you fetch new software, you don’t want anyone on the network path to be able to inject malware into the download with ease.

As a result, HTTPS has steadily been winning out over plain old HTTP, with Google estimating that about 95% of users visiting its sites and services now “talk” HTTPS.

Website operators don’t even need to pay for web certificates any more – certificate authorities such as Let’s Encrypt let you acquire certificates for free, and with almost none of the bureaucratic hassle that used to be involved.
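For example, on a typical Linux web server with the certbot client and its nginx plugin installed, fetching and installing a free Let’s Encrypt certificate can be as simple as one command (the domain name below is just a placeholder):

$ sudo certbot --nginx -d www.example.com

Most packaged installations of certbot also set up automatic renewal for you, so the old yearly renew-and-pay ritual largely disappears.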

95% is still short of the mark

If it’s really that easy both to support TLS (e.g. using Let’s Encrypt for your certificates) and to use it (e.g. by using any browser built in the last few years), how come the web community doesn’t just drop HTTP altogether?

Why is the US government’s announcement that it plans to embrace HTTPS anything but stating the obvious?

Ideally, the US government would already have set a date after which all dot-gov websites would effectively be HTTPS only.

In fact, there’s a surprisingly easy way to do that, called Strict Transport Security, also known as HSTS (the H is for HTTP, as you probably guessed).

That’s a way that websites can tell your browser, “Next time you visit, use HTTPS even if the user wants to connect using HTTP.”
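In practice, a site opts in by sending back an HTTP response header along these lines (the one-year max-age and the optional directives here are just an illustration):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload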

Additionally, all modern browsers support something called an HSTS Preload List that tells the browser up front not to wait for a website to announce its preference for HTTPS, but to talk HTTPS to it anyway.

(There’s a master preload list of about 100,000 domains, curated by Google, that most browser vendors use as the core of their own lists.)

In theory, then, the US government could add one single entry to the global preload list, proclaiming to every browser in the world, “For any domain that ends in .GOV, use HTTPS, with no exceptions.”

Every browser would stop making HTTP connections to .GOV sites, so any government site that didn’t support HTTPS, or that didn’t support it correctly, would basically stop working overnight, which would flush out any sites that had been forgotten about pretty quickly.

But, as the government’s own report Making .gov More Secure by Default points out:

If we did that, some government websites that don’t offer HTTPS would become inaccessible to users, and we don’t want to negatively impact services on our way to enhancing them! […G]etting there will require concerted effort among the federal, state, local and tribal government organizations that use a common resource, but don’t often work together in this area.

In other words, even if 95% of the government’s websites, and 95% of their users, are happily talking HTTPS, the 5% that aren’t still adds up to a lot of users, and a lot of sites.

What happens next?

Sadly, closing that 5% gap is a long and winding road.

As a result, the US government has in fact only announced its intention to add .GOV to the global browser preload list at some undisclosed time, and it admits that the process might take “a few years” yet.

However, from 2020-09-01, the government says that it will individually add any new .GOV domains to the preload list, come what may.

In other words, anyone setting up a new server for the US government after that date will have to get HTTPS right, or their server will basically be useless.

The good news is that there are already more than 800 US government websites on Mozilla’s always-use-HTTPS list (all the way from 18F.GOV to ZEROWASTESONOMA.GOV).

But only after all, or sufficiently close to all, government sites are on the list can the government take the simplifying step of replacing all of those individual sites with one overarching entry, which will look something like this when encoded into JSON:

 { "name": "gov", "policy": "public-suffix", "mode": "force-https", "include_subdomains": true }

‘BlueLeaks’ exposes sensitive files from hundreds of police departments

DDoSecrets – a journalist collective known as a more transparent alternative to WikiLeaks – published hundreds of thousands of potentially sensitive files from law enforcement, totaling nearly 270 gigabytes, on Juneteenth.

That date – 19 June – is a holiday that celebrates the emancipation of those who were enslaved in the US. There’s currently a push to make the date into a national holiday – a movement bolstered by the nationwide Black Lives Matter (BLM) protests.

DDoSecrets, which refers to itself as a “transparency collective,” has dubbed the release BlueLeaks.

On Friday, DDoSecrets said on Twitter that the BlueLeaks archive indexes “ten years of data from over 200 police departments, fusion centers and other law enforcement training and support resources”, including “police and FBI reports, bulletins, guides and more.”

Fusion Centers are state-owned and operated entities that gather and disseminate law enforcement and public safety information between state, local, tribal and territorial, federal and private sector partners.

DDoSecrets published the data in a publicly accessible, searchable portal that says it contains more than 1 million files, such as scanned documents, videos, emails, audio files, and more.

BlueLeaks data purportedly stolen from US law enforcement agencies and fusion centers. IMAGE: BlueLeaks

The collective said that the source of the data was Anonymous: the label that some hacktivists have used when taking actions against targets such as the hospitals involved in the Justina Pelletier case, revenge-porn site publisher Hunter Moore, and, more recently, attacks on the Atlanta and Minneapolis police departments following police killings of Black men.

Information security reporter Brian Krebs says that he got his hands on an internal analysis dated 20 June, from the National Fusion Center Association (NFCA), that confirmed the validity of the BlueLeaks data.

The NFCA said that the data appears to have come out of a security breach at Netsential, a web-hosting company based in Houston. Netsential provides web hosting for many US law enforcement agencies and fusion centers.

According to the alert, the dates on the files actually go back further than 10 years. They span nearly 24 years, from August 1996 through 19 June 2020. The files include names, email addresses, phone numbers, PDF documents, images, and a large number of text, video, CSV and ZIP files.

The NFCA alert said that some of the files also contain highly sensitive information, including “ACH routing numbers, international bank account numbers (IBANs), and other financial data, as well as personally identifiable information (PII) and images of suspects listed in Requests for Information (RFIs) and other law enforcement and government agency reports.”

Krebs talked to Stewart Baker, an attorney and former assistant secretary of policy at the US Department of Homeland Security (DHS), who said that the BlueLeaks data isn’t likely to give anybody much insight into police misconduct. Rather, it’s more likely to expose sensitive law enforcement investigations and even endanger lives, he told Krebs:

With this volume of material, there are bound to be compromises of sensitive operations and maybe even human sources or undercover police, so I fear it will put lives at risk.

Every organized crime operation in the country will likely have searched for their own names before law enforcement knows what’s in the files, so the damage could be done quickly. I’d also be surprised if the files produce much scandal or evidence of police misconduct. That’s not the kind of work the fusion centers do.

For its part, DDoSecrets says that it’s politics-free:

We aim to avoid any political, corporate or personal leanings, and to act as a simple beacon of available information. As a collective, we do not support any cause, idea or message beyond ensuring that information is available to those who need it most – the people.

The collective has two criteria for publishing data: The data has to be of public interest, and it’s got to be capable of being verified. The group says it ensures anonymity for those who provide information that hasn’t previously been disclosed. DDoSecrets often passes the data it receives on to journalists or “other figures best positioned to interrogate it.”

A quick look at Twitter posts shows that since Friday, figures have most certainly been interrogating the BlueLeaks data:

A tweet from those poring over the BlueLeaks files.
