How one man could have flooded your phone with Microsoft spam

Microsoft has a neat web page that helps you get Outlook set up on your phone.

You can either scan in a QR code off the web page, which takes you to the relevant download link…

…or put in your phone number and get an SMS with the link in it.

Just like Italian security researcher Luca Epifanio, our first thought was, “What if someone decides to put in someone else’s phone number and then spam them over and over and over again?”

That would be pretty darned bad – bad for the recipient, whose phone would be swamped with unwanted text messages, and bad for Microsoft, who would look like shabby and unreconstructed spammers.

(It might also end badly for the person who dishonestly triggered all the spam in the first place, if ever they were found by law enforcement or the regulators, but that is an issue for another day.)

We tested it against our own phone number, using various browsers from various countries (we used the Tor proxy so we emerged onto the internet from semi-random places), and were happy to notice, as Luca Epifanio did, that after three messages, that was that.

Microsoft’s website will accept the number a fourth, fifth, sixth time, and so on, but simply and quietly stops texting it once it’s received three messages. (We don’t know how long it takes for the block to be lifted, but it certainly stopped us spamming ourselves at will.)

We tried to send many messages from various locations.
Only the first three showed up.

Well, Luca wondered just how robust Microsoft’s “same number” detection might be, and whether it could easily be bypassed.

Using a locally-installed web proxy, he snooped on his own web traffic to see what the data looked like on the way from his browser to Microsoft.

To his surprise, he found that by replaying the original web request with a non-digit character tacked onto the end of the phone number, such as a star (*) or a plus (+), he’d get three more goes at texting the number.

Then he could pick another character and get three more goes, and so on, allowing him to bypass the three-message limit at high speed, just by churning out new HTTP requests with a tiny modification each time.

Only the digits matter in the phone number to which the message gets sent, but – as Luca suggested in an email he sent us – it looks as though Microsoft’s “number verification” check was done with the extraneous characters included.

In other words, the number wasn’t being trimmed to its simplest correct form (you’ll see this called canonicalisation in the jargon) before it was logged, tested and used.

As a result, numbers that were identical in practice were treated as distinct, allowing the rate limit to be bypassed.
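To make the fix concrete, here’s a minimal sketch of a rate limiter that canonicalises before it counts – the names (canonicalise, try_send_link, RATE_LIMIT) are our own invention, not Microsoft’s actual code:

```python
# A minimal sketch: canonicalise the phone number to digits only
# *before* applying the rate limit. Illustrative names throughout.
import re
from collections import defaultdict

RATE_LIMIT = 3                      # maximum texts per number
send_counts = defaultdict(int)      # canonical number -> messages sent

def canonicalise(raw_number: str) -> str:
    """Strip everything except digits, so '5550100*' and '5550100'
    count as the same destination."""
    return re.sub(r"\D", "", raw_number)

def try_send_link(raw_number: str) -> bool:
    number = canonicalise(raw_number)       # canonicalise FIRST...
    if send_counts[number] >= RATE_LIMIT:   # ...then rate-limit
        return False                        # quietly drop, as Microsoft does
    send_counts[number] += 1
    # send_sms(number)  # actual delivery would happen here
    return True

# Appending junk no longer buys extra messages:
for attempt in ["5550100", "5550100", "5550100", "5550100*", "5550100+"]:
    print(attempt, try_send_link(attempt))  # True, True, True, False, False
```

Remove the canonicalise() call and every appended star or plus creates a fresh counter – and three fresh messages – which is essentially the loophole Luca found.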

This is a similar sort of problem to one that Google experienced back in 2017, when an adware app that falsely claimed to be from the vendor WhatsApp, Inc. was able to sneak past the Play Store validation checks simply by adding a space character to the company name.

Visually, you couldn’t tell the difference, so the new app looked legitimate, but programmatically the two company names were of different lengths and contained different characters – so the new app was not recognised as an imposter and was admitted anyway.
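The underlying mistake is the same: comparing raw strings instead of normalised ones. Purely for illustration (the real imposter app used a sneaky Unicode space; the names and the normalisation routine below are just examples):

```python
# Illustrative only: a trailing non-breaking space makes two visually
# identical names compare as different strings.
import unicodedata

genuine  = "WhatsApp, Inc."
imposter = "WhatsApp, Inc.\u00a0"   # trailing no-break space

print(genuine == imposter)          # False - a naive check admits the imposter

def normalise(name: str) -> str:
    # Fold Unicode lookalikes, collapse whitespace, trim the ends
    return " ".join(unicodedata.normalize("NFKC", name).split())

print(normalise(genuine) == normalise(imposter))   # True - imposter caught
```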

What to do?

The good news is that you don’t have to do anything – Luca reported this responsibly to Microsoft, who fixed the problem.

We tried adding redundant characters to our own phone number today, and were unable to send any messages after the third had gone through.

Luca also received a bug bounty payout, with the ultimate result that everyone ended up a winner.

We think that the lessons to learn are:

  • Bug hunting isn’t just about machine code hacking and reverse engineering. You don’t need to crack open a debugger and a disassembler to do useful and productive cybersecurity work.
  • Bugs can be deceptively simple. In this case, a single character that would typically be ignored was enough to bypass an important rate limit. If you’re a programmer, don’t forget to test for the obvious things as well as all those complex “corner cases” you need to deal with.
  • Responsible bug reporting really works. If you find bugs, it’s tempting to make a big splash by disclosing them for shock value in a blaze of glory, but as Luca has shown here, you can do the right thing, help everyone else, and still get recognition – without turning security holes into nightmares.

Slickwraps data breach earns scorn for all

Slickwraps, a Kansas company that makes vinyl wraps for phones and other electronics, announced last week that it had suffered a data breach.

This was no ordinary data breach. This was a breach that earned the deep scorn of both the hacker – who was twice blocked by Slickwraps for reporting the vulnerability – and observers after some other hacker went ahead and exploited the company’s vulnerable setup.

The Verge, for one, called the breach and the aftermath “comically bad”. One of the commenters on The Verge’s story, trost79muh, had this to say about when a company with garbage security meets a bug reporter with an attitude:

The whole thing on both sides was clownshoes, when an unpiercably large ego meets an unfathomably dense IT staff.

The initial hacker – who calls themselves a white-hat security researcher – isn’t coming out of this smelling like roses either. Slickwraps was given little time to follow up on the vulnerability report, and the hacker then ran amok, gaining and exploiting root access and taunting the company instead of clearly explaining the vulnerability.

The hacker who initially found Slickwraps’ vulnerability goes by the handle Lynx0x00. They recently posted an article to Medium (here’s the archived version) detailing how they pulled off the hack and how pathetic Slickwraps’ response was.

You can read the Medium post or The Verge’s writeup for all the gory details, but in essence, Hacker 1 – Lynx0x00 – found a vulnerability on Slickwraps’ phone case customization page that would enable anyone with the right toolkit to upload “any file to any location in the highest directory on their server (i.e. the ‘web root’).”

From there, an attacker could get at current and former employees’ resumes (including their selfies, email addresses, home addresses, phone numbers and more) and backed-up customer photos (including porn), among many other things.
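We don’t know what Slickwraps’ upload code actually looked like, but the textbook defence against this class of bug is to resolve the destination path and refuse anything that lands outside the intended upload directory. Here’s a minimal sketch, with invented names and paths:

```python
# A hedged sketch of path validation for file uploads; UPLOAD_DIR and the
# function name are illustrative, not Slickwraps' real code.
import os

UPLOAD_DIR = "/var/www/uploads"   # the only directory uploads may land in

def safe_upload_path(filename: str) -> str:
    base = os.path.realpath(UPLOAD_DIR)
    # Resolve the full path, following any '..' components the user supplied
    path = os.path.realpath(os.path.join(base, filename))
    # Refuse anything that escapes the upload directory
    if os.path.commonpath([base, path]) != base:
        raise ValueError("path escapes upload directory")
    return path

print(safe_upload_path("sticker.png"))     # /var/www/uploads/sticker.png
try:
    safe_upload_path("../../webroot/shell.php")
except ValueError as err:
    print("rejected:", err)                # traversal attempt refused
```

Whitelisting file extensions, and serving uploads from a host that never executes scripts, would harden this further.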

Then, Hacker 2 came along, read the Medium post, exploited the vulnerability, and gang-emailed 377,428 email addresses from the company’s records using the hacked email address hello@slickwraps.com. Some customers shared the hacked email on Twitter.

The responses to this breach are all over the map, but they generally fall into two camps: contempt for Slickwraps, and contempt for the way that Hacker 1 and Hacker 2 handled disclosure by breaching the company – not exactly “white hat” behavior, that. Here’s one such critique from Reddit’s r/hacking forum:

Reddit r/hacking thread, White hat hacker: ‘I hacked SlickWraps. This is how.’ IMAGE: Reddit screenshot

[All typos are sic] Theres just so much glaringly wrong with how this person went about this. This wasnt a “oh i found a vuln” this was an “i compromised their entire company, stole customer data and then failed to properly convey the severity”

tagging someone and telling them they failed a “vibe check” is a joke. no wonder noone at the company took the disclosure seriously. and then posting a complaint email and assuming the social media person would put 2 and 2 together that they have been compromised? also not the way to go about a breach report.

Last i checked a fairly common disclosure cycle is about 90 days, not the 7 this person gave them to figure out by vague twitter posts they had been compromised. If youre going to approach a company about your findings at least tell them you have something to disclose dont just tweet about “vibe checks” and then throw a hissy fit when they dont reply right away.

As far as the breached data goes, Slickwraps CEO Jonathan Endicott said in his announcement that the “Slickwraps Family” need not worry, as passwords and financial data are safe and weren’t involved in this breach.

The information did not contain passwords or personal financial data.

The information did contain names, user emails, addresses. If you ever checked out as “GUEST” none of your information was compromised.

However, some commenters said that their information was compromised in spite of having registered only as guests on the site.

In their Medium post, Lynx0x00 said that they used the vulnerability to access an extensive list of sensitive information:

  • All SlickWraps admin account details, including password hashes
  • All current and historical SlickWraps customer billing addresses
  • All current and historical SlickWraps customer shipping addresses
  • All current and historical SlickWraps customer email addresses
  • All current and historical SlickWraps customer phone numbers
  • All current and historical SlickWraps customer transaction history
  • Current SlickWraps API credentials for its email marketing service provider
  • Current SlickWraps API credentials for a number of the company’s credit card and payment handlers
  • Current SlickWraps API credentials for the company’s warehouse management system
  • Current SlickWraps API credentials for the company’s customer service platform
  • Current SlickWraps API credentials for the company’s official brand Facebook account
  • Current SlickWraps API credentials for the company’s official brand Twitter account
  • Current SlickWraps API credentials for the company’s official brand Instagram account

…all of which the hacker accessed only after exploiting the vulnerability to get remote code execution (RCE), decrypting the local config file, and finding the credentials to get into the company’s database.

Readers, do the actions and disclosure style of this “white-hat” hacker pass your “vibe check”? Is that how responsible disclosure works? I’m a “No” on both counts, but please, do tell us what you think.

Slickwraps says the vulnerability has been fixed, and that it’s working hard to win back customer trust.



Apple’s iOS pasteboard leaks location data to spy apps

To most iOS users, pasteboard is simply part of the way to copy and paste data from one place to another.

You take a picture, fancy sharing it with friends, and your phone uses the pasteboard to move the image to the desired app.

Now an app developer called Mysk has discovered pasteboard’s dark side – malicious apps could exploit it to work out a user’s location even when that user has locked down app location sharing.

The weakness here is caused by the fact that, unless GPS permissions were refused, images taken with the embedded camera app on iPhones and iPads are saved with embedded GPS metadata recording where each was taken.

In the simplest scenario, an iPhone user takes a photo and copies it from one app to another via the pasteboard, from which a malicious app could extract the location metadata, comparing it with timestamps to determine whether the photo – and therefore the location – is current or was taken in the past.

Images taken from third-party web sources could be filtered out by comparing aspects of an image’s metadata with the device’s hardware and software properties to detect differences.
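None of this metadata is exotic – it’s standard EXIF, readable by any code that gets hold of the image. As a rough illustration (in Python with the Pillow library rather than iOS APIs, and with a placeholder filename), here’s the kind of location detail an ordinary photo can carry:

```python
# A minimal sketch using Pillow; 'photo.jpg' is a placeholder path.
from PIL import Image
from PIL.ExifTags import GPSTAGS

img = Image.open("photo.jpg")
exif = img.getexif()

# 0x8825 (34853) is the standard pointer to the EXIF GPS sub-IFD
gps_ifd = exif.get_ifd(0x8825)
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))

# DateTimeOriginal (0x9003, in the Exif sub-IFD at 0x8769) is the
# timestamp a snooping app could use to judge whether the photo -
# and therefore the location - is current
print(exif.get_ifd(0x8769).get(0x9003))
```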

Although a malicious app should only be able to access pasteboard data while it’s active, Mysk’s bypass was to write a demo app, KlipboardSpy, paired with a widget that runs in the foreground whenever Today View is visible, to prove the hack works under real-world conditions. Moreover:

As the pasteboard is designed to store all types of data, the exploit is not only restricted to leaking location information. By gathering all these types of content, a malicious app can covertly build a rich profile for each user, and what it can do with this content is limitless.

That’s not only location data, then, but potentially anything the user has copied into pasteboard, including passwords and bank details.

Is this a bug or a feature?

There was a time when the ability to siphon GPS location history from smartphone images would have sounded of marginal use to a surveillance app. These days, however, image- and data-sharing has exploded, as any visit to social media will attest.

And yet when Mysk reported the issue to Apple, the response was muted:

We submitted this article and source code to Apple on January 2, 2020. After analyzing the submission, Apple informed us that they don’t see an issue with this vulnerability.

Arguably, Apple is correct because the pasteboard is working exactly as intended – it allows users to exchange data within and between applications while those applications are in the foreground.

That is, while it’s true that data can be slurped from the pasteboard in theory, that hypothetical downside is outweighed by the certainty that people need to access copy-and-paste on a daily basis.

Mysk’s view is that Apple could protect the iOS pasteboard by integrating it inside its permissions system, allowing users to grant access one app at a time, or by limiting the time apps can access it to the copy-and-paste action.

Currently, this is a theoretical weakness that, as far as anyone knows, has never been exploited. It’s likely that Apple will patch up this risk at some point as the permissions system inside iOS evolves.



LTE vulnerability allows impersonation of other mobile devices

Researchers have found a way to impersonate mobile devices on 4G and 5G mobile networks, and are calling on operators and standards bodies to fix the flaw that caused it.

The attack, devised by academics at Ruhr-Universität Bochum and New York University Abu Dhabi, is called IMPersonation Attacks in 4G neTworks (IMP4GT), and although it was developed against 4G, the way 5G networks are being deployed means that it could work on those newer systems too.

The attack targets LTE networks, exploiting a vulnerability in the way that they authenticate and communicate with mobile devices. The researchers claim that they can impersonate a mobile device, enabling them to register for services in someone else’s name. Not only could an attacker use this to get free services such as data passes in someone else’s name, but they could also impersonate someone else when carrying out illegal activities on the network, they point out:

The results of our work imply that providers can no longer rely on mutual authentication for billing, access control, and legal prosecution.

It wouldn’t necessarily let an attacker into your Gmail, because that’s still guarded by a strong password or, more sensibly, by MFA. Neither would it let an attacker read your SMS-based 2FA messages, David Rupprecht, one of the report’s authors, told us:

Under the assumption the authentication app is correctly implemented, e.g. uses TLS for the transmission, the attacker can not access that information. Text messages are part of the control plane and are therefore not attackable.

LTE networks use a mechanism called integrity protection, which mutually authenticates a device with the nearby cellular base station using digital keys. The problem is that this integrity protection only applies to control data, which is the data used to set up telephone communications. It doesn’t always apply to the user data, which is the actual content sent between the phone and the base station.

Rupprecht and his colleagues have already proven that they can use this weakness to change data sent between the phone and the base station, redirecting communication to another destination by DNS spoofing. This all happens at layer two of the network stack (the data link layer, which transports data across the physical radio link).
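To see why missing integrity protection matters so much, note that LTE’s user-plane encryption is a stream cipher (AES in counter mode, for instance), so the ciphertext is simply the plaintext XORed with a keystream. The toy sketch below – our illustration, not the researchers’ code – shows how anyone who can guess the plaintext of a packet can rewrite it without ever knowing the key:

```python
# Illustrative only: stream-cipher malleability, the property behind the
# DNS-spoofing attack. The keystream stands in for AES-CTR output; the
# attacker never sees it.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream  = os.urandom(16)             # secret, known only to phone/network
plaintext  = b"DNS 192.168.0.1 "        # the resolver the phone asked for
ciphertext = xor(plaintext, keystream)  # what travels over the air

# Knowing (or guessing) the plaintext, the attacker XORs in the difference
# between the old and new values - no key required:
target = b"DNS 6.6.6.6     "            # attacker-controlled resolver
forged = xor(ciphertext, xor(plaintext, target))

print(xor(forged, keystream))           # b'DNS 6.6.6.6     ' - decrypts cleanly
```

Without a cryptographic integrity check on the user plane, neither the phone nor the base station can tell the forged packet from the real one.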

The IMP4GT attack takes this vulnerability a step further by using it to manipulate data at layer three of the LTE network. This is the network layer, handling things like user authentication, tracking devices around the network, and IP traffic.

Rather than just redirecting IP packets, the new attack accesses their payload and also injects arbitrary new packets, giving the researchers control over the mobile session. They do this by mounting a man-in-the-middle attack, impersonating the base station when dealing with the mobile device, and impersonating the mobile device when talking to the base station.

The vulnerability not only affects existing 4G networks but also has implications for the 5G systems that carriers are rolling out. Companies are implementing these systems in two phases, the researchers explain. The first uses dual connectivity, where the phone uses 4G for the control plane and the 5G network for user data. The user channel doesn’t use integrity protection in this case.

The second phase of the rollout, known as the standalone phase, sees 5G networks used for both control and user data. However, this implementation only mandates user data integrity protection on user channel connections up to 64kbit/sec, which is tiny by 5G standards. Integrity protection on user channels with higher speeds than that is optional. The researchers called for mandatory full-rate integrity protection.

Is this likely to affect you? Fortunately, the researchers themselves suggest not:

Probably not! The attacker needs to be highly skilled and in close proximity to the victim. Besides the specialized hardware and a customized implementation of the LTE protocol stack, conducting the attack outside a controlled lab environment and without a shielding box would also require more engineering effort. However, for single targets of high interest it might be worth to meet the constraints above.

Nevertheless, the researchers liken the equipment needed for a practical attack to the Stingray devices that law enforcement has used in the past to eavesdrop on mobile phones and track their location. Those boxes have a range of around 2km. So investing more research into this sort of attack might be worthwhile for high-value targets, given that there is already a healthy market for tools to compromise such mobile users.


Switch to Signal for encrypted messaging, EC tells staff

Imagine that you work in government or at an NGO – both places that want to keep their communications private.

Understandably, given that governments these days use powerful spyware to surveil political activists, NGOs, and each other, you and your colleagues use an encrypted messaging app.

There’s a good chance that you’ve gone with WhatsApp, which has been a trailblazer in end-to-end encrypted messaging. As early as 2016, The Guardian was referring to the app as a “vital tool” to conduct diplomacy – an app with which diplomats could “talk tactics, arrange huddles, tweak policy – and send Vladimir Putin emojis.”

But given recent events, you have to wonder: what happens if holes develop in that supposed cone of silence?

Like, say, the stupidly simple social engineering hack that the UN said was used – allegedly by the crown prince of Saudi Arabia – to infect Amazon CEO Jeff Bezos’s phone with personal-message-exfiltrating malware, with one single click?

Or the zero-day vulnerability in WhatsApp that allowed attackers to silently install spyware just by placing a video call to a target’s phone? Or, as happened this past weekend, the way that WhatsApp and parent company Facebook shrugged off responsibility for private groups being indexed by search engines, thereby rendering them easy to find and join by anybody who knew the simple search string?

What happens, at least in the case of the European Commission (EC), is that you tell your staff to move over to Signal. Last week, Politico reported that earlier this month, the EC took to internal messaging boards to recommend moving to the alternative end-to-end encrypted messaging app, which it said “has been selected as the recommended application for public instant messaging.”

The EC didn’t mention WhatsApp, per se. It didn’t have to. Security experts have been pointing out reasons why it’s a potential national security risk for a while. Besides its recent and not-so-recent security flubs, there are privacy issues that come with being swallowed up by Facebook. One of WhatsApp’s co-founders, Brian Acton, left the company after the Facebook acquisition, saying that Facebook wanted to do things with user privacy that made him squirm. In his words: “I sold my users’ privacy.”

As Politico notes, privacy activists favor Signal not just because of its end-to-end encryption. Bart Preneel, cryptography expert at the University of Leuven, told the news outlet that, unlike WhatsApp, Signal is open-source, which makes it easy to find security flaws and privacy-jeopardizing pitfalls:

It’s like Facebook’s WhatsApp and Apple’s iMessage, but it’s based on an encryption protocol that’s very innovative. Because it’s open-source, you can check what’s happening under the hood.

Signal is recommended by a who’s who list of cybersecurity pros, including Edward Snowden, Laura Poitras, Bruce Schneier, and Matthew Green. “Use anything by [Signal developer] Open Whisper Systems,” as Snowden is quoted as saying on the app’s homepage, while Poitras praises its scalability.

Cryptographer Green says he literally started to drool when he looked at the code. While WhatsApp’s encryption is based on Open Whisper Systems’ Signal protocol, its code isn’t open-source, so it’s not as easy to spot when something goes awry. Another plus of Signal: unlike WhatsApp, it doesn’t store message metadata in worldwide data centers, where it could expose users. Nor does it back up messages in the cloud, which would further expose them to potential interception.

Sorry, WhatsApp, but you just don’t induce drooling among cryptographers.

Unlike WhatsApp, Signal is operated by a non-profit foundation – one that WhatsApp co-founder Brian Acton put $50 million into after he ditched Facebook – and is applauded for putting security above all else. Like, say, when a FaceTime-style eavesdropping bug was reported on 27 September 2019: Signal fixed it in both Android and iOS that same day.

It’s not just Signal’s reputation and WhatsApp’s problems that have pushed the EC into recommending that Signal become the private messaging app of choice – also motivating the Commission are multiple high-profile security incidents that have rattled officials and diplomats.

EC officials are already required to use encrypted email when exchanging sensitive, non-classified information, an official told Politico. The recommendation to use Signal mainly pertains to communications between EC staff and people outside the organization, the news outlet reported, and is a sign that diplomats are trying to bolster security in the wake of recent breaches.

The EC isn’t the only governmental body to dump WhatsApp in favor of Signal. As The Guardian reported in December 2019, the UK’s Conservative party switched to Signal following years of leaks from WhatsApp groups.

What’s ironic, of course, is that governments have been hounding companies to put backdoors in all of these products. While law enforcement agencies in multiple countries have been demanding an end to encrypted messaging that they can’t penetrate, those same governments are increasingly turning to ever more reliable forms of encrypted messaging.

What’s good for the gander isn’t quite up to snuff for the goose, apparently.

But while WhatsApp suffers in comparison to Signal, and while at least two government outfits have shed it in favor of Signal, WhatsApp still matters. It’s one of the messaging apps that’s at the heart of the encryption debate. Facebook, alongside Apple, has stood up to the US Congress to defend end-to-end encryption, in the face of lawmakers telling the companies that they’d better put in backdoors – or else they’ll pass laws that force an end to end-to-end encryption.

As Politico reported, in June 2019, senior Trump administration officials met to discuss whether they should seek legislation to ban unbreakable encryption. They didn’t come to an agreement, but such laws are undeniably on the table.

That matters. Regardless of which messaging app the EC or the Tories switch to, all such apps are liable to be outlawed if the world’s superpowers get their way and legislate backdoors into existence. As go WhatsApp and Apple encryption, so go Signal, Wickr, and every other flavor of secure IP messaging.

And, of course, so goes the stronger security that some government bodies are, ironically enough, moving to embrace.

Watch it, goose and gander, before you wind up cooking both yourself and your own sensitive communications.


