Cybersecurity Awareness Month: Building your career

The overall motto of #Cybermonth consists of three simple words.

Repeat these words (try sitting on your hands while you’re saying them, for extra safety) whenever you’re faced with a cybersecurity risk, instead of rushing straight in and making a possibly expensive mistake:

Stop. Think. Connect.

Well, in Week 3 of #Cybermonth 2021, there are three more official words you can try, too:

Explore. Experience. Share.

Not quite as catchy as “Stop. Think. Connect,” we must admit, but the idea is straightforward: to show you how to find out more about cybersecurity as a career, to encourage you to dip your toes in the water, and to make sure that existing cybersecurity researchers help newcomers to learn more.

Our experts speak for themselves

We’d love to see more people getting into cybersecurity, not least because the crooks are busy trying to lure newbies to the Dark Side, so having a bigger, better and bolder global crew of experts to protect us from cybercriminality is in everyone’s interest.

So, to help you get an idea of what cybersecurity researchers do, we decided to give you a bunch of articles from our “Day in the Life” series, where Sophos staff tell you in their own words how they got started, and what they’re aiming to do in future:

  • A day in the life of a Managed Threat Response Sales Engineer
  • On the side of the good guys: a day in the life of a Senior Development Manager
  • How curiosity builds better products: a day in the life of a Senior Hardware Engineer
  • Small details make a big impact: a day in the life of a Distinguished Engineer
  • The project of my life: a day in the life of a Principal Hardware Engineer
  • The importance of adaptability: a day in the life of a Distinguished Engineer

(If you are thinking of applying for a job at Sophos, visit our official careers page, and find out what it’s like to work for us.)

If you want to wet your whistle

Even if you aren’t thinking of a cybersecurity career, why not learn more about how cybersecurity people think – or, perhaps more importantly, the things they ought to be thinking about, given the sort of mistakes that programmers sometimes make, and the eagerness with which cybercriminals pounce on them?

Here on Naked Security, we publish a series of occasional articles called Serious Security, where we dig into all sorts of fascinating topics, all the way from randomness and cryptography to passwords and pi.

Pick from a list of the whole series, or try out some of our popular topics from the past few years:

  • Serious Security: Webshells explained in the aftermath of HAFNIUM attacks
  • Serious Security: What 2000 years of cryptography can teach us
  • Serious Security: What we can all learn from #PiDay
  • Serious Security: GPS week rollover and the other sort of zero-day

By the way, if there are any subjects you’d like us to cover in future Serious Security articles… please let us know in the comments below! (If you don’t put in a name, you’ll show up as Anonymous. You’re welcome.)


LANtenna hack spies on your data from across the room! (Sort of)

If you’re a Naked Security Podcast listener (and if you aren’t, please give it a try and subscribe if you like it!), you may remember a humorous remark about ‘sideband’ attacks and sneaky data exfiltration tricks that Sophos expert Chester Wisniewski made in a recent episode.

We were talking about how to stop cybercriminals from stealing cryptocurrency wallets, and I noted that the modest size of wallet files made them not only easier to identify but also quicker to sneak out of a network once they’d been located.

Chester’s quip at this point was:

As soon as you said that, my mind went to those researchers at Ben Gurion University who are always doing some sort of sideband attack… [for example,] they vary the frequency of the light bulbs to leak 11 bytes of data from the computer. I’m just waiting for them to leak a Bitcoin wallet by playing music through the speakers, or something like that!

Well, Chester’s wait might be over. (In theory, at least.)

BGU on the case

Mordechai Guri from the abovementioned Ben Gurion University of the Negev (BGU) in Israel has recently published a new ‘data exfiltration’ paper detailing an unexpectedly effective way of sneaking very small amounts of data out of a cabled network without using any obvious sort of interconnection.

This one is entitled LANTENNA: Exfiltrating Data from Air-Gapped Networks via Ethernet Cables, and it’s the latest of many BGU publications in recent years dealing with a tricky problem in cybersecurity, namely…

…how to split a network into two parts, running at different security levels, that can nevertheless co-operate and even exchange data when needed, but only in strictly controlled and well-monitored ways.

Cut the cords!

Physically disconnecting the two networks so that human intervention is needed to move data between them seems like an obvious solution, creating the proverbial “airgap” mentioned in the title of Guri’s paper.

Typically, this also means disallowing “free air” communications protocols such as Bluetooth and Wi-Fi, at least on the more secure side of the network, so that any interconnection points genuinely require some sort of physical interaction.

You might, however, allow (possibly limited) wireless technologies on the less secure side of the network, as long as no emanations from the insecure side can be received, whether by accident or design, on the secure side, and as long as there aren’t any detectable emanations at all from the secure side that could be picked up on the insecure side.

At one time, physical airgaps such as plugging a network cable into a special socket, or using a carefully vetted USB device in a specific USB port, were considered a good solution to this problem, although even USB-based airgaps can sometimes be breached, as anyone who has studied the infamous Stuxnet virus will know.

USB considered harmful

Stuxnet was programmed to damage a specific piece of industrial control equipment if ever it found itself running on a computer that was hooked up in the right way to the right sort of device.

For the longest time, no one could work out what the “right” (or wrong) sort of equipment was, because the virus didn’t identify the hardware by name but merely by some arbitrary characteristics that needed to match.

The puzzle was a bit like trying to find a single person on earth based only on a partial fingerprint and their approximate age.

Eventually, a device was tracked down that matched the “does it look like the one we want?” rule coded into Stuxnet, and it turned out to be a type of industrial centrifuge (used for separating tricky substances with nearly-but-not-quite-identical characteristics, such as different isotopes of uranium) known to be used in Iran.

You can probably extrapolate the rest of the Stuxnet saga for yourself if you aren’t familiar with it already.

Airgaps in a post-Stuxnet world

But what about data exfiltration across an airgap in a post-Stuxnet world, where the operators of airgapped networks have become much stricter about the “border controls” between the two sides of the network?

What covert channels could be used, even if they offered only the most modest data rates?

How could you detect and prevent the abuse of these channels if they were indeed exploitable by corrupt insiders (perhaps with the innocent help of unknowingly co-opted colleagues), if the tricks used were abstruse enough not to arouse suspicion in the first place?

BGU’s previous research has warned of low-bandwidth data leakage tricks that can be orchestrated using techniques as varied as:

  • Turning PC capacitors into ultrasonic “loudspeakers”, creating apparently innocent sound waves that a co-operating computer can detect but the human ear cannot hear.
  • Coding hidden messages via the miniature LED light in the Caps Lock key, or other programmable lights on the keyboard that humans don’t expect to be used to convey more than a single bit of somewhat unimportant data at a time.
  • Tweaking the speed of the CPU cooling fan, which in many computers can be done programmatically, or by deliberately increasing the processing load.
  • Using a steganographic trick involving the amount of red tint in the screen. Steganography is the “art” of hidden data: apparently, a digital camera can reliably detect a 3% change in screen “redness” that the human eye will simply filter out and ignore. (There’s a minimal sketch of the general idea right after this list.)
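
Here’s that sketch: a minimal Python example of least-significant-bit steganography, written by us for illustration rather than anything taken from the BGU research, which hides a short message in the lowest bit of each pixel’s red value using the Pillow imaging library (an assumption on our part: pip install pillow). It’s a much cruder cousin of the 3% red-tint trick, but the principle is the same: tiny colour changes that the human eye simply ignores can nevertheless carry data.

    # Minimal least-significant-bit (LSB) steganography sketch (illustration only).
    # Assumption: the Pillow imaging library is installed (pip install pillow).

    from PIL import Image

    def hide(msg: bytes, width: int = 200, height: int = 100) -> Image.Image:
        img = Image.new("RGB", (width, height), (200, 60, 60))   # a plain reddish image
        pixels = img.load()
        bits = [(byte >> i) & 1 for byte in msg for i in range(8)]
        for n, bit in enumerate(bits):
            x, y = n % width, n // width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | bit, g, b)                # tuck one bit into the red LSB
        return img

    def reveal(img: Image.Image, length: int) -> bytes:
        pixels, width, out = img.load(), img.width, bytearray()
        for byte_index in range(length):
            value = 0
            for i in range(8):
                n = byte_index * 8 + i
                x, y = n % width, n // width
                value |= (pixels[x, y][0] & 1) << i              # read the red LSB back out
            out.append(value)
        return bytes(out)

    secret = b"pi day"
    print(reveal(hide(secret), len(secret)))                     # prints b'pi day'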

STEGANOGRAPHY EXPLAINED

Original video here: https://www.youtube.com/watch?v=q2hD4v8_8-s


LANtenna in plain English

LANtenna is more of the same, this time abusing the staple of any so-called secure network: the LAN cables themselves.

With Wi-Fi off the menu, for the simple reason that it’s an electromagnetic broadcast medium using an invisible part of the radio spectrum, so you can’t see (or easily control) where it’s going, most secure networks rely on visible runs of traditional network cabling and switches.

In cabled networks, which mostly use so-called shielded twisted pair cables such as CAT5e, CAT6 and higher specifications, a suspicious connector can be physically traced to its source or destination (assuming it’s noticed, of course).

Making each conductor in the cable from a pair of wires twisted around each other along their length reduces electromagnetic leakage, and thus interference, a property first discovered and exploited in the earliest days of the telephone industry. Additional shielding around each conductor pair and around the entire cable, plus tighter twists using more wire, improve performance and reduce stray emissions even further.

Additionally, any device or segment of a cabled network can be quickly, reliably and visibly disconnected by unplugging either end of a cable.

But just how shielded are those twisted-pair cables?

More importantly, if their shielding isn’t perfect, just how big, expensive and obvious would the equipment need to be to detect the leakage?

In other words, if a collaborator on the secure side of the network could arrange for innocent-looking data with a hidden meaning to be sent on the network…

…how surreptitiously and uncontroversially could you (and you might be your own collaborator, of course) pick up the steganographically encoded data with an innocent-looking device on the insecure side?

If you’d need a two-metre long Uda-Yagi antenna to pick up the stray emissions, and specialised detection hardware in a case the size of one of Spinal Tap’s sound cabinets, you’d be unlikely to get away with it.

LANtenna in practice

Guri found that he was able to emit encoded data, via the LANtenna attack, using two different techniques:

  • Send innocent data while toggling the network speed of the sending LAN card. If the network is under light load, then flipping the LAN speed between (say) 100Mbit/sec and 1Gbit/sec probably won’t attract attention by causing network jobs to slow down noticeably. But the spectrum of electromagnetic emissions varies with the encoding speed of the LAN card, and these emission changes themselves can be used to encode data for detection by someone who knows what to look out for.
  • Send innocent data packets of a fixed format in timed bursts. Guri used predetermined UDP packets, which can be broadcast harmlessly if no other device on the network is listening out for them, because unwanted UDP packets are, by design, generally ignored. Again, the electromagnetic emissions from a nearby network cable varied in a detectable way depending on whether the known “throwaway” UDP packets were part of the overall network signal or not. This technique is a bit like listening out for a specific motorcycle with a distinctive exhaust note going past at specific times in the midst of the noise of an otherwise busy freeway. (There’s a simple sketch of this timed-burst signalling idea right after this list.)
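
Here’s that sketch: a minimal Python illustration of the timed-burst idea, written by us rather than taken from Guri’s paper. It clocks out one bit per second: a burst of fixed-format UDP broadcasts during a given second means 1, silence means 0. The broadcast address, port and payload are arbitrary assumptions of ours, and remember that in the real attack nothing on the network actually receives these packets; the “receiver” is a software radio listening to the cable’s stray emissions.

    # Sketch of the "timed burst of throwaway UDP packets" idea (illustrative only).
    # One bit per second: send a burst during that second for a 1, stay silent for a 0.
    # The broadcast address, port and payload are arbitrary choices, not values from the paper.

    import socket, time

    PORT = 54321                  # assumption: any otherwise-unused UDP port would do
    PAYLOAD = b"X" * 64           # fixed-format "innocent" packet

    def send_bits(bits: str):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        for bit in bits:
            start = time.time()
            if bit == "1":
                while time.time() - start < 0.8:              # burst for most of the second
                    sock.sendto(PAYLOAD, ("255.255.255.255", PORT))
                    time.sleep(0.01)
            time.sleep(max(0.0, 1.0 - (time.time() - start))) # pad each bit out to one second

    if __name__ == "__main__":
        send_bits("10110001")     # eight bits of "data" takes eight seconds on the wire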

Guri discovered that he could detect these stray emissions reliably from up to three metres away using commodity “software radio” hardware that is available in the form of cheap and modestly-sized USB dongles that are easy to conceal, or to disguise as more innocent-looking hardware devices.

The first technique was much more reliable and gave faster exfiltration rates, but generally requires root (sysadmin) access on the computer used to leak the data.

Speed toggling is also likely to get spotted and routinely logged by network monitoring hardware, not least because network cards that keep switching speed suggest hardware problems as well as being suspicious from a security perspective.

This trick is also unlikely to work in a virtual machine environment, because the guest operating system generally works with a virtual network card that simply pretends to switch its speed, while the physical interaction with the network itself is handled by the host computer, which combines all the virtual machine traffic and sends it at a constant speed.

So, the second method was easier to exploit, even in virtual computers…

…but the data rates that Guri was able to achieve were modest to say the very least.

We’re talking about just one bit per second (that’s about 450 bytes an hour, or about one movie per millennium) using the “innocent data packets” technique, with a reliable range of 2m using a PC, from which emissions were stronger, or just 1m using a laptop.
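
In case you’d like to check that back-of-the-envelope arithmetic for yourself:

    # Back-of-the-envelope arithmetic for a 1 bit/sec covert channel.
    bits_per_hour = 1 * 60 * 60                        # 3600 bits every hour
    bytes_per_hour = bits_per_hour / 8                 # 450 bytes every hour
    bytes_per_millennium = bytes_per_hour * 24 * 365 * 1000
    print(bytes_per_hour)                              # 450.0
    print(bytes_per_millennium / 1e9)                  # about 3.9 (GB), roughly one movie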

But that’s still enough to leak numerous typical symmetric cryptographic keys, or several cryptocurrency private keys, within a single working day, so Chester’s remark in S3 Ep46 of the podcast may have come true after all.

What to do?

Guri has several recommendations for countermeasures, of which the most obvious and easiest to implement are:

  • Treat the insecure side of the network more securely. Don’t allow anyone to bring wireless devices of any sort, including mobile phones, headsets, keyboards or unverified “hardware dongles” into the shared security area at all.
  • Improve your cable shielding. Upgrading older network cabling to newer specifications, even if the more expensive cabling is not strictly necessary, can help. CAT8 cables, for example, are rated up to 40Gbit/sec and are built to much higher shielding standards than CAT5e or CAT6.
  • Monitor network interfaces for unexpected and unwanted speed changes. A modern network card that switches speed frequently should be a cause for suspicion, even if only on reliability grounds. (See the monitoring sketch right after this list.)
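
Here’s that monitoring sketch: a minimal Python example for Linux, ours rather than anything from the paper, which assumes the kernel exposes the usual /sys/class/net/<interface>/speed pseudo-file for your network card (it does for most physical NICs). The interface name and polling interval are placeholders you’d adjust to suit.

    # Minimal Linux sketch: watch a network interface for link-speed changes.
    # Assumptions: /sys/class/net/<iface>/speed exists (true for most physical
    # NICs), and "eth0" is the interface you want to keep an eye on.

    import time

    IFACE = "eth0"          # placeholder: change to your interface name
    POLL_SECONDS = 1.0

    def link_speed(iface: str) -> int:
        try:
            with open(f"/sys/class/net/{iface}/speed") as f:
                return int(f.read().strip())     # speed in Mbit/sec, or -1 if unknown
        except OSError:
            return -1                            # e.g. link down or a virtual interface

    last = link_speed(IFACE)
    while True:
        time.sleep(POLL_SECONDS)
        now = link_speed(IFACE)
        if now != last:
            print(f"{time.ctime()}: {IFACE} link speed changed from {last} to {now} Mbit/sec")
            last = now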

Guri also suggests that you could consider emitting your own counter-surveillance jamming signals in the bandwidth ranges he monitored with his software radio dongles (typically 125MHz and above), and emitting randomised, background UDP traffic of your own to confuse anyone using the “innocent data packet” signalling technique.
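
A crude way to implement the second of those suggestions would be to have one machine on the network emit its own randomly timed, randomly sized UDP broadcasts, so that any deliberately timed bursts are buried in noise of your own making. Here’s a minimal sketch; the port range, packet sizes and delays are our own illustrative guesses, not values taken from the paper:

    # Crude "UDP chaff" sketch: emit randomly timed, randomly sized broadcast
    # packets to make deliberately timed bursts harder to pick out on the LAN.
    # Port range, sizes and delays are illustrative guesses, not values from the paper.

    import os, random, socket, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    while True:
        port = random.randint(40000, 60000)                   # random destination port
        payload = os.urandom(random.randint(32, 512))         # random-looking content
        sock.sendto(payload, ("255.255.255.255", port))
        time.sleep(random.uniform(0.05, 0.5))                 # random inter-packet gap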

These last two countermeasures are, of course, specific to the LANtenna attack as described in the paper, so a variation on Guri’s theme might bypass them.

Happy hunting!

(If you’re a secure area Blue Teamer, it’s a great excuse for budget to purchase some Software Defined Radio gear!)


S3 Ep54: Another 0-day, double Apache patch, and Fight The Phish [Podcast]

[04’04”] Apple (you guessed it!) fixes yet another iPhone 0-day.
[08’38”] Apache patches an embarrassing bug and then has to patch the patch.
[20’01”] It’s Fight The Phish week.
[28’42”] Oh! No! The computer that punched a user in the face.

With Paul Ducklin and Doug Aamoth. Intro and outro music by Edith Mudge.

LISTEN NOW

You can listen directly on Soundcloud.


WHERE TO FIND THE PODCAST ONLINE

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found.

Or just drop the URL of our RSS feed into your favourite podcatcher software.

If you have any questions that you’d like us to answer on the podcast, you can contact us at tips@sophos.com, or simply leave us a comment below.


Romance scams with a cryptocurrency twist – new research from SophosLabs

Sadly, we’ve needed to write and warn about romance scams and romance scammers many times in recent years.

Indeed, in February 2021 we published an article entitled Romance scams at all-time high: here’s what you need to know, following a report from the US Federal Trade Commission (FTC), America’s official consumer protection watchdog, warning that romance scammers are making more money than ever before.

Victims in the US were tricked out of more than $300 million in 2020, up from $200 million in 2019.

Conventional romance scams are what we often refer to as “long game” confidence tricks, where someone you meet online, typically on a dating site, manages to convince you: [a] that they’re a real person with the life history they claim; [b] that they’re in love with you; and, most importantly of all, [c] that you are in love with them.

After weeks, perhaps months, of careful groundwork, the illusory lover turns the talk towards money, and gradually convinces you to part with more and more of it, thanks to an ever-evolving series of ruses, abuses and excuses that practised cyberscammers can sometimes maintain for weeks, months or even years.

Putting money before love

Well, there’s another angle that dating-site scammers are taking these days, where the crooks quite deliberately put money before love.

They still use dating sites to select, stalk and groom their victims, but instead of investing weeks or months progressing from friendship, through love, romance and perhaps even fraudulent betrothal, to the “fleecing” phase…

…they strike up a friendship, using the dating game as a ruse, but then quickly move to money, this time in the guise of them doing you a big favour by offering you a chance to join an “unbeatable” investment opportunity.

As you can imagine, the “investment” that they propose typically involves cryptocoins, but to add a veneer of legitimacy, these cryptorom crooks, as we’ve dubbed them (crypto- from “cryptocurrency” and -rom from “romance scam”), invite you to install an “official” app in order to join the scheme.

All those dubious excuses needed by traditional romance scammers to talk you into using wire transfer services to send money, or into buying them gift cards and sending through the redemption codes, are replaced by a sense of structure: there’s a genuine app for this investment!

In fact, the cryptorom scammers will even offer you an app if you have an iPhone, where Apple’s “walled garden” approach of requiring all consumer app downloads to come from the Apple App Store almost certainly persuades many victims that the cryptorom app must indeed have some sort of official authorisation or approval.

The App Store, like Google’s Play Store equivalent for Android, is by no means immune to malware, fleeceware and other badware apps.

But totally bogus cryptocurrency trading apps, based on totally bogus trading platforms, rarely make it through. (Generally speaking, trading apps and platforms are supposed to comply with a whole bunch of regulations in addition to Apple’s own.)

So these crooks bypass the App Store entirely, using a series of tricks explained in a new SophosLabs research report entitled CryptoRom fake iOS cryptocurrency apps hit US, European victims for at least $1.4 million.

“Pretend that your phone really is our phone”

The technological basis for these scam apps is surprisingly simple: the crooks persuade you, for example on the basis of a friendship carefully cultivated via a dating site, to give them the same sort of administrative power over your iPhone that is usually reserved for companies managing corporate-owned devices.

Companies who enrol staff devices into Apple’s remote management system, by means of what’s known as an MDM (mobile device management) profile, do so in order to take an active role in the protection, monitoring and control of those devices.

Typically, they can remotely wipe them, unilaterally or on request, block access to company data, enforce specific security settings such as lock codes and lock timeouts…

…and (this is the feature the crooks are after!) they can install bespoke corporate apps intended for employees only.

This “loophole” allows companies to bypass the App Store for proprietary apps that aren’t supposed to be available for anyone to download.

So, the cryptorom crooks exploit this Enterprise Provisioning feature by tricking you into treating them as if they were your employer, and as if they had a reasonable need or right to exercise almost complete control over your device.

In one fraudulent app deployment process that SophosLabs investigated, the criminals even used the “Description” field in their fake app to claim that their off-market software was “authorised by Apple to be safe and reliable”:

1. Fake “Apple” 5-star reviews.
2. Fake “Apple” name on management certificate.
3. Fake “Apple” endorsement in bogus app.

Of course, the app isn’t a trading program at all.

There’s no trading platform behind it; your “investments” aren’t used to buy any sort of cryptocurrency, not even a volatile or little-known one; any “trades” and “profits” reported by the app are imaginary; if you are ever allowed to withdraw any of your “profits” in order to build up trust, the crooks will simply give you a tiny bit of your own money back; and when you want to cash out your “investment”…

…you realise that it’s all smoke and mirrors, what’s known in the jargon as a pyramid or Ponzi scheme.

What to do?

  • Take your time when “dating site” talk turns from friendship, love or romance to money. It’s Cybersecurity Awareness Month right now, and one of the catch phrases of #Cybermonth is: Stop. Think. Connect. Don’t be swayed by the fact that your new “friend” happens to have a lot in common with you. That needn’t be down to serendipity or because you have a genuine match. The other person could simply have read your various online profiles carefully in advance.
  • Never give administrative control over your phone to someone with no genuine reason to have it. Never click [Trust] on a dialog that asks you to enrol in remote management unless it’s from someone you already have an employment contract with, where the conditions have been clearly explained to you in advance, and you understand and accept the reasons for enrolling your phone.
  • Don’t be fooled by app descriptions that claim approval from Apple. Description text, unofficial reviews, and text shown by screens in the app itself are just that: text. Relying on what an app says about itself is like emailing someone you aren’t sure about and asking “Are you genuine?” If they are truthful, then the answer will be “Yes”. If they are lying, then the answer will be “Yes.”
  • Listen openly to your friends and family if they try to warn you. Criminals who use romance or dating as a lure think nothing of deliberately setting you against your family as part of their scams. They may even “counsel” you not to let your friends and family in on your “secret”, pitching their romantic interest or their investment proposal as something that conservative, hidebound people will simply never understand. Don’t let the scammers drive a wedge between you and your family as well as between you and your money.


Apple quietly patches yet another iPhone 0-day – check you have 15.0.2

It’s been a wild few weeks for Apple, or perhaps an “in-the-wild” few weeks, with several zero-day bugs necessitating emergency updates.

We were going to say “unexpected updates”, but all (or almost all) Apple security patches are, of course, unexpected by design.

Apple deliberately announces security fixes only after they’ve been published, so you couldn’t plan for them even if you wanted to.

Apple claims that this is for “customers’ protection”, because it prevents crooks who may have heard rumours about a security hole but haven’t figured it out for themselves from working out where to start looking for it.

On the other hand, it also means that you will hardly ever hear about official workarounds or threat mitigations from Apple, even if those workarounds might keep you safe during the gap between the zero-day hole appearing and the patch being created, tested and released.

Remember that zero-day vulnerabilities refer to bugs that cybercriminals know how to exploit before a patch is available, with the result that even a well-informed user or sysadmin would have had zero days to get officially ahead of the Bad Guys.

Kernel memory corruption

Apple’s clipped-as-ever prose [2021-10-11T23:55Z] says simply:

Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30883: an anonymous researcher

As we’ve mentioned before, memory corruption bugs that affect the kernel itself are usually much more serious than bugs that only affect regular apps.

Apps in iOS and iPadOS are insulated from each other to the point that even if you can crash an app and take it over, you usually can’t get access to anything other than the files and saved data that belong to the app.

Each app effectively runs as if it were a separate user, with its own account and access control settings, so apps can only interact with each other, or read each other’s files, in carefully regulated ways.

This contrasts with typical laptop and desktop apps, where your email software can typically read your documents, your document processing app can typically read your spreadsheets, your spreadsheets can peek at your accounting databases, and so on.

But the app separation on iPhones and iPads is set up and regulated by the kernel, making the kernel itself into a kind of “ueberapp” that is a trophy target for any jailbreaker, threat researcher or cybercriminal.

In other words, a remote code execution bug in the kernel could allow an attacker to trick an otherwise legitimate and harmless app into compromising the very core of the operating system.

When the kernel is exploited, the side-effects may blow away iOS’s inter-app protection entirely and allow a single rogue app to snoop on and take control over everything.

What to do?

  • Look up the bug bulletin on Apple’s security page HT212846. There’s very little to go on, sadly, but this page confirms that iOS 15.0.1 and iPadOS 15.0.1 need updating to 15.0.2.
  • Check for and if necessary install the update on your device. Go to Settings > General and choose Software Update.

In several previous emergency update situations where Apple has withheld its official email security bulletins, the reason has turned out to be that related updates were also needed for other operating systems in Apple’s menagerie, including macOS and older flavours of iOS.

As a result, Apple said nothing much about anything until all the updates were ready.

Does this mean, in this case, that iOS 14, iOS 12, macOS Big Sur and macOS Catalina are vulnerable too, and will be receiving patches in due course?

As usual, we can’t say, but we advise you to keep your eye on Apple’s core security page, numbered HT201222, in case there’s any additional news you need to keep up with over the next few days.

