Category Archives: News

Fashion brand SHEIN fined $1.9m for lying about data breach

Chinese company Zoetop, former owner of the wildly popular SHEIN and ROMWE “fast fashion” brands, has been fined $1,900,000 by the State of New York.

As Attorney General Letitia James put it in a statement last week:

SHEIN and ROMWE’s weak digital security measures made it easy for hackers to shoplift consumers’ personal data.

As if that weren’t bad enough, James went on to say:

[P]ersonal data was stolen and Zoetop tried to cover it up. Failing to protect consumers’ personal data and lying about it is not trendy. SHEIN and ROMWE must button up their cybersecurity measures to protect consumers from fraud and identity theft.

Frankly, we’re surprised that Zoetop (now SHEIN Distribution Corporation in the US) got off so lightly, considering the size, wealth and brand power of the company, its apparent lack of even basic precautions that could have prevented or reduced the danger posed by the breach, and its ongoing dishonesty in handling the breach after it became known.

Breach discovered by outsiders

According to the Office of the Attorney General of New York, Zoetop didn’t even discover the breach, which happened in June 2018, on its own.

Instead, Zoetop’s payment processor figured out that the company had been breached, following fraud reports from two sources: a credit card company and a bank.

The credit card company came across SHEIN customers’ card data for sale on an underground forum, suggesting that the data had been acquired in bulk from the company itself, or one of its IT partners.

And the bank identified SHEIN (pronounced “she in”, if you hadn’t worked that out already, not “shine”) as what’s known as a CPP in the payment histories of numerous customers who had been defrauded.

CPP is short for common point of purchase, and means exactly what it says: if 100 customers independently report fraud against their cards, and if the only common merchant to whom all 100 customers recently made payments is company X…

…then you have circumstantial evidence that X is a likely cause of the “fraud outbreak”, in the same sort of way that groundbreaking British epidemiologist John Snow traced an 1854 cholera outbreak in London back to a polluted water pump in Broad Street, Soho.
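In data terms, CPP analysis boils down to a set intersection over the recent merchant histories of the defrauded cardholders. Here’s a toy Python sketch (the card and merchant names are invented for illustration):

```python
from functools import reduce

# Toy common-point-of-purchase (CPP) analysis: each defrauded cardholder
# reports the merchants they recently paid, and we intersect those sets
# to find the merchant(s) that every victim has in common.
fraud_reports = {
    "card_a": {"MerchantX", "Grocer", "Cafe"},
    "card_b": {"MerchantX", "Bookshop"},
    "card_c": {"MerchantX", "Grocer", "Petrol"},
}

common = reduce(set.intersection, fraud_reports.values())
print(common)  # only MerchantX appears in every victim's history
```

Real-world CPP analysis has to deal with noise (victims who shopped at several shared merchants, or whose cards were compromised elsewhere), so it is statistical rather than an exact intersection, but the underlying idea is the same.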

Snow’s work helped to dismiss the idea that diseases simply “spread through foul air”, established “germ theory” as a medical reality, and revolutionised thinking on public health. He also showed how objective measurement and testing could help connect causes and effects, thus ensuring that future researchers didn’t waste time coming up with impossible explanations and seeking useless “solutions”.

Didn’t take precautions

Unsurprisingly, given that the company found out about the breach second-hand, the New York investigation castigated the business for not bothering with cybersecurity monitoring, noting that it “did not run regular external vulnerability scans or regularly monitor or review audit logs to identify security incidents.”

The investigation also reported that Zoetop:

  • Hashed user passwords in a way considered too easy to crack. Apparently, password hashing consisted of combining the user’s password with a two-digit random salt, followed by one iteration of MD5. Reports from password cracking enthusiasts suggest that a standalone 8-GPU cracking rig with 2016 hardware could churn through 200,000,000,000 MD5s a second back then (the salt typically doesn’t add any extra computation time). That’s equivalent to trying out nearly 20 quadrillion passwords a day using just one special-purpose computer. (Today’s MD5 cracking rates are apparently about five to ten times faster than that, using recent graphics cards.)
  • Logged data recklessly. For transactions where some kind of error occurred, Zoetop saved the entire transaction to a debug log, apparently including full credit card details (we’re assuming this included the security code as well as long number and expiry date). But even after it knew about the breach, the company didn’t try to find out where it might have stored this sort of rogue payment card data in its systems.
  • Couldn’t be bothered with an incident response plan. Not only did the company fail to have a cybersecurity response plan before the breach happened, it apparently didn’t bother to come up with one afterwards, with the investigation stating that it “failed to take timely action to protect many of the impacted customers.”
  • Suffered a spyware infection inside its payment processing system. As the investigation explained, “any exfiltration of payment card data would [thus] have happened by intercepting card data at the point of purchase.” As you can imagine, given the lack of an incident response plan, the company was not subsequently able to tell how well this data-stealing malware had worked, though the fact that customers’ card details appeared on the dark web suggests that the attackers were successful.
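The password-hashing scheme described in the first item above is easy to sketch, and the sketch makes the weakness obvious. Here’s a hypothetical Python reconstruction (the investigation doesn’t specify exactly how the salt and password were combined, so the salt-first layout below is our assumption). With only 100 possible salts and a single round of MD5, a dictionary attack barely notices the salt at all:

```python
import hashlib

# Hypothetical reconstruction of the weak scheme described above:
# a two-digit salt plus a single iteration of MD5.
def weak_hash(password, salt):
    return hashlib.md5((salt + password).encode()).hexdigest()

# Attacker's view: the salt space is so small (00..99) that a cracker
# simply tries every salt for every candidate password, multiplying
# the work by at most 100, which is negligible at MD5 speeds.
def crack(target_hash, wordlist):
    for word in wordlist:
        for salt in (f"{n:02d}" for n in range(100)):
            if weak_hash(word, salt) == target_hash:
                return word, salt
    return None

stored = weak_hash("letmein", "42")
print(crack(stored, ["password", "123456", "letmein"]))
```

By contrast, a modern password-hashing function such as bcrypt, scrypt or Argon2 uses a long random salt and a deliberately slow, tunable work factor, so that each individual guess costs the attacker meaningful compute time.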

Didn’t tell the truth

The company was also roundly criticised for its dishonesty in how it dealt with customers after it knew the extent of the attack.

For example, the company:

  • Stated that 6,420,000 users (those who had actually placed orders) were affected, although it knew that 39,000,000 user account records, including those ineptly-hashed passwords, were stolen.
  • Said it had contacted those 6.42 million users, when in fact only users in Canada, the US and Europe were informed.
  • Told customers that it had “no evidence that your credit card information was taken from our systems”, despite having been alerted to the breach by two sources who presented evidence strongly suggesting exactly that.

The company, it seems, also neglected to mention that it knew it had suffered a data-stealing malware infection and had been unable to produce evidence that the attack had yielded nothing.

It also failed to disclose that it sometimes knowingly saved full card details in debug logs (at least 27,295 times, in fact), but didn’t actually try to track those rogue log files down in its systems to see where they ended up or who might have had access to them.

To add injury to insult, the investigation further found that the company was not PCI DSS compliant (its rogue debug logs made sure of that), was ordered to submit to a PCI forensic investigation, but then refused to allow the investigators the access they needed to do their work.

As the court documents wryly note, “[n]evertheless, in the limited review it conducted, the [PCI-qualified forensic investigator] found several areas in which Zoetop’s systems were not compliant with PCI DSS.”

Perhaps worst of all, when the company discovered passwords from its ROMWE website for sale on the dark web in June 2020, and ultimately realised that this data was probably stolen back in the 2018 breach that it had already tried to cover up…

…its response, for several months, was to present affected users with a victim-blaming login prompt saying, “Your password has a low security level and may be at risk. Please change your login password”.

That message was subsequently changed to a diversionary statement saying, “Your password has not been updated in more than 365 days. For your protection, please update it now.”

Only in December 2020, after a second tranche of passwords-for-sale were found on the dark web, apparently bringing the ROMWE part of the breach to more than 7,000,000 accounts, did the company admit to its customers that they had been mixed up in what it blandly referred to as a “data security incident.”

What to do?

Unfortunately, the punishment in this case doesn’t seem to put much pressure on “who-cares-about-cybersecurity-when-you-can-just-pay-the-fine?” companies to do the right thing, whether before, during or after a cybersecurity incident.

Should penalties for this sort of behaviour be higher?

For as long as there are businesses out there that seem to treat fines simply as a cost-of-business that can be worked into the budget in advance, are financial penalties even the right way to go?

Or should companies that suffer breaches of this sort, then try to impede third-party investigators, and then to hide the full truth of what happened from their customers…

…simply be prevented from trading at all, for love or money?

Have your say in the comments below! (You may remain anonymous.)




Serious Security: Microsoft Office 365 attacked over feeble encryption

We’re not quite sure what to call it right now, so we referred to it in the headline by the hybrid name Microsoft Office 365.

(The name “Office” as the collective noun for Microsoft’s word processing, spreadsheet, presentation and collaboration apps is being killed off over the next month or two, to become simply “Microsoft 365”.)

We’re sure that people will keep on using the individual app names (Word, Excel, PowerPoint and friends) and the suite’s moniker Office for many years, though newcomers to the software will probably end up knowing it as 365, after dropping the ubiquitous Microsoft prefix.

As you may know, the Office standalone apps (the ones you actually install locally so you don’t have to go online to work on your stuff) include their own option to encrypt saved documents.

This is supposed to add an extra layer of security in case you later share any of those files, by accident or design, with someone who wasn’t supposed to receive them – something that’s surprisingly easy to do by mistake when sharing attachments via email.

Unless and until you also give the recipient the password they need to unlock the file, it’s just so much shredded cabbage to them.

Of course, if you include the password in the body of the email, you’ve gained nothing, but if you’re even slightly cautious about sharing the password via a different channel, you’ve bought yourself some extra safety and security against rogues, snoops and ne’er-do-wells getting easy access to confidential content.

OME under the spotlight

Or have you?

According to researchers at Finnish cybersecurity company WithSecure, your data could be enjoying much less protection than you might reasonably expect.

The feature that the testers used is what they refer to as Office 365 Message Encryption, or OME for short.

We haven’t reproduced their experiments here, for the simple reason that the core Office, sorry, 365 products don’t run natively on Linux, which we use for work. The web-based versions of the Office tools don’t have the same feature set as the full apps, so any results we might obtain are unlikely to align with how most business users of Office, ah, 365 have configured Word, Excel, Outlook and friends on their Windows laptops.

As the researchers describe it:

This feature is advertised to allow organisations to send and receive encrypted email messages between people inside and outside your organisation in a secure manner.

But they also point out that:

Unfortunately the OME messages are encrypted in insecure Electronic Codebook (ECB) mode of operation.

ECB explained

To explain.

Many encryption algorithms, notably the Advanced Encryption Standard or AES, which OME uses, are what’s known as block ciphers, which scramble largeish chunks of data at a time, rather than processing individual bits or bytes in sequence.

Generally speaking, this is supposed to help both efficiency and security, because the cipher has more input data to mix-mince-shred-and-liquidise at each turn of the cryptographic crank-handle that drives the algorithm, and each turn gets you further through the data you want to encrypt.

The core AES algorithm, for example, consumes 16 input plaintext bytes (128 bits) at a time, and scrambles that data under an encryption key to produce 16 encrypted ciphertext output bytes.

(Don’t confuse block size with key size – AES encryption keys can be 128 bits, 192 bits or 256 bits long, depending on how unlikely you want them to be to guess, but all three key sizes work on 128 bit blocks each time the algorithm is “cranked”.)

What this means is that if you pick an AES key (regardless of length) and then use the AES cipher directly on a chunk of data…

…then every time you get the same input chunk, you’ll get the same output chunk.

Like a truly massive codebook

That’s why this direct mode of operation is called ECB, short for electronic code book, because it’s sort of like having an enormous code book that could be used as a lookup table for encrypting and decrypting.

(A full “codebook” could never be constructed in real life, because you’d need to store a database consisting of 2^128 16-byte entries for each possible key.)

Unfortunately, especially in computer-formatted data, repetition of certain chunks of data is often inevitable, thanks to the file format used.

For example, files that routinely pad out data sections so they line up on 512-byte boundaries (a common sector size when writing to disk) or to 4096-byte boundaries (a common allocation unit size when reserving memory) will often produce files with long runs of zero bytes.

Likewise, text documents that contain lots of boilerplate, such as headers and footers on every page, or repeated mention of the full company name, will contain plentiful repeats.

Every time a repeated plaintext chunk just happens to line up on a 16-byte boundary in the AES-ECB encryption process, it will therefore emerge in the encrypted output as exactly the same ciphertext.

So, even if you can’t formally decrypt the ciphertext file, you may be able to make immediate, security-crushing inferences from it, thanks to the fact that patterns in the input (which you may know, or be able to infer, or to guess) are preserved in the output.

Here’s an example based on an article we published nearly nine years ago when we explained why Adobe’s now-notorious use of ECB-mode encryption to “hash” its users’ passwords was Not A Good Idea:

Left. Original RGBA image.
Right. Image data encrypted with AES-128-ECB.

Note how the pixels that are solid white in the input reliably produce a repetitive pattern in the output, and the blue parts remain somewhat regular, so that the structure of the original data is obvious.

In this example, each pixel in the original file takes up exactly 4 bytes, so each left-to-right 4-pixel run in the input data is 16 bytes long, which aligns exactly with each 16-byte AES encryption block, thus accentuating the “ECB effect”.


Matching ciphertext patterns

Even worse, if you have two documents that you know are encrypted with the same key, and you just happen to have the plaintext of one of them, then you can look through the ciphertext that you can’t decrypt, and try to match sections of it up with patterns in the ciphertext that you can decrypt.

Remember that you don’t need the key to “decrypt” the first document if you already have it in decrypted form – this is known, unsurprisingly, as a known-plaintext attack.

Even if there are only a few matches of apparently innocent text that isn’t itself secret data, the knowledge an adversary can extract this way can be a gold-mine for intellectual property spies, social engineers, forensic investigators, and more.

For example, even if you have no idea what the details of a document refer to, by matching known plaintext chunks across multiple files, you may be able to determine that an apparently random collection of documents:

  • Were all sent to the same recipient, if there’s a common salutation at the top of each one.
  • Refer to the same project, if there’s a unique identifying text string that keeps popping up.
  • Have the same security classification, if you are keen on focusing on the stuff that’s clearly meant to be “more secret” than the rest.
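Here’s a toy demonstration of that cross-document matching, again using a keyed per-block function as a stand-in for AES-ECB (the documents and the “cipher” are invented for illustration). Two files that begin with the same 16-byte salutation produce an identical first ciphertext block, which an attacker can spot without decrypting anything:

```python
import hmac, hashlib

KEY = b"THEPASSWORDIS123"

# Toy ECB stand-in: each 16-byte block scrambled independently and
# deterministically under the same key.
def toy_ecb(data, key):
    return [hmac.new(key, data[i:i+16], hashlib.md5).digest()
            for i in range(0, len(data), 16)]

# Both "documents" open with the same 16-byte salutation.
doc1 = b"Dear Dr Example," + b"Project BLUEBIRD status: on track."
doc2 = b"Dear Dr Example," + b"Invoice attached for your attention."

c1, c2 = toy_ecb(doc1, KEY), toy_ecb(doc2, KEY)
shared = set(c1) & set(c2)
print(len(shared))  # ciphertext blocks common to both documents
```

Without knowing the key, an attacker who sees both ciphertexts can tell that the two documents share a chunk of content, and if they ever learn the plaintext of one document, every matching block in the other is decrypted for free.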

What to do?

Don’t use ECB mode!

If you’re using a block cipher, pick a block cipher operating mode that:

  • Includes what’s known as an IV, or initialisation vector, chosen randomly and uniquely for each message.
  • Deliberately arranges the encryption process so that repeated inputs come out differently every time.
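Here’s a toy counter-mode-style sketch in Python of the IV idea. It uses SHA-256 as a keystream generator rather than real AES-CTR or AES-GCM, and it omits the MAC that GCM would add, but it shows why a fresh random IV means that encrypting the same message twice with the same key produces different ciphertexts:

```python
import hashlib, os

# Toy keystream: hash of key || IV || block counter, concatenated until
# we have enough bytes. (Real CTR mode encrypts the counter with AES.)
def keystream(key, iv, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, message):
    iv = os.urandom(12)  # fresh random IV for every message
    ks = keystream(key, iv, len(message))
    return iv + bytes(a ^ b for a, b in zip(message, ks))

def decrypt(key, blob):
    iv, ct = blob[:12], blob[12:]
    ks = keystream(key, iv, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = b"THEPASSWORDIS123"
c1 = encrypt(key, b"same message")
c2 = encrypt(key, b"same message")
print(c1 != c2)  # same plaintext, same key, different ciphertexts
print(decrypt(key, c1) == b"same message")
```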

If you’re using AES, the mode you probably want to choose these days is AES-GCM (Galois Counter Mode), which not only uses an IV to create a different encryption data stream every time, even if the key remains the same, but also calculates what’s known as a Message Authentication Code (MAC), or cryptographic checksum, at the same time as scrambling or unscrambling the data.

AES-GCM means not only that you avoid repeated ciphertext patterns, but also that you always end up with a “checksum” that will tell you if the data you just decrypted was tampered with along the way.

Remember that a crook who doesn’t know what the ciphertext actually means might nevertheless be able to trick you into trusting an inexact decryption without ever knowing (or caring) what sort of incorrect output you end up with.

A MAC that is calculated during the decryption process, based on the same key and IV, will help ensure that you know that the ciphertext you received is valid, and therefore that you have almost certainly decrypted what was originally put in at the other end.

Alternatively, use a dedicated stream cipher that produces a pseudo-random byte-by-byte keystream that allows you to encrypt data without having to process 16 bytes (or whatever the block size might be) at a time.

AES-GCM essentially converts AES into a stream cipher and adds authentication in the form of a MAC, but if you’re looking for a dedicated stream cipher designed specifically to work that way, we suggest Daniel Bernstein’s ChaCha20-Poly1305 (the Poly1305 part is the MAC), as detailed in RFC 8439.

Below, we’ve shown what we got using AES-128-GCM and ChaCha20-Poly1305 (we discarded the MAC codes here), along with an “image” consisting of 95,040 RGBA bytes (330×72 at 4 bytes per pixel) from the Linux kernel pseudo-random generator.

Remember that just because data looks unstructured doesn’t mean that it is truly random, but if it doesn’t look random, yet it claims to be encrypted, you might as well assume that there’s some structure left behind, and that the encryption is suspect:

What happens next?

According to WithSecure, Microsoft doesn’t plan to fix this “vulnerability”, apparently for reasons of backward compatibility with Office 2010…

Legacy versions of Office (2010) require AES 128 ECB, and Office docs are still protected in this manner by Office apps.

…and…

The [WithSecure researchers’] report was not considered meeting the bar for security servicing, nor is it considered a breach. No code change was made and so no CVE was issued for this report.

In short, if you’re currently relying on OME, you may want to consider replacing it with a third-party encryption tool for sensitive messages that encrypts your data independently of the apps that created those messages, and thus works independently of the internal encryption code in the Office range.

That way, you can choose a modern cipher and a modern mode of cipher operation, without having to drop back to the old-school decryption code built into Office 2010.


HOW WE MADE THE IMAGES IN THE ARTICLE

Start with sop330.png, which you can create for yourself by cropping the cleaned-up SOPHOS logo from the topmost image, removing the 2-pixel blue boundary, and saving in PNG format. The image should end up at 330x72 pixels in size.

Convert to RGBA using ImageMagick:

$ convert sop330.png sop.rgba

Output is 330x72 pixels x 4 bytes/pixel = 95,040 bytes.

===

Encrypt using Lua and the LuaOSSL library (Python has a very similar OpenSSL binding):

-- load data
> fdat = misc.filetostr('sop.rgba')
> fdat:len()
95040

-- create cipher objects
> aes = openssl.cipher.new('AES-128-ECB')
> gcm = openssl.cipher.new('AES-128-GCM')
> cha = openssl.cipher.new('ChaCha20-Poly1305')

-- initialise passwords and IVs
-- AES-128-ECB needs a 128-bit password, but no IV
-- AES-128-GCM needs a 128-bit password and a 12-byte IV
-- ChaCha20 needs a 256-bit password and a 12-byte IV
> aes:encrypt('THEPASSWORDIS123')
> gcm:encrypt('THEPASSWORDIS123','andkrokeutiv')
> cha:encrypt('THEPASSWORDIS123THEPASSWORDIS123','qlxmtosh476g')

-- encrypt the file data with the three ciphers
> aesout = aes:final(fdat)
> gcmout = gcm:final(fdat)
> chaout = cha:final(fdat)

-- a stream cipher produces output byte-by-byte,
-- so ciphertext should be same length as plaintext
> gcmout:len()
95040
> chaout:len()
95040

-- we won't be using the MAC codes from GCM and Poly1305 here,
-- but each cipher produces a 128-bit (16-byte) "checksum"
-- used to authenticate the decryption after it's finished,
-- to detect if the input ciphertext gets corrupted or hacked
-- (the MAC depends on the key, so an attacker can't forge it)
> base.hex(gcm:getTag(16))
a70f204605cd5bd18c9e4da36cbc9e74
> base.hex(cha:getTag(16))
a55b97d5e9f3cb9a3be2fa4f040b56ef

-- create a 95040-byte "image" straight from /dev/random
> rndout = misc.filetostr('/dev/random',#fdat)

-- save them all - note that we explicitly truncate the AES-ECB
-- block cipher output to the exact image length required, because
-- ECB needs padding to match the input size with the block size
> misc.strtofile(aesout:sub(1,#fdat),'aes.rgba')
> misc.strtofile(gcmout,'gcm.rgba')
> misc.strtofile(chaout,'cha.rgba')
> misc.strtofile(rndout,'rnd.rgba')

===

To load the files in a regular image viewer, you may need to convert them losslessly back into PNG format:

$ convert -depth 8 -size 330x72 aes.rgba aes.png
$ convert -depth 8 -size 330x72 gcm.rgba gcm.png
$ convert -depth 8 -size 330x72 cha.rgba cha.png
$ convert -depth 8 -size 330x72 rnd.rgba rnd.png

===

Given that the encryption process scrambles all four bytes in each RGBA pixel, the resulting image has variable transparency (A = alpha, short for transparency). Your image viewer may decide to display this sort of image with a checkerboard background, which confusingly looks like part of the image, but isn't. We therefore used the Sophos blue from the original image as a background for the encrypted files to make them easier to view. The overall blue hue is therefore not part of the image data. You can use any solid colour you like.

S3 Ep104: Should hospital ransomware attackers be locked up for life? [Audio + Text]

THREE DEEP QUESTIONS

Should hospital ransomware attackers get life in prison? Who was the Countess of Computer Science, and just how close did we come to digital music in the 19th century? And could a weirdly wacky email brick your iPhone?

With Doug Aamoth and Paul Ducklin.

Intro and outro music by Edith Mudge.

Click-and-drag on the soundwaves below to skip to any point. You can also listen directly on Soundcloud.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify, Stitcher and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


READ THE TRANSCRIPT

DOUG.  Legal troubles abound, a mysterious iPhone update, and Ada Lovelace.

All that and more on the Naked Security Podcast.

[MUSICAL MODEM]

Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today, Sir?


DUCK.  I’m very well, Doug…

…except for some microphone problems, because I’ve been on the road a little bit.

So if the sound quality isn’t perfect this week, it’s because I’ve had to use alternative recording equipment.


DOUG.  Well, that leads us expertly into our Tech History segment about imperfection.


DUCK.  [IRONIC] Ohhhhh, thanks, Doug. [LAUGHS]


DOUG.  On 11 October 1958, NASA launched its first space probe, Pioneer 1.

It was meant to orbit the moon, but failed to reach lunar orbit thanks to a guidance error, fell back to Earth, and burned up upon re-entry.

Though it still collected valuable data during its 43-hour flight.


DUCK.  Yes, I believe it got to 113,000km above the Earth… and the Moon is just shy of 400,000 kilometres away.

My understanding is it went off target a bit and then they tried to correct, but they didn’t have the granularity of control that they do these days, where you run the rocket motor for a little tiny burst.

So they corrected, but they could only correct so much… and in the end they figured, “We’re not going to make it to the moon, but maybe we can get it into a high Earth orbit so it’ll keep going around the Earth and we can keep getting scientific measurements?”

But in the end it was a question of, “What goes up… [LAUGHS] must come down.”


DOUG.  Exactly. [LAUGHS]


DUCK.  And, as you say, it was like shooting a very, very, very powerful bullet way into outer space, well above the Kármán line, which is only 100km, but in such a direction that it didn’t actually escape the influence of the Earth altogether.


DOUG.  Pretty good for a first try, though?

I mean, not bad… that’s 1958, what do you expect?

I mean, they did their best, and got a third of the way to the moon.

Well, speaking of people not doing their best and crashing, we’ve got a kind of a lightning round of legal stories here…

…starting with our friend Sebastien Vachon-Desjardins, who we’ve spoken about before.

He is in hot water in Florida and perhaps beyond:


DUCK.  Yes, we’ve spoken about him on the podcast, I think, a couple of times.

He was a notoriously busy affiliate of the NetWalker ransomware-as-a-service crew.

In other words, he didn’t write the ransomware… he was one of the attackers, breakers-in and deployers of it.

As far as I know, he was quite keen on ransomware: he joined several of these gangs, as it were; signed up to several clubs.

Apparently, he may have made as much as one-third of the overall NetWalker gang’s earnings, so he was very vigorous.

So we’re talking about many millions of dollars that he made for himself, and of course, 30% of that was going to the core people.

He was arrested in Canada, he was sent to prison…

…and then he was specially released from prison in Canada.

Not because they felt sorry for him: they released him from prison so he could be extradited to the US, where he decided to plead guilty, and got 20 years.

Apparently when he finishes those 20 years in federal prison, he will be deported to Canada and he will go straight back in to finish his seven years in Canada.

And if I remember correctly, the judge in that case, noting that this is a ransomware gang that is, amongst other things, notorious for attacking health care institutions, hospitals; people who really, really can’t afford to pay, and where the disruption really, really directly affects people’s lives…

…the judge apparently said words to the effect of, “If you hadn’t actually decided to plead guilty, put your hand up for the offence, I would have sentenced you to life in prison.”


DOUG.  Yes, that’s wild!

OK, also kind of low: the former Uber CSO Joe Sullivan… this story is also wild!

They’re answering to a breach that happened with the regulators, and while they’re answering to the breach that happened, *another* breach happens and there’s coverups:


DUCK.  Yes, that was a vigorously watched story by much of the cybersecurity community…

Because Uber have paid all sorts of penalties, and apparently they agreed to co-operate, but this wasn’t the company being charged.

This was the individual who was supposedly in charge of security – he had previously been at Facebook, and then was enticed to Uber.

As far as the jury was concerned, it wasn’t so much that the crooks got paid in this case, it’s that they got paid to pretend that the data breach was a bug bounty; that they disclosed it responsibly rather than actually stole the data and then extorted it.

And, of course, the second part of this is, I believe… I’m not sure how you say this word, because you don’t hear it in the UK, but it’s “misprision”… I think that’s how you say it.

It basically means “covering up a crime”.

And, of course, that deals with the fact that, as you say, they’re in the middle of an investigation, they’re being reviewed by the FTC… you’re about to convince them. “Yes, we have put in a whole load of precautions since last time.”

And in the middle of trying to plead your case and go, “No, no, we’re much better than we were”…

…oh, dear, you lose not just some records, what was it?

More than 50 million records relating to people who’d taken Ubers, customers.

Seven million drivers, and that included driving licence numbers for 600,000 drivers and SSNs (social security numbers) for 60,000.

So that’s pretty serious!

And then just trying to go, “Well, let’s [COUGHS MEANINGFULLY] make it so that we don’t have to tell anybody, and then let’s go and get the crooks to sign non-disclosure agreements.” [LAUGHS]

DOUG.  [LAUGHS] Oh, god!


DUCK.  [LAUGHING] Not funny, Doug!


DOUG.  Very good.

And a little more cut and dried…

If you create an app that purports to be connected with WhatsApp, and you collect user credentials, WhatsApp’s going to come after you!


DUCK.  Yes, this is a case of WhatsApp and Meta.

Sounds a bit weird to say both of them, but I guess both legal entities (WhatsApp is owned by Meta) have decided, “Well, if you can’t beat them, sue them!”

So this is credential theft, so that accounts can be used basically to send fake messages.

Spam, basically, but probably also loads of scams, right?

If you’ve got my password, you can contact all my buddies and say, “Hey, I made loads of money out of this cryptocoin scam,” and because it’s *me* saying it rather than some random individual off the internet, you might be more inclined to believe it.

So WhatsApp figured, “Right, we’re just going to sue you, and try and shut down your companies that way. And that would basically give us a vehicle to force all these apps to be removed, wherever they might appear.”

Unfortunately, the crooks had done enough treachery to sneak them into Google Play.

So the accusation is that they “misled more than 1 million WhatsApp users into self-compromising their accounts as part of an account takeover attack.”

And by self-compromise, it means they just presented users with a fake login page and basically proxied their credentials.

Presumably they kept them and abused them afterwards…


DOUG.  OK, we will keep an eye on that.

Now, please tell us, what does a Countess who lived in the first half of the 19th century have to do with computing and computer science?


DUCK.  That would be Ada Lovelace.

Or, more formally, Ada, Countess of Lovelace… she married a chap who was called Lord Lovelace, so she became Lady Lovelace:

She was of aristocratic stock, and in those days, women generally didn’t go into science.

But she did: she was keen on mathematics.

And she met up, as a youngster, as a teenager, I think, with Charles Babbage, who’s famous for having invented the Difference Engine, which could calculate things like trig tables.

So therefore the UK government was interested because where you can do trigonometry, you can do artillery tables, and that means you can make your gunners more accurate on land and sea.

But then Babbage figured, “That’s just a pocket calculator (in modern terminology). Why don’t I build a general-purpose computer?”

And he designed a thing called the Analytical Engine.

And that was what Ada Lovelace was really interested in.

In fact, I believe she offered to be Babbage’s VC at one point, his venture capitalist: “I’ll bring in the money, but you have to leave the running of the business part of it to me. Let me build the business for you!”


DOUG.  It’s truly amazing.

To anyone that’s listening to this…

…as you’re listening to this story, I want you to keep in mind that she died at 36.

She’s doing this all in her 20s and early 30s.

Amazing things!


DUCK.  She died of uterine cancer, so she was really in pain and unable to work in the end.

And she didn’t just want to be the business person behind it, “Hey, let me build a business.”

Babbage, I think, had a little bit of bitterness towards the establishment for not coming in; he wanted to do it in a more traditional, “No, I want to prove I’m right” kind of way, rather than going, “Yes, just go and find me the money,” which might be the approach today.

So the business side that she proposed never came off.

But she was also essentially the world’s first computer programmer… certainly she was the first published computer programmer.

You can imagine Babbage tinkering with his Analytical Engine… he probably came up with some programs before she did, but he never realised them.

And certainly he never published, like she did, a treatise on why this Analytical Engine was important, and the fact that it could actually do much more than just numeric calculations.

She had this vision that calculators added numbers together, but if you could do numeric calculations and on the basis of those make decisions (what we might now call IF…THEN…ELSE), then you could actually represent and work with all sorts of other stuff, such as logical propositions, devising proofs, or even working with music, if you had some mathematical or numerical way of representing music.

Now, I don’t know whether digital music will ever take off, Doug, but if it ever does…


DOUG.  [LAUGHS] We have Ada Lovelace to thank!


DUCK.  She was there in 1840, thinking and writing about this!

She was, believe it or not, the daughter of the famous (or infamous) poet Lord Byron.

Apparently her mother and father parted ways, so I don’t believe she ever met him – she was sort of the “unknown daughter” to him.

Now, Byron famously was on vacation in Switzerland once, where rain kept him and the friends that he was vacationing with indoors.

And those friends were Percy and Mary Shelley.

And Byron said, “Hey, let’s have a horror story writing competition!” [LAUGHTER]

And what he did, and what Percy Shelley did, came to nothing; no one remembers what they wrote.

But Mary Shelley… that is apparently where she came up with Frankenstein…


DOUG.  Wow!


DUCK.  … or The Modern Prometheus, which is essentially all about artificial intelligence and human-created thought machines, if you like, and how it ends badly.

And Ada, Byron’s daughter, was actually the first person to write in a scientific way about “Can machines think?” in the notes that she wrote on the Analytical Engine.

She did *not* share the same horror story concerns that her father’s chums had.

The way she wrote it (scientists generally had a more literary bent in those days):

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths.

So she saw computing devices, general-purpose computing devices, as a way of helping us understand and work out things that would be impossible for regular human minds to do.

But I don’t think she thought that they could be a replacement for human minds.


DOUG.  And again, keep in mind she’s writing this in 1842…


DUCK.  Exactly!

It’s one thing to hack in real life; it’s another to hack on imaginary computers that you know *could* exist, but nobody has built one yet.


DOUG.  [LAUGHS] Exactly.


DUCK.  The problem was, because these computers were mechanical and required mechanical gears, they required absolute perfection in manufacturing.

Or there would just be this cumulative error that would make them lock up due to backlash, the fact that the gears don’t mesh perfectly.

And I think, as we’ve said in the podcast before, ironically, it took the design of digital computers, that are essentially extensions of the Analytical Engine, that can control computerised metal cutting machines with sufficient precision…

…before we could make a Difference Engine or an Analytical Engine that actually worked.

And if that isn’t a fascinatingly circular story, I don’t know what is!

So Ada Lovelace was in the middle of this: proselytiser; evangelist; scientist; mathematician; computer scientist; and as a budding venture capitalist, saying to Babbage, “Let go of all your business interests; hand them over to me. I move in the right circles to find you the money – I’ll get the investment! Let’s see what we can do with this!”

And, for better or for worse, Babbage baulked at that and apparently died essentially in poverty, rather a broken man.

One wonders what might have happened had he done it…


DOUG.  It’s a fascinating story.

I urge you to head to Naked Security to read it.

It’s called Move over Patch Tuesday – it’s Ada Lovelace Day!

Great long read, very interesting!

And now let’s wrap up with this mysterious iPhone update, which is a so-called “one-bug fix”.

These are not common:


DUCK.  No, mostly when you get your Apple updates (because you don’t know when they’re coming – there isn’t a Patch Tuesday where you can predict), they just arrive…

…there’s this giant list of stuff that they’ve fixed since the last one they did.

And occasionally there’s a zero-day, massive emergency, and you get an Apple update that says, “Oh, well, we’re fixing one or maybe two things.”

And this one just suddenly arrived, for iOS 16 only.

I was about to go to bed, Doug… it was quite late, and I thought, I’ll just have a look at my email, see if Doug sent me anything. [LAUGHTER]

And there was this thing from Apple: iOS 16.0.3.

And I thought, “That’s sudden! I wonder what’s gone wrong? Must be a zero day.”

So I went into the security bulletin… it’s not a zero day; it’s only a denial-of-service (DoS) attack; not an actual remote code execution.

The Mail app can be made to crash.

And yet Apple suddenly pushed out this update and it just says:

Impact: Processing a maliciously crafted mail message may lead to a denial of service. An input validation issue was addressed with improved input validation.

Strange double use of the word validation there…

CVE-2022-22658.

And that’s all we know.

And it doesn’t say, “Oh, it was reported by such-and-such a bug hunting group”, or, “Thanks to an anonymous researcher”, so I presume they found it themselves.

And I can only guess that they felt they needed to fix this really quickly because it could accidentally lock you out of your phone, or make it almost unusable.

Because that’s the problem with denial-of-service bugs when they’re in messaging apps, isn’t it?

You think of denial of service… the app crashes; woo hoo, you just start it again.

But the problem with a messaging app is that: [A] it tends to run in the background, so it can receive a message at any time; [B] you don’t get to choose who sends you messages, other people do; and [C] it may be that in order to get into the app to delete the rogue message, you have to wait for the app to load, and it decides, “Oh, I need to show you this message that you want to del…”, CRASH!

What I call a CRASH: GOTO CRASH error.

In other words, maybe you can’t fix it, because while you’re booting your phone, or if you restart your phone, by the time you get to the point that you could jump in and hit delete on the message…

…the app has already crashed again; too late!

We know that there have been so-called “text of death” problems in iOS before.

We’ve got a list of them in the Naked Security article – they’ve made quite fascinating stories.

So we don’t know whether it was an image, the way that glyphs (character images) get formed, character combinations, text direction… we don’t know.

It’s certainly worth getting the patch, because my gut feeling is if Apple thinks it’s important enough to put it in the security bulletin, which has that one-and-only-one fix, when it’s not a zero day, and it’s not remote code execution, and it’s not elevation of privilege…

…then they’re probably worried what would happen if anyone else found out about it!

So maybe you should be too.

It’s also, Doug, a fantastic reminder that although people tend to prioritise vulnerabilities from remote code execution at the top; then elevation of privilege then information leakage…

…denial of service is, “OK, the server can crash, but I can always start it up again.”

That can nevertheless be a really troublesome sort of problem.

Although it might not steal your data or ransomware your files, it could nevertheless prevent you using your computer, getting at your data, and doing real work.


DOUG.  Yes, we have the issue here that you need to update, but if you are experiencing this problem, you might not be able to get to the update if your phone keeps crashing!

So that leads us into our reader question for the week.

Here on the post that we’re talking about, Naked Security reader Peter asks:

Not an Apple user here, but isn’t there an option for Apple users to log into their email accounts in a browser which hopefully doesn’t crash like the app and delete the mail there instead of wiping your device?


DUCK.  Well, that’s certainly true for me.

The way I use my iPhone, I can read the same mail on my phone as in the web app in my browser.

So it’s a good starting point, if you’re locked out of your phone, and if you happen to have a laptop handy.

The problem is that when you’ve deleted mails, say, in your web browser, or via the native app on your laptop…

…your phone Mail app still has to sync with the server to know that it’s got to delete those messages.

And if, on the way there, it processes the message that it’s now about to delete, it could still get into the crashtastic situation, couldn’t it?

So the problem with that comment is the only real answer I can give is: “Not enough info. Can’t say for sure. But I jolly well hope you can do that!”


DOUG.  Give it a try, at least.


DUCK.  Yes, give it a try!

If you really get locked out, so that your phone crashes as soon as it starts, you’d like to think you could do what Apple call a DFU (Device Firmware Update), where you basically start afresh.

But the problem is to enable that (to stop it being used for evil), it essentially involves a wipe-and-start-over.

So you would lose all the data on the phone, assuming it would work.

So I guess the answer to that question is…

Try the least intrusive way of solving it that you can first.

Try “beating the app” on the phone, the messaging app.

This is what worked for some of the previous iOS things.

You basically reboot your phone; [SPEEDING UP] you type in your lock code really quickly; [SPEAKING REALLY FAST] you get into the app as fast as you can, and you click delete…

…before the phone gets there and starts the process that eventually runs out of memory.

So you might have enough time to do it on the phone itself.

If not, try doing it via an external app that manages the same set of data.

And if utterly stuck, then I suppose a flash-and-reinstall is your only solution.


DOUG.  All right, thank you, Peter, for sending that in.

If you have an interesting story, comment, or question you’d like to submit, we’d love to read it on the podcast.

You can email tips@sophos.com; you can comment on any one of our articles; or you can hit us up on social: @nakedsecurity.

That’s our show for today.

Thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you until next time to…


BOTH.  Stay secure.

[MUSICAL MODEM]


Patch Tuesday in brief – one 0-day fixed, but no patches for Exchange!

Two weeks ago we reported on two zero-days in Microsoft Exchange that had been reported to Microsoft three weeks before that by a Vietnamese company that claimed to have stumbled across the bugs on an incident response engagement on a customer’s network. (You may need to read that twice.)

As you probably recall, the bugs are reminiscent of last year’s ProxyLogon/ProxyShell security problems in Exchange, although this time an authenticated connection is required, meaning that an attacker needs at least one user’s email password in advance.

This led to the amusing-but-needlessly-confusing name ProxyNotShell, though we refer to it in our own notes as E00F, short for Exchange double zero-day flaw, because that’s harder to misread.

You’ll probably also remember the important detail that the first vulnerability in the E00F attack chain can be exploited after you’ve done the password part of logging on, but before you’ve done any 2FA authentication that’s needed to complete the logon process.

That makes it into what Sophos expert Chester Wisniewski dubbed a “mid-auth” hole, rather than a true post-authentication bug:

One week ago, when we did a quick recap of Microsoft’s response to E00F, which has seen the company’s official mitigation advice being modified several times, we speculated in the Naked Security podcast as follows:

I did take a look at Microsoft’s Guideline document this very morning [2022-10-05], but I did not see any information about a patch or when one will be available.

Next Tuesday [2022-10-11] is Patch Tuesday, so maybe we’re going to be made to wait until then?

One day ago [2022-10-11] was the latest Patch Tuesday…

…and the biggest news is almost certainly that we were wrong: we’re going to have to wait yet longer.

Everything except Exchange

This month’s Microsoft patches (variously reported as numbering 83 or 84, depending on how you count and who’s counting) cover 52 different parts of the Microsoft ecosystem (what the company describes as “products, features and roles”), including several we’d never even heard of before.

It’s a dizzying list, which we’ve repeated here in full:

Active Directory Domain Services
Azure
Azure Arc
Client Server Run-time Subsystem (CSRSS)
Microsoft Edge (Chromium-based)
Microsoft Graphics Component
Microsoft Office
Microsoft Office SharePoint
Microsoft Office Word
Microsoft WDAC OLE DB provider for SQL
NuGet Client
Remote Access Service Point-to-Point Tunneling Protocol
Role: Windows Hyper-V
Service Fabric
Visual Studio Code
Windows Active Directory Certificate Services
Windows ALPC
Windows CD-ROM Driver
Windows COM+ Event System Service
Windows Connected User Experiences and Telemetry
Windows CryptoAPI
Windows Defender
Windows DHCP Client
Windows Distributed File System (DFS)
Windows DWM Core Library
Windows Event Logging Service
Windows Group Policy
Windows Group Policy Preference Client
Windows Internet Key Exchange (IKE) Protocol
Windows Kernel
Windows Local Security Authority (LSA)
Windows Local Security Authority Subsystem Service (LSASS)
Windows Local Session Manager (LSM)
Windows NTFS
Windows NTLM
Windows ODBC Driver
Windows Perception Simulation Service
Windows Point-to-Point Tunneling Protocol
Windows Portable Device Enumerator Service
Windows Print Spooler Components
Windows Resilient File System (ReFS)
Windows Secure Channel
Windows Security Support Provider Interface
Windows Server Remotely Accessible Registry Keys
Windows Server Service
Windows Storage
Windows TCP/IP
Windows USB Serial Driver
Windows Web Account Manager
Windows Win32K
Windows WLAN Service
Windows Workstation Service

As you can see, the word “Exchange” appears just once, in the context of IKE, the internet key exchange protocol.

So, there’s still no fix for the E00F bugs, a week after we followed up on our article from a week before that about an initial report three weeks before that.

In other words, if you still have your own on-premises Exchange server, even if you’re only running it as part of an active migration to Exchange Online, this month’s Patch Tuesday hasn’t brought you any Exchange relief. Make sure you are up to date with Microsoft’s latest product mitigations, and that you know what detection and threat classification strings your cybersecurity vendor is using to warn you of potential ProxyNotShell/E00F attackers probing your network.

What did get fixed?

For a detailed review of what got fixed this month, head over to our sister site, Sophos News, for an “insider” vulns-and-exploits report from SophosLabs:

The highlights (or lowlights, depending on your viewpoint) include:

  • A publicly disclosed flaw in Office that could lead to data leakage. We’re not aware of actual attacks using this bug, but information about how to abuse it was apparently known to potential attackers before the patch appeared. (CVE-2022-41043)
  • A publicly exploited elevation-of-privilege flaw in the COM+ Event System Service. A security hole that is publicly known and that has already been exploited in real-life attacks is a zero-day, because there were zero days that you could have applied the patch before the cyberunderworld knew how to abuse it. (CVE-2022-41033)
  • A security flaw in how TLS security certificates get processed. This bug was apparently reported by the government cybersecurity services of the UK and the US (GCHQ and NSA respectively), and could allow attackers to misrepresent themselves as the owner of someone else’s code-signing or website certificate. (CVE-2022-34689)

This month’s updates apply to pretty much every version of Windows out there, from Windows 7 32-bit all the way to Server 2022; the updates cover Intel and ARM flavours of Windows; and they include at least some fixes for what are known as Server Core installs.

(Server Core is a stripped-down Windows system that leaves you with a very basic, command-line-only server with a greatly reduced attack surface, leaving out the sort of components you simply don’t need if all you want is, for example, a DNS and DHCP server.)

What to do?

As we explain in our detailed analysis on Sophos News, you can either head into Settings > Windows Update and find out what’s waiting for you, or you can visit Microsoft’s online Update Guide and fetch individual update packages from the Update Catalog.

Update under way on Windows 11 22H2.

You know what we’ll say/
   ‘Cause it’s always our way.

That is, “Do not delay/
   Simply do it today.”


Move over Patch Tuesday – it’s Ada Lovelace Day!

The second Tuesday of every month is Microsoft’s regular day for security updates, still known by almost everyone by its unofficial nickname of “Patch Tuesday”.

But the second Tuesday in October is also Ada Lovelace Day, celebrating Ada, Countess of Lovelace.

Ada was a true pioneer not only of computing, but also of computer science, and gave her name to the programming language Ada.

The Ada language, intriguingly, emerged from a US Department of Defense project aimed at “debabelising” the world of governmental coding, where every department seemed to favour a different language, or a different language dialect, making it more difficult, more expensive, and less reliable to get them to work together.

Ada had numerous syntactic features aimed at improving readability and avoiding common mistakes.

Unlike comments in C, which start with /* and run until the next */, perhaps many lines later, Ada simply ignores anything after -- on any one line, so comments can’t accidentally run on further than you intended.

Instead of enclosing all multiline code blocks within squiggly brackets ({...}, also known as braces), Ada has a unique terminator for each sort of multi-line block, e.g. end record, end loop and end if.

Ada Lovelace, we suspect, would have applauded the clarity of her namesake language, but Ada-the-language never really caught on, and C’s squiggly bracket syntax has largely won the day: squiggly brackets are a vital aspect of C, C++, C#, Go, Java, JavaScript, Perl, Rust and many other popular languages, with Python perhaps the only non-squiggly-bracket language in widespread use.

Ada Lovelace’s era

You might be surprised to find, given how strongly Ada’s name is associated with the beginnings of computer science, that she lived in the first half of the nineteenth century, long before anything that we currently recognise as a computer, or even a calculator, existed.

(Ada died of uterine cancer in 1852 at just 36 years old.)

But although computers in their modern sense didn’t exist in the 1800s, they very nearly did.

Here’s how it almost happened.

Charles Babbage, in the early 1800s, famously devised a mechanical calculating device called the Difference Engine that could, in theory at least, automatically solve polynomial equations of the sixth degree, e.g. by finding values for X that would satisfy:

aX⁶ + bX⁵ + cX⁴ + dX³ + eX² + fX + g = 0

The UK government was interested, because a device of this sort could be used for creating accurate mathematical tables, such as square roots, logarithms and trigonometric ratios.

And any machine good at trigonometric calculations would also be handy for computing things like gunnery tables that could revolutionise the accuracy of artillery at land and sea.
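The “differences” that gave the engine its name are what make automatic tabulation possible: for a polynomial of degree N, the Nth differences between successive values are constant, so once the first row of the table is primed, every later value falls out of repeated addition alone. Here’s a minimal sketch of that idea in Python (our illustration, obviously not Babbage’s mechanism):

```python
# Tabulating a polynomial using only additions, as the Difference
# Engine did mechanically. Prime the machine with the first few
# values; thereafter each new value is obtained purely by adding
# the running differences together.

def difference_table(first_values):
    """Seed the column of differences from the first few values."""
    row = list(first_values)
    diffs = [row[0]]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs  # value, 1st difference, 2nd difference, ...

def tabulate(first_values, count):
    """Extend the table to `count` entries using additions only."""
    diffs = difference_table(first_values)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        # Each entry absorbs the difference below it: pure addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return out

# A second-degree polynomial, f(X) = X*X + 1, needs three seed values...
seed = [x * x + 1 for x in range(3)]   # [1, 2, 5]
print(tabulate(seed, 6))               # [1, 2, 5, 10, 17, 26]
```

The same priming trick works for any polynomial, which is why one fixed mechanism could produce whole books of tables.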

But Babbage had two problems.

Firstly, he could never quite reach the engineering precision needed to get the Difference Engine to work properly, because it involved sufficiently many interlocking gears that backlash (tiny but cumulative inaccuracies leading to “sloppiness” in the mechanism) would lock it up.

Secondly, he seems to have lost interest in the Difference Engine when he realised it was a dead end – in modern terms, you can think of it as a pocket calculator, but not as a tablet computer or a laptop.

So Babbage leapt ahead with the design of a yet more complex device that he dubbed the Analytical Engine, which could work out much more general scientific problems than one sort of polynomial equation.

Perhaps unsurprisingly, if regrettably in hindsight, the government wasn’t terribly interested in funding Babbage’s more advanced project.

Given that he hadn’t managed to build the mechanism needed for a much simpler equation solver, what chance did a giant, steam-powered, general-purpose computer have of ever delivering any useful results?

The European conference circuit

In a curious twist of international, multilingual co-operation, Babbage travelled to Italy to give a lecture promoting his Analytical Engine.

In the audience was a military engineer named Captain Luigi Menabrea, who was thus inspired to co-operate with Babbage to produce an 1842 paper that described the machine.

Although he was Italian, Menabrea published his paper in French…

…and it was Ada Lovelace who then translated Menabrea’s paper into English.

At Babbage’s urging, Ada also added a series of Notes by the Translator, which turned out not only to be more than twice as long as Menabrea’s original report, but also more insightful, explaining several important characteristics of what we would now call a general-purpose computer.

Walter Isaacson, in his excellently readable book The Innovators, published in 2014, describes how Ada “explored four concepts that would have historical resonance a century later when the computer was finally born”:

  • Ada recognised that the Analytical Engine, unlike the Difference Engine, was truly a general-purpose device, because it could not only be programmed to do one thing, but also, and comparatively easily, be reprogrammed to perform some completely different task.

In Ada’s own words (this was an age in which scientific literature was still rather more in touch with literature than perhaps it is today):

The Difference Engine can in reality (as has been already partly explained) do nothing but add; and any other processes, not excepting those of simple subtraction, multiplication and division, can be performed by it only just to that extent in which it is possible, by judicious mathematical arrangement and artifices, to reduce them to a series of additions. The method of differences is, in fact, a method of additions; and as it includes within its means a larger number of results attainable by addition simply, than any other mathematical principle, it was very appropriately selected as the basis on which to construct an Adding Machine, so as to give to the powers of such a machine the widest possible range. The Analytical Engine, on the contrary, can either add, subtract, multiply or divide with equal facility; and performs each of these four operations in a direct manner, without the aid of any of the other three. This one fact implies everything; and it is scarcely necessary to point out, for instance, that while the Difference Engine can merely tabulate, and is incapable of developing, the Analytical Engine can either tabulate or develope.

  • Ada realised that the Analytical Engine was not limited to encoding and computing with numbers. Although digital, and based on an ability to perform numerical calculations, these digital operations, she explained, could in theory represent logical propositions (as we take for granted today in if ... then ... else ... end if statements), musical notes, and so on.

As Ada put it:

[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent. The Analytical Engine is an embodying of the science of operations, constructed with peculiar reference to abstract number as the subject of those operations.

  • Ada came up with the concept of reusing parts of what we now call programs. In this sense, she can be said to have invented the concept of the subroutine, including recursive subroutines (functions that simplify the solution by breaking a calculation into a series of similar subcalculations, and then calling themselves).
  • Ada first usefully addressed the question “Can machines think?” This is an issue that has worried us ever since.
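The second point above – that a purely numeric engine could manipulate non-numeric notions, given a suitable numerical encoding – is easy to demonstrate in modern terms. Here’s a tiny Python sketch (our example, not Ada’s) encoding logical propositions as the numbers 0 and 1, so that the logical connectives reduce to ordinary arithmetic:

```python
# Logical propositions as numbers, in the spirit of Ada's Notes:
# with truth as 1 and falsehood as 0, the connectives become
# arithmetic that a purely numeric engine could carry out.

def NOT(p):     return 1 - p
def AND(p, q):  return p * q
def OR(p, q):   return p + q - p * q   # inclusive or

# Verify De Morgan's law, NOT(p AND q) == (NOT p) OR (NOT q),
# over all four truth assignments.
for p in (0, 1):
    for q in (0, 1):
        assert NOT(AND(p, q)) == OR(NOT(p), NOT(q))

print("De Morgan's law holds for all inputs")
```

Swap in a numerical encoding of musical pitch instead, and you have Ada’s speculation about machine-composed music.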

The Frankenstein connection

Ada’s father (though she never met him) was the infamous poet Lord Byron, who memorably spent a rainy holiday in Switzerland writing horror stories with his literary chums Percy and Mary Shelley.

Byron’s and Percy Shelley’s efforts in this friendly writing competition are entirely forgotten today, but Mary Shelley’s seminal novel Frankenstein; or, The Modern Prometheus (published in 1818) is popular and well-respected to this day.

The Frankenstein story famously explored the moral dilemmas surrounding what we might today refer to as artificial intelligence. (Frankenstein, don’t forget, was the scientist who conducted the experiment, not the AI that emerged from the project.)

Ada, however, didn’t seem to share her father’s friends’ dystopian concerns about Analytical Engines, or indeed about computers in general.

She offered the opinion, in the final section of her Notes by the Translator, that:

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulæ of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention.

Just over 100 years later, when Alan Turing famously revisited the issue of artificial intelligence in his own paper Computing Machinery and Intelligence, and introduced his now-famous Turing Test, he dubbed this Lady Lovelace’s Objection.

What to do?

Next time you find yourself writing code such as…

   -- A funky thing: the Ackermann function.
   -- Computable, but not primitive recursive!
   -- (You can't write it with plain old for
   -- loops, yet you can be sure it will finish,
   -- even if it takes a loooooooong time.)

   local function ack(m,n)
      if m == 0 then return n+1 end
      if n == 0 then return ack(m-1,1) end
      return ack(m-1,ack(m,n-1))
   end

…remember that recursive subroutines of this sort all started in the scientific imagination of someone who knew what a computer should look like, and what it probably would look like, but yet lived (and sadly died very young) 100 years before any such device ever existed for her to hack on for real.
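(If you fancy experimenting but don’t have a Lua interpreter handy, the same function translates directly into Python – same logic, different block syntax:)

```python
# The Ackermann function again: the classic example of a total
# recursive function that is not primitive recursive, i.e. one
# you can't write with bounded loops alone.

def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Small inputs only -- the values (and the recursion) explode fast.
print(ack(2, 3))  # 9
print(ack(3, 3))  # 61
```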

Hacking on actual computers is one thing, but hacking purposefully on imaginary computers is, these days, something we can only imagine.

Happy Ada Lovelace Day!

