Post-quantum cryptography – new algorithm “gone in 60 minutes”

We’ve written about PQC, short for post-quantum cryptography, several times before.

In case you’ve missed all the media excitement of the past few years about so-called quantum computing…

…it is (if you will pardon what some experts will probably consider a reckless oversimplification) a way of building computing devices that can keep track of multiple possible outcomes of a calculation at the same time.

With a lot of care, and perhaps a bit of luck, this means that you can rewrite some types of algorithm to home in on the right answer, or at least correctly discard a whole slew of wrong answers, without trying and testing every possible outcome one-by-one.

Two interesting cryptanalytical speedups are possible using a quantum computing device, assuming a suitably powerful and reliable one can actually be constructed:

  • Grover’s quantum search algorithm. Usually, if you want to search a randomly-ordered set of answers to see if yours is on the list, you would expect to plough through the entire list, at worst, before getting a definitive answer. For example, if you wanted to find the 128-bit AES decryption key to unscramble a document, you’d need to search the list of all possible keys, starting at 000..001, ..2, ..3, and so on, all the way up to FFF..FFF (16 bytes’ worth of FF), to be certain of solving the problem. In other words, you’d have to budget to try all 2^128 possible keys before either finding the right key, or determining that there wasn’t one. Grover’s algorithm, however, given a big and powerful enough quantum computer, claims to be able to complete the same feat with the square root of the usual effort, thus cracking the code, in theory, in just 2^64 tries instead.
  • Shor’s quantum factorisation algorithm. Several contemporary encryption algorithms rely on the fact that multiplying two large prime numbers together can be done quickly, whereas dividing their product back into the two numbers that you started with is as good as impossible. To get a feel for this, try multiplying 59×87 using pen-and-paper. It might take a minute or so to get it out (5133 is the answer), but it’s not that hard. Now try the other way. Divide, say, 4171 back into its two factors. Much harder! (It’s 43×97.) Now imagine doing this with a number that’s 600 digits long. Loosely speaking, you’re stuck with trying to divide the 600-digit number by every possible 300-digit prime number until you hit the jackpot, or find there isn’t an answer. Shor’s algorithm, however, promises to solve this problem with the logarithm of the usual effort. Thus factoring a number of 2048 binary digits should take just twice as long as factoring a 1024-bit number, not twice as long as factoring a 2047-bit number, representing a huge speedup. (Both speedups are put into numbers in the short sketch after this list.)
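
To put some very rough numbers on both of those speedups, here’s a minimal Python sketch of our own (it has nothing to do with NIST or with any real attack code) that compares the classical and Grover-style work factors for a 128-bit key search, and then factors our toy number 4171 by trial division, the “try every possible divisor” slog that Shor’s algorithm promises to short-circuit:

    import math

    # Grover: a classical brute-force search tries up to 2^128 keys;
    # Grover's algorithm needs roughly the square root of that, i.e. 2^64.
    classical = 2**128
    grover = math.isqrt(classical)             # exactly 2^64
    print(f"classical worst case: about 2^128 = {classical} tries")
    print(f"Grover worst case   : about 2^64  = {grover} tries")

    # Shor's target problem, in miniature: recover the factors of 4171
    # by trial division, the slow way that quantum factorisation avoids.
    def trial_divide(n):
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d, n // d               # first factor found wins
        return None                            # n is prime

    print("4171 =", trial_divide(4171))        # prints (43, 97)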

Countering the threat

The threat from Grover’s algorithm can be countered simply by boosting the size of the numbers you’re using by squaring them, which means doubling the number of bits in your cryptographic hash or your symmetric encryption key. (In other words, if you think SHA-256 is fine right now, using SHA-512 instead would provide a PQC-resistant alternative.)
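
As a quick sanity check on that “just double it” advice, here’s a tiny Python snippet of our own, using nothing more exotic than the standard hashlib module, showing that SHA-512 really does give you twice as many output bits as SHA-256:

    import hashlib

    msg = b"is this message post-quantum ready?"

    sha256 = hashlib.sha256(msg)
    sha512 = hashlib.sha512(msg)

    # digest_size is reported in bytes, so multiply by 8 to get bits
    print("SHA-256:", sha256.digest_size * 8, "bits:", sha256.hexdigest())
    print("SHA-512:", sha512.digest_size * 8, "bits:", sha512.hexdigest())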

But Shor’s algorithm can’t be countered quite so easily.

A public key of 2048 bits would need its size increased exponentially, not simply by squaring, so that instead of a key of 2×2048=4096 bits, either you’d need a new key with the impossible size of 2^2048 bits…

…or you’d have to adopt a completely new sort of post-quantum encryption system to which Shor’s algorithm didn’t apply.
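
In case you’re wondering just how impossible that hypothetical 2^2048-bit key would be, a few lines of Python (our own back-of-the-envelope arithmetic, nothing more) show that the bit count alone is a 617-digit number:

    # How many bits would such a key need? Not 4096 of them, but 2^2048.
    bits_needed = 2**2048

    # Just writing down how many bits that is takes 617 decimal digits...
    print(len(str(bits_needed)), "digits needed to write the bit count")

    # ...so the key itself could never be stored, let alone transmitted.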

Well, US standards body NIST has been running a PQC “competition” since late 2017.

The process has been open to everyone, with all participants welcome, all algorithms openly published, and public scrutiny not merely possible but actively encouraged:

Call for Proposals. [Closed 2017-11-30]. […] It is intended that the new public-key cryptography standards will specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are available worldwide, and are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.

After three rounds of submissions and discussions, NIST announced, on 2022-07-05, that it had chosen four algorithms that it considered “standards” with immediate effect, all with delightful-sounding names: CRYSTALS-KYBER, CRYSTALS-Dilithium, FALCON, and SPHINCS+.

The first one (CRYSTALS-KYBER) is used as what’s called a Key Encapsulation Mechanism (KEM), where two ends of a public communication channel securely concoct a one-time private encryption key for exchanging a session’s worth of data confidentially. (Simply put: snoopers just get shredded cabbage, so they can’t eavesdrop on the conversation.)
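
To show the shape of that key-establishment dance, here’s a short Python sketch using classical X25519 from the widely-used cryptography package. To be clear, this is not CRYSTALS-KYBER and it is not post-quantum at all; it merely illustrates the encapsulate-and-decapsulate pattern that a KEM such as KYBER slots into, with variable names of our own choosing:

    # Classical X25519 used to illustrate the KEM-style pattern.
    # NOT post-quantum: a real deployment would swap in CRYSTALS-KYBER here.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # The receiver publishes a public key in advance...
    receiver_priv = X25519PrivateKey.generate()
    receiver_pub = receiver_priv.public_key()

    # ...the sender "encapsulates": it creates an ephemeral keypair, derives
    # a shared secret, and sends only the ephemeral public part over the wire.
    sender_eph = X25519PrivateKey.generate()
    shared_at_sender = sender_eph.exchange(receiver_pub)
    wire_data = sender_eph.public_key()

    # The receiver "decapsulates" by combining the wire data with its own
    # private key, arriving at the same shared secret.
    shared_at_receiver = receiver_priv.exchange(wire_data)

    # Both ends now hold the same one-time key; a snooper saw only public data.
    assert shared_at_sender == shared_at_receiver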

The other three algorithms are used for Digital Signatures, whereby you can ensure that the data you got out at your end matches exactly what the sender put in at the other, thus preventing tampering and assuring integrity. (Simply put: if anyone tries to corrupt or mess with the data, you’ll know.)
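
Likewise, here’s a minimal sketch of the sign-and-verify pattern, this time using classical Ed25519 from the same cryptography package. Again, this is not Dilithium, FALCON or SPHINCS+; it simply shows the kind of tamper-detection job those post-quantum signature schemes are designed to do:

    # Classical Ed25519 used to illustrate the digital signature pattern.
    # NOT post-quantum: Dilithium, FALCON or SPHINCS+ would take its place.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signer_priv = Ed25519PrivateKey.generate()
    message = b"pay $100 to Alice"

    signature = signer_priv.sign(message)               # sender signs the data

    verifier = signer_priv.public_key()
    verifier.verify(signature, message)                 # no exception: data intact

    try:
        verifier.verify(signature, b"pay $100 to Bob")  # tampered message
    except InvalidSignature:
        print("Tampering detected!")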

More algorithms needed

At the same time as announcing the new standards, NIST also announced a fourth round of its competition, putting a further four algorithms forward as possible alternative KEMs. (Remember that, at the time of writing, we already have three approved digital signature algorithms to choose from, but only one official KEM.)

These were: BIKE, Classic McEliece, HQC and SIKE.

Intriguingly, the McEliece algorithm was invented way back in the 1970s by American cryptographer Robert McEliece, who died in 2019, well after NIST’s contest was already underway.

It never caught on, however, because it required huge amounts of key material compared to the popular alternative of the day, the Diffie-Hellman-Merkle algorithm (DHM, or sometimes just DH).

Unfortunately, one of the four Round Four algorithms, namely SIKE, appears to have been cracked.

In a brain-twisting paper entitled AN EFFICIENT KEY RECOVERY ATTACK ON SIDH (PRELIMINARY VERSION), Belgian cryptographers Wouter Castryck and Thomas Decru seem to have dealt something of a deadly blow to the SIKE algorithm.

In case you’re wondering, SIKE is short for Supersingular Isogeny Key Encapsulation, and SIDH stands for Supersingular Isogeny Diffie-Hellman, the key exchange at the heart of SIKE, whereby two ends of a communication channel perform a DHM-like “cryptodance” to exchange a bunch of public data that allows each end to derive a private value to use as a one-time secret encryption key.

We’re not going to try to explain the attack here; we’ll simply paraphrase what the paper claims.

Very loosely put, the inputs here include the public data provided by one of the participants in the key establishment cryptodance, along with the pre-determined (and therefore publicly-known) parameters used in the process.

But the output that’s extracted (the information that the paper refers to as the isogeny φ) is supposed to be the never-revealed part of the process – the so-called private key.

In other words, from public information alone, such as the data exchanged openly during key setup, the cryptographers claim to be able to recover the private key of one of the participants.

And once you know my private key, you can easily and undetectably pretend to be me, so the encryption process is broken.

Apparently, the key-cracking algorithm takes about an hour to do its work, using just a single CPU core with the kind of processing power you’d find in an everyday laptop.

That’s against the SIKE algorithm when configured to meet Level 1, NIST’s basic grade of encryption security.

What to do?

Nothing!

(That’s the good news.)

As the authors of the paper suggest, after noting that their result is still preliminary, “with the current state of affairs, SIDH appears to be fully broken for any publicly generated base curve.”

(That’s the bad news.)

However, given that the SIKE algorithm isn’t officially approved yet, it can now either be adapted to thwart this particular attack (something that the authors admit may be possible), or simply dropped altogether.

Whatever finally happens to SIKE, this is an excellent reminder of why trying to invent your own encryption algorithms is fraught with danger.

It’s also a pointed example of why proprietary encryption systems that rely on the secrecy of the algorithm itself to maintain their security are simply unacceptable in 2022.

If a PQC algorithm such as SIKE could survive perusal and probing by experts from around the globe for more than five years, despite being disclosed specifically so that it could be subjected to public scrutiny, and still end up broken…

…then there’s no need to ask yourself how well your home-made, hidden-from-view encryption algorithms are likely to fare when released into the wild!

