Solving the length-1347 McEliece challenge

26 February 2023

What's the context?

In 2019, Julien Lavauzelle, Matthieu Lequesne, and Nicolas Aragon created a series of challenges to "assess the practical hardness of problems in coding theory".

In particular, the "McEliece-Goppa Syndrome Decoding" challenges represent public keys and ciphertexts in the McEliece cryptosystem. These challenges have various lengths, including miniature versions starting at length 20, more challenging problems at lengths around 1000, and further problems all the way up to cryptographic lengths.

A Eurocrypt 2022 paper by Andre Esser, Alexander May, and Floyd Zweydinger announced solutions to the length-1223 and length-1284 challenges.

What's the news?

The point of this page is to announce a new record: a solution to the length-1347 challenge.

Does this mean that attack algorithms are getting better?

No. This computation simply ran the attack software from a 2008 paper: Daniel J. Bernstein, Tanja Lange, and Christiane Peters, "Attacking and defending the McEliece cryptosystem", PQCrypto 2008.

There have been only slight speedups in ISD algorithms since 2008. The fact that 15-year-old attack software remains competitive with the latest attack software in setting records is a remarkable testament to the stability of the McEliece attack picture.
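The attack family here is information-set decoding (ISD). Its simplest form, Prange's algorithm, repeatedly guesses a set of coordinates that it hopes avoids all errors, does linear algebra on the parity-check matrix, and checks the weight of the result. The following is a hedged, toy-scale sketch of that idea in Python (an illustration only, not the 2008 software; the function name and parameters are mine):

```python
import numpy as np

def prange_isd(H, s, w, max_iters=10000):
    """Toy Prange-style information-set decoding over GF(2).

    H: (r, n) parity-check matrix, s: length-r syndrome, w: target weight.
    Returns an error vector e with H e = s (mod 2) and weight(e) <= w,
    or None if no attempt succeeds. Illustrative only; real ISD software
    uses far more refined techniques."""
    r, n = H.shape
    for _ in range(max_iters):
        perm = np.random.permutation(n)   # guess: errors avoid positions perm[r:]
        A = H[:, perm].copy()
        b = s.copy()
        # Gaussian elimination over GF(2), pivoting on the first r columns
        ok = True
        for i in range(r):
            pivot = next((j for j in range(i, r) if A[j, i]), None)
            if pivot is None:
                ok = False
                break
            A[[i, pivot]] = A[[pivot, i]]
            b[i], b[pivot] = b[pivot], b[i]
            for j in range(r):
                if j != i and A[j, i]:
                    A[j] ^= A[i]
                    b[j] ^= b[i]
        if not ok:
            continue
        # if the guess was right, the transformed syndrome has low weight
        if b.sum() <= w:
            e = np.zeros(n, dtype=np.uint8)
            e[perm[:r]] = b   # error supported on the pivot columns
            return e
    return None

# toy demo: 4x8 parity-check matrix, weight-2 error
H = np.hstack([np.eye(4, dtype=np.uint8),
               np.array([[1, 0, 1, 1],
                         [1, 1, 0, 1],
                         [0, 1, 1, 1],
                         [1, 1, 1, 0]], dtype=np.uint8)])
e_true = np.zeros(8, dtype=np.uint8)
e_true[[1, 5]] = 1
s = ((H @ e_true) % 2).astype(np.uint8)
print(prange_isd(H, s, 2))
```

Any vector returned by this sketch satisfies the syndrome equation by construction; the weight check is the only filter, and the cost is dominated by how many random guesses are needed.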

Who carried out this computation?

Daniel J. Bernstein ran the computation and wrote this page. Credit for the record should go to Bernstein, Lange, and Peters, since the computation was simply running the software from the 2008 paper.

What computers were used?

This computation was carried out on a collection of computers. The lucky computer that found the solution was "colossus6" at Academia Sinica.

How does this challenge compare to the challenge solved in 2008?

The 2008 computation was an order of magnitude smaller. In more detail, that challenge had 50 "errors" instead of 25, making it somewhat harder, but had length only 1024 instead of 1347, making it easier.
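One rough way to see this tradeoff is the expected iteration count of plain Prange ISD, which is simply the reciprocal of the chance that a random information set avoids all errors. The sketch below assumes code dimension k = n − mt for binary Goppa codes (these specific k values are my assumption, not taken from the challenge pages), and plain Prange is far weaker than the algorithms actually used, so the numbers are illustrative of scaling only:

```python
from math import log2

def prange_iters_log2(n, k, w):
    """log2 of the expected iteration count of plain Prange ISD:
    1 / Pr[a uniformly random size-k information set avoids all w errors],
    computed as log2(C(n,k)/C(n-w,k)) via a product of ratios."""
    return sum(log2((n - i) / (n - k - i)) for i in range(w))

# Assumed dimensions: k = n - m*t with m = ceil(log2(n)).
print(prange_iters_log2(1024, 524, 50))   # 2008 challenge: roughly 2^54 iterations
print(prange_iters_log2(1347, 1072, 25))  # this challenge: roughly 2^59 iterations
```

Even this crude model points the same way as the text: more errors make a challenge harder, but the larger length dominates here.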

If this is the same software, how is it solving larger challenges?

The software was written from the outset to handle many different lengths (and different numbers of errors).

Compared to CPUs from 15 years ago, current CPUs from Intel and AMD have more cores and do more computation per second per core, making larger computations affordable.

If new CPUs keep producing new records, how do we recognize better attack algorithms?

Beyond setting records, cryptographers analyze how well different algorithms perform on the same hardware.

Sometimes the hardware is specified as a projection of what is available to large-scale attackers today, or a projection of what will be available to large-scale attackers in, say, 15 years. Sometimes the hardware is specified as a lower-cost hardware platform available to academics for carrying out experiments.

For example, the Eurocrypt 2022 paper specified 4 AMD EPYC 7742 CPUs (256 cores) and reported that its new software took, on average, 37.47 days for length 1284. The 2008 software takes, on average, only 31.56 days on 4 AMD EPYC 7742 CPUs for length 1284. A more detailed analysis shows that somewhat better speeds are possible on these CPUs, for example from modifying the 2008 software to take advantage of 256-bit vector instructions.
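As a quick check of what those day counts mean (using only the figures quoted above), the ratio works out to roughly a 1.19x speed advantage for the 2008 software on that hardware:

```python
days_2022_software = 37.47  # Eurocrypt 2022 software, 4 AMD EPYC 7742 CPUs (from the text)
days_2008_software = 31.56  # 2008 software, same hardware (from the text)

speedup = days_2022_software / days_2008_software
# the 2008 software is about 1.19x faster on this hardware
```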

Decoding, like a brute-force search for a cipher key, is a randomized search: some experiments are lucky (faster than average) and some are unlucky (slower than average). It is important for comparisons to use averages rather than single random experiments.

It is also important to note that different algorithms can scale differently. The best algorithm for a challenge is not necessarily the best algorithm for a larger challenge. Algorithm analyses are used to predict cryptographic security levels; scaled-down computations are used as a double-check on the analyses.

How does this software's performance compare to the algorithm analysis?

For length 1347, the analysis predicted that each iteration of the computation would find a solution with probability 1/2^41.6, so on average 2^41.6 iterations would be required. This experiment happened to take 2^41.7 iterations, slightly more than the average.

The time for each iteration depends on the CPU, most importantly on cache performance. As one example, "colossus6" has 2 AMD EPYC 7742 CPUs and had 128 iterations running in parallel on 128 cores; each core spent 2.3 million cycles on each iteration.
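A hedged back-of-the-envelope estimate from the figures above (the per-iteration cycle count varies across CPUs, so this is only an order of magnitude):

```python
from math import log2

iters_log2 = 41.7        # iterations used by this computation (from the text)
cycles_per_iter = 2.3e6  # cycles per iteration on one colossus6 core (from the text)

total_cycles_log2 = iters_log2 + log2(cycles_per_iter)
# total work on the order of 2^63 cycles, spread across all cores used
```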

Because of the random nature of the search, considerably more iterations than average would not have been surprising, and considerably fewer iterations than average would also not have been surprising. Systematic experiments with the same software for smaller sizes showed iteration counts following the distribution predicted by the algorithm analysis.
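The iteration count of such a search follows a geometric distribution. The following Python sketch (a toy success probability, not the real 2^-41.6) illustrates how the average matches the prediction while individual runs vary widely:

```python
import random

def iterations_until_success(p, rng):
    """Number of iterations of a search in which each independent
    iteration succeeds with probability p (geometric distribution)."""
    count = 1
    while rng.random() >= p:
        count += 1
    return count

rng = random.Random(2023)
p = 1 / 1000  # toy success probability; the real analysis predicted about 2^-41.6
runs = [iterations_until_success(p, rng) for _ in range(20000)]
average = sum(runs) / len(runs)
# average is close to 1/p = 1000, but single runs range from a handful
# of iterations to several times the average
```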

Does this computation threaten Classic McEliece?

No. In selecting parameter sets, Classic McEliece already accounted for this attack algorithm, newer algorithms, and advances in computer technology, including large-scale attacks using special-purpose hardware.

Breaking the smallest selected Classic McEliece parameter set, mceliece348864 (using length 3488 and 64 errors), would need a computation billions of times larger than the total computation invested worldwide in the past year of Bitcoin mining.

The Classic McEliece parameter sets recommended for long-term security are mceliece6960119 (length 6960 and 119 errors) and mceliece6688128 (length 6688 and 128 errors). These parameter sets provide a large security margin against future quantum computers.

Do other public-key cryptosystems provide similar stability?

Ciphers such as AES and ChaCha20 provide similar stability but are not public-key cryptosystems.

Elliptic-curve cryptography provides similar stability against non-quantum computers, but will be broken by quantum computers running Shor's algorithm.

Lattice-based cryptography is much less stable. For example, Lindner and Peikert stated in 2010 that a dimension-256 lattice system appeared to be "at least as secure as AES-128"; the same system has a much lower security level against subsequent lattice attacks.