The cr.yp.to blog



2025.04.23: McEliece standardization: Looking at what's happening, and analyzing rationales. #nist #iso #deployment #performance #security

Once upon a time, NIST started working on standardizing post-quantum cryptography, and announced that "The goal of this process is to select a number of acceptable candidate cryptosystems for standardization".

By now NIST has quite a few standards for post-quantum signatures. It has already standardized Dilithium (ML-DSA), LMS, SPHINCS+ (SLH-DSA), and XMSS. It said in 2022 that it will also standardize Falcon (FN-DSA) "because its small bandwidth may be necessary in certain applications". It is evaluating more options for post-quantum signatures, such as small-signature large-key options. Evidently NIST will end up with at least six post-quantum signature standards.

For post-quantum encryption, NIST's offerings are much more sparse. NIST has just one standard, namely Kyber (ML-KEM). It said in March 2025 that it also plans to standardize HQC; supposedly the patent on HQC won't be an issue because of an upcoming FRAND license; but an April 2025 posting regarding design flaws in HQC prompted an HQC team announcement that HQC would be modified. Doesn't look like HQC is ready for usage yet.

Wait. What about the increasingly widely deployed McEliece cryptosystem? Well, NIST's 2025 report said the following:

After the ISO standardization process has been completed, NIST may consider developing a standard for Classic McEliece based on the ISO standard.

So maybe NIST will eventually have three standards for post-quantum encryption. But, short term, the only choice that NIST is offering is Kyber.

You shouldn't be surprised to hear that ISO is already considering more KEMs than just Kyber. NIST's focus on Kyber is controversial, in part because of patent concerns, in part because general lattice attacks keep getting better, in part because many attack avenues against Kyber still require cryptanalysis, and in part because focusing on any single option would sound overconfident given the sheer number of broken post-quantum proposals.

There are more standardization agencies, and many more cryptographic decision-makers. This blog post is aimed at anyone interested in comparing post-quantum options. I'll look at the reasons that NIST gives for not already going ahead with McEliece, and I'll pinpoint a remarkable number of ways that those reasons deviate from the facts. I've organized this into seven sections:

At the end there's a short coda and recommendations.

I'm one of many members of the Classic McEliece team, but I'm speaking for myself in this blog post. All of the team-authored documents are clearly labeled as such: see, e.g., "Classic McEliece" on top of the Classic McEliece web pages, the Classic McEliece security guide, and the newly posted 529 report. I haven't made a proposal for the team to speak out about NIST's pattern of misinformation, nor do I think that such a proposal would be within scope. Something nevertheless needs to be said.

Classic McEliece. Classic McEliece directly relies on the security of the cryptosystem that McEliece introduced in 1978. That's the reason for the name. The only cryptosystem changes allowed are changes that definitely don't damage security.

The core argument for the McEliece system is not just its age, and not just the number of papers that have been studying its security, but instead how well its security has held up: much better than any of the other options for post-quantum public-key encryption.

To quantify this, CryptAttackTester provides high-assurance cost analysis for the state-of-the-art attacks and earlier attacks. As a concrete example, for mceliece348864 (the smallest supported Classic McEliece size), CryptAttackTester reports 2^156.96 bit operations using attack ideas from the 1980s, and 2^150.59 bit operations using current attacks.

(For comparison, each improvement in general lattice attacks is obscured by severe complications and uncertainties in attack analyses, but here's a quick way to see the massive overall improvements. FrodoKEM says that it is an implementation of a 2010 paper. That paper estimates "2^150 operations" to break lattice dimension 256. NIST's latest claims are "an uncertainty window of 2^135 to 2^158" bit operations to break lattice dimension 512, specifically kyber512. FrodoKEM's smallest recommendation is lattice dimension 640.)

When I say "security", I'm referring to one-wayness (or "OW-CPA" for snobs). The attacker sees a legitimately generated public key, and sees a legitimately generated ciphertext, where the plaintext was chosen completely at random; breaking one-wayness means computing the plaintext. This is the simplest goal of public-key encryption.

In particular, in McEliece's cryptosystem, a ciphertext has the form Gm+e. Here G is the public key, a matrix describing a randomly chosen "binary Goppa code". The pair (m,e) is the plaintext; e is a length-n bit vector where the bits at exactly t positions are equal to 1, and m is a shorter bit vector whose length is the dimension of the code. For mceliece348864, the parameters n and t are 3488 and 64. One-wayness means that it's hard to find (m,e) given the public key G and the ciphertext Gm+e.

One-wayness was McEliece's goal, although he described it differently. One-wayness is also what many papers since 1978 have been trying to break, with only marginal progress, as illustrated by the numbers 2^156.96 and 2^150.59 above.

Classic McEliece incorporates a ciphertext-compression mechanism that Niederreiter introduced in 1986. With Niederreiter's compressed ciphertexts, the attacker sees a public key H and a ciphertext He where the plaintext e is chosen completely at random, and breaking one-wayness means computing e. It's easy to see that an attack against this one-wayness problem implies an attack with essentially the same effectiveness against one-wayness for the original McEliece system; that's why it's acceptable for Classic McEliece to use compressed ciphertexts. (The relevant relationship between H and G is HG = 0, so if you're given Gm+e then you can quickly compute H(Gm+e) = He, so if you can compute e from He then you can compute e from Gm+e, and then computing m from (Gm+e)−e = Gm is just linear algebra.)
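Here's a toy sketch of that algebra, with a tiny [7,4] Hamming code standing in for a binary Goppa code; the matrices, the message, and the weight-1 error are purely illustrative (real parameters are n=3488, t=64 and up). The only point is the identity H(Gm+e) = He when HG = 0:

    # Toy illustration of why Niederreiter compression gives away nothing:
    # since H*G = 0 over GF(2), anyone holding a McEliece-style ciphertext
    # G*m + e can compute the Niederreiter-style ciphertext H*e.
    import numpy as np

    P = np.array([[1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 1, 1]])
    G = np.vstack([np.eye(4, dtype=int), P])   # 7x4; ciphertext is G*m + e
    H = np.hstack([P, np.eye(3, dtype=int)])   # 3x7 parity check; H*G = 0 mod 2
    assert not np.any(H.dot(G) % 2)

    m = np.array([1, 0, 1, 1])                 # message part of the plaintext
    e = np.array([0, 0, 1, 0, 0, 0, 0])        # low-weight error vector
    c_mceliece = (G.dot(m) + e) % 2            # McEliece-style ciphertext
    c_niederreiter = H.dot(e) % 2              # Niederreiter-style ciphertext

    # H applied to the McEliece ciphertext equals the Niederreiter ciphertext:
    assert np.array_equal(H.dot(c_mceliece) % 2, c_niederreiter)
    print("H(Gm+e) == He")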

Classic McEliece has a more powerful goal than one-wayness: it's a KEM designed to stop chosen-ciphertext attacks. (For snobs: it's designed to provide "IND-CCA2" security.) This makes Classic McEliece suitable for incorporation into a vast range of protocols. Internally, Classic McEliece takes four steps to defend against chosen-ciphertext attacks:

The security analysis of these features wasn't complete when Classic McEliece was introduced in 2017, but by 2019 there was a tight proof of QROM IND-CCA2 security assuming solely OW-CPA security of the underlying cryptosystem.

(The proof applies only for "deterministic" systems. This covers McEliece and some versions of NTRU. For comparison, the best theorems available for Kyber and HQC allow QROM IND-CCA2 to be much easier to break than one-wayness.)

The proof doesn't rely on pc ("plaintext confirmation", an optional extra hash of the plaintext included in ciphertexts). Meanwhile there was a question about whether the use of pc in ISARA's QC-MDPC patent, patent 9912479, could be stretched to cover Classic McEliece. This would be quite a stretch, plus there's completely clear prior art for the patent, plus all of the overlap with Classic McEliece was already in McBits in 2013; but, just in case, the team decided to support non-pc options too, to "proactively eliminate any concerns regarding U.S. patent 9912479".

The official specs and software continue to support both pc and non-pc, and in particular the current software for mceliece6960119pc and mceliece8192128pc remains interoperable with the 2017 software. In 2019, Classic McEliece added support for more sizes, along with the "f" option, where key generation is more complicated but faster (and interoperable with non-"f"). In 2020, Classic McEliece also pinned down every detail of how randomness is used; this doesn't matter for interoperability, but it improves testing and analysis. Several important software components now have computer-checked proofs, although there are still further pieces that need to be filled in.

Choosing parameter sets. Regarding the mceliece6960119 parameter set, the original 2017 Classic McEliece documentation said "This parameter set was proposed in the attack paper [8] that broke the original McEliece parameters (10,1024,50)." This 2008 attack paper proposed the 6960 size "to maximize the difficulty of our attack" for "keys limited to ... 2^20 bytes". Meanwhile powers of 2, as in the larger mceliece8192128 parameter set, allow some small code simplifications.

NIST then asked for lower-security parameter sets. The team complied, while explicitly following the original approach of aiming for optimal security subject to size limits and power-of-2 constraints. For example, the mceliece348864 parameter set is designed for "optimal security within 2^18 bytes if n and t are required to be multiples of 32". The team continues to recommend its high-security parameter sets mceliece6*: this includes the original mceliece6960119, and a mceliece6688128 that also fits into 1MB under the same multiple-of-32 requirement.
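As a quick check on those size claims, here's a sketch that recomputes public-key and ciphertext sizes from (m,n,t), where m is the field size from the spec, assuming the usual packing: the public key is the mt-by-k matrix T from the systematic parity-check matrix H = (I | T), stored as mt rows of ceil(k/8) bytes, and a non-pc ciphertext is ceil(mt/8) bytes. These formulas reproduce the sizes quoted elsewhere in this post (261120-byte keys and 96-byte ciphertexts for mceliece348864, 194-byte ciphertexts for mceliece6960119):

    # Recompute Classic McEliece public-key and ciphertext sizes from (m, n, t),
    # assuming the packing sketched above: public key = m*t rows of ceil(k/8)
    # bytes with k = n - m*t; non-pc ciphertext = ceil(m*t/8) bytes.
    from math import ceil

    params = {                      # name: (m, n, t)
        "mceliece348864":  (12, 3488,  64),
        "mceliece460896":  (13, 4608,  96),
        "mceliece6688128": (13, 6688, 128),
        "mceliece6960119": (13, 6960, 119),
        "mceliece8192128": (13, 8192, 128),
    }

    for name, (m, n, t) in params.items():
        k = n - m * t
        pk = m * t * ceil(k / 8)    # public-key bytes
        ct = ceil(m * t / 8)        # ciphertext bytes
        # last two columns: fits within 2^18 bytes? within 2^20 bytes (1MB)?
        print(name, pk, ct, pk <= 2**18, pk <= 2**20)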

What's the security level of these parameter sets? CryptAttackTester says 2^150.59 bit operations for mceliece348864, as I mentioned above; 2^190.50 bit operations for mceliece460896; and slightly above 2^257 bit operations for mceliece6*.

The 2017 documentation had actually said that the number of bit operations to break mceliece6960119 was "considerably below 2^256" (rather than above 2^257). The underlying analyses were done by hand (this was before CryptAttackTester) and had estimated the bit operations for most steps in the attacks, including a critical list-building step, but had missed the bit operations needed to compute collisions between lists; this turns out to underestimate security levels by 10 bits. One finds the same mistake in a 2021 paper saying 2^180 bit operations for mceliece460896 (rather than 2^190.50). The CryptAttackTester paper pinpointed where the 2021 paper was skipping these bit operations. The mistake will take time to eradicate from the literature: for example, it was copied into a 2023 paper a few months before CryptAttackTester appeared, and the mistake was still in the final version of that paper the next year. This last link is paywalled, sorry.

With CryptAttackTester, the computer is checking the human's analysis. Omitting the bit operations for any algorithm step sets off alarms. Having precise, accurate, computer-checked attack analyses allows confident measurements of the small attack improvements that have taken place over the past half century, and of any further improvements that might appear.

Note that, when parameters are chosen to maximize security subject to size limits, the parameters aren't affected at all if security is discovered to be 10 bits higher. Actually, the change isn't exactly the constant 10, but the variations are gradual and end up making essentially zero difference in parameter selection. In other words, the way that Classic McEliece chooses parameters is robust against changes in the attack analyses.

To put the above numbers in perspective, CryptAttackTester says 2^141.89 bit operations for a simple brute-force attack against AES-128, not including the latest optimizations. Bitcoin did about 2^112 bit operations last year. A large-scale attacker today can go beyond 2^112, but would still have trouble reaching 2^120. Furthermore, as already noted in the 2017 documentation, McEliece attacks that minimize bit operations are much more memory-intensive than cipher attacks.
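Where does a number like 2^112 come from? Here's a back-of-the-envelope sketch with assumed inputs (an average hashrate of very roughly 600 exahashes/second, and roughly 2^18 bit operations per double-SHA-256 evaluation); neither number comes from the sources cited here:

    # Rough sanity check of the "about 2^112 bit operations last year" figure.
    # Assumed inputs: average hashrate and the bit-operation cost of one
    # double-SHA-256 evaluation.
    from math import log2

    hashrate = 600e18                      # hashes per second, an assumed average
    seconds_per_year = 365.25 * 24 * 3600
    bitops_per_hash = 2**18                # rough cost of double-SHA-256

    total = hashrate * seconds_per_year * bitops_per_hash
    print("about 2^%.1f bit operations" % log2(total))   # prints roughly 2^112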

On the other hand, many applications of AES-128 are already vulnerable today to multi-target attacks. Computer technology is continuing to improve. Quantum attacks are coming. There are even combined algorithms for quantum multi-target attacks. Papers on multi-target attacks and quantum attacks against McEliece have had quantitatively somewhat less effect than such attacks against AES, but, instead of cutting things close and having to worry about the details, why not simply use mceliece6* for long-term security?

The only reasonable answer would be an application where the McEliece cost is an issue. People rolling out McEliece in more and more applications have found again and again that the performance is perfectly fine. Maybe there's an occasional application where mceliece6* isn't affordable but mceliece4* or mceliece3* is, so those are supported in the official software, but of course the top priority is security.

Ciphertext size. When you hear a cryptosystem emphasizing a long list of security features, you might guess that the cryptosystem performs worse than alternatives, given various other examples of tensions between performance and security. And then this guess seems to be confirmed by the public-key sizes: 0.25MB keys for mceliece3*, 0.5MB keys for mceliece4*, and 1MB keys for mceliece6*.

But Classic McEliece also has a compensating performance advantage: remarkably small ciphertexts, thanks to the Niederreiter compression mentioned above.

Classic McEliece isn't designed merely for what the snobs call "IND-CPA" security, safety for a one-time key, safety for a key used for just one ciphertext. It's designed for IND-CCA2 security, safety for a static key, safety for a key used for many ciphertexts.

The number of ciphertexts per key has an obvious effect on analysis of total costs. If an application sends a key and then sends N ciphertexts, then the application is sending 768N+800 bytes with kyber512, or 96N+261120 bytes with mceliece348864. To minimize network traffic with these options, one should choose kyber512 for keys where the expected N is below 388, but one should choose mceliece348864 for keys where the expected N is above 388. Similar comments apply for parameters targeting higher security levels.
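Here's the arithmetic behind that crossover, as a quick sketch using the sizes just quoted:

    # Total bytes sent for one key plus N ciphertexts, using the sizes above:
    # kyber512 sends 768*N + 800 bytes; mceliece348864 sends 96*N + 261120 bytes.
    kyber_ct, kyber_pk = 768, 800
    mce_ct, mce_pk = 96, 261120

    crossover = (mce_pk - kyber_pk) / (kyber_ct - mce_ct)
    print(crossover)                       # about 387.4

    for N in (100, 388, 10000):
        print(N, kyber_ct * N + kyber_pk, mce_ct * N + mce_pk)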

In other words: NIST has damaged the performance of its post-quantum portfolio by delaying Classic McEliece standardization.

Post-quantum cryptography also involves computation, but looking at the numbers shows that this doesn't have a big effect on overall costs: the sizes are more important. In a paper quantifying costs for file encryption as in Microsoft's EFS, I translated everything into dollar costs, for example showing how to purchase 2^51 Intel Skylake CPU cycles per dollar, while a dollar of Internet traffic is roughly 2^40 bytes. I then looked at the bytes and cycles for Kyber, Classic McEliece, etc. For example, transmitting a mceliece348864 key is 261120 bytes, or roughly 0.24 microdollars, while generating the key in the first place (using mceliece348864f, which as I mentioned is interoperable; mceliece348864 is a little slower) is around 30 million Skylake cycles, or 0.01 microdollars.
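Spelled out as a quick sketch, using the conversion rates and the rough 30-million-cycle figure above:

    # Translating the mceliece348864 numbers into microdollars, using the rates
    # above: about 2^51 Skylake cycles per dollar, about 2^40 bytes per dollar.
    cycles_per_dollar = 2**51
    bytes_per_dollar = 2**40

    key_bytes = 261120             # mceliece348864 public key
    keygen_cycles = 30e6           # mceliece348864f key generation, roughly

    print(key_bytes / bytes_per_dollar * 1e6, "microdollars to transmit the key")
    print(keygen_cycles / cycles_per_dollar * 1e6, "microdollars to generate the key")
    # prints roughly 0.24 and 0.01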

Are static keys important? I'll quote a public comment by John Mattsson from telecom company Ericsson:

We strongly think NIST should standardize Classic McEliece, which has properties that makes it the best choice in many different applications. We are planning to use Classic McEliece. ...

The small ciphertexts and good performance makes Classic McEliece the best choice for many applications of static encapsulation keys of which there are many (WireGuard, S/MIME, IMSI encryption, File encryption, Noise, EDHOC, etc.). For many such applications, key generation time is not important, and the public key can be provisioned out-of-band. When the public key is provisioned in-band, Classic McEliece has the best performance after a few hundred encapsulations. For static encapsulation use cases where ML-KEM provides the best performance, Classic McEliece is the best backup algorithm. The memory requirement can be kept low by streaming the key.

The text "applications of static encapsulation keys of which there are many (WireGuard, S/MIME, IMSI encryption, File encryption, Noise, EDHOC, etc.)" is pointing to one example after another of this scenario. Static keys are central components of a wide variety of cryptographic protocols and applications.

Static keys are also repeatedly highlighted in NIST standards. Consider, for example, NIST SP 800-56A, NIST's DH standard, where Section 6 spends fifty pages describing different ways to use DH, including the following:

Almost every option covered in those fifty pages involves static keys. The only exception is "C(2e,0s)".

Or look at NIST SP 800-57, NIST's key-management standard. Along with describing ephemeral keys, this standard specifically allows "years" for the lifetime of five different types of static keys:

Even in the new NIST report that I'll be saying much more about later, you can find NIST rejecting BIKE on the basis of concerns about BIKE's safety for static keys:

While NIST has confidence in the indistinguishability under chosen-plaintext attack (IND-CPA) security of BIKE and HQC, both schemes require a sufficiently low decryption failure rate (DFR) in order to be IND-CCA2-secure. There is evidence that HQC has a sufficiently low DFR and recent work indicates that with minor modifications, BIKE achieves the same [26]. However, NIST does not consider the DFR analysis for BIKE to be as mature as that for HQC.

So there's overwhelming evidence of the importance of static keys. Obviously Classic McEliece is the lowest-cost option for those.

Beyond minimizing total costs for static keys, small ciphertexts have an engineering virtue, as one can see by looking at PQ-WireGuard; at PQ-WireGuard's successor, the Rosenpass VPN; or, for a different application, at our new PQConnect. These are packet-based protocols that rely on the smallness of Classic McEliece ciphertexts to meet Internet packet-size limits. Switching from Classic McEliece to a lattice system would need a redesigned packet structure that uses more packets during key exchange, increasing fragility and increasing exposure to denial-of-service attacks.

More broadly, packet-based protocols can often squeeze a high-security Classic McEliece ciphertext (e.g., 194 bytes for mceliece6960119) into a corner of a packet that's mostly used for something else. For comparison, the 1568-byte ciphertexts for kyber1024 are simply too big. Squeezing a Kyber ciphertext into a packet requires limiting security levels and focusing on packets where the other data is very small. Rosenpass uses Classic McEliece for static keys plus (for forward secrecy) Kyber for ephemeral keys, but that's just kyber512, exactly because of packet-size limits.
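To make the packet arithmetic concrete, here's a sketch with assumed numbers: the 1280-byte IPv6 minimum MTU, and roughly 80 bytes of IP/UDP/protocol overhead (neither number is specific to the protocols above):

    # Rough single-packet budget under assumed numbers: IPv6 minimum MTU of
    # 1280 bytes, minus an assumed ~80 bytes of IP/UDP/protocol headers.
    budget = 1280 - 80

    ciphertexts = {
        "mceliece6960119": 194,
        "kyber512": 768,
        "kyber1024": 1568,
    }
    for name, size in ciphertexts.items():
        if size <= budget:
            print(name, size, "bytes; fits, leaving", budget - size, "bytes")
        else:
            print(name, size, "bytes; does not fit in one packet")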

Meanwhile the Classic McEliece keys used for these ciphertexts are sent in advance through stream-based protocols. Those stream-based protocols internally split data across packets; but keys are sent much less frequently than ciphertexts are, and the user is almost never waiting for key transmissions, so fragility and denial of service are much smaller issues here.

Ciphertext size: behind the scenes. As another application of the idea that performance is in tension with security, people often ask whether the uniquely small ciphertexts are a sign that McEliece hasn't been studied. If someone comes along with a surprisingly efficient cryptosystem, it's probably because nobody has tried breaking it, right?

The usual answer is: look, here's all this evidence of McEliece having been studied. But there's actually a deeper answer coming from the mathematics of these cryptosystems.

The "decoder" used inside McEliece decryption is particularly powerful, while the alternatives are regressions from this, using objectively weaker decoders (typically in pursuit of reducing key size). For any particular ciphertext size, McEliece ends up with more randomness in ciphertexts than the alternatives. This makes the differences in size-security tradeoffs unsurprising, and means that some of the nightmare scenarios for code/lattice attacks still wouldn't affect McEliece.

The weaker decoders used in the alternatives also have much less diversity than the McEliece decoders. Key-recovery attacks typically end up as the fastest attacks against the alternatives, while many years of papers studying McEliece key-recovery attacks have consistently found those attacks to be absurdly slow. I'll say more about that slowness below.

ISO standardization. ISO has strict secrecy rules regarding its deliberations, but it does make some procedural information public. In particular, if you poke around the web page for ISO project 86890 then you'll see that the project is currently considering a Draft International Standard (DIS) that includes Classic McEliece, FrodoKEM, and ML-KEM, as an amendment to an existing standard, ISO/IEC 18033-2.

This doesn't guarantee anything. If you hear people saying that ISO "is standardizing" something, remember that ISO sometimes terminates standardization projects without issuing standards. Still, the fact that ISO is considering a draft sounds good.

What exactly is in the draft? ISO's secrecy prohibits distribution of "any content from draft standards". But some information is available in violation of those rules: GCHQ's Peter Campbell publicly leaked that "the first working draft of ISO/IEC 18033-2/AMD2" was "almost ... verbatim" the same as a public document.

(In context, Campbell was trying to slow down Classic McEliece actions in yet another standardization organization. His leak was part of an easily debunked accusation of copyright violation.)

Ultimately all of these documents are based on public specifications authored by the Classic McEliece team. Standardization organizations vary in superficial rules about formatting and wording, but those rules don't force specs to be frivolously rewritten from scratch. As for the actual cryptosystem, it's completely unsurprising to hear that a standardization organization is simply going ahead with Classic McEliece as is.

Typically, when a standardization agency (in this case, ISO) is considering something, other standardization agencies (for example, NIST) will note the interest as something positive. But, no, NIST claims that ISO's interest in Classic McEliece is a negative vote for Classic McEliece, simply because the ISO standardization process is ongoing. Here's what NIST writes about this in its 2025 report:

Classic McEliece is currently under consideration for standardization by the International Organization for Standardization (ISO). Concurrent standardization of Classic McEliece by NIST and ISO risks the creation of incompatible standards.

Wow, incompatible standards sound bad! Imagine a company that is trying to build one product that simultaneously complies with NIST and ISO standards for Classic McEliece, but finds that this is impossible! Okay, it's also impossible to comply with a NIST standard that doesn't even exist; but isn't it worth a delay in the NIST standard to make sure the standards are compatible?

Um, let's think about this.

How exactly would going ahead now produce an incompatibility? Are we supposed to imagine that multiple concurrent standardization processes are like multiple students sitting down to separately take an exam, with maybe different answers coming out in the end? Even if ISO has secrecy rules, don't these standardization agencies talk to each other? For example, doesn't NIST send people to ISO meetings?

Yes, of course they do. Consider, for example, NIST's Lidong (Lily) Chen, who managed NIST's Cryptographic Technology Group from 2012 through 2023, whose "leadership in creating post-quantum cryptography standards has positioned her as a key figure in the field"; and whose "involvement with IEEE-SA, ISO, and other standards organizations highlights her active role in shaping the future of cryptographic and security standards globally". See the word "ISO" there?

Here's another fact that ISO has made public on the web page for ISO project 86890: the project entered stage "10.99, New project approved" in May 2023. Remember that the project also includes Kyber (ML-KEM). Meanwhile NIST had announced in July 2022 that it would standardize Kyber, but it didn't release the initial public draft of its Kyber standard until August 2023, and didn't release the final standard until August 2024.

So: Between May 2023 and August 2024, NIST and ISO were concurrently standardizing Kyber. Was this overlap risking incompatible Kyber standards, if ISO does issue a Kyber standard?

The ISO web page says that this ISO project entered stage 40.20, "DIS ballot initiated", on 19 February 2025. Under ISO rules, the DIS stage is after a draft standard has been approved by the relevant committee. This doesn't mean it's approved by ISO, but it means that the process is close to the end. So, even if NIST weren't involved in ISO and thus were unable to see some secret ISO changes to Classic McEliece before ISO issues its standard (if ISO does issue a standard), the ISO process would be finished long before NIST could get through its own standardization bureaucracy. For comparison, NIST says that it expects to issue an HQC standard in 2027.

I'm not saying that incompatibilities are impossible. For example: ISO has a public rule saying that post-quantum cryptosystems should reach 2^128 post-quantum security. Grover's algorithm then means that mceliece3* doesn't qualify. The team recommends mceliece6* in any case for long-term security. What happens if NIST claims that, for performance, it's willing to standardize only mceliece3*?

What I'm saying is that potential incompatibilities aren't connected to the timeline. NIST's claim of a connection doesn't match the observed facts.

Deployment. Back in 2021, NIST announced an October 2021 deadline for round-3 input on its Post-Quantum Cryptography Standardization Project. After that, NIST should have immediately announced selections for standardization, including Classic McEliece standardization, along with committing to keeping the specs stable so that people could stop worrying about potential incompatibilities. To the extent that people are waiting for NIST, every day of delay is another day of user data being given to attackers.

Instead NIST burned a large chunk of a year negotiating what eventually turned out to be two Kyber patent licenses. NIST admitted in April 2022 that "the delay is not due to technical considerations". In July 2022, NIST issued a report selecting only Kyber for standardization, while delaying consideration of Classic McEliece:

NIST is confident in the security of Classic McEliece and would be comfortable standardizing the submitted parameter sets (under a different claimed security strength in some cases). However, it is unclear whether Classic McEliece represents the best option for enough applications to justify standardizing it at this time. For general-purpose systems wishing to base their security on codes rather than lattices, BIKE or HQC may represent a more attractive option. For applications that need a very small ciphertext, SIKE may turn out to be more attractive. NIST will, therefore, consider Classic McEliece in the fourth round along with BIKE, HQC, and SIKE. NIST would like feedback on specific use cases for which Classic McEliece would be a good solution.

The phrase "applications that need a very small ciphertext" appears to be alluding to another comment in the report citing PQ-WireGuard. Recall that PQ-WireGuard is an example of a protocol that relies on the smallness of Classic McEliece ciphertexts to fit into Internet packet-size limits.

The same report says the following: "One of the difficult choices NIST faced was deciding between KYBER, NTRU, and Saber ... With regard to performance, KYBER was near the top (if not the top) in most benchmarks." NIST didn't claim that any applications "need" Kyber's speed advantage. TLS ephemeral keys are the main focus of Kyber benchmarks, but the CECPQ2 results showed that NTRU already had negligible costs for TLS ephemeral keys. NIST also didn't explain why it was limiting consideration of Classic McEliece's ciphertext-size advantage to applications that "need" small ciphertexts.

A few weeks after NIST's report said "For applications that need a very small ciphertext, SIKE may turn out to be more attractive" (not that SIKE's ciphertexts were as small as Classic McEliece ciphertexts!), SIKE was publicly smashed. NIST didn't issue a statement saying what its revised plan was for handling applications that "need" a very small ciphertext. Whatever exactly this "need" criterion means, NIST's attention to the criterion seems remarkably selective.

NIST did repeat its request for Classic McEliece feedback on its pqc-forum mailing list. NIST received a variety of responses giving

You can see many of the responses in the same thread. There were also more responses later. For example, remember the quote that I gave above from John Mattsson, the one starting "We strongly think NIST should standardize Classic McEliece, which has properties that makes it the best choice in many different applications. We are planning to use Classic McEliece" and then explaining why? That was part of an official comment filed with NIST.

NIST invited submission teams to provide 60-minute seminar presentations in September 2024. There have been various Classic McEliece presenters; I happened to be the presenter for this talk. I think the talk came out well. Within the talk, one slide was on "Classic McEliece software for more and more environments", including several third-party examples. The next slide was on "Examples of McEliece applications", including existing deployments of Classic McEliece in high-speed optical networks, hardware security modules, the Mullvad VPN, and the Rosenpass VPN, just in case NIST had somehow managed to miss these. Everything was backed by links, of course, and I linked to the slides two days later in a message to NIST's mailing list.

For some reason, NIST didn't provide the video until April 2025, and still hasn't posted the video on its own web page. NIST posted video for every previous seminar talk much faster than that.

Anyway, what did NIST say in its 2025 report about all this feedback? NIST started by repeating its original request in the past tense:

NIST requested feedback on specific use cases for which Classic McEliece would be a good solution.

Obviously NIST then cited the responses, and said that, yes, there are already various Classic McEliece deployments such as Mullvad, plus strong interest in Classic McEliece standardization.

Oops, wait. I've just double-checked the report, and I have to withdraw this "Obviously" sentence:

Here's the full paragraph from NIST's report (emphasis added):

In IR 8413 [2], NIST requested feedback on specific use cases for which Classic McEliece would be a good solution. Responses noted that Classic McEliece may provide better performance than BIKE or HQC for applications in which a public key can be transferred once and then used for several encapsulations (e.g., file encryption and virtual private networks [VPNs]) due to its small ciphertext size and fast encapsulation and decapsulation. There was also some interest in Classic McEliece based on the perception that it is a conservative choice. However, the interest expressed in Classic McEliece was limited, and having more standards to implement adds complexity to protocols and PQC migration.

This paragraph isn't even internally coherent: if, as NIST claims, people won't use Classic McEliece, then how exactly would standardizing it add complexity to protocols, or to the migration process? Also, how do we reconcile this with NIST eagerly churning out standards for post-quantum signatures, and, beyond post-quantum cryptography, similarly having many different options for some other cryptographic tasks?

But there are much bigger problems with NIST's paragraph. The two parts that I've put in boldface are outright false information to the reader about the feedback that NIST had received. For example, anyone who reads Mattsson's comment sees that the words "best performance" were comparing to all other KEMs, including Kyber (ML-KEM), not just BIKE and HQC; and sees that "best choice" was a definite statement about the listed applications, not merely a "may".

Maybe even more important is the deception by omission. NIST asked for "feedback on specific use cases for which Classic McEliece would be a good solution". NIST received feedback about existing Classic McEliece deployments, specific use cases where Classic McEliece was already working well, but hides this fact from the reader.

It's also fascinating how NIST's report avoids the standard terminology of static keys vs. ephemeral keys. Except for a slip-up in one line on page 12, NIST renames ephemeral keys as "general applications" (or similar phrases such as "most common applications" or "general-purpose") and renames static keys as some much longer, much more obscure-sounding text (such as "applications in which a public key can be transferred once and then used for several encapsulations"), giving readers the impression that the only important keys are ephemeral keys. Nice job, MiNISTry of Truth.

NIST's official project rules said that "NIST intends to standardize post-quantum alternatives to its existing standards for digital signatures (FIPS 186) and key establishment (SP 800-56A, SP 800-56B)". Remember that 800-56A is NIST's DH standard, with static keys all over the place. How does NIST think we should handle those? Answer: Pretend they don't exist.

NIST's 2025 report cites a paper on "Post-quantum XML and SAML Single Sign-On". NIST claims that Classic McEliece has "much larger data sizes" than BIKE for XML; similarly for SAML SSO. But this can't be right: these are static-key applications where the key doesn't have to be sent repeatedly. The paper itself says "if we removed the public key from the KeyInfo element, Classic McEliece would be the most bandwidth-efficient XML public encryption algorithm". There was also a prompt followup on NIST's mailing list:

It is a shame that Classic McEliece wasn’t selected, as it seems eminently suitable for SAML/OIDC and other SSO/Federation use cases where keys are often established out of band and changed rarely. ... The KeyInfo is entirely optional in SAML (https://www.w3.org/TR/xmlenc-core/#sec-Extensions-to-KeyInfo) and including the public key in it makes no sense at all.

NIST still hasn't issued an erratum. When NIST thinks an application is using ephemeral keys, it highlights Classic McEliece's cost disadvantage; when NIST learns that the same application is actually using static keys and showing a Classic McEliece cost advantage, NIST stays silent.

Ultimately what matters is that Classic McEliece is in fact being used, protecting more and more user data. Instead of accurately reporting this, NIST makes it sound as if people merely said that Classic McEliece's security features and efficiency features might be of interest.

Interlude, note 1: one performance number. Pages 5 through 7 of NIST's 2025 report say that they present "representative benchmarks" and sizes for BIKE, HQC, and Classic McEliece. The report also makes various comments about the numbers, so it's not as if this was just a paste-and-forget table.

In particular, the table labeled "Performance of Classic McEliece in thousands of cycles on x86_64 [1]" reports "114 189" kcycles, more than 100 million cycles, for mceliece348864f key generation.

I said above that mceliece348864f key generation takes around 30 million Skylake cycles. Why is NIST reporting "benchmarks" that are 4x slower than the readily verifiable Classic McEliece software speeds?

Could this be because of differences in clock speeds or in the number of cores? No. 30 million is a single-core cycle count.

Could it be because of other CPU differences, since Intel and AMD keep improving the number of operations per cycle? No. Reference "[1]" in NIST's report says "Open quantum safe (OQS) algorithm performance visualizations. Available at https://openquantumsafe.org/benchmarking." Poking around a bit starting from that link shows that these are numbers for "Intel(R) Xeon(R) Platinum 8259CL CPU", whose microarchitecture is Cascade Lake, which has only minor cycle-count differences from Skylake.

Oooh, wait, I have an idea. Could NIST's 4x exaggeration of the Classic McEliece software timings have something to do with the very first paragraph on the web page that NIST cites? Here's the paragraph (boldface in original):

These pages visualize measurements taken by the now-defunct OQS profiling project. This project is not currently maintained, and these measurements are not up to date.

An April 2024 archive of the same page shows that the warning was already there at that point.

The Classic McEliece software speeds are the result of hard work over many years by Tung Chou, another member of the Classic McEliece team. Almost all components of the official Classic McEliece software are from him. One of his speedups was much faster key-generation code published in 2019; this speedup was also reported in the 2020 Classic McEliece documentation.

It is astonishing to see NIST issuing a report in 2025 with benchmarks of code that's six years out of date, and presenting those as benchmarks of the 2022 Classic McEliece submission, especially when the source that NIST cites is a page that says at the top that it's presenting obsolete measurements from a defunct benchmarking project.

I pointed this out as part of a public message a month ago on NIST's pqc-forum mailing list. NIST still hasn't apologized, and still hasn't issued a correction to its report.

Interlude, note 2: FrodoKEM. There are other victims of NIST's evident unwillingness to stick to the facts of how cryptography actually performs inside applications. Let's look at FrodoKEM.

In 2020, NIST's reason for eliminating FrodoKEM seemed to be that, since FrodoKEM used 20000 bytes plus 2000 kcycles for one-time key exchange, FrodoKEM did not have "acceptable performance in widely used applications overall".

In 2025, regarding BIKE and HQC, NIST wrote that it "found it likely that either performance profile would be acceptable for most general applications". For the same situation of one-time key exchange, HQC-128 uses almost 7000 bytes and almost 700 kcycles, according to NIST's numbers.

So NIST is saying that 7000 bytes and 700 kcycles are acceptable for "most general applications", while 3x as much is unacceptable. Where's the justification for this?
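For scale, here's the same dollar translation I used above for file encryption, applied to these one-time-key-exchange numbers; this is my own sketch using the earlier conversion rates, not anything in NIST's report:

    # One-time key exchange, in microdollars, using ~2^51 cycles/dollar and
    # ~2^40 bytes/dollar as above, and the byte/kcycle figures just quoted.
    cycles_per_dollar = 2**51
    bytes_per_dollar = 2**40

    for name, (nbytes, kcycles) in {
        "FrodoKEM (2020 numbers)": (20000, 2000),
        "HQC-128 (2025 numbers)": (7000, 700),
    }.items():
        cost = nbytes / bytes_per_dollar + kcycles * 1000 / cycles_per_dollar
        print(name, "%.4f microdollars" % (cost * 1e6))
    # both are small fractions of a microdollar per key exchange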

There are many different applications. Budgets vary from one application to another. Data volume varies from one application to another. The amount of data per public-key operation varies from one application to another. Doing the work to build a chart of acceptable public-key costs per application should end up with tons of variation from one application to another. An occasional application happening to have a limit that's below FrodoKEM and above HQC wouldn't be a surprise, but how are most applications supposed to end up in that interval?

Surely an agency deciding to throw FrodoKEM away on the basis of supposed acceptability in applications should be able to justify this. So, in 2020, I filed an official comment to request explanation. NIST's complete answer was the following:

While it is not possible to speak for what every user of our standards would or wouldn't find “acceptable”, there is a pretty large difference between the performance of Frodo on the one hand and Kyber, NTRU, and Saber on the other hand. We are therefore more confident that Kyber, NTRU, or Saber will be considered “acceptable” for most users than that Frodo will.

In other words, NIST didn't investigate the facts: it simply postulated that microbenchmark differences are important, so that it could use those differences to make a decision. Might as well also select benchmark numbers from last decade to exaggerate the differences and make the decision even easier, right?

Security: overview. NIST's selection reports are haphazard collections of statements about each candidate. There aren't even comparison tables beyond microbenchmarks, never mind statements of decision models explaining how the comparison factors were used. It is, in particular, unclear whether or how NIST's comments on Classic McEliece security factored into its decision to delay Classic McEliece standardization.

Announcement slides from NIST's Dustin Moody ask "Classic McEliece or not?" (separately from "BIKE or HQC?"). The negative answer is preceded by some performance statements, a bullet item "Would it be used?" (again ignoring the feedback to NIST saying that it's already being used), an indented bullet item "Limited interest", and a bullet item "ISO standardization". Security isn't mentioned. NIST's 2025 report says that NIST "remains confident in the security of Classic McEliece".

On the other hand, NIST's report is also full of misinformation about the security of Classic McEliece. Even if this misinformation wasn't a factor for NIST, it should be corrected for the record, so that it doesn't pollute other decision-making processes, processes that are more concerned with security than NIST is.

The general pattern of the misinformation is as follows. There are two different statements, A and B. NIST ignores the Classic McEliece security analysis, which says A. NIST leads the reader to believe that the security analysis instead says B, and that attacks have forced changes in B. Meanwhile the reality is that the attacks haven't forced changes in A. NIST leaves the reader falsely believing that attacks have forced changes in the Classic McEliece security analysis.

Let's look at how this works. I'll start with an example where the deception comes from an indefensible ambiguity in NIST's text, and then I'll move on to examples where NIST is much more explicitly saying things that simply aren't true.

Security: multi-ciphertext attacks. The 2017 Classic McEliece documentation included the following paragraph specifically on multi-target attacks: "In a multi-message attack scenario, the cost of finding the private key is spread across many messages. There are also faster multi-message attacks that do not rely on finding the private key; see, e.g., [31] and [51]. Rather than analyzing multi-message security in detail, we rely on the general fact that attacking T targets cannot gain more than a factor T. Our expected security levels are so high that this is not a concern for any foreseeable value of T."

Now look at the following text from NIST's 2025 report:

In a multi-ciphertext setting, a further improvement [58] can reduce the cost of decoding a single ciphertext by a factor equal to approximately the square root of the number of ciphertexts.

This is ambiguous. Some readers will assume that the "further improvement" is a new attack improvement that was missed in the Classic McEliece security analysis. Other readers will assume the opposite, or will take the time to check and see that [58] is a paper from 2011, the newer of the two papers cited for exactly this topic in the 2017 Classic McEliece documentation. Most readers will miss the fact that Classic McEliece, from the outset, took the robust approach of recommending larger parameters.

It would have taken just a moment for NIST to remove the ambiguity: "In a multi-ciphertext setting, further improvements such as [58] can reduce the cost of decoding a single ciphertext by a factor equal to approximately the square root of the number of ciphertexts. Classic McEliece recommends high security levels to protect against the worst-case possibility of T-target attacks saving a factor T."
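To see how much margin sits behind "not a concern for any foreseeable value of T", here's a quick sketch using the estimates quoted earlier, with an assumed (generously large) number of targets:

    # Worst-case multi-target loss is a factor T; the multi-ciphertext attacks
    # discussed above save roughly sqrt(T).  Assume T = 2^64 targets and start
    # from the roughly 2^257 bit-operation estimate for mceliece6*.
    single_target = 257            # log2(bit operations) for mceliece6*, from above
    logT = 64                      # log2(number of targets); an assumption

    print("worst case, factor T: 2^%d" % (single_target - logT))        # 2^193
    print("sqrt(T)-type saving:  2^%d" % (single_target - logT // 2))   # 2^225
    # Both stay far above the ~2^141.89 bit operations to brute-force AES-128.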

Because readers won't necessarily get the wrong idea from NIST's existing text, NIST can try to evade responsibility for misleading other readers. But NIST's official project rules list "the quality of the analysis provided by the submitter" as a feature to be considered. NIST's Kyber selection praised the (supposedly) "thorough and detailed security analysis" in the round-3 Kyber submission, which was a radical rewrite of the failed security analysis in the round-2 Kyber submission. Surely it's even better that every version of the Classic McEliece documentation reported, and solidly protected against, multi-target attacks. NIST not only doesn't mention this, but it leads some readers to believe that multi-ciphertext attacks have forced changes in the security analysis.

Security: key attacks. A natural approach to attacking one-wayness is to try to recover the private key from the public key. There was a recent McEliece key-recovery competition with a $10000 prize. The competition was won by Lorenz Panny, whose attack streamlines Sendrier's "support splitting algorithm" from the turn of the century.

The attack took about 2^58 CPU cycles (with many bit operations per cycle) to recover very-low-security McEliece keys, specifically with parameters (n,t) = (253,5). If the attack were scaled up to McEliece's originally suggested (n,t) = (1024,50) then it would use more than 2^400 operations; that's a size where plaintext recovery was demonstrated in 2008. As I said earlier, McEliece key-recovery attacks are absurdly slow.

Now let's look at second place in the key-recovery competition: "Rocco Mora's team" used what are called "algebraic attacks" to solve challenge 68, with (n,t) = (57,2). Um, ok.

What do you do if you're a researcher who has tried to break something and failed? You publish a note saying what you tried and how unsuccessful it is, so that other people don't have to waste time repeating your work. There are quite a few McEliece papers like this, analyzing the costs of super-slow attacks.

Maybe you notice that the attack idea is more efficient for a different problem, so you write that up: for example, this attack doesn't break McEliece, but it breaks some other cryptosystem in the literature, which is an important warning for people who might have considered that cryptosystem. There are quite a few papers like this too.

Algebraic attacks aren't a new idea. They're a general-purpose technique for taking systematic combinations of all of the cryptosystem equations you can think of, hoping to obtain useful information. Usually this is super-slow, but occasionally it's faster, so people try it against all sorts of cryptosystems.

A 2010 paper from Faugère, Otmani, Perret, and Tillich tried this against the McEliece system, and failed, but pointed out that if t was very small then the algorithm distinguishes public keys from random.

The paper shamelessly labeled itself as a "breakthrough in code-based cryptography". See, if you can recover private keys then you can also distinguish public keys from random, so, in the opposite direction, maybe a public-key distinguisher will lead to a key-recovery attack! Also, if you can recover keys for larger t then you can probably recover keys for smaller t (since there's a broad pattern of larger t making attacks harder), so maybe an attack for smaller t will lead to an attack for larger t!

Many followup papers on algebraic attacks have similarly failed to break McEliece. One of my favorite examples, a 2023 paper from Couvreur, Mora, and Tillich, is like the 2010 paper in describing itself as a "breakthrough". On page 26, this paper estimates 2^2231 operations to distinguish mceliece348864 keys from random.

Remember that Classic McEliece builds QROM IND-CCA2 security purely from the one-wayness (OW-CPA) of the original McEliece system. A key-recovery attack breaks one-wayness (and breaks IND-CCA2); a mere key distinguisher doesn't. Furthermore, even if this distinguisher can somehow be upgraded to an attack, 2^2231 is vastly slower than other ways to break one-wayness. So there are two clear reasons that this paper isn't affecting the Classic McEliece security analysis.

But maybe these lines will be crossed! Maybe this insanely expensive public-key distinguisher will lead to a feasible key-recovery attack!

Look at this: a 2024 paper called "The syzygy distinguisher" has, on page 26, an estimate of only 2^529 operations to distinguish mceliece348864 keys from random! That's clearly far below 2^2231! Best-paper award!

Okay, okay, 2^529 is still vastly slower than an easy seed search reviewed in the security guide and the 529 report. But maybe the next paper will be faster!

Meanwhile algebraic attacks have crossed the line between public-key distinguishers and key-recovery attacks, as illustrated by Mora's team successfully demonstrating an algebraic attack for (n,t) = (57,2). Okay, that's a solid step backwards from support splitting, but maybe the next paper will be faster!

See how the same thing can be said about any failed attack? It's content-free to say that maybe there will be a followup that reduces the security of the system. What matters for risk analysis is that a bunch of people have been publicly trying and failing for many years to reduce the McEliece security level, looking closely at every aspect of the McEliece system, while people keep succeeding in reducing the security level of lattice cryptosystems, even while many attack avenues against those systems remain unexplored.

What isn't content-free is to take a failed attack against Classic McEliece and falsely present it as a successful attack against Classic McEliece. This brings me to NIST.

NIST's 2025 report describes the assumption "that row-reduced parity check matrices for the binary Goppa codes used by Classic McEliece are indistinguishable from row-reduced parity check matrices for random linear codes of the same dimensions", as if Classic McEliece relied on this key-indistinguishability property.

This is a strawman. The Classic McEliece security guide explicitly obtains QROM IND-CCA2 security purely from one-wayness. There's even a paragraph commenting on key indistinguishability and separating it from what Classic McEliece needs ("perhaps there are distinguishers even if OW-CPA is secure"). So NIST is misrepresenting what Classic McEliece is relying upon.

The 2^529 attack still doesn't break this strawman assumption: it's ludicrously slow. NIST suppresses the 2^529 information, and claims that there has been "significant progress in cryptanalysis techniques that are applicable to key recovery and the related problem of distinguishing a Goppa code from a random linear code", citing the syzygy-distinguisher paper and a few other recent papers, not mentioning that these are more of the failures to break Classic McEliece.

NIST admits that algebraic attacks are "far from concretely affecting the security of the submitted parameter sets of Classic McEliece". This doesn't change the fact that the reader of NIST's text is left believing, falsely, that attacks have improved against something Classic McEliece is assuming to be hard.

Security: guarantees. NIST caps off its discussion of algebraic attacks by bringing up another strawman, claiming that there's an "argument that the long-term security of Classic McEliece is guaranteed by its long history of cryptanalysis". (NIST writes that algebraic attacks "somewhat weaken" this argument.) The reader understands this to be an argument made in the Classic McEliece documentation.

Here's what the Classic McEliece rationale actually says:

The McEliece system is one of the oldest proposals, almost as old as RSA. RSA has suffered dramatic security losses, while the McEliece system has maintained a spectacular security track record unmatched by any other proposals for post-quantum encryption. This is the fundamental reason to use the McEliece system.

This is saying that Classic McEliece's security track record is better than all of the other options. The team has submitted ample documentation of ways to measure this. Instead of reporting this argument, NIST grossly misrepresents it and pretends that Classic McEliece is living in some fantasy world of guaranteed security.

NIST concludes that it "remains confident in the security of Classic McEliece, although recent progress in cryptanalysis somewhat undermines the case for treating it as an especially conservative choice". But the recent papers that NIST uses to impugn the Classic McEliece track record are, in fact, failures to reduce Classic McEliece's security level. Meanwhile NIST continues to claim confidence in Kyber, and ignores recent papers that reduce the security of Kyber. So NIST is getting the risk comparison backwards.

Security: comparing assumptions. There's another way that NIST's 2025 report misrepresents assumptions. This is an exception to the pattern of NIST telling the reader that attacks have forced changes in the Classic McEliece security analysis: instead this one is misinformation about the relationship between assumptions made by different systems. Specifically, NIST writes the following:

Unlike the other code-based candidates, the only coding-theory hardness assumptions required by HQC’s security proof are parameterizations of the decisional Quasi-Cyclic Syndrome Decoding (QCSD) assumption. BIKE additionally assumes the hardness of Quasi-Cyclic Codeword Finding (QCCF), and Classic McEliece requires assumptions concerning binary Goppa codes [27, 28].

The reader understands this to mean that HQC is making solely this "QCSD" assumption; BIKE is assuming "QCSD" plus "QCCF"; and Classic McEliece is assuming "QCSD" plus "assumptions concerning binary Goppa codes".

I won't comment on BIKE here, but NIST's statement about Classic McEliece is flatly wrong. The simple, explicit, amply documented fact is that the Classic McEliece security analysis obtains QROM IND-CCA2 purely from one-wayness of the original McEliece system.

What the reader naturally concludes from NIST's statement is that every way that HQC could fail would also be a failure in Classic McEliece (and not vice versa). Here are four counterexamples, ways that HQC can fail with no effect on Classic McEliece:

The only correct part of what NIST writes is that the McEliece assumption involves Goppa codes while the HQC assumption doesn't.

Security: category assignments. NIST's official project rules said that submissions have to be at least as secure as AES-128 key search "with respect to all metrics that NIST deems to be potentially relevant to practical security"; that "NIST intends to consider a variety of possible metrics"; and that "NIST will also consider input from the cryptographic community regarding this question".

NIST issued "preliminary guidance" with a table of estimated AES/SHA "classical and quantum gate counts" depending on a limit on "circuit depth". NIST's SHA numbers are highly inaccurate unless one adds further limits to account for how unrealistic a memory-access "gate" is. As another complication, NIST noted that "the cost per quantum gate could be billions or trillions of times the cost per classical gate", which is true but makes it even more difficult for anyone to figure out what NIST's requirements actually mean.

Given the lack of clarity regarding the requirements, it would be a big mistake for any cryptosystem to try to work backwards from a security level to a parameter set targeting exactly that security level. Reaching that security level could easily be spoiled by any minor change in how security is measured, creating a fake perception of attack improvements, even for a super-stable cryptosystem. NIST's ambiguities weren't minor: look at how NIST claimed in August 2016 that SHA-256's post-quantum security level was 80 bits and then claimed in December 2016 that it was 146 bits, in both cases referring to the cost of finding SHA-256 collisions.

Classic McEliece never made the mistake of working backwards from security levels. Remember that Classic McEliece instead chose parameters on the basis of size limits, a much more robust approach. Also, it was always clear that security was comfortably above AES-128, plus NIST's rules said that NIST would "consider input" on how to measure security, plus NIST had written separately that "We're not going to kick out a scheme just because they set their parameters wrong".

Unfortunately, NIST was requiring submissions to say not just whether they were above the AES-128 "floor", but to pick one of five "categories" for each parameter set: above AES-128, above SHA-256, above AES-192, above SHA-384, or above AES-256.

So the 2017 Classic McEliece documentation summarized what was known about the cost of breaking the mceliece6960119 parameter set, explaining that the comparison to AES-256 depended on the cost metric. Specifically, mceliece6960119 is somewhat below AES-256 as measured by bit operations, but, after explaining why other metrics differed, the documentation concluded as follows: "We expect that switching from a bit-operation analysis to a cost analysis will show that this parameter set is more expensive to break than AES-256 pre-quantum and much more expensive to break than AES-256 post-quantum." Subsequent analysis has been in line with this expectation.

Analogous comments apply to the mceliece460896 parameter set that was added in 2019. This parameter set is below AES-192 in bit operations, but accounting for memory access and quantum computing should easily push it above AES-192. (Non-quantum attacks won't reach AES-192 in the foreseeable future, whereas quantum attacks might, so the quantum comparison is more important than the non-quantum comparison.) For mceliece348864, the story is simpler, since mceliece348864 is above AES-128 in bit operations.
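
To make the "category" arithmetic concrete, here's a rough Python sketch, not taken from any submission document: it maps a bit-operation exponent onto the five floors, using approximate exponents for NIST's reference attacks. The example input is a placeholder, not an attack claim, and the point of the paragraphs above is that the answer moves once memory access is priced realistically.

    # Rough sketch: map a log2 attack-cost estimate onto the five "category"
    # floors. The floor exponents are approximate classical gate counts for
    # NIST's reference attacks; treat them as illustrative.
    FLOORS = [  # (category, reference attack, approximate log2 gates)
        (5, "AES-256 key search", 272),
        (4, "SHA-384 collision",  210),
        (3, "AES-192 key search", 207),
        (2, "SHA-256 collision",  146),
        (1, "AES-128 key search", 143),
    ]

    def category(log2_attack_cost):
        """Highest category whose floor is at or below the estimate."""
        for cat, ref, floor in FLOORS:
            if log2_attack_cost >= floor:
                return cat, ref
        return 0, "below all floors"

    # Placeholder example: an estimate of 2^180 bit operations lands in
    # category 2, while the same attack charged realistically for memory
    # access could land in category 3 or higher.
    print(category(180))   # -> (2, 'SHA-256 collision')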

Now let's look at what NIST's 2025 report says about this. First of all, instead of acknowledging the robust way that Classic McEliece actually chose parameters, NIST again and again falsely indicates that Classic McEliece is mapping from target security levels backwards to parameters:

But Classic McEliece never made this mistake. (Also, NIST's counting is wrong: for example, did NIST not notice mceliece8192128?)

Now here's the NIST claim that makes clear why this matters:

Independent estimates [56, 71] of the cost of information set decoding algorithms have long suggested that Classic McEliece’s parameter sets (i.e., mceliece460896 and mceliece460896f) that claim Category 3 security fall short of their security target. However, NIST remains confident that these parameter sets at least meet the criteria for Category 2 security.

The papers that NIST cites here are papers from 2021 and 2023 that I mentioned above as making an old 10-bit mistake but otherwise being in line with CryptAttackTester.

What NIST is telling the reader is that Classic McEliece screwed up its security analysis so badly that mceliece460896 falls "short of its security target", as shown by two independent papers. It doesn't really matter whether this is because of missing speedups or because of misanalyzing the known algorithms: what matters is the instability, the idea that the Classic McEliece security estimates can't be trusted.

But Classic McEliece didn't screw up. The documentation has always said that the comparisons to AES depend on the choice of cost metric, and has always explained the most important reasons for this. In 2022, Classic McEliece added a dedicated security guide, which spent two pages on the AES comparisons that NIST was asking for, going through the available numbers and drawing the following conclusions:

Since the underlying facts have not changed, the submission continues to assign its selected parameter sets to "categories" 1, 3, 5, 5, 5 respectively. As before, these assignments are based on counting realistic costs for memory.

If NIST instead decides to make "category" assignments on the basis of bit operations with free memory access, then the correct assignments will instead be 1, 2, 4, 4, 5. This does not reflect any instability in the Classic McEliece security estimates: the submission has always been careful to distinguish between these two different types of accounting for the costs of attacks.

NIST's 2025 report ignores everything the documentation says, and waves at various bit-operation counts to claim, falsely, that the "security target" for mceliece460896 has been broken.

For comparison, let's look at NIST's handling of Kyber. Kyber already in 2017 relied on "the cost of access into exponentially large memory" as part of arguing that kyber512, kyber768, and kyber1024 were harder to break than AES-128, AES-192, and AES-256 respectively. NIST's latest statement on the Kyber security level similarly relies on a discussion of "the cost of memory access in lattice reduction algorithms" to conclude that "NIST's best guess for the realistic cost of attacking Kyber512 is the equivalent of about 2^160 bit operations/gates, with a plausible range of uncertainty being something like 2^140 to 2^180" so it is "highly unlikely that the known sources of uncertainty are large enough to make Kyber512 significantly less secure than AES128".

If NIST were insisting on counting bit operations, and insisting on meeting its security "floor" rather than allowing a vague "significantly" escape hatch, then it would have to fall back to its "uncertainty window of 2^135 to 2^158" bit operations, and drawing conclusions about the comparison to AES-128 would be impossible, never mind the three newer papers knocking out other pillars of NIST's analysis.

The attack analyses for Classic McEliece are much more stable and precise than for Kyber. The dependence on memory-access costs is only for the comparisons to AES-192 and AES-256. But for Classic McEliece, NIST insists on counting bit operations, without an escape hatch.

Back in 2016, when NIST wrote that "We're not going to kick out a scheme just because they set their parameters wrong", it continued by saying "Depending on how far off the estimate was, and how unanticipated the attack, we may take it as a sign the algorithm isn't mature enough". NIST's "fall short" text leads readers to believe that there was such a failure in the security analysis for Classic McEliece, maybe not something big but something that makes Classic McEliece sound as shaky as lattices. The actual issue here is instead NIST flip-flopping on which cost metric it's asking people to use.

Coda. There's a long history of NIST standardizing cryptography later shown to be breakable, often under NSA influence, such as DES, DSA, and Dual EC.

NIST's public Dual EC post-mortem sounded like a real effort to improve NIST's processes in the interests of security. Many aspects of NIST's official rules for post-quantum standardization sounded great: "NIST will perform a thorough analysis of the submitted algorithms in a manner that is open and transparent to the public"; "the goal of this process is to select a number of acceptable candidate cryptosystems for standardization"; "in some cases, it may not be possible to make a well-supported judgment that one candidate is 'better' than another"; "NIST believes it is critical that this process leads to cryptographic standards that can be freely implemented in security technologies and products"; "the security provided by a cryptographic scheme is the most important factor in the evaluation".

NIST secretly described the Dual EC post-mortem as reputation management:

Managed the PR and Reputational issues raised with “Snowden” allegations of NIST corrupting standards and its work. Rebuilt international trust in NIST encryption processes and faith in NISTs ability to adhere to our core values. This required working with a special VCAT subcommittee, open publishing of all related work items, responding to multiple FOIAs, re-setting MOUs and interactions with the NSA and active discussions with standards bodies and international partners.

For post-quantum cryptography, if security were the most important factor in the evaluation then NIST would have ended up telling people to use SPHINCS+ unless they really need the performance of Dilithium, rather than telling people to use Dilithium unless they really need the confidence of SPHINCS+. Secretly, NIST marked many of its project documents as "Not for public distribution", and was continually talking to NSA, which was pushing lattices and then pushing specifically Kyber.

They say that one should never attribute to malice what can be adequately explained by stupidity. It's still amazing how many mistakes NIST has made in favor of Kyber.

Recommendations. I hope that Kyber isn't breakable. But the core lattice one-wayness attacks and analyses are very complicated and keep changing, with apparently neverending opportunities for further speedups. Will the cliff stop crumbling before Kyber falls off the edge? Also, when cryptanalysts are finding better attacks against these core problems, what's their incentive for studying other aspects of the Kyber attack surface, such as the possibility of Kyber's QROM IND-CCA2 security level being much lower than its one-wayness security level?

I would rather use a cryptosystem where the analyses are simpler and more stable, a cryptosystem where the attack surface is smaller and more thoroughly explored.

I understand how the recommended 1MB Classic McEliece key size makes people worry about costs for some high-volume applications. But the sizes of keys and ciphertexts have to be weighted by how often those are sent. Tallying costs for one static-key application after another shows again and again that Classic McEliece is less expensive than Kyber.

Ephemeral keys are different, but it makes no sense to allow the pursuit of ephemeral-key performance to drag down static-key performance, and, more importantly, to drag down security for applications that can afford any of these cryptosystems. Remember that sending a high-security Classic McEliece key through the Internet today costs only about a microdollar.
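
Here's a back-of-the-envelope sketch of that tally, with numbers I'm filling in for illustration: approximate mceliece6960119 and kyber768 sizes, a hypothetical static-key scenario with one long-term key and a million ciphertexts, and an assumed bulk-transit price of roughly a dollar per terabyte.

    # Back-of-the-envelope sketch; sizes are approximate, and the scenario
    # and the ~$1/terabyte transit price are assumptions for illustration.
    MCELIECE = {"pk": 1_047_319, "ct": 194}    # bytes, mceliece6960119 (approx.)
    KYBER    = {"pk": 1_184,     "ct": 1_088}  # bytes, kyber768 (approx.)
    DOLLARS_PER_BYTE = 1 / 1e12                # assumed ~$1 per terabyte

    def traffic(sizes, keys_sent, ciphertexts_sent):
        """Total bytes on the wire for a static-key deployment."""
        return sizes["pk"] * keys_sent + sizes["ct"] * ciphertexts_sent

    # Hypothetical deployment: one static key fetched once, then a million
    # ciphertexts sent to it.
    for name, sizes in ("mceliece6960119", MCELIECE), ("kyber768", KYBER):
        total = traffic(sizes, keys_sent=1, ciphertexts_sent=1_000_000)
        print(name, total, "bytes,", "%.6f" % (total * DOLLARS_PER_BYTE), "dollars")

    # And sending the ~1MB key by itself, at the same assumed price:
    print("%.1e" % (MCELIECE["pk"] * DOLLARS_PER_BYTE), "dollars per key")

In this kind of scenario the ciphertexts dominate the bill, which is exactly why the 1MB key ends up being the cheap part.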

So my recommendation is simple. Use Classic McEliece wherever you can. For situations where you can't, use lattices; that's higher risk, but hopefully holds up. Finally, to limit the damage in case of cryptosystem failures or software failures, make sure to roll out PQ as ECC+PQ.
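
To be concrete about "ECC+PQ": the usual shape is a hybrid, where both key exchanges run and both shared secrets, plus the transcript, go through a KDF, so an attacker has to break both. Here's a minimal sketch of that combiner shape, with placeholder inputs rather than a real X25519 or KEM library, and without claiming to match any particular standardized combiner.

    import hashlib

    # Minimal sketch of an "ECC+PQ" hybrid: derive the session key from both
    # shared secrets plus the transcript, so breaking the session requires
    # breaking both components. Inputs here are placeholders; a real
    # deployment would take them from an actual ECDH and an actual PQ KEM
    # and would follow a vetted combiner construction.
    def hybrid_session_key(ecc_shared: bytes, pq_shared: bytes,
                           transcript: bytes) -> bytes:
        h = hashlib.sha3_256()
        for part in (ecc_shared, pq_shared, transcript):
            # Length-prefix each input so the encoding is unambiguous.
            h.update(len(part).to_bytes(8, "big"))
            h.update(part)
        return h.digest()

    key = hybrid_session_key(b"ecc-shared-secret-placeholder",
                             b"pq-shared-secret-placeholder",
                             b"protocol-transcript-placeholder")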


Version: This is version 2025.04.23 of the 20250423-mceliece.html web page.