Should you be investing time and effort in upgrading to post-quantum cryptography? You may have encountered the following arguments for minimizing the investment.
The cost-is-high argument. You'll be discouraged from taking action to upgrade if you're surrounded by people saying how hard this is. For example: "Post-quantum cryptography is the biggest cybersecurity transition ever!" (If you haven't heard this sort of thing: here are some random examples.)
People will often make the transition sound complicated the same way they can make anything sound complicated, namely by adding layers of bureaucracy. Here's how I summarized this in a 2023 talk to the Federal Reserve TechLab: "We'll form a committee to devise an action plan to inventory current usage of cryptography to support future assessment of the steps needed to build a best-practices playbook for meeting the performance challenges of upgrading to post-quantum cryptography, with a target date after I retire."
You'll be much more motivated to upgrade if you instead hear examples of post-quantum crypto already being deployed. It can't be that difficult if it's already working for millions of users.
The popular OpenSSH remote-administration tool rolled out post-quantum crypto in 2022. Google rolled out post-quantum crypto for its internal communications later the same year. Cloudflare, which hosts a considerable fraction of the Internet's web sites, reports that 33% of its connections are using post-quantum crypto as of January 2025.
I'll take a moment here to advertise some of my own work with various collaborators. If you're using Linux as a sysadmin or on the desktop, try our new easy-to-install PQConnect tool, which wraps end-to-end post-quantum cryptography around unmodified applications. If you're a developer, try out the simple API for libmceliece and libntruprime.
The benefits-do-not-exist argument. For the rest of this blog post, I'll focus on another way that you could be discouraged from taking action. The core benefit of upgrading is supposed to be improved security against quantum computers; but why should you bother putting any effort into that if you've heard that quantum computers simply won't work for the foreseeable future?
Deployment examples and easy-to-use software packages don't address this question. Looking more broadly at the "cybersecurity" landscape finds many examples of "security" mechanisms that don't actually provide security, that often actively damage security, and that distract us from making real progress. Why should we think that post-quantum crypto isn't one of these pseudo-security mechanisms?
Nowadays I often see people dodging this question by resorting to childish peer pressure: "All the cool kids are working on post-quantum crypto! Last kid to join is a loser!" Or an adult's form of peer pressure: "You'll get money in the following ways for working on post-quantum crypto!"
There are many papers on RC4; does this mean that we should use RC4? How about QKD? As for money, NSA paid the RSA company $10 million to deploy Dual EC as the default RNG in RSA's BSAFE library; for years this was secret, so it wasn't damaging the company's reputation; does this mean that deploying Dual EC was the right thing to do?
Maybe you're at a company that's already doing something about post-quantum cryptography because of current money flows, regulations, marketing, etc.; but you'd like to know whether to do more than the bare minimum, and as part of that you'd like to understand the future risks. You're hearing from quite a few sources that quantum computing is just hype and won't work. You'd like to know whether that's true.
What I'll do here is look at the content of some arguments that quantum computers won't break RSA-2048. In particular, I'll explain why you should disregard those arguments.
Arriving at the exponential-energy argument. Most of the arguments I'll look at are recent, but I want to start with an older example from my own experience, because this lets me illustrate not just what the argument is but also the natural thought process that leads to arguments like this.
Let's rewind to May 1994. Peter Shor gave an invited talk "Polynomial time algorithms for discrete logarithms and factoring on a quantum computer" at the first Algorithmic Number Theory Symposium.
I was one of the grad students attending. I had already spent a lot of time studying algorithms for integer factorization. I had already written my first paper on the topic. I was interested enough that I knew I would be writing more papers on the topic. But I couldn't see a reason to care about attacks using hypothetical computers.
Wait, what about "intercept now, decrypt later" attacks? Answer: The reality back then was a much faster attack called "intercept now, read now, and modify now if you want, because the data isn't encrypted or authenticated in the first place".
Sure, there was some cryptography, but it was limited by export controls and by patents. The recurring villains, namely NSA and the RSA company, were already working together: they had made a deal to promote low-security cryptography. NSA exempted 512-bit RSA and 40-bit RC4 from export controls. The RSA company sold 512-bit RSA and 40-bit RC4 for use in various products. A popular product called Lotus Notes (acquired by IBM in 1995) was advertised as "a pioneer in providing transparent strong RSA-based cryptography" even though its cryptography was overtly sabotaged.
By the turn of the century, the situation was different in some important ways. The DH patent had expired. The RSA patent had expired. The U.S. export controls had been declared unconstitutional. An RSA-512 challenge from the RSA company had been publicly factored.
On the other hand, low-security cryptography was solidly entrenched in protocols and software. People kept pointing to the costs of cryptography as a reason to continue limiting cryptographic security levels or to use no cryptography at all.
So, okay, I was working on something I could help with, namely reducing the costs of cryptographic software. For example, I set new speed records in 2001 for ECC using the NSA/NIST P-224 elliptic curve. (Review of my experience implementing the P-224 curve: Did not enjoy. Would not recommend. Implementing Curve25519 is simpler and less error-prone.)
But, wait, would quantum computers be built? If so then, no matter what virtues RSA and ECC had for addressing the immediate problem of improving cryptographic deployment, they would be broken by Shor's algorithm. What a disaster that would be!
Humans faced with disaster tend to optimistically imagine ways that the disaster will be avoided. Given the reality of more and more user data being encrypted with RSA and ECC, the world will be a better place if every effort to build a quantum computer runs into some insurmountable physical obstacle. Less power for the attackers! Longer-lasting benefits from the effort we've invested in RSA and ECC!
Shor's paper briefly mentions that "some proposed analog machines that seem able to solve NP-complete problems in polynomial time have required the machining of exponentially precise parts, or an exponential amount of energy". Shor's paper doesn't state the amount of energy used by Shor's algorithm. Aha, that's the cheat! The energy for Shor's algorithm must be exponential in the problem size!
Challenging the exponential-energy argument. Structurally, this exponential-energy argument consists of wishful thinking followed by confirmation bias. This doesn't meet scientific standards for drawing conclusions.
Scientists are free to formulate questions such as "Does Shor's algorithm use more than a polynomial amount of energy?". But drawing a conclusion such as "Yes, in fact an exponential amount of energy" or "No, it uses a polynomial amount of energy" requires following established procedures that are designed to catch errors.
If a statement X is false, are you going to catch the error by observing that the truth of X will make the world a better place? Nope.
Are you going to catch the error by picking evidence that seems to match X, such as the evidence that you used to formulate X in the first place? Nope.
So I found myself asking questions such as how much energy Shor's algorithm uses, and studying the literature on quantum computing, and attending a conference "Models of Quantum Computing" in October 2002.
By the end of 2002, I had concluded that the literature was describing in detail how to build a physical machine that will carry out Shor's algorithm. The machine is organized in space as an array of physical data-storage areas called "quantum bits", or "qubits". The computation is organized in time as a series of "cycles". Starting from the machine description and a textbook description of physics, you can calculate how the qubits evolve in each cycle. The number of cycles is bounded by a polynomial in the problem size, the number of qubits is bounded by a polynomial in the problem size, and each qubit uses a limited amount of energy on each machine cycle, so the total energy use is also polynomial. (Side note to the inevitable nitpickers: assume a cosmological constant compatible with the size of the computation, for example by plugging in the specific polynomials and problem sizes that matter for cryptography.) The placement of machine components doesn't have to be perfect: if the manufacturing meets particular tolerances then the algorithm will work.
A 2019 paper from Gidney and Ekerå presents an integrated description of a machine of this type (plus many improvements after 2002). Back in 2002, the material wasn't as nicely organized in the literature, but it was there, showing that the physical costs are bounded by a polynomial in the problem size.
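For concreteness, here's the shape of that energy calculation as a few lines of Python. The polynomial bounds and the per-qubit energy below are made-up numbers, purely to make the structure of the argument visible; they aren't taken from any paper.

```python
# Illustration only: if the qubit count and the cycle count are each bounded by
# a polynomial in the problem size n, and each qubit uses at most e_max joules
# per cycle, then the total energy is bounded by a product of polynomials,
# which is again a polynomial in n.

def energy_bound(n, qubit_bound, cycle_bound, e_max):
    return qubit_bound(n) * cycle_bound(n) * e_max

qubits = lambda n: 1000 * n        # hypothetical O(n) qubit bound
cycles = lambda n: 500 * n**3      # hypothetical O(n^3) cycle bound
print(energy_bound(2048, qubits, cycles, 1e-18), "joules: an upper bound, polynomial in n")
```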
We still had bigger problems with all the data that was unencrypted, or that was encrypted with low-security cryptography. But there was a path to addressing those problems, and then how would we deal with the risk of quantum computers being built?
I still wanted to believe that quantum computing would fail for some reason. Could there be an error or gap in the calculations in the literature? I certainly hadn't checked every detail. Could the textbook description of physics be missing some effect that would make the machine fail? Could the necessary manufacturing turn out to be so difficult that it wouldn't happen for the next 100 years? Maybe. But "maybe" isn't an excuse for a lack of preparation.
I coined the phrase "post-quantum cryptography" in 2003: "I'm thinking about publishing a paper on post-quantum cryptography. This isn't too early to start planning ahead for the very real possibility of quantum computers."
The static-predictions argument. Let's now look at more recent arguments that quantum computing won't work. I'll start with Mikhail Dyakonov's 2018 article "The Case Against Quantum Computing", with subtitle "The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables". I'll get back to the subtitle topics below, but let me first address a non-technical argument given in the article.
The article claims that "optimistic experts" estimate "5 to 10 years" for useful quantum computers, that more "cautious ones predict 20 to 30 years", and that "similar predictions have been voiced, by the way, for the last 20 years". (Note that the word "optimistic" here is treating quantum computers as a good thing.)
The article doesn't give any references justifying these claims about what has been predicted. So I wrote to Dyakonov. I asked whether there was any verifiable data to back up the "for the last 20 years" statement. I continued as follows: "I understand that 'voiced' might be referring to off-the-record conversations, but this isn't clear from what you write, and it's different from the current situation where predictions such as https://eprint.iacr.org/2015/1075.pdf are on the record from named experts." This 2015 reference is to an article by Mosca saying "I estimate a 1/7 chance of breaking RSA-2048 by 2026 and a 1/2 chance by 2031".
Dyakonov responded by claiming that one example was a 2002 roadmap that had been cited in his article. That roadmap stated an objective of developing "by 2012 a suite of viable emerging-QC technologies of sufficient complexity to function as quantum computer-science test-beds in which architectural and algorithmic issues can be explored". The roadmap also stated that this was "a desired outcome, not a prediction". So, no, this is explicitly not an example of a timeline prediction.
The number-of-variables argument. The Dyakonov article then summarizes some basic aspects of quantum mechanics. In particular, the normal mathematical description of the state of n qubits involves 2^n variables.
For example, for 4 qubits, there are 16 variables: v[0000], v[0001], v[0010], and so on through v[1111]. If you measure the 4 qubits, you'll see a string of 4 bits: 0000 with probability v[0000]^2, 0001 with probability v[0001]^2, and so on through 1111 with probability v[1111]^2.
The variables always have these squares adding up to 1, so the total probability is 1 as you'd expect.
(An equivalent description that I like to use in explaining quantum algorithms allows the sum of squares to be anything nonzero, and then divides each probability by that sum so that the total probability is again 1.)
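Here's a small Python sketch of that bookkeeping for 4 qubits, using the divide-by-the-sum convention just described; the random choice of variables is only there to have something concrete to print.

```python
import itertools, random

# 16 variables v[0000] through v[1111], one per possible 4-bit measurement result.
v = {bits: random.gauss(0, 1) for bits in itertools.product('01', repeat=4)}

total = sum(x * x for x in v.values())                  # need not be 1 in this convention
probs = {bits: x * x / total for bits, x in v.items()}  # divide so the probabilities sum to 1

assert abs(sum(probs.values()) - 1) < 1e-12
for bits, p in sorted(probs.items()):
    print(''.join(bits), round(p, 4))
```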
The normal mathematical description of the state of 1000 qubits involves 2^1000 variables. This number might sound crazy at first, but makes perfect sense once you realize that measuring 1000 qubits might produce any sequence of 1000 bits, making it natural to ask about the probability of each of the 2^1000 different possible outputs.
The article claims that this somehow doesn't make sense: "A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe. At this point in a description of a possible future technology, a hardheaded engineer loses interest".
But the computer is operating on only 1000 qubits. The word "process" gives the impression that the computer is tediously manipulating each of the 2^1000 variables; that's not true. Those variables are just components of a way to mathematically describe what's going on.
Let's ignore quantum computers for a moment. Think about software in your laptop that uses randomness to compute a string of 1000 bits. That's 128 bytes, a tiny fraction of the storage in your laptop.
There are 2^1000 possibilities for the string of 1000 bits. Mathematically, it's perfectly natural to ask about the probability of each of the 2^1000 different possible strings. Those probabilities are 2^1000 variables describing what has happened. If your laptop carries out more computations that modify the string of 1000 bits, then mathematically there are 2^1000 new variables describing the probabilities for each possibility for the new string.
Does this mean that your laptop is tediously manipulating 2^1000 variables? No. Your laptop is manipulating just 1000 bits.
I'm not saying that computation on qubits is the same as computation on random bits. When you look at the details, you see that quantum computation allows a useful extra trick, setting up interference patterns between positive and negative variables. But an argument saying "quantum computing is impossible because there are so many variables" can't be right: it also says that your laptop is impossible.
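To see the trick in its smallest form, here's a one-qubit sketch in Python. The operation below is the standard Hadamard step, which maps the pair of variables (a, b) to ((a+b)/sqrt(2), (a-b)/sqrt(2)); everything else is just printing.

```python
from math import sqrt

def hadamard(state):
    a, b = state                  # variables for outcomes 0 and 1
    return ((a + b) / sqrt(2), (a - b) / sqrt(2))

state = (1.0, 0.0)                # measuring would give 0 with probability 1
state = hadamard(state)           # (0.707..., 0.707...): 0 or 1, each with probability 1/2
state = hadamard(state)           # back to (1.0, 0.0): the negative variable cancels the positive one
print(state)

# A laptop juggling random bits can't do this: probabilities are never negative,
# so two 50/50 steps leave you at 50/50 instead of back at a definite answer.
```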
The error-correction argument. Dyakonov's article goes on to emphasize that physical devices have errors: "In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. ... Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions."
On 23 November 2018, the date that I wrote to Dyakonov, the paragraph ended even more strongly: "Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!"
In fact, the literature already contained extensive discussion of these crucial questions. For example, the surface code, an example of "quantum error correction", is a specific way to combine many physical qubits into a much more reliable "logical" qubit. There are detailed tables quantifying the error rate of the logical qubit, depending on the error rate of the physical qubits and the number of physical qubits used.
Surface codes work well if each physical operation has an error rate below 0.1%. Somewhat higher error rates than 0.1% can also work, although then the surface code needs more physical qubits for the same level of reliability of the logical qubit. Everything disintegrates for physical error rates around 1% or above: for such high error rates, the surface code is useless. In the opposite direction, lower physical error rates than 0.1% make the surface code more efficient.
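For concreteness, here's the rough rule of thumb that shows up in the surface-code literature, expressed as a few lines of Python. This is an approximation, not an exact formula: p is the physical error rate, the threshold is around 1%, and d is the code distance, with on the order of 2*d^2 physical qubits per logical qubit.

```python
# Rough surface-code rule of thumb (approximation, not exact): logical error
# rate ~ 0.1 * (p / p_th) ** ((d + 1) / 2) with threshold p_th around 1%.

def logical_error_rate(p, d, p_th=0.01):
    return 0.1 * (p / p_th) ** ((d + 1) / 2)

for p in (1e-3, 1e-4):            # 0.1% and 0.01% physical error rates
    for d in (9, 17, 25):
        print(f"p={p:.0e} d={d:2d} physical qubits~{2*d*d:4d} logical error~{logical_error_rate(p, d):.1e}")

# Near the 1% threshold the ratio p/p_th is close to 1, so increasing d stops
# helping; at 0.01% each extra unit of distance buys much more reliability.
```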
Dyakonov edited his article on 30 November 2018, publicly noting that "some readers pointed out to the author instances in the literature that had considered these issues". But the claim that there are "no clear answers" to the precision question is contradicted by the same papers. In short: 1.414 is fine.
Most papers on quantum algorithms don't consider error correction: they assume reliable operations on logical qubits. The quantum-computer engineer is responsible for providing those operations. As an analogy, programmers writing software for your laptop aren't thinking about transistors and electrical signals. The software assumes reliable operations on bits; the computer engineer is responsible for providing those operations.
This layering doesn't mean that errors are being ignored. It means that the task of error correction is being modularized. One layer handles error correction, so that higher layers can validly assume error-free operation.
Interlude: landing on the Moon. Imagine someone 70 years ago writing the following: "You think you can land on the Moon? Bollocks! All of your landings so far have been on Earth, and extrapolating the heights says it'll take you thousands of years to get to the Moon."
For the engineers at that point working on actually getting something to land on the Moon, the obvious first question is whether there's enough power to propel something that far from Earth, and the obvious second question is whether there's enough control to target the Moon. You can't succeed unless you have enough power and enough control.
Luna 2 had enough power and enough control to crash-land on the Moon in 1959. Apollo 11, with more control, landed humans safely on the Moon in 1969.
If you look at a graph of landing heights as a function of time, then you'll see Earth for many years but then suddenly (surprise!) a Moon landing. Before the Moon landing occurs, the graph is flat, making it useless for extrapolating the relevant technology development.
In other words: If you think there's some insurmountable obstacle to a Moon landing, and you're wrong about that, will you catch the error by looking at a graph of landing heights before the Moon landing occurs? Nope.
It makes much more sense to look at a graph measuring the available power, and a graph measuring control. Those graphs let you see intermediate steps in technology development.
The RSA-bits argument. Peter Gutmann posted slides "Why quantum cryptanalysis is bollocks" in 2024. I think it's fair to summarize what the slides say about quantum computers as follows: "You think you can use a quantum computer to break RSA-2048? Bollocks! All of your quantum factorizations so far have been tiny, and extrapolating the sizes says it'll take you thousands of years to get to RSA-2048."
For the engineers working on actually building a machine to break RSA-2048 with Shor's algorithm, the obvious first question is whether there are enough physical qubits to run the algorithm, and the obvious second question is whether the qubits are reliable enough. You can't succeed unless you have enough qubits and enough reliability.
There's actually a tradeoff curve between the number of qubits and the error rate of each qubit, for reasons noted in the above discussion of the surface code. For example, you can break RSA-2048 using millions of physical qubits each with a 0.01% error rate, or using tens of millions of physical qubits each with a 0.1% error rate.
Breaking RSA-2048 isn't exactly analogous to the Moon landing. There's a mini-Moon, RSA-1024, that's somewhat closer than RSA-2048. Current knowledge says that discrete logarithms on the NSA/NIST B-163 curve are even closer. A side advantage of using discrete logarithms as the first public demo is that the standard type of discrete-logarithm challenge, with both inputs generated as hashes, doesn't allow the same sort of cheating that factorization challenges do. Maybe the RSA company didn't erase the secret factors of their RSA-2048 challenge. Maybe the factors were stolen from them.
Anyway, the basic point about how to measure technological development is the same as for the Moon landing. A model of steady progress in the number of qubits and in the reliability of qubits implies that the graph of quantum factorizations will look like RSA-tiny, RSA-tiny, RSA-tiny, RSA-tiny, etc., and then suddenly RSA-1024, and soon after that RSA-2048. The initial period of useless demos is completely explained by the well-understood prerequisites for quantum error correction: we need a large enough collection of physical qubits with low enough error rates.
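Here's a toy model in Python of what such a graph looks like. The growth rates and the RSA-1024 thresholds are my own illustrative guesses (the RSA-2048 thresholds loosely echo the tradeoff mentioned above); the point is only the shape of the output, flat for years and then a sudden jump.

```python
def breakable_rsa_bits(qubits, error_rate):
    # Hypothetical thresholds: RSA-2048 with millions of 0.01%-error qubits or
    # tens of millions of 0.1%-error qubits; RSA-1024 assumed to need about half.
    if error_rate <= 1e-4 and qubits >= 4e6: return 2048
    if error_rate <= 1e-3 and qubits >= 2e7: return 2048
    if error_rate <= 1e-4 and qubits >= 2e6: return 1024
    if error_rate <= 1e-3 and qubits >= 1e7: return 1024
    return 0                                 # nothing beyond tiny demos

qubits, error = 1e3, 1e-2                    # illustrative starting point
for year in range(2025, 2046):
    print(year, int(qubits), f"{error:.0e}", breakable_rsa_bits(qubits, error))
    qubits *= 1.7                            # assumed steady growth, not a prediction
    error *= 0.8                             # assumed steady improvement, not a prediction
```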
Gutmann's RSA-bits argument is in the context of an understandable complaint about resources being taken away from stopping "the actual attacks that are happening". Certainly there are many other security problems to address. We'd like to believe that quantum computing is impossible and can simply be ignored. But this is just wishful thinking again; it isn't evidence. The only part of Gutmann's slides that actually addresses quantum computing is the RSA-bits argument.
Interlude: how non-quantum error correction works. Let's look at how your laptop sends data through Wi-Fi. Programmers talk about sending a "packet" of data, a string of bits, typically several thousand bits at once. What your laptop is actually sending through the air is a physical representation of those bits as radio waves.
The receiver doesn't receive exactly the same radio waves. There's noise getting in the way. Maybe bit 0, sent as wave 0, ends up being received as wave 0.01. That's okay: the receiver interprets that as bit 0.
Sometimes there's a bigger error. Maybe bit 0, sent as wave 0, ends up being received as wave 0.8, which the receiver interprets as bit 1. Oops.
Here's the first solution that you might come up with. The sender repeats each bit three times. Normally the receiver sees 000, meaning 0, or 111, meaning 1. If the receiver sees another pattern of three bits, it takes a majority vote of those three bits. For example, maybe 000 has the first bit flipped in transit to produce 100; the receiver will take a majority vote of 100 to recover 0.
What if the moment of noise stretches out slightly longer, flipping 000 to 110? Let's not put the repeated bits right next to each other. Instead let's take a packet with bits ABCDEFG (for example, 0101111) and send ABCDEFGABCDEFGABCDEFG (for example, 010111101011110101111). The receiver will take majority votes of the AAA bits, the BBB bits, etc.
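Here's that scheme as a few lines of Python, as a sketch of the idea rather than of any real radio protocol.

```python
def encode(packet):                   # "0101111" -> "010111101011110101111"
    return packet * 3

def decode(received):
    n = len(received) // 3
    copies = received[:n], received[n:2*n], received[2*n:]
    return ''.join(max('01', key=[c[i] for c in copies].count) for i in range(n))

sent = encode("0101111")
garbled = "10" + sent[2:]             # a brief burst of noise flips the first two bits
assert decode(garbled) == "0101111"   # each flipped bit is outvoted by its two distant copies
```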
Now a brief burst of noise isn't a problem. Could something else spoil the received data?
One answer would be a higher level of ongoing noise, producing such frequent errors that sometimes two errors happen to be at the right distance to spoil the majority votes. Maybe we should repeat each bit 5 times instead of 3 times, although that still won't be enough if the noise level is too high. Maybe we should work on reducing the noise level: move closer to the Wi-Fi router, for example.
Another answer would be noise that isn't so frequent but that happens to be synchronized with the error-correction details. A burst of radio noise could physically reverberate at the right frequency to spoil two of the AAA bits, or three of the AAAAA bits if we're repeating each bit 5 times.
There are fancier mathematical techniques for error correction. For example, using a "binary Goppa code", you can communicate 6528 bits as 8192 bits in a way that's guaranteed to be able to correct any pattern of 128 or fewer errors. The error-correction mechanism is more complicated than majority votes, but it isn't particularly expensive.
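The numbers fit together as standard binary-Goppa parameter arithmetic, with m = 13 bits per field element since 8192 = 2^13:

```python
n, m, t = 8192, 13, 128   # code length, bits per field element, correctable errors
k = n - m * t             # data bits carried by each 8192-bit codeword
print(k)                  # 6528
print(k / n)              # about 0.8: roughly 20% overhead buys guaranteed correction of 128 errors
```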
Now suppose someone claims that, no matter what you do to send data and correct errors, there will be noise sources that pay attention to what you did and generate enough noise to block communication.
A demo of successful communication immediately debunks this claim. But let's say it's early days and you haven't built a demo yet. Does that make the claim plausible? Wouldn't you expect an explanation of how the universe is supposed to be generating enough noise to block communication?
The error-synchronization argument. Let's now look at Gil Kalai's slides "Why quantum computers cannot work" from 2013.
The slides give the following supposed "explanation for why (fault-tolerant) quantum computers cannot be built: Quantum systems based on special-purpose quantum devices are subject to noise which systematically depends on the quantum evolution of the system; this dependence reflects dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it is performing. Here, 'a quantum device' refers both to human-made and to natural devices. This systematic dependence causes error-rate for general-purpose quantum computers to scale up."
Sounds like a machine running Shor's algorithm will have its results swamped by errors. But, wait, the literature includes detailed calculations saying what the machine is doing at each moment, including calculations of how errors produced by noise will be corrected. Which step in the calculation is Kalai claiming to be wrong?
The closest that I find to an answer is Kalai conjecturing that quantum error correction will never work. For example: "Conjecture 1: (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some δ > 0, independently of the number of qubits used for encoding." As another example: "Conjecture 4: Error synchronization: In any noisy quantum computer in a highly entangled state there will be a strong effect of error synchronization."
Let's think about this. Kalai is arguing that noise "systematically depends on the quantum evolution of the system", producing synchronized errors that will break error correction. Why is this not also an argument that radio noise depends on how you encode data to send through Wi-Fi, producing synchronized errors that will break error correction?
The answer has to be in the word "quantum". There has to be some explanation of how the universe is supposed to produce synchronized errors that inevitably break quantum error correction without breaking non-quantum error correction. But I've looked high and low and don't see where Kalai is giving any such explanation. The following sentence sounds to me like he's admitting that he doesn't have an explanation: "The mathematical challenge is to understand the systematic laws for this dependence."
There's nothing wrong with asking whether some unforeseen or inadequately analyzed physical effect could sabotage quantum error correction. Unless and until there are public attack demos, I'll continue to hope that quantum computing will fail. But, again, wishful thinking is not evidence.
Google announced experiments in August 2024 where a surface code produced a logical qubit measurably more reliable than any of the underlying physical qubits. This doesn't quite contradict Kalai's Conjecture 1 yet: maybe the unspecified cutoff "δ" in the conjecture is below Google's current error rate, while still being large enough to spoil any big quantum computation. But this wiggle room in the conjecture is just another reflection of the fact that the conjecture never had an explanation in the first place.
Visibility arguments. The last essay I'll look at isn't actually arguing that quantum computers are impossible. Instead it's arguing that you can relax since you'll see quantum computers coming.
This is a 2023 essay "When a quantum computer is able to break our encryption, it won't be a secret" by Edward Parker. The subtitle is "Quantum computers may eventually have devastating impacts on cybersecurity—but we’ll probably see the threat coming in time to set up counters." This is telling you that ("probably") you don't have to act yet: you can wait for the signals of "the threat coming", and then you can take action.
This can't be right if you're encrypting data that needs to stay confidential for a long time, longer than the visibility of the threat: you're abandoning the data to "intercept now, decrypt later" attacks, which are mentioned later in the essay. But let's assume that your concerns are with shorter-term confidentiality or with integrity, and that you're not worried about being able to upgrade rapidly enough. I'll focus on how the essay argues that quantum attacks won't be a secret.
The secrecy-is-hard argument. The essay points to an early leak of a draft of a paper from 77 authors with Google's first claim of "quantum supremacy". The essay concludes that it is "very difficult to keep groundbreaking progress in quantum computing secret".
But it is a mistake to conflate an early leak of public work with leaks of classified work. In the United States, a single disclosure of classified cryptographic information is punishable not just by being fired but also by 10 years in prison. This threat doesn't even matter for government agents who seriously believe that "everything the United States Government does is good": they want to stay quiet so that their attacks are more successful!
There are procedures for eventually declassifying information that the government can no longer justify keeping secret. This information is often surprising. Sometimes it hits major news sites, despite being many years out of date. So there's ample evidence of the government often managing to keep secrets, even big secrets. Presumably the information that's still kept secret would be even more surprising.
Sure, whistleblowers sometimes remove government secrecy, as illustrated by Snowden's exposure of all sorts of misbehavior by NSA; I'll mention some specific Snowden documents below. But do these whistleblowers expose such a large fraction of government secrets that we can reasonably expect some whistleblower to promptly leak the existence of NSA's quantum attacks? I'm skeptical. More to the point, the essay I'm commenting on doesn't make a case for this, nor does it explain why we should expect public visibility into any of the other big attackers.
The we're-well-funded argument. The essay points to more than a billion dollars publicly invested in quantum computing by 2021, and concludes that staying ahead of this would "need enormous financial resources".
Is a billion dollars supposed to be a lot of money? NSA's yearly budget was about $3.6 billion thirty years ago, and about $10 billion fifteen years ago, out of a $50 billion "black budget", never mind the rest of the U.S. military-industrial complex.
Today the total U.S. military budget is approaching a trillion dollars per year. It's easy to imagine a secret case being made for investing tens of billions of dollars per year in building quantum computers. So why exactly shouldn't that happen?
Let's rewind to 2012, when public investment in quantum computing was much smaller than it is today. I gave a talk listing $2.2 million for defense contractor Raytheon as "one of many publicly announced quantum-computing grants from government agencies". There was already more money than that being invested secretly in quantum attacks, for example in an $80 million NSA program "Penetrating Hard Targets", as shown at the beginning of 2014 by news stories based on the Snowden documents. Shouldn't we expect that the secret investment has expanded since then?
My median estimate is that attackers are 3 years ahead of the public in building quantum computers. I wouldn't be surprised by 1 year, or by 5 years. There are big uncertainties in models of (1) how much the attackers are spending and (2) how much influence that investment has upon technology development.
Meanwhile what the essay is claiming is that "When a quantum computer is able to break our encryption, it won't be a secret". We're supposed to believe that attackers aren't ahead of the public at all? NSA has wasted all of the money it has spent on quantum attacks?
The essay later has a related argument that "commercial applications of quantum computing will very likely become technically feasible before decryption does", meaning that an attack organization "would face an enormous economic incentive to use its quantum computer for commercial applications rather than for intelligence collection".
No, that's not how attack organizations work. Sure, NSA has a long history of industrial espionage; this doesn't mean that NSA's goal is to make money. NSA's mission is "signals intelligence". Taking NSA's precious quantum computers and using them for something other than attacks would slow down NSA's signals intelligence, while risking disclosure of NSA's highly classified attack capabilities.
The we're-the-best argument. The essay continues as follows: "Any organization attempting to secretly develop a CRQC would need to acquire world-class talent—and if many of the greatest technical experts suddenly left their organizations or stopped publishing in the technical literature, then that fact would immediately be fairly evident, just as it was during the Manhattan Project."
The words "as it was" are linking to a book review. If you check the book review then you see that it actually says almost exactly the opposite: "The project was secret to the public but public to the US' adversaries." If the analogy holds up between that project and quantum attacks, then various large-scale attackers will deduce that the other large-scale attackers are also carrying out quantum attacks, but we won't know that this is happening.
More importantly, there's a glaring flaw in the analogy. The Manhattan Project was a wartime project that suddenly started in 1942 and that ended up dropping bombs in 1945. NSA's quantum-attack project isn't a sudden wartime project: we know, thanks to Snowden, that it was already underway more than a decade ago.
Don Coppersmith, a prize-winning cryptanalyst whose many public contributions include an important early modification of Shor's algorithm, stopped publishing attacks more than twenty years ago—because he moved from IBM to IDA, an FFRDC controlled by NSA. You'd think that an essay talking about attackers hiring world-class talent would mention this.
The essay shows no signs of tracking what has actually happened on this topic. If "stopped publishing" is supposed to be a signal that everyone will see, why does the essay talk about it as some potential future danger signal? The danger signal was already triggered decades ago.
The flight argument. Finally, the essay says that a quantum computer "might be physically difficult to hide". The essay notes that estimating the required physical resources is difficult, but says that the essay author's "recent research suggests that a CRQC might plausibly draw 125 megawatts of electrical power".
I checked the cited paper. The paper estimates about 6 watts per physical qubit. The paper multiplies this by numbers from the aforementioned 2019 paper by Gidney and Ekerå, namely 20 million physical qubits for 8 hours to factor RSA-2048. Yup, 6 watts per qubit times 20 million qubits is around 125 megawatts.
The paper continues by saying that this is "about the power consumption of a Boeing 747 aircraft in flight".
The paper doesn't give a citation here, but I guess the calculation is as follows: a Boeing 747 uses 4 liters of fuel per second; a liter of fuel has about 30 megajoules of energy. As a sanity check, a single CF6-50 turbine, with a diameter of 105 inches and a length of 183 inches, is rated for over 50 megawatts, and a Boeing 747 has four of those, so there's no evident obstacle to producing 125 megawatts.
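Redoing that arithmetic, with the numbers quoted above and my own rounding:

```python
watts_per_qubit = 6
physical_qubits = 20_000_000
power_watts = watts_per_qubit * physical_qubits
print(power_watts / 1e6, "MW")                          # 120 MW, i.e. roughly the quoted 125 MW

liters_per_second = 4                                   # rough 747 fuel burn quoted above
megajoules_per_liter = 30
print(liters_per_second * megajoules_per_liter, "MW")   # also about 120 MW of fuel energy

hours = 8                                               # Gidney-Ekerå runtime for RSA-2048
print(power_watts * hours * 3600 / 1e12, "TJ of energy for one factorization")
```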
Sounds like, assuming the 6-watt estimate per qubit, this RSA-2048 attack will consume as much energy as an 8-hour flight of a Boeing 747. How does the essay get from this to the idea that the attack won't be carried out in secret?
Further reading. Another analysis of quantum-computing-won't-work arguments is a recent talk by Scott Aaronson on "the main reasons why people regarded this as not entirely real": namely, "it sounded too good to be true", "the general thesis of technological stagnation", "eternally just over the horizon", "doubts about quantum mechanics itself", and "correlated noise that kills QC". I'm saying more about some of those topics (and covering some further topics) but skipping others.