The cr.yp.to blog



2016.05.16: Security fraud in Europe's "Quantum Manifesto"

"Europe plans giant billion-euro quantum technologies project," says a 21 April 2016 news story in Nature. "The European Commission has quietly announced plans to launch a €1-billion (US$1.13 billion) project to boost a raft of quantum technologies—from secure communication networks to ultra-precise gravity sensors and clocks."

The relevant press release from the European Commission is actually titled "European Cloud Initiative to give Europe a global lead in the data-driven economy" and says "The Commission today presented its blueprint for cloud-based services and world-class data infrastructure to ensure science, business and public services reap benefits of big data revolution." The word "quantum" appears just twice in the press release.

The word appears just once in the European Commission's accompanying "fact sheet", proposing to spend "€1 billion for a large-scale EU-wide quantum technologies flagship" as part of spending 6.7 billion Euros on a "European Cloud Initiative".

Mission creep, part 1: from big data to quantum computing

It is clear that if large universal quantum computers are built then they will be much faster than conventional supercomputers for many important computations. In particular, Grover's algorithm will speed up very large "combinatorial searches" that arise in many areas of science and that consume huge amounts of computation today. The scientific literature contains ample justification for continuing to fund research into quantum computing.

But the interesting quantum computations are not big-data computations. They are big computations on small data. The big-data computations that people carry out, and want to carry out, fundamentally involve much more input and output, exactly the weak point of quantum algorithms. Even under extremely optimistic 20-year projections of progress in building quantum computers, big-data computations cannot reasonably be expected to see a quantum speedup. No, Grover's algorithm will not let Google search the Internet more quickly.
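To put numbers on the distinction (a standard back-of-the-envelope calculation, not a claim from the Manifesto): searching N items takes about N/2 oracle queries classically and about (pi/4)*sqrt(N) queries with Grover's algorithm.

```python
import math

def classical_queries(n):
    """Expected oracle queries for classical exhaustive search over n items."""
    return n / 2

def grover_queries(n):
    """Approximate oracle queries for Grover search: (pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

for bits in (40, 64, 128):
    n = 2.0 ** bits
    print(f"2^{bits} items: classical ~2^{math.log2(classical_queries(n)):.1f}"
          f" queries, Grover ~2^{math.log2(grover_queries(n)):.1f} queries")
```

Note that these are oracle queries: merely loading N items of data into the machine still costs N steps, which is exactly why big-data computations see no such speedup.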

The European Commission says that its goal is to "give Europe a global lead in the data-driven economy" so that everyone reaps "benefits of big data revolution". How could they have thought that this goal justifies putting massive funding (1 billion Euros, 15% of the total European Cloud Initiative funding) into quantum computation? Here are three theories:

Each of these theories raises troubling questions regarding the mechanisms used to build public interest in funding scientific research.

Mission creep, part 2: from quantum computing to quantum everything

For the rest of this blog post, let's simply assume that it's reasonable to spend 1 billion Euros to look into what European Commissioner Günther Oettinger calls "the potential of quantum technologies which hold the promise to solve computational problems beyond current supercomputers." This is still quite different from what the Nature story said: namely, that the 1 billion Euros are "to boost a raft of quantum technologies—from secure communication networks to ultra-precise gravity sensors and clocks."

Where does Nature get the idea that the European Commission is funding an unfocused "raft of quantum technologies" rather than quantum computing in particular? The story quotes Tommaso Calarco, who "co-authored a blueprint behind the initiative, which was published in March, called the Quantum Manifesto"; and says that this "initiative" was "driven by an 18-month dialogue between the commission and a group of researchers who, at the organization’s request, produced the manifesto."

The Quantum Manifesto is a 19-page white paper full of pictures and sidebars, obviously aimed at politicians. Its title is "Quantum Manifesto: A New Era of Technology" ("Draft—March 2016"). The Manifesto has two other pages with statements in very large fonts: page 2 says "This manifesto is a call to launch an ambitious European initiative in quantum technologies, needed to ensure Europe's leading role in a technological revolution now under way" and page 6 says "Europe needs strategic investment now in order to lead the second quantum revolution. Building upon its scientific excellence, Europe has the opportunity to create a competitive industry for long-term prosperity and security."

The six items highlighted in the Manifesto's "Quantum Technologies Timeline" are "Atomic quantum clocks"; "Quantum sensors"; "Intercity quantum link"; "Quantum simulators" (special-purpose quantum computers for physics simulation); "Quantum-safe communication network"; and "Universal quantum computers". The four main topics highlighted in sidebars are "Quantum communication"; "Quantum simulators"; "Quantum sensors"; and "Quantum computers".

The Manifesto doesn't claim that "Atomic quantum clocks" and "Quantum sensors" and "Intercity quantum link" and "Quantum-safe communication network" are relevant to the goal stated by Oettinger ("to solve computational problems beyond current supercomputers") or to the rationale stated by the European Commission for funding a quantum-technology flagship ("the basis for the next generation of supercomputers"). Instead the Manifesto claims that these are beneficial for their own sake. For example, the Manifesto mentions "gravity and magnetic sensors for health care, geosurvey and security"; it doesn't claim supercomputing as an application (and I can't imagine how anyone would try to justify such a claim).

In short, at least half of the scope of the Manifesto, presumably at least half of the billion Euros in funding, is quite blatantly hijacking "European Cloud Initiative" funding and diverting it to different goals.

It is interesting to note that a subsequent Nature editorial described the spread of Manifesto topics as a "mistake" and recommended instead "focusing investment on one high-risk, high-gain goal—such as a universal quantum computer". The editorial board doesn't seem to have noticed that quantum computing was the sole rationale stated by the European Commission for funding this flagship!

The dark side of quantum computing

If I saw a funding proposal for, say, the coal industry, then I would expect it to have an extensive discussion of the environmental impact of the coal industry. I would expect it to allocate massive funding towards reducing this impact. And I would expect the politicians to solicit feedback from environmental experts: is this funding really the best way to address the problem, or should it instead be spent in other ways?

Similarly, I think that anyone proposing funding for quantum computing is ethically obliged to highlight the fact that quantum technologies are a huge threat to security.

We all rely on the security of cryptographic signatures on software updates. But RSA and ECC, the most popular signature mechanisms today, are both known to be broken in polynomial time by Shor's quantum algorithm. This algorithm is smaller and faster than most of the algorithms used to justify investment in building large universal quantum computers. There are no obvious obstacles to attackers building quantum computers that will rapidly break RSA keys and ECC keys, allowing the attackers to forge software updates and seize control of all of our computers.
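To make the structure of the threat concrete: the only quantum step in Shor's algorithm is finding the multiplicative order r of a random base a modulo N; everything else is classical arithmetic. Here is a toy sketch (my illustration, with the order found by brute force, which is precisely the step a quantum computer performs in polynomial time):

```python
from math import gcd

def find_order(a, n):
    """Brute-force the multiplicative order of a mod n. This is the
    step that Shor's quantum algorithm performs in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Classical post-processing of Shor's algorithm: given the order r
    of a mod n, split n via gcd(a^(r/2) +/- 1, n)."""
    g = gcd(a, n)
    if g != 1:
        return g                 # lucky guess: a already shares a factor with n
    r = find_order(a, n)
    if r % 2 != 0:
        return None              # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # trivial square root: retry with another a
    f = gcd(y - 1, n)
    return f if 1 < f < n else None

print(shor_classical_part(15, 7))    # prints 3, since 15 = 3 * 5
```

Against real key sizes the brute-force loop is hopeless, but a quantum computer running Shor's order-finding step would not be.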

Post-quantum cryptography is society's most plausible path towards preventing the quantum apocalypse. Most importantly, this research area includes quantum cryptanalysis and post-quantum cryptographic engineering.

Example of a question in quantum cryptanalysis: recent research has found a polynomial-time quantum algorithm to break the Smart–Vercauteren lattice-based cryptosystem; can the attacks be extended to other lattice-based cryptosystems? Example of a question in post-quantum cryptographic engineering: researchers have built confidence in the security of the SHA-3 hash function and hash-based signature systems; but can we deploy these signature systems while meeting the performance and usability requirements of applications?
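To illustrate the hash-based-signature half of that question, here is a minimal Lamport one-time signature sketch (my illustration, built on SHA3-256; real deployments layer many-time constructions such as Merkle trees on top of one-time signatures):

```python
import os
import hashlib

def H(data):
    return hashlib.sha3_256(data).digest()

def keygen():
    """One-time key pair: 256 pairs of random secrets; the public key
    consists of their hashes."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def msg_bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    """Reveal one secret per digest bit. A key must NEVER sign twice."""
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"software update v1.2")
print(verify(pk, b"software update v1.2", sig))    # True
print(verify(pk, b"software update v1.3", sig))    # False
```

The security of this scheme rests only on the hash function, not on the number-theoretic problems that Shor's algorithm breaks; the engineering questions are about signature size, statefulness, and deployment.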

Obviously we need to replace RSA and ECC before attackers are armed with quantum computers running Shor's algorithm. There are several reasons that action is particularly urgent. First, there is a long path from papers to widespread deployment. Second, many deployed devices last for years without being upgraded. Third, larger and larger fractions of Internet traffic are being recorded, including the private communications of doctors, journalists, lawyers, diplomats, therapists, human-rights workers, etc., again protected primarily by RSA and ECC.

There is a significant risk that all of the benefits of quantum computing during the next 20 years will be outweighed by the security devastation caused by quantum computing during the same period. Does this mean that public research into quantum computing should be halted? I don't think so. I don't believe that halting research will be effective in stopping attackers. NSA, for example, has its own quantum-computing budget, as shown by the Snowden revelations. What will be effective in stopping attackers is post-quantum cryptography.

How security is advertised in the Quantum Manifesto

This brings me to what really bugs me about the Quantum Manifesto. Instead of highlighting the security threat of quantum technology and recommending funding for a scientifically justified response, the Manifesto makes the thoroughly deceptive claim that quantum technology improves security.

Later I'll get to the details of this claim. First let's look at how prominent this security advertising is.

Security is mentioned a total of 30 times in the Manifesto, with quantum technologies consistently portrayed as the hero saving the day. This is like a coal-industry proposal proudly portraying coal as being good for the environment. Coal has far-reaching applications to improve the environment! Whatever threat coal might pose to the environment, coal itself is the solution! Coal will lead to a cleaner European Union!

This coal-industry example is imaginary (I hope), but I've seen many other examples of funding requests that rely critically on exaggerated claims of societal benefits. Of course, people in research area X who make exaggerated claims of impact within X can expect to find their claims shredded by reviewers from the same area, but how will they be punished if they make exaggerated claims of impact on something else? This is particularly important for exaggerated claims of security impact: security has always been notoriously difficult for users to evaluate.

Suppose you're a politician seeing a bunch of physicists asking for a quarter of a billion Euros for "quantum communication". The core justification for this request is the claim that "quantum communication" will provide security benefits. Don't you want to hear an assessment of this claim from security experts? Obviously society is facing security problems, but this doesn't mean that you should mindlessly throw money at anything that claims to be a solution. Don't you want to know whether the security community views this expenditure as a smart way to address the problems, or as an astoundingly stupid way to address the problems?

The claims of amazing security benefits from "quantum communication" aren't new. My experience is that most security experts simply dismiss quantum communication on the basis of its prohibitive cost (see below), but it isn't hard to find literature analyzing the security claims in more detail.

The European Commission could easily have, and should have, assembled a panel of security experts to publicly evaluate the security claims in the Quantum Manifesto. The Manifesto authors should of course have been allowed to provide references and further input, and to answer questions from the security experts. The result would have been a public Security Impact Assessment, just like a public Environmental Impact Assessment.

Obviously this security review never happened. The Manifesto says that it is "endorsed by a broad community of industries, research institutes and scientists in Europe" and is accompanied by an online list of thousands of signatories; but the list looks more like a rather narrow community of people who are hoping that the Manifesto makes money for them, such as quantum physicists and their students. Security review, like environmental review, requires experts who are skeptical.

Security failures of physical cryptography, part 1: locked-briefcase cryptography

The snake oil peddler became a stock character in Western movies: a traveling "doctor" with dubious credentials, selling fake medicines with boisterous marketing hype, often supported by pseudo-scientific evidence. To increase sales, an accomplice in the crowd (a shill) would often attest to the value of the product in an effort to provoke buying enthusiasm. The "doctor" would leave town before his customers realized they had been cheated. —"Snake oil" entry in Wikipedia

Security experts commonly use the term "snake oil" for products whose security hype far exceeds their security value.

Imagine, for example, that the manufacturers of lockable briefcases start advertising "provably secure locked-briefcase cryptography". The salesmen explain that locked-briefcase cryptography uses the magical power of locks to physically protect information against all possible attacks:

There is a mathematical proof that locked-briefcase cryptography hides all information from the attacker! We should therefore replace cryptography on the Internet with locked-briefcase cryptography on a new "locked-briefcase Internet"! End of sales pitch.

Should we build a "locked-briefcase Internet"? Security experts can fully justify any of the following answers to this question: no, because locked-briefcase cryptography is much less secure than the cryptography we already have; no, because a locked-briefcase Internet would be far too expensive; or a simple refusal to spend any time on the question.

Security experts will often opt for the second answer because the cost of a locked-briefcase Internet seems easier to understand than the security failures. Security experts will often opt for the third answer because they have real work to do. But merely giving the second and third answers, and skipping the first answer, leaves the briefcase manufacturers in a position to request massive research funding aimed at reducing the costs of their magical "provably secure locked-briefcase Internet". Funding agencies need to understand that locked-briefcase cryptography is more expensive and less secure than alternatives.

So what's wrong with the security of locked-briefcase cryptography? Some of the obvious avenues of attack: picking the lock; cutting the briefcase open and resealing it; stealing the briefcase and studying it at leisure; tampering with the briefcase in transit; and bribing or coercing the courier.

The bottom line is that there are many low-cost attacks against locked-briefcase cryptography.

Security advantages of real-world cryptography

Real-world cryptography—algorithmic cryptography—starts with the recognition that putting secret information close to the attacker has always been a security nightmare. It is terribly difficult even to fully understand, let alone limit, all the ways that the attacker can physically interact with the secrets.

Real-world cryptography instead keeps Alice and Bob's secrets heavily shielded inside Alice and Bob's computers, the same computers that are storing and processing the secrets in the first place. Alice encrypts a secret message, mathematically transforming the message into ciphertext, before sending it through the Internet to Bob.

Real-world cryptography relies on published cryptographic systems with comprehensive specifications that have convincingly survived many years of publicly documented attack efforts from a large research community aiming at clear security goals. I'm not saying that this process has reached perfection (the remaining problems are major motivations for my own research); I'm saying that the security of these systems is much, much, much better understood than the security of locked-briefcase cryptography.

What happens if the attacker secretly convinces the cryptographic software author to keep copies of Alice's messages, or to place a weakness into the ciphertext? Most serious real-world cryptography has transitioned to open-source software, which is subjected to increasingly comprehensive security reviews and in some cases formal verification. There are also increasingly sophisticated efforts to verify the security of compilers, operating systems, and the underlying chips. All of this work is required for the broader goal of computer security (which Alice and Bob would need even if they magically had secure communication by some other mechanism), and real-world cryptography is leading the way.

What happens if the attacker overhears the secret key exchanged by Alice and Bob? One of the most spectacular advances in real-world cryptography was the advent of public-key cryptography four decades ago. Alice generates a secret key and a corresponding public key; she gives the public key to Bob, and keeps the secret key safely hidden. Bob generates his own secret key and the corresponding public key, and gives the public key to Alice. Alice uses Bob's public key to encrypt information for Bob, and Bob uses Alice's public key to verify that the information comes from Alice. The system is designed to be secure even if attackers see all the public keys.
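One classic mathematical instance of this public-key idea is Diffie–Hellman key agreement, sketched here with toy choices (my illustration, not the author's example; the parameters are chosen for readability and are far too small for real security):

```python
import secrets

# Toy finite-field Diffie-Hellman. Real deployments use standardized
# large groups or elliptic curves.
p = 2 ** 127 - 1                    # a Mersenne prime; insecurely small
g = 3

a = secrets.randbelow(p - 2) + 2    # Alice's secret key, never transmitted
b = secrets.randbelow(p - 2) + 2    # Bob's secret key, never transmitted
A = pow(g, a, p)                    # Alice's public key: safe for anyone to see
B = pow(g, b, p)                    # Bob's public key: safe for anyone to see

# Each side combines its own secret with the other side's public key.
shared_alice = pow(B, a, p)         # (g^b)^a = g^(ab) mod p
shared_bob   = pow(A, b, p)         # (g^a)^b = g^(ab) mod p
print(shared_alice == shared_bob)   # True
```

An eavesdropper who sees p, g, A and B must solve a discrete-logarithm problem to recover the shared secret; this is exactly the kind of "conjectured difficulty of computing certain functions" that Shor's algorithm destroys, which is why post-quantum replacements are needed.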

Of course, if an attacker manages to quietly replace the public keys with his own public keys, then he can fool Alice into encrypting data to him, and he can fool Bob into accepting data from him. But the fact that keys are public allows the keys to be easily sent through multiple channels (certified by "certificate authorities", broadcast through intermediates, double-checked by "transparency", etc.). An attacker who does not control all of the channels between Alice and Bob will be unable to quietly replace keys.

Again, I'm not saying that real-world cryptography has already reached perfection. I'm merely saying that real-world cryptography provides huge security advantages over locked-briefcase cryptography. Locked-briefcase cryptography isn't even trying to tackle tough security problems that real-world cryptography has been addressing for many years.

Security failures of physical cryptography, part 2: quantum cryptography

Like locked-briefcase cryptography, quantum cryptography tries to use physical techniques to protect information. The physical details are different, making quantum cryptography much more expensive than locked-briefcase cryptography, but the same fundamental security problems remain.

The centerpiece of quantum cryptography is "quantum key distribution", notably the "BB84" and "E91" protocols. E91-type protocols require maintaining long-distance entanglement, making them even more expensive than BB84-type protocols. There are papers claiming "provable security" for BB84-type protocols, and papers claiming "provable security" for E91-type protocols.
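For readers unfamiliar with BB84, here is a tiny classical simulation of its "sifting" step (my illustration; real BB84 uses photon polarizations plus an authenticated classical channel). It shows the textbook selling point: a naive intercept-resend eavesdropper raises the error rate on the sifted bits to about 25%, which Alice and Bob can detect.

```python
import random

def measure(bit, prep_basis, meas_basis):
    """A measurement in the wrong basis yields a uniformly random bit."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n, eavesdrop):
    random.seed(42)                      # deterministic for illustration
    errors = sifted = 0
    for _ in range(n):
        bit = random.randint(0, 1)       # Alice's raw key bit
        alice_basis = random.randint(0, 1)
        if eavesdrop:
            # Intercept-resend: Eve measures in a random basis, then
            # re-prepares the qubit in that basis.
            eve_basis = random.randint(0, 1)
            sent_bit = measure(bit, alice_basis, eve_basis)
            sent_basis = eve_basis
        else:
            sent_bit, sent_basis = bit, alice_basis
        bob_basis = random.randint(0, 1)
        bob_bit = measure(sent_bit, sent_basis, bob_basis)
        if bob_basis == alice_basis:     # sifting: keep matching bases only
            sifted += 1
            errors += (bob_bit != bit)
    return errors / sifted

print(f"error rate without Eve: {bb84_error_rate(20000, False):.3f}")  # 0.000
print(f"error rate with Eve:    {bb84_error_rate(20000, True):.3f}")   # ~0.25
```

The "provable security" papers formalize this detection property; as discussed in the rest of this section, what they do not capture is the behavior of the physical devices that implement the protocol.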

A company named ID Quantique has been selling quantum-cryptography hardware, specifically hardware for BB84-type protocols, since 2004. ID Quantique claims that quantum cryptography provides "absolute security, guaranteed by the fundamental laws of physics." However, Vadim Makarov and his collaborators have shown that the ID Quantique devices are vulnerable to control by attackers, that various subsequent countermeasures are still vulnerable, and that analogous vulnerabilities in another quantum-key-distribution system are completely exploitable at low cost. The most reasonable extrapolation from the literature is that all of ID Quantique's devices are exploitable.

How can a product be broken if it provides "absolute security, guaranteed by the fundamental laws of physics"? I've heard several different answers from proponents of quantum cryptography.

Let me get back to the question: how can a product be broken if it provides "absolute security, guaranteed by the fundamental laws of physics"? The correct answer is that the hypotheses of the theorems, the assumptions made in the theorems, are not the laws of physics. The hypotheses include the assumption that Alice's and Bob's devices behave exactly as the mathematical model specifies, and that the attacker can interact with those devices only through the narrow interface that the model permits.

This is like assuming that the world outside a locked briefcase cannot interact with the contents of the briefcase, and that the only attack possibility is to inspect the outside of the briefcase. The attacker actually has many other options, as Makarov's attacks illustrate. In short, the "provable security" of quantum cryptography draws a useless, inaccurate conclusion starting from unrealistic, oversimplified hypotheses.

One might imagine that the poor security of modern quantum cryptography is merely a short-term accident. One might imagine that future redefinitions of quantum cryptography will exclude ID Quantique's mistakes and bring us to a happy future of quantum devices that really do provide "absolute security, guaranteed by the fundamental laws of physics". But a careful look at the literature provides no reason to believe that this will ever happen, and my March 2016 paper explains serious obstacles to making this happen. It seems reasonably clear that the laws of physics actually guarantee the breakability of quantum cryptography by an attacker who collects enough data and performs enough computation (perhaps including quantum computation). Quantum cryptography is advertised as having security independent of "the conjectured difficulty of computing certain functions", but a closer look shows that this advertisement is completely wrong.

Fundamentally, quantum cryptography makes the same mistake as locked-briefcase cryptography: it aims for security in an oversimplified model of the physical world, takes resources away from more serious security techniques, and ends up damaging security in the real world. Quantum cryptography, like locked-briefcase cryptography, also relies on a preexisting secure channel (to authenticate choices of "bases" by Alice and Bob), and does nothing to address the problems handled by public-key cryptography. Quantum cryptography is even more vulnerable than locked-briefcase cryptography to denial-of-service attacks. Furthermore, quantum cryptography is even easier than locked-briefcase cryptography for the manufacturer to subvert, and is practically impossible to audit.

Examples of security statements in the Quantum Manifesto

"Communication security is of strategic importance to consumers, enterprises and governments alike. At present, it is provided by encryption via classical computers": Yes, security is important. Yes, confidentiality is provided by encryption. But security includes more than confidentiality: it also includes integrity, which is provided by authentication, and availability (protection against denial-of-service attacks), which is provided by a mix of techniques. There are also huge differences between secret-key techniques and public-key techniques.

"[Encryption via classical computers] which could be broken by a quantum computer": I mentioned above that the most popular public-key systems, RSA and ECC, are both known to be broken in polynomial time by Shor's quantum algorithm. But there are other public-key systems that have resisted all attempts at quantum attack. Furthermore, there is no reason to believe that quantum computers threaten secret-key systems such as AES-256. Saying that these systems "could be broken by a quantum computer" is idle speculation devoid of content.

"Secure solutions based on quantum encryption are immune to this risk, and are commercially available today": This is ludicrously inaccurate. The "solutions based on quantum encryption" that are "commercially available today" all seem to be breakable. The possibilities for future "solutions based on quantum encryption" are not immune to the risks of quantum computers; see above regarding the meaningless "provable security" of quantum cryptography. Furthermore, "quantum encryption" has the basic security properties of a secret-key stream cipher; it is not a replacement for the public-key systems broken by Shor's algorithm.

"Quantum information is secure because it cannot be cloned": This is accurate as a technical statement regarding the abstract concept of "quantum information", but it will lead most readers to incorrect conclusions regarding the real-world security of quantum cryptography. The information of interest to users is not "quantum information", and its security is not guaranteed by the abstract unclonability of "quantum information". The aforementioned attacks against quantum-cryptography systems do not make any effort to clone "quantum information"; they focus on the actual goal, namely eavesdropping upon all information of interest to users.

"The advantage of trusted-node schemes is that they provide access for lawful intercept, as required by many nation states": A recent United Nations "Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression" concludes that "States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows." Escrowing copies of all users' cryptographic keys at "trusted nodes" for government use is a horrifying security weakness and should not be advertised as a feature. There are reports of key escrow being required in a few repressive countries, such as Iran, but this is a human-rights violation. We are under an ethical obligation to protect human rights, not to violate them.

"As soon as this happens [quantum repeaters reaching the market], true internet-wide quantum-safe security could become a reality": This is obviously wrong. There is a gigantic cost difference between merely reaching the market and being deployed "internet-wide". As for "true quantum-safe security", see above.

"Based on quantum coherence, data can be protected in a completely secure way that makes eavesdropping impossible. Given the explosive growth of cybercrime and espionage, this is a highly strategic capability": Quantum technology does not securely protect data, and does not make eavesdropping impossible. If the goal is to protect against cybercrime and espionage then there are many, many, many better ways to spend money.

Concluding thoughts

Consider two scientists, Alice and Bob. Alice honestly and accurately and comprehensibly reports her previous accomplishments and the prospects for her future research. Bob, whether out of greed or out of ignorance, exaggerates his previous accomplishments and the prospects for his future research.

Do we want this exaggeration to produce more research funding for Bob? Obviously not. In fact, we want to discourage Bob from exaggerating in the first place. But this doesn't work if the incentives against exaggeration are so weak that the expected cost of exaggeration is outweighed by the expected benefits.

For many years I've been watching quantum cryptographers wildly exaggerate the security impact of their work. Public corrections from security experts have had negligible effect: quantum cryptographers repeat the same wild exaggerations again and again and again. Apparently these exaggerations are now producing a quarter of a billion Euros in funding for quantum cryptography.

This incident clearly illustrates the incentives towards exaggeration. Where are the compensating incentives against exaggeration? Do we want other scientists to conclude that exaggeration is the path to success?

I mentioned earlier that people in research area X who make exaggerated claims of impact within X will typically be punished by their reviewers, other people in area X. But people in research area X who make exaggerated claims of impact upon Y are normally reviewed by people in X, not people in Y, and the people in X have a categorical incentive to endorse the same exaggerations.

Ultimately the victims are the users. A quarter of a billion Euros, despite being explicitly aimed at communication security, will actually be devoted to quantum technologies that are much less secure than modern real-world cryptography. The occasional users who can afford to deploy quantum cryptography won't realize how easy it is to break. Meanwhile a similar level of funding will be sensibly devoted to quantum computing, but without a proper acknowledgment of the resulting security apocalypse, and without a corresponding level of funding for the most plausible plan to prevent this apocalypse.


Version: This is version 2016.05.16 of the 20160516-quantum.html web page.