The cr.yp.to blog



2014.02.05: Entropy Attacks! The conventional wisdom says that hash outputs can't be controlled; the conventional wisdom is simply wrong.

The conventional wisdom is that hashing more entropy sources can't hurt: if H is any modern cryptographic hash function then H(x,y,z) is at least as good a random number as H(x,y), no matter how awful z is. So we pile one source on top of another, hashing them all together and hoping that at least one of them is good.

But what if z comes from a malicious source that can snoop on x and y? For example, imagine a malicious "secure randomness" USB device that's actually spying on all your other randomness sources through various side channels, or, worse, imagine RDRAND microcode that's looking at the randomness pool that its output is about to be hashed into. I should note that none of the attacks described below rely on tampering with x or y, or on otherwise modifying data outside the malicious entropy source; you can't stop these attacks by double-checking the integrity of data.

Of course, the malicious device will also be able to see other sensitive information, not just x and y. But this doesn't mean that it's cheap for the attacker to exfiltrate this information! The attacker needs to find a communication channel out of the spying device. Randomness generation influenced by the device is a particularly attractive choice of channel, as I'll explain below.

Here's an interesting example of an attack that can be carried out by this malicious source:

  1. Generate a random r.
  2. Try computing H(x,y,r).
  3. If H(x,y,r) doesn't start with bits 0000, go back to step 1.
  4. Output r as z.

This attack forces H(x,y,z) to start 0000, even if x and y were perfectly random. It's fast: each candidate r succeeds with probability 1/16, so the search takes just 16 computations of H on average.
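
Here's a minimal sketch of this search in Python, assuming SHA-256 stands in for H; the function name, the 32-byte sizes, and the final check are illustrative, not part of the attack description above.

    import hashlib, os

    def malicious_z(x, y, target_bits=4):
        # Keep sampling candidate z values until H(x,y,z) starts with
        # target_bits zero bits; this takes about 2**target_bits hashes.
        while True:
            z = os.urandom(32)
            if hashlib.sha256(x + y + z).digest()[0] >> (8 - target_bits) == 0:
                return z

    x, y = os.urandom(32), os.urandom(32)   # the honest entropy inputs
    z = malicious_z(x, y)                   # the "random" output of the malicious source
    assert hashlib.sha256(x + y + z).digest()[0] >> 4 == 0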

Maybe the randomness generator doesn't actually output H(x,y,z); it uses H(x,y,z) as a seed for some generator G, and outputs G(H(x,y,z)). Okay: the attacker simply runs the same search with G(H(x,y,r)) in place of H(x,y,r), and again forces the output G(H(x,y,z)) to start 0000. Similarly, the attack isn't stopped by pre-hashing the output of the malicious entropy source before it's mixed with the other entropy sources. Each time the malicious entropy source is mixed in, the attacker gets to produce another "random" number that starts 0000.

More generally, instead of producing "random" numbers that start with 0000, 0000, 0000, etc., the malicious entropy source can produce "random" numbers that start with successive 4-bit components of AES_k(0), AES_k(1), ..., where k is a secret key known only to the attacker. Nobody other than the attacker will be able to detect this pattern.
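
Here's a sketch of the same search with a keyed target, again assuming SHA-256 stands in for H. To keep the sketch dependency-free it uses HMAC-SHA256 as the attacker's secret keyed function in place of AES_k; that substitution doesn't change the idea, since only someone who knows k can recompute the targets and recognize the pattern.

    import hashlib, hmac, os

    k = os.urandom(32)   # secret key known only to the attacker

    def target(i, bits=4):
        # Stand-in for the i-th 4-bit component of AES_k(0), AES_k(1), ...:
        # the top 'bits' bits of a keyed PRF applied to the counter i.
        prf = hmac.new(k, i.to_bytes(8, 'big'), hashlib.sha256).digest()
        return prf[0] >> (8 - bits)

    def malicious_z(x, y, i):
        # Force the i-th "random" output H(x,y,z) to start with target(i).
        while True:
            z = os.urandom(32)
            if hashlib.sha256(x + y + z).digest()[0] >> 4 == target(i):
                return z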

Recall that DSA and ECDSA require random "nonces" for signatures. It's easy to imagine someone grabbing each nonce as a new "random" number from the system's randomness generator. However, it's well known (see, e.g., https://www.isg.rhul.ac.uk/~sdg/igor-slides.pdf) that an attacker who can predict the first 4 bits of each nonce can quickly compute the user's secret key after a rather small number of signatures. Evidently hashing an extra entropy source does hurt, and in the worst possible way (the attacker ends up with the user's secret key), contrary to the conventional wisdom stated above.

EdDSA (see https://ed25519.cr.yp.to) is different. It uses randomness once to generate a secret key and is then completely deterministic in its signature generation (following 1997 Barwood, 1997 Wigley, et al.). The malicious entropy source can still control 4 bits of the secret key, speeding up discrete-log attacks by a factor of 4 (fixing 4 bits shrinks the search space by 2^4, and generic discrete-log attacks scale with the square root of the search space), but this isn't a problem: we use curves with ample security margins. The source can control more than 4 bits by carrying out exponentially more H computations, but those computations have to fit into the time available after it inspects x and y and before it has to output a "random" number.
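
For comparison, here's a simplified sketch of how Ed25519-style signing derives its nonce; the curve arithmetic and the reduction modulo the group order are omitted, so this only illustrates that no randomness source is consulted at signing time.

    import hashlib

    def ed25519_style_nonce(secret_key, message):
        # Ed25519 expands the 32-byte secret key with SHA-512; the second
        # half of the expansion (the "prefix") is hashed with the message
        # to produce the nonce, deterministically, with no fresh randomness.
        expanded = hashlib.sha512(secret_key).digest()
        prefix = expanded[32:]
        r = hashlib.sha512(prefix + message).digest()
        return int.from_bytes(r, 'little')   # real code reduces this mod the group order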

Of course, there are many other uses of randomness in cryptography: for example, if you want forward secrecy then you're constantly generating new ECDH keys. Controlling 4 bits of each secret key isn't nearly as damaging as controlling 4 bits of DSA/ECDSA nonces (it's the same factor-of-4 speedup described above), but the malicious entropy source can also use randomness generation as a communication channel to the attacker, as I mentioned earlier. For example, the source controls the low bit of each public key at an average cost of just 2 public-key generations, and uses the concatenation of these low bits across public keys to communicate an encryption of the user's long-term key. This channel is undetectable, reasonably high bandwidth, and reasonably low cost.
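
Here's a sketch of that channel. Real key generation is replaced by a toy keypair() in which the "public key" is just a hash of the secret; that's enough to show the mechanics: about 2 key generations per leaked bit, and the leaked bits are readable from the public keys alone.

    import hashlib, os

    def keypair():
        # Toy stand-in for real (e.g. ECDH) key generation.
        sk = os.urandom(32)
        return sk, hashlib.sha256(sk).digest()

    def exfiltrate(payload_bits):
        # For each bit to leak, regenerate key pairs until the public
        # key's low bit matches: about 2 generations per bit on average.
        keys = []
        for bit in payload_bits:
            while True:
                sk, pk = keypair()
                if (pk[-1] & 1) == bit:
                    keys.append((sk, pk))
                    break
        return keys

    # An observer who sees only the public keys recovers the payload:
    # [pk[-1] & 1 for _, pk in keys]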

On the other hand, there's no actual need for this huge pile of random numbers. If you've somehow managed to generate one secure 256-bit key then from that key you can derive all the "random" numbers you'll ever need for every cryptographic protocol—and you can do this derivation in a completely deterministic, auditable, testable way, as illustrated by EdDSA. (If you haven't managed to generate one secure 256-bit key then you have much bigger problems.)
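
Here's a minimal sketch of such a derivation, assuming one secure 256-bit master key and using HMAC-SHA256 as the PRF; the purpose labels and counters are illustrative, and a real system would pin down an exact, auditable derivation (as EdDSA does for its signature nonces).

    import hashlib, hmac

    def derive(master_key, purpose, counter, nbytes=32):
        # Every "random" value is a deterministic function of the one
        # master key, a purpose label, and a counter: testable, auditable,
        # and independent of any further entropy input.
        msg = purpose.encode() + b'\x00' + counter.to_bytes(8, 'big')
        out = b''
        block = 0
        while len(out) < nbytes:
            out += hmac.new(master_key, msg + block.to_bytes(4, 'big'),
                            hashlib.sha256).digest()
            block += 1
        return out[:nbytes]

    # e.g. derive(k, "signature-nonce", 7) or derive(k, "ecdh-key", 12345)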

With this as-deterministic-as-possible approach, the entire influence of the malicious entropy source is limited to controlling a few "random" bits somewhere. There are at least two obvious ways to further reduce this control:

Let me emphasize that what I'm advocating here, for security reasons, is a sharp transition between the phase before cryptography, in which the system collects enough entropy to generate one secure key, and the phase after, in which the system generates all of its "random" numbers deterministically from that key and never adds further entropy.

This is exactly the opposite of what people tend to do today, namely adding new entropy all the time. The reason that new entropy is a problem is that each addition of entropy is a new opportunity for a malicious entropy source to control "random" outputs—breaking DSA, leaking secret keys, etc. The conventional wisdom says that hash outputs can't be controlled; the conventional wisdom is simply wrong.

(There are some special entropy sources for which this argument doesn't apply. For example, an attacker can't exert any serious control over the content of my keystrokes while I'm logged in; I don't see how hashing this particular content into my laptop's entropy pool can allow any attacks. But I also don't see how it helps.)

Is there any serious argument that adding new entropy all the time is a good thing? The Linux /dev/urandom manual page claims that without new entropy the user is "theoretically vulnerable to a cryptographic attack", but (as I've mentioned in various venues) this is a ludicrous argument: how can anyone simultaneously believe that we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys (this is what we need from urandom), and that we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.)?

There are also people asserting that it's important for RNGs to provide "prediction resistance" against attackers who, once upon a time, saw the entire RNG state. But if the attacker sees the RNG state that was used to generate your long-term SSL keys, long-term PGP keys, etc., then what exactly are we gaining by coming up with unpredictable random numbers in the future? I'm reminded of a Mark Twain quote:

Behold, the fool saith, "Put not all thine eggs in the one basket"—which is but a manner of saying, "Scatter your money and your attention;" but the wise man saith, "Put all your eggs in the one basket and—WATCH THAT BASKET."

We obviously need systems that can maintain confidentiality of our long-term keys. If we have such systems, how is the attacker supposed to ever see the RNG state in the first place? Maybe "prediction resistance" can be given a theoretical definition for an isolated RNG system, but I don't see how it makes any sense for a complete cryptographic system.

[Advertisement: If you're interested in these topics, you might want to join the randomness-generation mailing list; I sent this there a few days ago. To subscribe, send email to randomness-generation+subscribe@googlegroups.com. There's also a mailing list cleverly named dsfjdssdfsd for discussion of randomness in IETF protocols. Randomness is also a frequent topic on more general cryptographic mailing lists.]

[2022.01.09 update: Updated links above.]

[2023.03.17 update: I should note related work from 2006, namely Section 3.2 of "Analysis of a strong pseudo random number generator" by Thomas Biege. The 2006 paper and this blog post are both analyzing what a malicious entropy source can do. After that setup, the 2006 work is explaining how a linear hash function gives a malicious entropy source full control over outputs, whereas this blog post is explaining how a modern cryptographic hash function gives a malicious entropy source partial control over outputs.]


Version: This is version 2023.03.17 of the 20140205-entropy.html web page.