The cr.yp.to blog



2024.10.28: The sins of the 90s: Questioning a puzzling claim about mass surveillance. #attackers #governments #corporations #surveillance #cryptowars

Meredith Whittaker, president of the Signal Foundation, gave an interesting talk at NDSS 2024 titled "AI, Encryption, and the Sins of the 90s".

I won't try to summarize everything the talk is saying: go watch the talk video yourself, or at least read through the transcript. But I'll say something here about what the "sins" part of the talk's title is referring to.

The talk says that, in the 1990s, "cryptosystems were still classified as munitions and subject to strict export controls". The talk describes the "crypto wars" as "a series of legal battles, campaigns, and policy debates that played out in the US across the 1990s", resulting in "the liberalization of strong encryption in 1999", allowing people to "develop and use strong encryption without being subject to controls".

OK, that sounds familiar. Which parts are the "sins"?

Answer: the talk claims that "the legacy of the crypto wars was to trade privacy for encryption—and to usher in an age of mass corporate surveillance".

Wow. That sounds bad, and surprising, definitely something worth understanding better. If cryptographic export controls had instead remained in place after 1999, how would that have improved privacy and reduced corporate surveillance?

Answer: the talk claims that, without strong cryptography, "the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible".

Wait, what? Let's look at the facts.

1. Would commerce exist without strong cryptography?

Internet commerce was already booming by 1999. Let's look specifically at the history of Amazon.

Amazon was founded in 1994. Its initial public stock offering was in 1997. Amazon was sued by Barnes & Noble in 1997, and was sued by Wal-Mart in 1998. Bezos was named Time Magazine's Person of the Year in 1999:

Bezos’ vision of the online retailing universe was so complete, his Amazon.com site so elegant and appealing, that it became from Day One the point of reference for anyone who had anything to sell online. And that, it turns out, is everyone.

Amazon's revenue was 15.75 million dollars in 1996, 147.79 million dollars in 1997, 609.82 million dollars in 1998, and 1.64 billion dollars in 1999. Amazon was competently executing a business plan that from the outset explicitly prioritized growth.

Where does anyone get the idea that continued cryptographic export controls would have stopped the growth of Internet commerce, rather than simply limiting the security level of Internet commerce? How do we reconcile this idea with the observed facts of Amazon already growing rapidly in the 1990s? The export controls were still in place; to the extent that Internet commerce was encrypted at all, it was encrypted primarily with a weak cryptosystem, namely 512-bit RSA.

Just to emphasize how fast Amazon's growth was at that point: Amazon's revenue was more than doubling every year. If that had kept up, Amazon's revenue in 2023 would have been more than 26,000,000 billion dollars. In reality, Amazon's revenue in 2023 was only 575 billion dollars.
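To make the compounding concrete, here is a back-of-the-envelope sketch of the arithmetic, assuming purely for illustration that revenue had merely doubled every year from 1999 through 2023; the only inputs are the figures already quoted above:

    # Hypothetical illustration: compound Amazon's 1999 revenue forward,
    # assuming it had merely doubled every year through 2023.
    revenue_1999 = 1.64e9        # dollars: Amazon's reported 1999 revenue
    years = 2023 - 1999          # 24 years of hypothetical doubling

    hypothetical_2023 = revenue_1999 * 2**years
    print(f"{hypothetical_2023 / 1e9:,.0f} billion dollars")
    # prints roughly 27,500,000 billion dollars, versus the actual 575 billion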

Okay, okay, 575 billion dollars is a lot of money, and Amazon is now fighting antitrust regulators. But how is Amazon's growth before and after 1999 a story about a change in cryptography regulation, rather than a story about customers liking a convenient shopping site that provided fast, reasonably reliable deliveries of an ever-expanding collection of products at competitive prices?

These are natural questions for anyone checking whether the talk's claims match the available evidence. But the talk doesn't answer any of these questions. Look, for example, at the full paragraph containing the "would not have been possible" quote:

It’s not that 1999 wasn’t a win, at least in a narrow sense. Indeed, we can craft a counterfactual history in which the liberalization of encryption didn’t happen, in which we instead accepted some janky, backdoored, government-standard cryptosystem—some sad Clipper chip DES admixture—and that instead became the thing: a world in which strong cryptosystems did not receive the benefit of many eyes and open scrutiny. But of course the future from then to now would have been very different—not least of all because the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible.

Aside from irrelevant details, how is the "counterfactual history" of a "janky, backdoored, government-standard cryptosystem" different from the reality of export-controlled cryptography in the late 1990s, when 95% of SSL connections were limited to RSA-512? The explosion of Internet commerce was already happening at that point.

Where does the "would not have been possible" claim come from? I'm not allergic to the phrase "of course", but I try to limit it to cases where things are really obvious, which is definitely not the situation here.

2. Would commerce exist without security?

Government regulations are just one of many sources of weak cryptography. Weak cryptography, in turn, is just one of many sources of Internet-security failures.

Companies reported spending more than 0.5% of revenue in 2023 on things labeled as "cybersecurity". A cybersecurity company named CrowdStrike accidentally took down millions of Windows computers in July 2024, causing long service outages for many other companies. CrowdStrike had been given control over all those computers because it was saying that this would help protect those computers against attacks. Delta Airlines, in a lawsuit filed this month against CrowdStrike, said that the outage "crippled its operations for several days, costing more than $500 million in lost revenue and extra expenses". Meanwhile there are endless reports of ransomware running rampant, as illustrated by BlackCat disrupting various health-care services for weeks starting in February 2024.

And yet, despite the evident disruptions, Internet commerce continues.

Do we want better security to stop the attacks? Yes. Does not having better security mean that the entire system of Internet commerce will be destroyed?

Um, well, it's conceivable that there will be such a dramatic increase in attacks that we'll all retreat to non-Internet commerce (because, y'know, non-Internet commerce is secure). But somehow the attackers don't seem interested in killing the goose that lays the golden eggs.

Let's rewind to 1999. The CIH virus had destroyed data on a million computers, and was just one of many examples of attacks. This didn't stop the Internet from skyrocketing in popularity; it simply prompted effort to fix vulnerabilities.

One of the vulnerabilities at that time was the use of RSA-512. From the perspective of stopping attacks, this vulnerability was important to fix. But, from the same perspective, there were many other vulnerabilities that were also important to fix, including many that were cheaper to exploit than attacking RSA-512. My own experience is that exploitable buffer overflows were very easy to find back then.

Does it sound plausible if someone picks one of the system vulnerabilities in 1999 and claims that fixing this vulnerability is what made the difference between Internet commerce succeeding and Internet commerce failing? I'd expect such a claim to be backed by evidence: at the very least, an accounting of the damage from the full range of attacks at the time, and an explanation of why this particular vulnerability, rather than the many others that were cheaper to exploit, was the make-or-break issue.

Otherwise the claim sounds like nothing more than wishful thinking about the importance of some particular area of security.

3. Would corporate databases exist without strong cryptography?

Let's move on to the second part of the claim that, without strong cryptography, "the metastatic growth of SSL-protected commerce and RSA-protected corporate databases would not have been possible".

The mass-surveillance industry is much older than 1999. See, for example, the book "IBM and the Holocaust", which traces how IBM's punch-card databases were used to "organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor".

Does a database not count as a "corporate database" if the decisions of what's going into the database are being made by a government, in this case the Nazis? Does that make the database less evil? Also, does the level of evil depend on whether this was a database operated by IBM for the Nazis or a database operated by the Nazis using technology provided by IBM? Somehow I don't think these distinctions mattered for people in the concentration camps.

As the 20th century continued, more and more powerful technology made surveillance less and less expensive. Here's a quote from a 2007 study "Engaging privacy and information technology in a digital age", issued by a committee formed by the U.S. National Academies of Sciences, Engineering, and Medicine:

Beginning in the late 1950s, the computer became a central tool of organizational surveillance. It addressed problems of space and time in the management of records and data analysis and fueled the trend of centralization of records. The power of databases to aggregate information previously scattered across diverse locations gave institutions the ability to create comprehensive personal profiles of individuals, frequently without their knowledge or cooperation. The possibility of the use of such power for authoritarian purposes awakened images of Orwellian dystopia in the minds of countless journalists, scholars, writers, and politicians during the 1960s, drawing wide-scale public attention to surveillance and lending urgency to the emerging legal debate over privacy rights.

One of the sectors that immediately benefited from the introduction of computer database technology was the credit-reporting industry. ... But the credit and insurance industries were not alone. Banks, utility companies, telephone companies, medical institutions, marketing firms, and many other businesses were compiling national and regional dossiers about their clients and competitors in quantities never before seen in the United States.

Surely the 1960s surveillance dossiers by "the credit-reporting industry" and "marketing firms" and so on count as examples of "corporate databases".

What's the mechanism by which continued cryptographic export controls would have supposedly stopped the growth of surveillance? How do we reconcile this with the observed facts of government surveillance and corporate surveillance already exploding in the second half of the 20th century, when cryptographic export controls were in place? Why does it matter for this growth whether databases were "RSA-protected" or not? Or, more to the point, whether they were protected by something stronger than RSA-512?

The talk doesn't answer any of these questions either.

4. Getting personal

I was, as the talk mentions, one of the people fighting export controls in the 1990s. The reason I was taking action is that I had studied the situation and was troubled by it. In particular, I had concluded that the export controls were contributing to attacks. If I was wrong about that, then I'd like to understand why.

The talk claims that moving to stronger cryptography had the negative effect of creating attacks: specifically, of creating corporate mass surveillance. I'd like to understand the rationale for this claim. But I don't see where the talk explains the supposed mechanism, or provides any evidence, or addresses the contrary evidence provided by 20th-century surveillance.

Beyond claiming that the actions against export controls contributed to corporate surveillance, the talk claims that these actions came from a narrow perspective of seeing the government as the only problem. In context, the talk is attributing that perspective to those of us fighting the "crypto wars".

Here the talk is simply wrong. We're on record publicly explaining our goals, and those records demonstrate a much broader perspective than what the talk claims.

Consider, for example, the 1993 "Cypherpunk's Manifesto" from Eric Hughes. The manifesto is all about real-world privacy. Encryption isn't mentioned until the fifth paragraph; it's one of multiple items that privacy is described as relying upon. The next paragraph after that is as follows:

We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence. It is to their advantage to speak of us, and we should expect that they will speak. To try to prevent their speech is to fight against the realities of information. Information does not just want to be free, it longs to be free. Information expands to fill the available storage space. Information is Rumor's younger, stronger cousin; Information is fleeter of foot, has more eyes, knows more, and understands less than Rumor.

This document isn't "viewing the government as the sole threat": it's explicitly stating the opposite. It also isn't "conflating encryption with privacy".

As another example, here are some 1995 quotes from the first brief that my lawyers filed in Bernstein v. U.S.:

The uses for cryptography range from protecting the privacy of attorney/client correspondence, financial transactions and medical records transmitted over wires, to preventing piracy of cable TV, cellular phone, telephone lines or satellite signals. Every bank ATM uses cryptography. ... If the government is successful here, it will eliminate anonymity, forcing citizens to reveal their private associations to the government and others, including high-tech criminals.

This isn't "focusing concerns about privacy invasion solely on governments": it explicitly includes other attackers. As the U.S. Court of Appeals for the Ninth Circuit put it in its 1999 decision in the case:

Whether we are surveilled by our government, by criminals, or by our neighbors, it is fair to say that never has our ability to shield our affairs from prying eyes been at such a low ebb. The availability and use of secure encryption may offer an opportunity to reclaim some portion of the privacy we have lost.

The government was the defendant in my court case. That's because this was a court case against regulations imposed by the government. But the records show that we were considering a broader range of attackers.

Export regulations weren't the only problem I started taking action to address. Example from 1993: I wrote letters that stopped NIST's announced plan to give its DSA patent to PKP, a partnership between Caro-Kann Corporation and the RSA corporation. Example from 1995: I helped people understand that DH could be used as a replacement for RSA; the DH patent was due to expire in 1997, whereas the RSA patent wasn't due to expire until 2000. Do these actions sound like "ignoring (or even celebrating) the interests of market actors"?

The talk's narrative of supposed 1990s blindness is everywhere in the talk, not just in the quotes I've given above. For example, the talk says that "one of the world's most profitable business models" in 2024 is "mass surveillance of a scale and granularity unimaginable in the 1990s".

"Unimaginable"? Seriously? How is this not the scale and granularity of surveillance predicted by Orwell in the 1940s?

Oh, you want a surveillance-capitalism version? Richard Stallman's 1997 essay "The right to read" started with a dystopian short story about a future in which "you could go to prison for many years for letting someone else read your books". Here are two sentences from the story:

In his software class, Dan had learned that each book had a copyright monitor that reported when and where it was read, and by whom, to Central Licensing. (They used this information to catch reading pirates, but also to sell personal interest profiles to retailers.)

That's worldwide fine-grained surveillance for an unholy alliance of marketers and the government, just like the reality in 2024.

5. Searching for sources

The talk has a general statement that it draws on analyses by "Dr. Sarah Myers West, Dr. Chris Gilliard, Dr. Karina Rider, and Dr. Matthew Crain". The talk transcript ends with a list of references and URLs.

One of those sources is Rider's 2016 master's thesis "The Privacy Paradox: Privacy, Surveillance, and Encryption". Rider searched Congressional hearings starting in 1993 for the word "encryption", and then reviewed and summarized the arguments.

As an interesting example, the thesis quotes Microsoft lawyer Ira Rubinstein telling Congress in 1997 that "industry is in a position to assist law enforcement and national security in achieving their objectives because we are able to sell U.S. products in mass volume".

The thesis doesn't mention that there was already a well-established tradition of corporations making money by enabling government surveillance. Remember IBM working with the Nazis? How about IBM working with NSA to make DES weak enough for NSA to break?

Regular readers of my blog will recall that, in the 1990s, NSA modified its export controls to create special exceptions for low-security cryptography from the RSA corporation, specifically 40-bit RC2 and 40-bit RC4. This was the result of a public agreement between the government and the Software Publishers Association. Presumably NSA was happy solidifying the market position of cryptography that NSA could break. It's not as if the corporations involved were putting a higher priority on security than on making money.
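To see why an agency with NSA's resources would be comfortable with 40-bit keys, here is a rough sketch of the brute-force arithmetic; the keys-per-second figure is an assumption chosen for illustration, not a measurement of any particular machine:

    # Rough brute-force arithmetic for 40-bit export ciphers (40-bit RC2/RC4).
    keyspace = 2**40             # about 1.1e12 possible keys
    keys_per_second = 1e6        # assumed search rate for one 1990s workstation

    worst_case_days = keyspace / keys_per_second / 86400
    print(f"about {worst_case_days:.0f} days to try every key on one machine")
    # prints about 13 days at this assumed rate; many machines in parallel,
    # or dedicated hardware, shrink that to hours or less

At that scale, a 40-bit limit left the traffic readable by any well-resourced attacker, let alone an agency with a multi-billion-dollar budget for cryptanalytic hardware.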

As another example, consider Project Shamrock, in which telegraph companies sent NSA copies of millions of telegrams, even though the lawyers at three of those companies had "recommended against participation because they considered the program to be in violation of the law and FCC regulations". That's a quote from a 400-page Congressional study "Intelligence activities and the rights of Americans: Book II" issued in 1976.

The arrangement between telegraph companies and NSA was secret for decades. As one historian put it:

Data created and collected by these firms could be shared with the government quietly, protected from public scrutiny and outrage by the twin concealments of classification and corporate secrecy.

Oh, oops, that isn't actually a quote about the telegraph companies. It's a quote from the talk that this blog post is commenting upon, a quote specifically about what happened with Internet companies "following the 90s", as if the mass-surveillance industry were something new.

For people who study the history and incentives, it's completely unsurprising to see 21st-century examples of corporations and governments working together on surveillance. I mentioned some examples in a 2012 talk. The 2013 Snowden disclosures included more examples. I'll take a moment here to recommend my own 2015 talk about the corporate incentives, including further examples of attack activities by corporations. But the point I'd like to emphasize here is that corporate surveillance was already burgeoning in the 20th century. The surveillance scandals that Congress investigated in the 1970s weren't just government scandals.

Let's get back to the thesis:

The ability to ensure privacy for consumers in market transactions against criminals was therefore paired with offers to work cooperatively with government to ensure LEAs and intelligence agencies obtained decrypted communications.

What does "paired" mean?

Yes, there was an overlap between (1) the corporations asking Congress for freedom to sell strong cryptographic software and (2) the corporations offering to support government surveillance. Sometimes the corporations were pointing to #2 as an argument for #1.

But "paired" sounds to me like it's saying that one of these wouldn't exist without the other. That's not true. #1 and #2 are each examples of corporations pursuing their money-making goals; neither one relies on the other. The telegraph companies secretly delivering copies of telegrams to NSA weren't selling cryptographic software.

The thesis continues as follows:

Privacy from criminals in the market had the paradoxical effect of facilitating the contraction of privacy from police surveillance.

I guess this is the specific source of the surprising claim highlighted at the beginning of this blog post, namely that "the legacy of the crypto wars was to trade privacy for encryption—and to usher in an age of mass corporate surveillance". But, again, what's the mechanism that's supposed to have created this negative effect, and how do we reconcile this with the observed facts of what was happening already?

The thesis continues by claiming that "in the year 2000 ... NSA began investing billions of dollars in secret efforts to break commercial encryption systems".

No, 2000 isn't when that began. Tanja Lange and I recently wrote a paper "Safe curves for elliptic-curve cryptography", including an appendix that summarizes NSA's resources in the mid-1990s:

The Federation of American Scientists used public data to conclude in 1996 [98] that the "NSA budget is around $3.6 billion", including "roughly 20,000 direct-hire NSA staff". Even if personnel expenses for an average staff member were as high as $100000, NSA would have had $1.6 billion in 1996 to spend on equipment.

Declassification requests by journalists led to partial declassification in 2013 of internal NSA history books from 1998 and 1999. These books confirm the 20,000 number; see, e.g., [145, page 23]. These books also say [146, page 291] that NSA spent $199 million in 1984 on a single contract to buy 21,000 IBM PC XTs so as to put a PC on each desk; that NSA spent $150 million in 1985 on a single network-hardware contract; and that "computer power was the essential ingredient in cryptanalysis".

Another internal NSA document says that "since the middle of the last century, automation has been used as a way to greatly ease the making and breaking of codes" and gives examples of NSA's investments in cryptanalytic hardware.

Meanwhile NSA was also spending money attacking cryptography in other ways. NSA and CIA secretly purchased Crypto AG in 1970, and sabotaged the cryptography that Crypto AG was selling. As another example, NSA hoped that developing DES would "drive out competitors" such as Atalla Corporation.

The thesis attributes its claimed 2000 starting date to the Snowden documents. This doesn't have pinpoint references, but seems to be alluding to an internal GCHQ presentation from 2010. The presentation says that "for the past decade, NSA has lead an aggressive, multipronged effort to break widely used Internet encryption technologies" such as "SSL" and "SSH" and "VPNs"; that "cryptanalytic capabilities are now coming on line"; and that "vast amounts of encrypted Internet data which have up till now been discarded are now exploitable."

Is this saying that NSA started attacking cryptography around 2000? No. It's talking specifically about NSA trying, evidently with some level of success, to attack particular "Internet encryption technologies".

How is any of this supposed to show that preserving cryptographic export controls would have made surveillance harder?

Later the thesis describes the corporate-plus-government objectives as follows: "how could informational privacy in the market be assured so that American technology firms could dominate world markets, all while securing avenues for LEA surveillance?"

Yes, there's an objective of world domination for big technology firms such as, to take a 21st-century example, Facebook. Yes, mass surveillance is the centerpiece of Facebook's business model. Yes, Facebook shares data with the government.

But how did Facebook's rise to dominance supposedly rely on "informational privacy"? What's the mechanism by which preserving cryptographic export controls would supposedly have prevented Facebook's dominance? How do we reconcile this with reports from 2010 saying that connections to Facebook weren't even encrypted? Facebook had already grown to half a billion users at that point.

6. The future

I hope you're troubled by mass surveillance. I hope you have the time and energy to do something about it. I know many of my readers are doing this already.

Doing something doesn't mean magically solving the whole problem all at once. It means picking a specific task where you can reasonably hope to make progress, and working on that.

For example, maybe you engage in the policy fight against surveillance mandates. Or maybe you expose the money flow behind those mandates. Or, as a programming example, maybe you work on tools for decentralization. There's much more to do.

But wait: what if there's some fundamental incompatibility between stopping government surveillance and stopping corporate surveillance? If you try to stop government surveillance then maybe you're helping the corporations! If you try to stop corporate surveillance then maybe you're helping the government!

Instead of enthusiastically working on making the situation better, you start worrying that you might be making the situation worse. This sort of concern can be paralyzing. Safest not to do anything, right? No matter how bad the status quo is, if you avoid action then at least you know you aren't doing any damage. Primum non nocere.

When a talk claims that preserving cryptographic export controls would have stopped the mass-surveillance industry, the talk isn't just making a claim for an audience of historians. It's influencing future action. It's telling you that there's a tradeoff: an unhappy choice between stopping one bad thing and stopping another bad thing. It's telling you to pause, and to worry that your actions will similarly have bad effects. That's the most important reason to look at whether the claim is actually true, as I've been doing in this blog post.

I'll close with a recommendation for further reading: Phil Rogaway's paper "The moral character of cryptographic work".


Version: This is version 2024.10.28 of the 20241028-surveillance.html web page.