Thomas H. Ptacek

Internal Disclosure

Boring Premises

Virtually all software has bugs. We don’t know how to practicably build software without bugs.

Software security vulnerabilities leverage bugs (or chains of bugs) to trick software into doing things for attackers. We don’t completely understand which bugs will be useful to attackers. We know some patterns. But new patterns are routinely discovered. There’s a shorthand for this: “all bugs are security vulnerabilities”. Call it “The OpenBSD Doctrine”, because I think Theo de Raadt was the first person to popularize it.

Ergo, all software has vulnerabilities.

Two different software security experts looking at the same piece of software will find different bugs. The same expert looking at the same piece of software at different times will find different bugs. Everyone who has ever tried to scale a software security practice bears the bloody forehead of this truth: generating consistent results from human security inspection of software is an unsolved problem.

It’s partly for that reason that “late” discovery of a vulnerability – say, a vulnerability that’s been latent for years – isn’t unusual. Heartbleed was latent for years, and fit into a well-understood pattern of vulnerabilities in one of the most sensitive open-source codebases in the world.

Most companies allocate zero dollars to human security inspection of their software. But a few companies allocate a lot of dollars to it. You know most of their names: Microsoft, Google, Facebook, Amazon, Apple. These companies spend more on software security than most tech companies spend on software development.

What does Microsoft get in return for its galactic-scale security spend? Not vulnerability-free software. External researchers routinely find vulnerabilities in Microsoft software that Microsoft itself, and all of Microsoft’s third-party contractors, missed.

Instead, Microsoft’s internal security team: (1) reduces the number of vulnerabilities they ship, (2) builds and maintains the ability to respond systemically to new vulnerabilities, hunting and killing all similar vulnerabilities across their codebase, and (3) pushes the state of the art in software security forward, gradually improving their ability to do (1) and (2).

But make no mistake: Microsoft is still shipping vulnerabilities, just like Google, and just like Linux and Ubuntu. If this is a problem for you, find your brethren on Twitter and stage an attack on technology worthy of being chronicled in an anthem by Rush.

Some Positive Arguments

Vulnerabilities aren’t “breaches”. A breach occurs when someone exploits a vulnerability to harm people. The distinction matters, in part because it has legal implications, but also because all software has vulnerabilities and the term “breach” loses its usefulness if we have to accept that every piece of technology in the world is in a continuous state of breach.

There is no norm in professional software engineering that internally-discovered security vulnerabilities be disclosed. Companies with serious security teams discover internal vulnerabilities routinely, and you hear about virtually none of them. Every product you use has had high-severity vulnerabilities that were found and fixed without an advisory notice.

A Normative Argument

We could want there to be a new rule that internal vulnerability discoveries be disclosed. But we shouldn’t.

A mandate to disclose internal vulnerabilities would change incentives. Firms would have a reason not to want to find vulnerabilities. If this seems far-fetched, revisit the premises: already, most companies don’t look for them.

There would be subtler distortions, as well. Responsible companies would manage the incentive shift. They’d still work on hardening their software. But many would still want to avoid disclosures.

For instance, they’d call security patches something else. Remember what firms buy from their security teams: systemic response. When a smart team finds a cross-site scripting vulnerability in a Django template because something got mark_safe’d, they don’t just fix that problem; they also scour the code for other mark_safe’s, and, ideally, add a step to their PR and CI/CD process to detect new mark_safe’s.
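That CI/CD step doesn't need to be elaborate. As a minimal sketch (not any particular company's tooling), a check could parse each Python file in a pull request and flag every call to `mark_safe`, forcing a human to confirm the input really is trusted before the change merges:

```python
# Hypothetical PR check: flag every mark_safe(...) call in Python source
# so new uses of it get explicit human review. Uses only the stdlib.
import ast


def find_mark_safe_calls(source: str, filename: str = "<string>"):
    """Return (line, column) positions of every mark_safe(...) call."""
    hits = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Match both bare calls (mark_safe(x)) and attribute
            # calls (safestring.mark_safe(x)).
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name == "mark_safe":
                hits.append((node.lineno, node.col_offset))
    return hits


if __name__ == "__main__":
    sample = (
        "from django.utils.safestring import mark_safe\n"
        "def render(comment):\n"
        "    return mark_safe(comment)  # reviewer: is 'comment' trusted?\n"
    )
    for line, col in find_mark_safe_calls(sample, "views.py"):
        print(f"views.py:{line}:{col}: mark_safe call needs review")
```

A real team would wire something like this into the PR pipeline and fail the build on unreviewed hits; the point is only that systemic response is cheap to automate once the pattern is known.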

Or, instead of investing in human software security inspection, which decisively addresses security flaws but creates work, firms would adopt countermeasure products: application firewalls and alarm systems. This already happens! And, for the most part, the products don’t work (1). They sure are lucrative, though.

What would we get in exchange for this disruption? I argue: nothing more than confirmation of something every expert already knows. It’s hard enough to get people to respond to the security advisories our field does publish – so hard that the only secure consumer systems are the ones that auto-update, relieving consumers of the burden of paying attention in the first place.


None of this is to say that the status quo is OK. From reporting, Facebook apparently managed to ship an account-takeover vulnerability in the process of building a birthday picture sharing feature. To build that marginal feature, a developer apparently had to come in contact with impersonation token code; Facebook was, and presumably still is, one broken birthday picture feature away from exposing user impersonation. That’s deeply alarming.

It’s possible that despite their gigantic spend, the security teams at Facebook, Microsoft, Apple, and Google are still outgunned by the human fallibility of their even more gigantic software development teams. (None of these teams want for expertise; all retain some of the best experts in the world. But they could be spread too thin). These are gigantic companies that generate cash at a velocity that is hard to fully appreciate. They can spend more. Their management teams – well, some of their management teams – can take security more seriously.

And it’s also possible that some of these tech giant business models are just untenable. We don’t replace every fossil fuel power plant in the country with nuclear because we’re afraid we don’t understand the waste containment problems. That’s a macro-scale commercial decision we’ve made in the name of risk management, and one which we know results in hundreds of deaths every year (I’m not necessarily an advocate for nuclear, I’m just saying). We should probably be engaging the same parts of our brains on cat photo sharing as well.

But none of that changes the dynamics of internal software security. Want to change the way we handle it? Primum non nocere.

(1): If you’re a friend or acquaintance of mine and you sell a security product that fits this description, I’m sure I’m not talking about you.