Thomas H. Ptacek

Internal Disclosure

Boring Premises

Virtually all software has bugs. We don’t know how to practicably build software without bugs.

Software security vulnerabilities leverage bugs (or chains of bugs) to trick software into doing things for attackers. We don’t completely understand which bugs will be useful to attackers. We know some patterns. But new patterns are routinely discovered. There’s a shorthand for this: “all bugs are security vulnerabilities”. Call it “The OpenBSD Doctrine”, because I think Theo de Raadt was the first person to popularize it.

Ergo, all software has vulnerabilities.

Two different software security experts looking at the same piece of software will find different bugs. The same expert looking at the same piece of software at different times will find different bugs. Everyone who has ever tried to scale a software security practice bears the bloody forehead of this truth: generating consistent results from human security inspection of software is an unsolved problem.

It’s partly for that reason that “late” discovery of a vulnerability – say, a vulnerability that’s been latent for years – isn’t unusual. Heartbleed was latent for years, and fit into a well-understood pattern of vulnerabilities in one of the most sensitive open-source codebases in the world.

Most companies allocate zero dollars to human security inspection of their software. But a few companies allocate a lot of dollars to it. You know most of their names: Microsoft, Google, Facebook, Amazon, Apple. These companies spend more on software security than most tech companies spend on software development.

What does Microsoft get in return for its galactic-scale security spend? Not vulnerability-free software. External researchers routinely find vulnerabilities in Microsoft software that Microsoft itself, and all of Microsoft’s third-party contractors, missed.

Instead, Microsoft’s internal security team: (1) reduces the number of vulnerabilities they ship, (2) builds and maintains the ability to respond systemically to new vulnerabilities, hunting and killing all similar vulnerabilities across their codebase, and (3) pushes the state of the art in software security forward, gradually improving their ability to do (1) and (2).

But make no mistake: Microsoft is still shipping vulnerabilities, just like Google, and just like Linus and Ubuntu. If this is a problem for you, find your brethren on Twitter and stage an attack on technology worthy of being chronicled in an anthem by Rush.

Some Positive Arguments

Vulnerabilities aren’t “breaches”. A breach occurs when someone exploits a vulnerability to harm people. The distinction matters, in part because it has legal implications, but also because all software has vulnerabilities and the term “breach” loses its usefulness if we have to accept that every piece of technology in the world is in a continuous state of breach.

There is no norm in professional software engineering that internally-discovered security vulnerabilities be disclosed. Companies with serious security teams discover internal vulnerabilities routinely, and you hear about virtually none of them. Every product you use has had high-severity vulnerabilities that were found and fixed without an advisory notice.

A Normative Argument

We could want there to be a new rule that internal vulnerability discoveries be disclosed. But we shouldn’t.

A mandate to disclose internal vulnerabilities would change incentives. Firms would have a reason not to want to find vulnerabilities. If this seems far-fetched, revisit the premises: already, most companies don’t look for them.

There would be subtler distortions, as well. Responsible companies would manage the incentive shift. They’d still work on hardening their software. But many would still want to avoid disclosures.

For instance, they’d call security patches something else. Remember what firms buy from their security teams: systemic response. When a smart team finds a cross-site scripting vulnerability in a Django template because something got mark_safe’d, they don’t just fix that problem; they also scour the code for other mark_safe’s, and, ideally, add a step to their PR and CI/CD process to detect new mark_safe’s.
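
To make that concrete, here’s a minimal sketch of both halves; the view, the file names, and the allowlist are all hypothetical, not anyone’s actual code. First, the bug class (assuming Django):

    from django.utils.safestring import mark_safe

    def profile_blurb(user):
        # BUG: user.bio is attacker-controlled; mark_safe switches off
        # template auto-escaping, so script payloads in the bio render
        # verbatim (stored XSS).
        return mark_safe(user.bio)

And the kind of CI tripwire the systemic response adds, so new mark_safe call sites can’t land unreviewed:

    # A stdlib-only check that fails the build on any mark_safe call site
    # that isn’t on an audited allowlist.
    import pathlib
    import re
    import sys

    AUDITED = {"app/emails.py"}  # hypothetical allowlist of reviewed call sites

    def check_tree(root="."):
        offenders = [
            str(p)
            for p in pathlib.Path(root).rglob("*.py")
            if str(p) not in AUDITED
            and re.search(r"\bmark_safe\s*\(", p.read_text(errors="ignore"))
        ]
        for path in offenders:
            print(f"unaudited mark_safe in {path}", file=sys.stderr)
        return 1 if offenders else 0

    if __name__ == "__main__":
        sys.exit(check_tree())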

Or, instead of investing in human software security inspection, which decisively addresses security flaws but creates work, firms would adopt countermeasure products: application firewalls and alarm systems. This already happens! And, for the most part, the products don’t work(1). They sure are lucrative, though.

What would we get in exchange for this disruption? I argue: nothing more than confirmation of something every expert already knows. It’s hard enough to get people to respond to the security advisories our field does publish – so hard that the only secure consumer systems are the ones that auto-update, relieving consumers of the burden of paying attention in the first place.

Disclaimers

None of this is to say that the status quo is OK. From reporting, Facebook apparently managed to ship an account-takeover vulnerability in the process of building a birthday picture sharing feature. To build that marginal feature, a developer apparently had to come in contact with impersonation token code; Facebook was, and presumably still is, one broken birthday picture feature away from exposing user impersonation. That’s deeply alarming.

It’s possible that despite their gigantic spend, the security teams at Facebook, Microsoft, Apple, and Google are still outgunned by the human fallibility of their even more gigantic software development teams. (None of these teams want for expertise; all retain some of the best experts in the world. But they could be spread too thin.) These are gigantic companies that generate cash at a velocity that is hard to fully appreciate. They can spend more. Their management teams – well, some of their management teams – can take security more seriously.

And it’s also possible that some of these tech giant business models are just untenable. We don’t replace every fossil fuel power plant in the country with nuclear because we’re afraid we don’t understand the waste containment problems. That’s a macro-scale commercial decision we’ve made in the name of risk management, and one which we know results in hundreds of deaths every year (I’m not necessarily an advocate for nuclear, I’m just saying). We should probably be engaging the same parts of our brains on cat photo sharing as well.

But none of that changes the dynamics of internal software security. Want to change the way we handle it? Primum non nocere.

(1): If you’re a friend or acquaintance of mine and you sell a security product that fits this description, I’m sure I’m not talking about you.

You Need Deli Cups

Deli cups are one of mankind’s greatest inventions, and also one of the most important kitchen tools. But most of my friends don’t have any, or, if they do, they don’t have enough. You need all of them.

You don’t want, like, five of them. You want a box of hundreds. Never to be without them. There are two important species, the squat 16oz and the taller 24oz. You should have both, but when push comes to shove, the 16s are more versatile. Shop around on Amazon and you’ll find a couple hundred for $40 or so.

What’s the use of a deli cup? What isn’t the use of a deli cup?

  • A deli cup is a rigid zip-loc bag. Anything you’d put in a zip can go in a deli. For obvious reasons, liquids (soup, sauces, oils) work far better in delis than zips. But almost everything else is just as happy in a deli as in a zip, too. Delis are smaller than zips. That’s a good thing! A 16oz deli stores a humane portion for one person. Most people don’t re-use zips (I don’t either). Delis give you /optionality/: you can clean them out and put them back on the stack, or, if you’re cleaning out the back of your fridge, just chuck them in the recycling.
  • A deli cup is a cup. Imagine having no other kind of cup. It’s easy if you try.
  • A small stack of deli cups makes an ideal /mise en place/. Fuck ramekins.
  • Deli cups are pickling jars for fridge pickles.
  • Deli cups hold sugar, kosher salt, and bulk spices. The tall ones are big enough to hold a meaningful amount of flour or cornmeal.
  • Measure out the dry ingredients for biscuits or muffins or whatever. Several batches. Each in its own deli. Weeks of biscuits and muffins with no effort.
  • Strain fryer oil (through 2 sheets of paper towel) into a 24oz deli. I wasted so much oil before delis.
  • As luck would have it, a 24oz deli is almost exactly the perfect shape to make stick blender mayo or hollandaise. Egg yolk, salt, couple tbsps liquid, then whir while pouring oil or butter in. Foolproof.
  • 24oz delis are tall enough to store pens and pencils.
  • You could plant in delis. Erin says you’ll have to punch holes in the bottom. Small stack of garden delis.
  • A 16oz deli will store electronic parts.
  • Paint.

I honestly don’t understand why there is tupperware.

Having a practically limitless supply of deli cups is life-changing. Anybody can afford them. I would cook over sterno for a year before I gave up delis.

You need deli cups.

A unified timeline of Efail PGP disclosure events

This is an attempt to combine public sources regarding when various PGP vendors were notified about Efail.

Sources:

  • Efail: the Münster/Ruhr research team’s public Efail.de site.
  • #721: Enigmail bug #721
  • Koch: Werner Koch’s 5/14/2018 GnuPG mailing list post
  • Hansen: Robert J. Hansen’s Twitter feed.

10/25/17 (Efail)

~200 days before the embargo on the research breaks, the Efail team contacts the Mozilla Thunderbird team, opening bug #1411592 (this bug is still private today).

11/3/17 (Efail)

The Efail team contacts Google, presumably regarding the S/MIME variant of their attack.

11/23/17 (#721)

The Efail team opens bug #721, “Efail: Full Plaintext Recovery in PGP via Chosen-Ciphertext Attack”. This bug log is now public. The initial report includes an early draft of the Efail paper, and refers back to (private) Thunderbird bug #1420217.

11/24/17 (Koch)

According to Werner Koch from GnuPG, the Efail team sends an “advisory” (probably a version of the paper sent to Enigmail) to GnuPG. Koch replies that the PGP MDC prevents the Efail attack. Efail points out that they can roll back MDC PGP messages to non-MDC messages.

11/25/17 (#721)

Patrick Brunschwig from Enigmail discusses the Efail bug with Sebastian Schinzel from Efail (Brunschwig and Schinzel are the only representatives of their projects to converse so from now on I’ll refer to them as “Enigmail” and “Efail”).

11/26/17 (Koch)

Koch responds that GnuPG prints an error (starting in October of 2015) when the MDC is stripped from messages, and Enigmail is “doing something wrong”.

11/29/17 (Koch)

Efail asks Koch for a phone call; Koch never replies.

12/04/17 (#721)

Efail asks Enigmail about a timeline for disclosure.

12/05/17 (#721)

Enigmail tells Efail a release with fixes will be available “in about a week”, with a nightly build available immediately.

12/06/17 (#721)

Efail reminds Enigmail that they’re coordinating with other vendors and asks that details not be included in the patch; Enigmail agrees.

2/10/18 (Efail)

Multiple vendors, including Thunderbird and Apple Mail, are notified of the Efail “direct exfiltration” attack, which uses MIME directly to inject HTML rather than tampering with ciphertext.

2/11/18 (#721)

Enigmail asks Efail for a disclosure date; Efail responds with the proposed announcement date (in April) and a preprint of their Usenix paper. The 2/11 preprint is fairly close to the final Usenix version, but is redacted so that only information pertinent to Enigmail is identifiable.

2/13/18 (#721)

Efail requests a phone call with Enigmail, who agrees. Coordination happens off the bug tracker. Enigmail reports back new fixes, and says they’re backported.

2/15/18

90 days back from the ultimate disclosure date, this would be the start of the Google Project Zero mandatory disclosure window.

2/16/18 (Efail)

GPGTools is notified of the PGP-based (presumably: non-direct-exfiltration) variants of the Efail attack.

4/17/18

Proposed public announcement date as relayed to Enigmail in #721.

4/27/18 (Koch)

Werner Koch obtains a copy of the KMail preprint of the Efail paper. Koch discusses with some members of the GnuPG team. No further action is taken “[…] because due to the redaction we were not able to contact and help the developers of other MUAs which might be affected.”

5/11/18 (Hansen)

Robert Hansen of the GnuPG and Enigmail projects is notified by either Enigmail or GnuPG about Efail, and somehow receives the KMail version of the Efail paper.

5/14/18 (Hansen)

Efail is publicly announced. Robert J. Hansen “takes point” on disclosure handling for one or both of Enigmail and GnuPG. He claims not to have been given any information about Efail prior to the “samizdat” copy of the paper he obtained on 5/11/18 — presumably sometime after he failed to obtain it from a journalist, who, Hansen says, indicated to him that the paper had been embargoed only to journalists and away from researchers. Hansen is affiliated with both GnuPG and Enigmail and is listed as a project team member for Enigmail, which received an almost complete preprint of the paper in February.

So @lvh and I have nailed the Latacora hiring challenge down.

At Matasano we had a PHP web app challenge and a custom protocol challenge. We had other stuff, but most of our technical qualification was done based on a web app you tested and a protocol you reversed and tested.

Latacora does both offensive and defensive stuff, and we own the whole deployment, not just the app code.

So our first cut hiring challenge — I’m really happy with it and think it’s better than what we had at Matasano — is a combination Django and AWS challenge. It’s a sort of short pentest of an AWS application environment, where we give you AWS creds and you get us a list of vulnerabilities in the app and the network.

We’re still tinkering with follow-up challenges that are less about finding flaws and more about being able to build stuff up. But the extremely nice thing about assessment challenges is that they’re super easy to score. We want to be able to make technical qualification decisions mechanically and repeatably, so that it doesn’t matter who on our staff reviews candidate challenges.
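
To give a feel for what “mechanically and repeatably” can mean, here’s a purely hypothetical sketch (the findings and point values are invented, not our actual rubric): the challenge environment contains a known list of planted vulnerabilities, and a candidate’s score is a pure function of which ones their report identifies.

    # Hypothetical mechanical scoring: planted findings, fixed point values,
    # fixed pass threshold.
    PLANTED = {
        "idor-profile-endpoint": 3,
        "s3-bucket-world-readable": 3,
        "iam-role-overbroad": 2,
        "django-debug-enabled": 1,
    }
    PASS_THRESHOLD = 6

    def score(reported):
        points = sum(PLANTED.get(finding, 0) for finding in reported)
        return points, points >= PASS_THRESHOLD

    # Same report, same verdict, no matter who on staff runs it:
    print(score({"idor-profile-endpoint", "iam-role-overbroad", "bogus"}))
    # -> (5, False)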

We still interview! But we start every interview knowing already that the candidate is technically qualified, and they know that too. So our interviews should be way shorter — not an all-day gauntlet — and way less stressful for candidates.

Reminder: the Black Hat USA 2018 CFP is open for the next 4 weeks. It’s easy to put together a CFP response. BH is the industry’s best-known practical offensive security conference.

I’m working with 9 other crypto pros to review the Cryptography track.

Some greatest hits:

Black Hat 2009: Moxie Marlinspike introduces NUL prefix attacks on SSL X.509 certificate parsing.

Black Hat 2010: Nate Lawson does a comprehensive tutorial on actually exploiting remote timing attacks in software written in Java, C, and other languages.

Black Hat 2010: Elie Bursztein, with Gourdin, Rydstedt, and Boneh, presents “Bad Memories”, application-layer crypto bypass attacks.

Black Hat 2013: Angelo Prado, Neal Harris, and Yoel Gluck introduce the BREACH attack, the application-layer version of Thai Duong and Juliano Rizzo’s CRIME.

Black Hat 2013: Me and Alex Stamos broke conventional Diffie-Hellman. You may remember our “cryptopotomus” talk.

Black Hat 2012: Me and Mike Tracy did Crypto For Pentesters, a modernization of Chris Eng’s Black Hat 2006 talk. This was the precursor/inspiration for Cryptopals.

Black Hat 2014: Me, Alex, @spdevlin, and @Sc00bzT did a Cryptopals + Decryptocat talk.

Black Hat 2014: Antoine Delignat-Lavaud from INRIA does Cookie-Cutter, Host Confusion, and Triple Handshake, all in one presentation. A criminally underrated talk.

Black Hat 2015: @esizkur’s carry propagation talk — breaking bignum libraries out from underneath crypto libraries. This is still a trendy bug class.

Black Hat 2015: @CipherLaw and @matthew_d_green spent an hour talking through what legally-mandated backdoors in encryption products might actually look like.

Black Hat 2015: Also a strong talk on the mechanics of actually exploiting timing channels in web applications.

Black Hat 2016: @davidcadrian presents DROWN, the all-time greatest TLS crypto attack.

Black Hat 2016: Tom Van Goethem and Mathy Vanhoef’s HEIST attack, which cleverly exploits TCP window sizes to infer response lengths and weaponize CRIME/BREACH.

Black Hat 2016: @spdevlin and @hanno demonstrate Joux’s “Forbidden Attack” on AES-GCM to break TLS, using that attack to serve their slides from a GCHQ web site.

Black Hat has upped its crypto game in the last 10 years.

Black Hat 2017: Conference favorite Elie Bursztein returns, this time with the taxidermied corpse of SHA-1 slung over his shoulder.

Black Hat 2017: Bursztein, with Picod and Audebert, breaks AES-encrypted USB devices.

Black Hat 2017: Alex Radocea presents a break in the crypto protocol for iCloud Keychain.

Black Hat 2017: @veorq broke mbedTLS and Go’s crypto library with a new technique, differential fuzzing, for testing crypto libraries.

Black Hat 2017: Yogesh Swami from Rambus/Cryptography Research presented attacks on SGX attestation.

Refinements on old attacks. Practical exploitation of crypto vulnerabilities against new targets. Tutorials. New research advances. Hardware attacks. We’re interested in all of it.

I’m just one dude speaking only for myself, but I think modern crypto attacks are underappreciated and poorly understood by pentesters and vulnerability researchers. You don’t have to have broken TLS or presented a new research result (though those are welcome); if you can just demonstrate using crypto attacks to break real world systems, you’ve got a great submission.

If you’re working on something involving breaking cryptography: here’s the CFP.

Black Hat Submission Advice

There’s a model submission example on the CFP submission page, but my general advice starts with: Don’t overthink it. The median accepted submission is probably ~1000 words.

DO. NOT. WRITE. YOUR. SUBMISSION. IN. THE. CFP. SUBMISSIONS. APP. Write it out fully somewhere else, then paste it in. Don’t draw ASCII art. The CFP app will ruin it.

Be careful what track you submit to. There is no reliable process inside BH for moving talks submitted to the wrong track over to the right one. It might happen, it might not. Assume you’ll get 50% more attention from reviewers on your primary track than on your secondary one. (i.e., reviewers covering the Mobile Security track are interested in mobile; that’s why they’re there.) Make your primary track the one where you have the best pitch.

Each submission gets 10 different reviewers, each working through 200+ submissions. They’ll give your submission a default ~5 minutes, so state clearly and repeatedly, in every section, what makes your work (1) novel and (2) impactful. Novelty and impact, stated clearly, in the abstract and the outline: do that and you’re already ahead of 80% of all submissions.

There is virtually no FOMO motivation in reviews. If you write a submission that is halfhearted about why it’s important, reviewers will assume it isn’t.

There are so many submissions that a lot of reviewers are going to take any opportunity they’re given to quickly bucket yours. So:

Warning: If you work for a company that does anything related to what your Black Hat submission talks about, find ways to make it clear that you won’t be talking about your employer. Reviewers are allergic to pitches and will assume that’s what you’re trying to do.

Warning: Getting into Arsenal can be hard. Unless you’re submitting to Arsenal, or the thing you built is literally one of the most important new things built this year, don’t make the core of your pitch a tool you built. Everybody does that! Makes it too easy for reviewers to write a “would be good for Arsenal” review.

Warning: If you’re writing a “case study” talk, know that case studies are the easiest kind of talk to write, so you’re competing with dozens of other case studies. I like case studies and think we need more (I’m just one reviewer). But really play up what makes your case important.

Warning: Be careful about stale topics. I’m not going to say what they are; go look at the last 4 years of BH. I’m not saying “don’t submit a machine learning cloud security talk.” I’m just saying probably don’t call it that.

Black Hat reviewers are overwhelmingly technical and most have spoken at BH previously. Don’t overwrite; assume we know what to look for in your technical details.

But: if you’re writing a talk like “I have a pattern of bugs that breaks every Relay and GraphQL application”, don’t assume reviewers are super familiar with GQL. We’ll understand the talk but not necessarily why GQL matters. One of the best 2017 talks barely squeaked in because of this.

Seriously though: at ~1000 words, a decent CFP submission is less work than a medium-sized Reddit comment. It costs you very little time to submit. Whatever else you do, err on the side of submitting!