I’m not talking about 100% perfect, bug-free code. For all practical purposes, that’s impossible. I’m also not talking about doing what Microsoft does, or did, either. They’ve invested who knows how many hundreds of millions of dollars and the better part of a decade hiring every available expert, instituting a month-long new-code moratorium, and constantly improving their program. A program perhaps feasible only for a mega-corporation with billions in cash.
I’m talking about developing software just secure enough for, say, online banking, shopping, social networking, office collaboration, and other common Web-based applications that process sensitive data. Software secure enough to successfully fend off attackers who might spend several days giving you a free pen-test for their own personal entertainment or monetary gain. Do we know how to make software at least that secure? Really and truly?
Think about how you’d answer if someone asked you how.
If you ask this question of a handful of “experts” and manage to cut through the laundry list of what NOT to do, you’ll hear a litany of appsec buzzword bingo. Phrases such as software security throughout the entire SDL, input validation, contextual output filtering, SQL parameterization, threat modeling, security QA testing, vulnerability scanning, developer training, source code reviews, etc. What you won’t hear is a consistent answer to the problem of developing secure software. No twelve-step program, no “just take this pill and call me in the morning,” and certainly no P90X for software security. All the guidance is custom prescribed and non-portable, which is probably the reason no one can ever agree.
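For the record, a couple of those bingo squares are simple enough to show in a few lines. Here is a minimal sketch, assuming Python with its standard-library sqlite3 and html modules (the table, values, and attacker input are all made up for illustration):

```python
# Illustrative only: two buzzword-bingo items in practice.
import sqlite3
from html import escape

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, '<script>alert(1)</script>')")

user_input = "1 OR 1=1"  # hypothetical attacker-supplied value

# SQL parameterization: the driver binds the value as data, so it
# cannot rewrite the query. A classic injection string finds nothing.
row = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_input,)
).fetchone()
print(row)  # None -- "1 OR 1=1" is treated as a literal, not as SQL

# Contextual output filtering: encode for the output context (HTML here)
# on the way out, so stored markup renders inert instead of executing.
name = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()[0]
print(escape(name))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

None of which, of course, tells you whether doing these things actually moved your organization’s needle, which is the point of what follows.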
Maybe this lack of agreement exists because we can’t properly define “security” in anything but fuzzy terms. I also think it’s because no one really knows how to create secure code, even less so at scale, let alone in the presence of agile development processes. If someone actually does know how to create secure code, they can’t quantify their process with hard data, and this includes Microsoft. We’ll make an exception for Daniel J. Bernstein, of djbdns and qmail fame. So unless you are him (and no, you do not have data), you must admit you are guessing. Fortunately, you’ll find yourself in the company of many.
During their session last week at the 3rd Annual Information Security Summit, Gary McGraw and John Stevens of Cigital touched on this very subject and eloquently articulated the two general types of software security guidance. On one hand, there is prescriptive guidance, like OWASP’s Enterprise Security API and Testing Guide, written by the Superman-type application security pros who recommend things that worked for them a time or two at a company or two. Their words. But, just like the Microsoft example, the average mortal person or organization may not be able to do the same thing, because they are not superheroes with superpowers.
The suggested alternative is something like the Building Security In Maturity Model (BSIMM), which is descriptive guidance. The BSIMM, a study of thirty real-world software security initiatives, has catalogued some 110 common activities among them. So if you don’t know how to improve your software security program, see what the other cool kids are doing and how you currently measure up. Just for the ability to compare yourself against your peers, the BSIMM data has value, but there is a chasm-like gap between what a set of organizations do and what the outcome is. How do we know if any of those activities actually make their software secure? That’s the question I had for Gary and John.
The science of software security has never tied controls to outcomes with any kind of statistical significance. Are all vulnerability classes equally affected when mandating developer training? What impact does threat modeling have on the types of vulnerabilities found in the average application? I’m sure you have some answers, at least some good guesses, but we have no data. We have no data on what controls or activities reduce what types of vulnerabilities, for how long, or to what degree, or whether they are worth the investment. For example: when the average organization deploys static analysis testing during QA, it generally costs $X and reduces the number of high-risk vulnerabilities of Y type(s) in production by Z%. Something just that simple.
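To be concrete about how simple: the metric I’m asking for is barely arithmetic. A sketch, with entirely made-up numbers (the $50,000 cost and the 40-to-10 drop in findings are hypothetical, not industry data):

```python
# The missing metric: cost of a control versus the reduction it buys
# in one vulnerability class. All inputs here are hypothetical.
def control_value(cost_usd, vulns_before, vulns_after):
    """Return (percent reduction, dollars spent per vulnerability removed)."""
    removed = vulns_before - vulns_after
    reduction_pct = 100.0 * removed / vulns_before
    return reduction_pct, cost_usd / removed

# Hypothetical: static analysis in QA costs $50,000 and cuts SQL injection
# findings in production from 40 to 10.
pct, cost_per_vuln = control_value(50_000, 40, 10)
print(f"{pct:.0f}% reduction at ${cost_per_vuln:,.0f} per vulnerability removed")
# -> 75% reduction at $1,667 per vulnerability removed
```

The formula is trivial; what nobody has is the before/after vulnerability counts, measured per control, across enough organizations to mean anything.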
What we do know, what is painfully obvious from the headlines and every industry study, is that when the bad guys pick a target, they win, and with relative ease in only hours or days. If you, dear reader, want application security to come in anything other than a pizza box listening on the network, we must figure out how to develop secure software — and be able to prove it.