Web Application Security

Follow-up: Secure (Enough) Software — Where Are the Requirements?

My recent post, Secure (Enough) Software — Do we really know how?, sparked a very thoughtful comment by Mitja Kolsek (ACROS Security) that read more like a well-written blog post than anything else. Mitja goes on to explain one of the more fundamental challenges of software security: the gap between its implicit and explicit (security requirements) forms. He really hits the nail on the head. A comment this good is not something you see every day, so with Mitja’s permission, I’m republishing it here for all readers to enjoy.

“A great article, Jeremiah, it nicely describes one of the biggest problems with application security: How do you prove that a piece of code is secure? But wait, let’s go back one step: what does “secure” (or “secure enough”) mean? To me, secure software means software that neither provides nor enables opportunities for breaking security requirements. And what are these security requirements?

In contrast to functional requirements, security requirements are usually not even mentioned in any meaningful way, much less explicitly specified, by those ordering the software. So the developers have a clear understanding of what the customer (or boss) wants in terms of functionality, while security is left to their own initiative and spare time.

When security experts review software products, we (consciously or otherwise) always have to build some set of implicit security requirements, based on our experience and our understanding of the product. So we assume that since there is user authentication in the product, it is implied that users should not be able to log in without their credentials. Authorization implies that user A is not supposed to have access to user B’s data except where required. The presence of personal data implies that this data should be properly encrypted at all times and inaccessible to unauthorized users.
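To make one of those implicit requirements concrete, here is a minimal sketch of the authorization rule above (user A must not reach user B’s data except where explicitly allowed). All names here (`Document`, `can_read`) are illustrative, not from any real product or framework:

```python
# Hypothetical sketch of an implicit authorization requirement:
# only the owner of a document, or users the owner explicitly
# shared it with, may read it.
from dataclasses import dataclass, field


@dataclass
class Document:
    owner: str
    shared_with: set = field(default_factory=set)


def can_read(user: str, doc: Document) -> bool:
    """Default-deny: access requires ownership or an explicit share."""
    return user == doc.owner or user in doc.shared_with


report = Document(owner="alice", shared_with={"carol"})
print(can_read("alice", report))  # True: owner
print(can_read("carol", report))  # True: explicitly shared
print(can_read("bob", report))    # False: no access path exists
```

The point of writing it down, even this simply, is that the rule becomes testable: a reviewer can enumerate who should and should not pass the check instead of guessing the product’s intent.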

These may sound easy, but a complex product could have hundreds of such “atomic” requirements, with many exceptions and conditions. Now how about the defects that allow running arbitrary code inside (or even outside) the product, such as unchecked buffers, unsanitized or unparameterized SQL statements, and cross-site scripting bugs?
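The unparameterized-SQL defect mentioned above is easy to demonstrate. This sketch uses Python’s standard `sqlite3` module against a throwaway in-memory database; the table and data are invented for illustration:

```python
# Demonstrates why unparameterized SQL is implicitly forbidden:
# concatenated input can rewrite the query, while a bound parameter
# is treated strictly as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "x' OR '1'='1"  # attacker-controlled input

# Vulnerable: the injected OR clause matches every row.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
print(len(rows))  # 1 -- alice's secret leaks

# Safe: the ? placeholder binds the input as a plain value.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "x' OR '1'='1"
```

Because the whole class of defect is well understood, “all SQL statements use bound parameters” is exactly the kind of atomic requirement that can be written down in advance and checked mechanically.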

We all understand that these are bad and implicitly forbidden in a secure product, so we add them to our list of security requirements. Finally, there are unique and/or previously unknown types of vulnerabilities that one is, by definition, unable to include in any security requirements beforehand. My point is that in order to prove something (in this case security), we need to define it first.

Explicit security requirements seem to be a good way to do so. For many years we’ve been encouraging our customers to write up security requirements (or at least threat models, which can be partly translated into security requirements), and found that doing so helped them understand their security models better, allowed them to spot some design flaws in time to fix them inexpensively, and gave their developers useful guidelines for avoiding likely security errors.

For those reviewing such products for security, these requirements provide useful information about the security model, so that they know better what exactly they’re supposed to verify. Only when we define security for a particular product can we tackle the (undoubtedly harder) process of proving it. But even the “negative proof and fix” approach the industry uses today, i.e., subjecting a product to good vulnerability experts, hoping they don’t find anything, and fixing what they do find, can be much improved with the use of explicit security requirements.

  • Girish Aralikatti

Having explicit security requirements is a great way to start a program or kick off new product development, though I feel there may be more challenges than we anticipate in proving (with the aid of test cases or other mechanisms) that the product really meets the stated security requirements. This skepticism probably stems from my belief that there are multiple ways to break a security requirement around, say, authentication and authorization. It depends to a certain extent on the creativity and ingenuity of the QA guy (or whoever else is responsible) to nail down the appropriate testing methodology, including the sufficiency of the test cases for these requirements. In this regard, I feel, trying to identify the “misuse” and “abuse” cases along with the expected, standard “use cases” might prove beneficial.
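One way to picture the misuse-case idea above is to pair each use-case test with tests that each encode one way an attacker might break the requirement. This is only a sketch; `check_login` and its in-memory user store are hypothetical stand-ins for a real authentication mechanism:

```python
# Pairing a standard use case with misuse cases for an
# authentication requirement: "users cannot log in without
# their credentials."
import hmac

USERS = {"alice": "correct-horse"}  # illustrative credential store


def check_login(username: str, password: str) -> bool:
    stored = USERS.get(username)
    # compare_digest avoids leaking matches via timing differences
    return stored is not None and hmac.compare_digest(stored, password)


# Use case: valid credentials succeed.
assert check_login("alice", "correct-horse")

# Misuse cases: each assertion encodes one way to break the requirement.
assert not check_login("alice", "")            # empty password
assert not check_login("alice", "wrong")       # wrong password
assert not check_login("mallory", "anything")  # unknown user
print("all use and misuse cases passed")
```

The set of misuse cases is never provably complete, which is exactly the commenter’s point, but writing them down at least turns tester creativity into an auditable artifact.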

  • kapil assudani

Thanks for sharing these thoughts! Security requirements, in my opinion, must be a two-step process. Let’s say the business case is for new product development. Subsequent to the business case, the business team ideally composes business use cases as they translate the business need into what they envision the solution to be. This is the time when the security team needs to interface with the business and form the first set of what we may call “business security requirements.” This can be accomplished by writing a business misuse case for each provided business use case. The mitigations of these business misuse cases translate into business security requirements, which get consumed when the IT team comes in and writes functional requirements. Automatically, the IT functional requirements have business security requirements baked into them. Next, during the logical design phase, threat modeling must be performed on the logical architecture to compose technical security requirements, which eventually dictate the final logical architecture or design. So basically, your business security requirements and technical security requirements allow you to cover the space comprehensively and help you make design decisions that actually provide assurance of satisfying your requirements.