My wife and I have become very picky about the foods we eat ever since our first child was born. While there are multiple points of consideration, one of the most revealing sources of information about the contents of a food is the “Nutrition Facts” label. From this label we can ascertain the breakdown of fats and carbohydrates, as well as vitamin density. Usually beneath the Nutrition Facts label is the ingredients list, sorted in descending order by weight. Such information, required by the U.S. Food and Drug Administration (FDA), has helped consumers make more informed decisions about the foods they consume.
You’re probably asking yourself… “Eric, why on earth are you telling me this?” The answer is simple… with the migration towards digital transformation and the onslaught of cyber attacks, we need a “Security Facts” label so that we as consumers may make more informed decisions about the risk we are inheriting from the use or acquisition of applications. This is neither an original nor a new concept. In fact, this idea has been discussed on more than one occasion as a part of the OWASP Application Security Metrics Project.
Think this is silly? Go ask Verizon about their Yahoo acquisition. Late last year, Yahoo disclosed two massive data breaches that ultimately reduced the acquisition price by $350 million. Do you believe, given the size of the breaches, that Yahoo was able to paint Verizon an accurate picture of the security of Yahoo’s applications? Now, what’s more fun (or not, depending on where you stand): do you think Verizon has *any* confidence in the statements Yahoo made about the security of its applications and infrastructure prior to the disclosure of these breaches? I’m willing to bet that one major reason for the reduction in acquisition price is the unforeseen due diligence cost Verizon is incurring to verify Yahoo’s security-relevant claims.
Now imagine an accurate Security Facts label was available for Yahoo assets. From the perspective of Verizon, such a label could have more quickly identified areas of concern thereby influencing acquisition price up-front as well as possibly reducing due diligence costs down the road. From the perspective of Yahoo, such a label would serve as a key performance indicator thereby influencing internal application security related activities, likely reducing the size and impact of any publicly disclosed breaches. Would this not, at a minimum, facilitate constructive dialog while incentivizing more positive behaviors?
At the end of the day, the Security Facts label is an aid in calculating risk. Unlike the “based on a 2,000 calorie diet” baseline stance taken by the FDA, a Security Facts label cannot make a baseline statement about risk acceptance. Rather, our Security Facts label must succinctly provide the data points necessary for internal business units or external organizations to assess and gauge *their* risk in consuming the application.
Okay… so what would go into a Security Facts label as it pertains to our applications? Perhaps the first key piece of information to present is the known vulnerabilities and their breakdown… ideally by risk. Unfortunately for this discussion, this presents a dilemma. Risk is computed based on the likelihood of discovery / exploitation and the *impact to the business* if exploited, and is thus a measure unique to the consumer. As such, the label would be projecting severity based on the provider’s risk policy, not necessarily the consumer’s. Therefore, such a label cannot be static. Rather, it should be auto-generated based on inputs from the consumer. For example, consumer A may consider “A3 – Cross-Site Scripting” to have a default impact of “High”, whereas consumer B may rank it with a default impact of “Medium”. The Security Facts labels resulting from these inputs are now inherently *unique to the individual consumer’s risk acceptance policy*. Now if you’re a security vendor (::cough:: WhiteHat Security ::cough::), you are in a great position to provide a breakdown by severity, because *you* set the severity baselines and *you* provide your customers with the ability to dynamically adjust those baselines based on their risk policies (or at least you should).
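To make the consumer A vs. consumer B point concrete, here is a minimal sketch of how a consumer-adjusted severity breakdown might be computed. All names, categories, and findings below are hypothetical, not any vendor's actual schema:

```python
# Hypothetical sketch: re-rank a provider's findings using the
# consumer's own default-impact policy, keyed by OWASP category.

provider_findings = [
    {"id": 1, "category": "A3 - Cross-Site Scripting"},
    {"id": 2, "category": "A3 - Cross-Site Scripting"},
    {"id": 3, "category": "A2 - Broken Authentication and Session Management"},
]

# Consumers A and B disagree on the default impact of XSS.
consumer_a_policy = {"A3 - Cross-Site Scripting": "High"}
consumer_b_policy = {"A3 - Cross-Site Scripting": "Medium"}

def severity_breakdown(findings, policy, default="Medium"):
    """Count findings per severity level under a given consumer policy."""
    counts = {}
    for finding in findings:
        severity = policy.get(finding["category"], default)
        counts[severity] = counts.get(severity, 0) + 1
    return counts

print(severity_breakdown(provider_findings, consumer_a_policy))
print(severity_breakdown(provider_findings, consumer_b_policy))
```

The same three findings yield two different labels: consumer A sees two “High” issues, while consumer B sees only “Medium” ones, which is exactly why the label must be generated per consumer rather than published as a static document.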
Secondly, we would want to know the breakdown of known vulnerabilities across an industry-recognized taxonomy, such as the OWASP Top Ten Project. If our Security Facts label indicated that 80% of all vulnerabilities fall under “A2 – Broken Authentication and Session Management”, then the corresponding application likely suffers from some serious identity-related architectural issues. Thirdly, our Security Facts label should provide some insight into historical trends. What are the vulnerability introduction and remediation rates over the past 6 or 12 months? This can help provide a sense of whether things are getting better. At WhiteHat Security, we’ve seen a strong uptick in remediation rates within development teams that partake in our Computer Based Training offering. If you’re not fixing the vulnerabilities, you’re just generating data for the sake of data. Application security training is key to enabling developers to act on the vulnerability data.
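These two data points, the taxonomy breakdown and the introduced-versus-remediated trend, are simple enough to sketch. The categories and monthly counts below are illustrative only:

```python
# Hypothetical sketch: share of open findings per OWASP Top Ten
# category, plus a simple introduced-vs-remediated monthly trend.
from collections import Counter

findings = ["A2", "A2", "A2", "A2", "A3"]  # category of each open finding

total = len(findings)
breakdown = {cat: n / total for cat, n in Counter(findings).items()}
print(breakdown)  # A2 accounts for 80% of open findings

# (month, introduced, remediated) over the last three months
monthly = [("Jan", 12, 5), ("Feb", 9, 8), ("Mar", 6, 10)]
net_new = [(month, introduced - remediated)
           for month, introduced, remediated in monthly]
print(net_new)  # a shrinking (or negative) net suggests things are improving
```

A label consumer reading the trend line cares less about the absolute counts than about the direction: a net that turns negative means remediation is outpacing introduction.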
Finally, we would want to gauge *how* the data for this label was generated. Measuring and analyzing security activities is helpful here. How many vulnerabilities were discovered (if any) during threat modeling? Code review? Penetration testing? Public disclosure? The difficulty here lies in the assumption that such activities are taking place at scale. This is where automation throughout the software development process is critical. WhiteHat Security customers address much of this challenge through our automated, continuous dynamic and static vulnerability identification and management solution – WhiteHat Sentinel.
Thus far we’ve placed emphasis on known vulnerabilities. What about unknown vulnerabilities, also known as “zero day” vulnerabilities? The challenge here is that it is impossible to know the unknown (i.e., what we missed). Discovering all possible vulnerabilities is a theoretical goal, not a realistic one. Instead, the Security Facts label should provide some evidence of the security rigor applied to the application in question. What security activities were performed? At what frequency? Imagine you are reviewing a Security Facts label whose known-vulnerability data looks relatively positive: few if any known vulnerabilities, with strong historical trends. However, what if all known vulnerabilities were generated solely by a penetration test performed twice a year? No threat modeling, no code reviews, no ongoing automated testing… just some consultants with a proxy. How confident are you that the producer of this application discovered all possible vulnerabilities? I’m willing to bet some serious issues were missed.
Our Security Facts label should include, as an appendix, an “ingredients” section outlining the third-party components, frameworks, and libraries used to construct the application. The ingredients should list not only each component’s name, but also its version, its number of known vulnerabilities, and possibly its license. Would you trust your data to an application leveraging a Struts 2 dependency with two remote code injection vulnerabilities? Software Composition Analysis (SCA) is critical to providing such information at scale. WhiteHat Sentinel Source customers should be happy to know that SCA is baked directly into the offering at no additional cost.
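An ingredients appendix is, at bottom, just structured component data. Here is a minimal sketch of what that appendix might look like and how a consumer could flag risky entries; the component names, versions, and vulnerability counts are made up for illustration:

```python
# Hypothetical sketch: an "ingredients" appendix built from
# SCA-style component data (names and counts are illustrative).

ingredients = [
    {"name": "struts2-core", "version": "2.3.31",
     "known_vulns": 2, "license": "Apache-2.0"},
    {"name": "commons-collections", "version": "3.2.1",
     "known_vulns": 1, "license": "Apache-2.0"},
]

def risky(components, threshold=1):
    """Return components with at least `threshold` known vulnerabilities."""
    return [c for c in components if c["known_vulns"] >= threshold]

for c in risky(ingredients, threshold=2):
    print(f'{c["name"]} {c["version"]}: '
          f'{c["known_vulns"]} known vulns ({c["license"]})')
```

Even this trivial filter answers the Struts 2 question above at a glance, which is the whole point of putting the ingredients on the label rather than burying them in a scan report.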
The application security space has seen tremendous growth and maturity over the past 5-10 years, making this a more realistic concept. While it may not take the form of an actual Nutrition Facts label, I’ve seen many organizations provide this kind of visibility through a combination of one or more dashboards… can you?