Web Developer Resources are Scarce, Security is a Trade-Off

If a Web Developer doesn’t release a revenue-generating feature on time, the business will FOR A FACT lose money. If a Web Developer doesn’t fix a vulnerability, it MAY be exploited, and MAY cost the business money. Neither is guaranteed. Since Web Developer resources are scarce, how should the business decide the right course of action from a justifiable risk-management perspective?
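
As a minimal sketch, the decision could be framed as an expected-value comparison. Every figure below is a hypothetical placeholder; real estimates for breach probability and cost are exactly what we lack:

    # Hypothetical expected-value framing of the trade-off (all numbers invented)
    feature_revenue = 50_000    # revenue lost FOR A FACT if the feature slips
    breach_cost = 500_000       # estimated loss IF the vulnerability is exploited
    exploit_probability = 0.05  # estimated chance of exploitation

    expected_breach_loss = exploit_probability * breach_cost  # $25,000

    if expected_breach_loss > feature_revenue:
        print("Prioritize the fix")
    else:
        print("Prioritize the feature")  # wins here: $50,000 > $25,000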

This Web Developer resource trade-off is extremely difficult to quantify, which is why I believe website vulnerability remediation rates sit at only 63%, with fixes taking an average of 38 days.

If the Application Security industry wants the business to listen to our guidance, we must answer this fundamental question. Until such time, application security comes in a pizza box.

  • Dave Ockwell-Jenner

    Doesn’t this apply to any “cost centre” type activity within an organization? We’ve all heard the story of the sales guy being pissed off at the rest of the organization because he’s the only one generating revenue whilst everyone else is ‘cost’. Perhaps there are some lessons to be learned beyond the software development / security realms – for instance how does Facilities deal with this same dilemma?

    I think you might be right with respect to this being a contributing factor to the ‘remediation latency’ issue, purely because fixing is a lower priority than selling. First person to solve this, wins 🙂

    • http://www.whitehatsec.com/ Jeremiah Grossman

      @Dave: You are likely correct. The core concept of the post likely applies to many areas of business expense. The challenge we have in “application security” is that the [potential] costs [of a breach] come sometime in the future and are difficult to predict / quantify in the present. Secondly, infosec pros often can’t fix website vulnerabilities on their own, like they might with a patch from Adobe/Oracle at the network / host layers. At the app layer, they must instead barter for scarce developer resources.

  • Andreas

    Isn’t the “real” root issue that in IT there are security people and then there’s “the other guys” (developers, sys admins, operations staff, etc.)? If security was taught from the beginning, there would be far fewer people working exclusively with security, and far fewer vulnerabilities due to thoughtless coding, implementation or configuration. I’m not saying that every IT class should have a security module, but rather the opposite. Security shouldn’t even be a subject if it was really integrated.

    An example: a friend of mine recently graduated from a three-year developer program, and the only security he learned came from an add-on module that was not mandatory. I’m not saying that the module should have been mandatory; I’m saying that it should not have even existed. Instead it should have been expanded and spread out over the 3 years of training. It should also not have been called “security”; it should simply be labeled “best practice”.

    Maybe a bit off the subject, but still valid and related I think.

    • http://www.whitehatsec.com/ Jeremiah Grossman

      I agree, we need more education earlier for software developers. At the same time, I’m not certain we know [backed with data] what best-practices actually result in “more secure” outcomes. Clearly not all best-practice activities cost the same to implement, or affect the same types of issues to the same degree. Think parameterized SQL statements versus output encoding. The biggest question, though, is how to get there. Businesses, including my own, would very much value candidates who have true software security skills. The challenge is how a hiring manager can easily verify the candidate’s claim. Once you tie job opportunities to software security, we’ll get more “security education.”
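
      For concreteness, a minimal sketch of the two practices contrasted above (an illustration using only Python’s standard library, not part of the original comment):

          import sqlite3
          import html

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (name TEXT)")

          user_input = "Robert'); DROP TABLE users;--"

          # Parameterized SQL: the driver binds the value as data, so it
          # can never be parsed as SQL syntax (stops SQL injection).
          conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

          # Output encoding: escape untrusted data before rendering it as
          # HTML, so the browser cannot parse it as markup (stops XSS).
          print(html.escape("<script>alert('xss')</script>"))
          # -> &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;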

      • Andreas

        While I agree in principle, I still think the division between security people and other IT people needs to be dramatically reduced. Otherwise it will be the same old story of security standing outside the dev office or datacenter shouting “DON’T FORGET ABOUT SECURITY”, while the ones inside are giving the finger. Not literally, of course, though that would be kinda fun 🙂

        • Dave Ockwell-Jenner

          I’m fortunate to be working as a “security guy” in an organization where the security function and software development are firm partners. I train all our developers and architects in secure coding and threat modeling, right from day one. Where we see enormous benefit is awareness: once developers are aware there is an improved way of doing something, they are keen to explore and learn more. I suppose I’m preying on their pride somewhat, but the results are great: developers asking questions, spotting bugs and sharing with each other how best to fix them. It’s not perfect, but to Andreas’ point, this blending of InfoSec and Development is critical.

  • http://@GabrielGumbs Gabriel Gumbs

    Application Risk as expressed in $ = standard deviation of development resources / mean of development resources.

    27 chars left over.

    Gabe-

  • http://@GabrielGumbs Gabriel Gumbs

    “@jeremiahg: @GabrielGumbs create me a formula will ya? I’ll plug in the variable values. ;)”

    Very roughly…

    Avg. cost of function release = Hourly Dev Cost x Avg. Number of hours per release

    Cost to fix per vuln class = Hourly Dev Cost x Mean time to fix of vuln class

    Calculate mean time to fix of all vuln classes

    Standard deviation of developer resources = the square root of: the sum, for each vuln class, of the squared difference between that class’s mean time to fix and the mean across all vuln classes, divided by the number of data points in the population.

    This will not calculate risk; however, risk-based decisions can be made based on resources as a $ at this point.
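
    A rough sketch of that arithmetic (illustrative only; every input below is a hypothetical placeholder, not data from this thread):

        from statistics import mean, pstdev

        HOURLY_DEV_COST = 75.0        # $ per developer hour (hypothetical)
        AVG_HOURS_PER_RELEASE = 40.0  # hours per feature release (hypothetical)

        # Mean time to fix per vuln class, in hours (hypothetical)
        mean_fix_hours = {"XSS": 8, "SQLi": 12, "CSRF": 6, "Info Leakage": 4}

        # Avg. cost of function release = Hourly Dev Cost x Avg. hours per release
        avg_release_cost = HOURLY_DEV_COST * AVG_HOURS_PER_RELEASE  # $3,000

        # Cost to fix per vuln class = Hourly Dev Cost x Mean time to fix
        fix_costs = [HOURLY_DEV_COST * h for h in mean_fix_hours.values()]

        mean_fix_cost = mean(fix_costs)     # mean of development resources
        stdev_fix_cost = pstdev(fix_costs)  # population standard deviation

        # The headline ratio: std deviation / mean of development resources
        print(f"Release cost: ${avg_release_cost:,.2f}")
        print(f"Mean fix cost: ${mean_fix_cost:,.2f} (std dev ${stdev_fix_cost:,.2f})")
        print(f"Risk ratio: {stdev_fix_cost / mean_fix_cost:.2f}")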

  • Marcin Antkiewicz

    I think there is an analogy to a risk management concept in finance. I do not claim that risks or behaviors in Finance have to translate to, or have an impact on, AppSec, but I find the parallel striking. From the Wikipedia entry on Potential Future Exposure:

    “[..]it is easy for a trader to set up a “Black Swan” type trade that will pay a moderate money upfront, but [can] yield catastrophic losses[..]. This is akin to writing a large amount of insurance contracts against a rare but catastrophic risk. The vast majority of the time – and for many years running – the trader can appear to have a highly profitable strategy (even if the trade actually had negative expected value). When the rare event occurs, the person (or more likely her employer) who wrote the “insurance” (or in options terminology – the person who was “short a put or call” / “shorted volatility” / was “short delta”) sustains massive losses and may go bankrupt. These sorts of trades are behind most major collapses in the past 30 years – including much of the savings and loan crisis of the 1980s, Kidder Peabody, Enron, AIG, Lehman and even recent (relatively “small”) losses at JP Morgan.”

  • http://@GabrielGumbs Gabriel Gumbs

    http://bit.ly/YViCQ8

    Not as rough as I originally thought. Will work on normalizing the language and making it presentable.

    Gabe-

  • Pingback: Is It Really True That Application Security has “Best-Practices”? | WhiteHat Security Blog