That’s the question I recently posed in a Twitter exchange with @securityninja and @manicode. For those unfamiliar, the Pareto principle, also known as the 80/20 rule, states that roughly 80% of the effects come from 20% of the causes. This phenomenon can be seen in economics, agriculture, land ownership, and so on. I think it may also apply to developers and software security — particularly software vulnerabilities.
Personal experience would have most of us agreeing that not all developers are equally productive. We’ve all seen how a few developers generate far more useful code in a given amount of time than others. If that’s the case, then it may stand to reason that the opposite is also true — that a few developers are responsible for the bulk of the shoddy, vulnerable code. Think about it: when vulnerabilities are introduced, are they fairly evenly attributable across the developer population, or clumped together within a smaller group?
The answer, backed up by data, would have a profound effect on general software security guidance. It would help us more efficiently allocate security resources in developer training, standardized framework controls, software testing, personnel retention, etc. Unfortunately, very few people in the industry might have the data to answer this question authoritatively, and then only from within their own organization. Off the top of my head, I can only think that Adobe, Google, or Microsoft might have such data handy, but they’ve never published or discussed it.
In the meantime, I think we’d all benefit from hearing some personal anecdotes. In your experience, are 20% of developers responsible for 80% of the vulnerabilities?