Guest post by G.S. McNamara
Open source is great. Startups, non-profits, enterprises, and government agencies have all adopted open source software. Whitehouse.gov runs on a security-modified instance of Drupal, the free and open source content management framework. Another open source project, Apache Solr, powers search on whitehouse.gov. And all of it runs on the LAMP solution stack. This is just one example of open source adoption by the federal government. Open source software is attractive in part because it can help the bottom line: it avoids vendor lock-in and leverages the global community's effort to create new features quickly.
But open source code that has been scrutinized by many must be secure, right? Sometimes, no. Do not skip the security assessment of a Web application simply because it is based on open source code. I am not talking about the introduction of insidious backdoors in the code, but about design decisions made in open source projects that have real security implications. Some of these design decisions support really attractive features, but come with a hidden cost. Security concerns are not always highlighted in the documentation, and busy developers using open source code likely will not know about them. While application developers might be versed in secure development practices, they may assume the same of the developers contributing to the open source projects they build upon. Security weaknesses that make it past both sets of developers may later be discovered by the users of the application.
Thankfully, there are tools and services that can automatically scan your Web application, and the libraries it uses, for security vulnerabilities. An additional set of tools and services can be leveraged after the application is deemed functionally complete. I would suggest that open source developers clearly mark the security tradeoffs involved in their design decisions and map them to a common terminology, such as the Web Application Security Consortium's (WASC) Threat Classification, to help developers who build on their work understand the risk tradeoffs. If you are using open source software, please evaluate your Web applications with a combination of static and dynamic analysis tools. Some tools can be used during development to avoid future, more costly surprises that require additional engineering to remediate. Some security tools are built for specific open source projects and frameworks, and are designed to be aware of security shortcomings within them. Developers can run these tools while developing to be alerted to security issues as they are introduced into the application.
We are close, but we are not completely talking the same language when it comes to security. For instance, in my recent work on a pre-deployment security evaluation of a Ruby on Rails Web application, I found what can be categorized under WASC's Threat Classification as an Insufficient Session Expiration (WASC-41) weakness (PDF link) in the Rails framework itself. The weakness resides in the cookie-based session storage mechanism, CookieStore, which is both enabled by default and attractive for its performance benefits.
In discussing this issue, which also exists in the popular Django framework, I realized there is also a semantic problem. This became evident during conversations with contributors to these open source projects. This session termination weakness does not fall under the examples given in the projects' security guides of a replay attack or session hijacking. When replay attacks are mentioned in the documentation and elsewhere, the examples given are actions that can be taken (again) within an existing session on behalf of a user, such as re-purchasing a product from a shopping website. When "session hijacking" is discussed, the discussion concerns stealing a user's session while it is still active. The documentation does not address how a terminated session itself can be exhumed, which is the root issue I have worked to point out. Even a developer who has read the security guides for these projects would be blindsided by this nuance. More work is needed to better define vendor-neutral security weaknesses so their risks can be highlighted and managed appropriately regardless of the particular technology.
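To see why a purely cookie-based session store cannot truly expire a session, consider this minimal sketch (in Python rather than Rails, with an invented secret key and payload, so the mechanics are visible). The server stores no session state at all; the signed cookie is the session. "Logging out" only asks the browser to discard its copy, so anyone who captured the cookie earlier can present it again and it will still verify:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-secret-key"  # hypothetical signing key, for illustration only

def sign_session(data: dict) -> str:
    # Serialize and sign the session payload; the server keeps no record of it.
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_session(cookie: str):
    # Accept any cookie whose signature checks out -- there is no server-side
    # list of live sessions to consult, so nothing can be revoked.
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(base64.urlsafe_b64decode(payload))
    return None

# User logs in; the cookie is the only record of the session.
cookie = sign_session({"user_id": 42})

# "Logout" merely tells the browser to drop the cookie. An attacker who
# captured it beforehand can replay it later, and it still verifies:
assert verify_session(cookie) == {"user_id": 42}
```

A server-side session store (a database- or cache-backed store, which Rails also offers) avoids this by letting the server delete the session record itself, after which no copy of the cookie is useful.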
The bottom line is that you cannot trust that all eyes have vetted all security issues in the open source software your application is built on. Both applications and their components need to be tested by the developers for security flaws. Open source software design choices may have been made that are not secure enough for your environment, and the security tradeoffs of these choices may not even be mentioned in the project’s documentation. A huge benefit of open source software is that you can review the security of the code itself; take advantage of that opportunity to ensure you understand the risks that are present in your code.
G.S. McNamara is a senior technology and security consultant, and an intelligence technologies postgraduate based in the Washington, D.C. area. Formerly, he worked in a research and development lab on DARPA malware analysis and Web security programs. He began his career as an IT consultant to a K Street medical practice. He works with startups as well as on federal contracts building Web and mobile applications.
About our “Unsung Hero Program”
Every day, app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more, click here.