At LASCON this year, I was invited to deliver the keynote address. Unlike most of my presentations, I got an opportunity to be really reflective and give my take on how the industry is progressing. Rather than deliver a purely technical presentation, I decided to focus on some blindspots that I see in the industry. These are areas where we can do better, and I thought I would recap some of them here.
The first blindspot that I see regularly in the webappsec space is a general failure to understand how the entire industry and the Internet at large work. I regularly run into people who have worked in security for the better part of their careers but don't really understand how things work. Specifically, I am talking about network security, host security, OS rendering, caching, DNS, TCP/IP and so on, all of which are highly important, but tend to get missed almost entirely by people in our area of security, webappsec.
I think this is probably a bit of a holdover from the old developer mantra of "if it's not code that you wrote, it's not your problem, it's a sys-admin's problem." Unfortunately that line of thinking allows us to fully ignore major areas of security, and I see people pigeonholed into their own narrow issues without thinking about the larger security realm. This can get even narrower than webappsec as a whole, down to individual vuln classes within webappsec.
A good, simple example of how this can directly affect us is STS (Strict Transport Security). Once upon a time, Firesheep was making a lot of news, as were Middler and SSLStrip, and we as an industry needed a way to ensure that if someone came to our site we always sent them to HTTPS. There are two problems with STS though. By storing a single 'bit' of information per subdomain (whether or not the flag has been set), STS turns into a method of tracking people. With 32 subdomains you have 32 bits, enough to uniquely identify every IPv4 address on the Internet. Of course, after this issue was uncovered, the "fix" was to delete the STS flags whenever the user clears their cache, which means users can still be tracked between cache clears and lose the MITM protection whenever they do clear it. In my opinion that makes STS not worth much (it's either important to have or it isn't, and it's either important not to be tracked or it isn't; either way, it's a bad design as a result).
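To make the tracking mechanism concrete, here is a minimal sketch of how the "supercookie" could work, assuming an attacker controls a set of subdomains (the `bitN.tracker.example` names are hypothetical). A visitor ID is written by serving a Strict-Transport-Security header only from the subdomains whose bit is 1, and read back by observing which subdomains the browser later auto-upgrades to HTTPS:

```python
# Hypothetical sketch of the STS "supercookie" described above.
# A tracker controls N subdomains and encodes a visitor ID by setting
# the STS flag only on subdomains whose corresponding bit is 1.
# Reading the ID back means observing which subdomains the browser
# silently upgrades from http:// to https://.

N_BITS = 32  # 32 bits: enough distinct IDs to cover the IPv4 space

def subdomains_to_set(visitor_id: int) -> list:
    """Subdomains that should send a Strict-Transport-Security header
    so that their STS flags encode visitor_id."""
    return [f"bit{i}.tracker.example" for i in range(N_BITS)
            if (visitor_id >> i) & 1]

def decode(upgraded_subdomains: set) -> int:
    """Recover the ID from the set of subdomains the browser upgraded."""
    visitor_id = 0
    for i in range(N_BITS):
        if f"bit{i}.tracker.example" in upgraded_subdomains:
            visitor_id |= 1 << i
    return visitor_id

# Round trip: the flags written equal the ID read back.
vid = 0xDEADBEEF
assert decode(set(subdomains_to_set(vid))) == vid
```

The key property is that the STS flags persist across sessions like a cookie, but live outside the cookie jar, which is exactly why clearing them alongside the cache became the proposed mitigation.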
When you go from an HTTPS site to an HTTP site, the browser is supposed to strip out the referring URL to protect against information leakage of nonces, URL structures, private domain names, etc. But in (at least some) browsers, STS allows an adversary who is linked to from HTTPS sites to upgrade their own site to HTTPS. Once the STS flag has been set, the browser no longer sees the link as HTTPS->HTTP and sends the referring URL on subsequent requests.
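The interaction above can be sketched as a simplified model (real browser behavior varies; the function names and URLs here are illustrative, not any browser's actual logic). The referrer-stripping rule checks the destination scheme, but an STS upgrade rewrites that scheme before the check runs:

```python
# Simplified model of the referrer leak described above. The classic
# rule strips the Referer header on HTTPS->HTTP navigation, but an STS
# flag silently upgrades the request to HTTPS first, so the check sees
# HTTPS->HTTPS and the sensitive URL leaks to the adversary.

def effective_scheme(host: str, scheme: str, sts_hosts: set) -> str:
    """STS upgrades any HTTP request to a flagged host before it is sent."""
    return "https" if host in sts_hosts else scheme

def referer_sent(from_url: str, to_host: str, to_scheme: str,
                 sts_hosts: set):
    """Return the Referer value the browser would send, or None."""
    to_scheme = effective_scheme(to_host, to_scheme, sts_hosts)
    from_scheme = from_url.split("://")[0]
    # Classic rule: never leak an HTTPS referrer to a plain-HTTP site.
    if from_scheme == "https" and to_scheme == "http":
        return None
    return from_url

secret = "https://bank.example/reset?token=abc123"
# Without STS, a link to an http:// attacker site strips the referrer...
assert referer_sent(secret, "evil.example", "http", set()) is None
# ...but once evil.example has set its STS flag, the full URL leaks.
assert referer_sent(secret, "evil.example", "http", {"evil.example"}) == secret
```

The bug is a layering problem: the privacy check and the STS upgrade each look correct in isolation, which is exactly the kind of blindspot a full threat model would have caught.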
By focusing on only one problem (man-in-the-middle attacks), STS has ended up causing at least two additional security issues (de-cloaking and information leakage) because a proper threat model was never completed. Being only partially aware of certain aspects of our industry makes it extremely easy to have blindspots. Any area that you don't feel comfortable with should be an area that you focus on, or at minimum, know the guy/girl who is comfortable with that area and get them on speed dial. This isn't just about network and host security; it's about being hyper-focused on a single issue/threat at the expense of the rest of the ecosystem. We all have blindspots, it's just a matter of what we do about them.
Over the next few posts I will share some insights into what I think are some of the other industry blindspots.