Static Analysis (SAST) and the Truth About False Positives

Here at WhiteHat Security, we receive a lot of questions about what constitutes an ideal static analysis (SAST) solution, the importance of depth of coverage, and the causes of false positives – how they arise, why they happen, and what can be done about them. The solutions architects and security engineers from our Threat Research Center have even been called onsite to companies that need help deciphering reports of false positives.

What you’re about to read may be surprising, but we feel it necessary to clear up some confusion regarding source code scanning, language support and how to handle false positives.

The Truth About False Positives

While there are exceptions, the fact is that in order to rapidly develop support for 20+ languages, some companies have built assumptions into their scanners, and most of the time those assumptions are wrong. The scanner’s output and reported vulnerabilities may look legitimate, but they’re not. In our experience reviewing reports from other application security vendors, when some of the code doesn’t make sense to their scanner, or when code or dependencies are missing, the scanner simply assumes the code is vulnerable. This creates a large number of false positives that resource-stretched security and development teams then struggle to triage.
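To make the failure mode concrete, here is a minimal, hypothetical sketch of that heuristic – not any vendor’s actual engine. A naive taint check treats any call it cannot resolve as leaving data tainted, so a sanitizer that lives in an unscanned dependency produces a false positive:

```python
# Toy illustration (hypothetical, not any vendor's engine): a naive taint
# check that assumes any unresolved call leaves data tainted.

KNOWN_SANITIZERS = {"html_escape"}  # the only functions this toy scanner models

def is_tainted(source_expr: str) -> bool:
    """Report 'safe' only if the expression is wrapped in a sanitizer the
    scanner recognizes; anything it cannot resolve is assumed vulnerable."""
    for sanitizer in KNOWN_SANITIZERS:
        if source_expr.startswith(sanitizer + "("):
            return False
    return True  # unknown call => assume vulnerable

# The application actually sanitizes via a library helper the scanner has
# no model for, so this finding is a false positive.
print(is_tainted("third_party.escape(user_input)"))  # True, despite being safe
print(is_tainted("html_escape(user_input)"))          # False
```

The “assume vulnerable” default is what inflates finding counts when code or dependencies are missing from the scan.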

Companies should look for scanning technology that is not built on assumptions – and, ideally, a solution that combines technology with managed services to scale their application security program. At WhiteHat, our research and development teams take the time to thoroughly review all supported languages, frameworks, and libraries to encode their security properties into our scanner. This ensures that Sentinel Source reviews each part of the code quickly and efficiently, with accurate results.

Often, algorithms are built on a simple UML diagram convergence model, in which the correct location to fix a vulnerability is assumed to be the point where all of a vulnerability’s data flows come together. UML (Unified Modeling Language) is a general-purpose modeling language in the field of software engineering, intended to provide a standard way to visualize the design of a system. However, this approach works only in the simplest cases. In enterprise software, the location highlighted by this method is often not only the wrong place to fix the vulnerability; applying a fix there can also break the application’s functionality.
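The convergence heuristic can be sketched in a few lines. All flow names below are invented for illustration; the point is that when a trusted flow and an untrusted flow share a sink, the shared node is the convergence answer, and fixing there affects both flows:

```python
# Hypothetical sketch of the "convergence" heuristic: model each data flow
# as an ordered list of nodes and propose the fix at a node shared by all.

def convergence_fix_point(flows):
    """Return the first node (along the first flow) common to every flow."""
    common = set(flows[0])
    for flow in flows[1:]:
        common &= set(flow)
    for node in flows[0]:
        if node in common:
            return node
    return None

# Two XSS data flows in an invented app: one carries untrusted user input,
# the other carries trusted, pre-formatted HTML from a CMS template.
flow_untrusted = ["request.param", "build_comment", "render_html"]
flow_trusted   = ["cms.template", "load_banner", "render_html"]

print(convergence_fix_point([flow_untrusted, flow_trusted]))  # render_html
# Escaping everything inside render_html would neutralize the untrusted flow,
# but it would also mangle the trusted template HTML: the convergence point
# is the wrong place to patch. The safer fix belongs on the untrusted path
# (e.g. inside build_comment), which this model cannot identify.
```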

What’s the correct way to handle this?

WhiteHat’s Sentinel Source uses patented Positional Analysis Technology™, which identifies the correct location to fix a vulnerability without affecting application functionality. Combined with our patented directed remediation technology, it generates a patch file that immediately secures the code.

It’s also important to question claims that “no configuration is needed”. This ignores the reality that source code scans need to be scoped according to the architecture and data boundaries of the application or platform being assessed. Scanning a repository or archive of code without taking these factors into account can lead to false positives, incorrect risk and threat assessments, and a false sense of confidence in the scanner’s coverage – the assumption being that if the scans produce a large number of findings, they must be working correctly.
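Scoping can be as simple as declaring which paths belong to the application and which do not. The paths and rules below are invented for illustration; the idea is that vendored dependencies and generated code sit outside the application’s data boundary and should not produce findings the team cannot act on:

```python
# Hypothetical scan-scope configuration: include only the application's own
# code, excluding vendored and generated sources. All paths are invented.

SCOPE = {
    "include": ["src/"],
    "exclude": ["src/vendor/", "src/generated/"],
}

def in_scope(path: str) -> bool:
    """A file is scanned if it matches an include prefix and no exclude prefix."""
    if any(path.startswith(prefix) for prefix in SCOPE["exclude"]):
        return False
    return any(path.startswith(prefix) for prefix in SCOPE["include"])

repo_files = [
    "src/app/login.py",          # application code: scan it
    "src/vendor/lib/markup.py",  # third-party copy: out of scope
    "docs/README.md",            # documentation: out of scope
]
print([p for p in repo_files if in_scope(p)])  # ['src/app/login.py']
```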

Sentinel Source leverages the expertise of WhiteHat’s Threat Research Center, whose engineers review all scan configurations to ensure that each scan is set up to accurately reflect the architecture and data boundaries of the application or platform being scanned. The Threat Research Center will make recommendations and assist in improving scan configurations to achieve the best coverage and the most efficient scanning.

Access to Security Experts is Paramount

If the company you’re considering limits access to security experts, or only works through a long list of external service providers, question whether your organization can reach its goals of managing an application security program. Oftentimes, these companies’ offerings become hugely expensive the moment you need assistance, which may or may not fit your security budget. The total cost of both the technology and the managed services should be simple and transparent. This makes for a much better and more predictable experience.

Integrated Software Composition Analysis (SCA)

It’s estimated that between 70 and 90 percent of source code is composed of open source components. With that in mind, doesn’t it make sense for SCA to be included in a SAST solution? Yet most companies do not offer SCA within their SAST, and in our view this is backwards. We’ve all experienced environments where numerous tools are stacked on top of each other and don’t always integrate, which can be hugely disruptive to the process and to the results obtained. WhiteHat’s Sentinel Source comes with integrated SCA, at no additional charge.
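At its core, SCA matches the components an application declares against a database of published advisories. A minimal sketch, with invented component names and advisory data for illustration only:

```python
# Minimal sketch of the core of software composition analysis: match
# declared dependencies against known-vulnerable versions.
# The component names and advisory data below are invented.

ADVISORIES = {
    "legacy-xml-parser": {"1.0.2", "1.0.3"},
    "old-crypto": {"2.1.0"},
}

def vulnerable_deps(manifest):
    """manifest: list of (name, version) pairs, e.g. parsed from a lockfile.
    Returns the pairs that have a published advisory."""
    return [(name, version) for name, version in manifest
            if version in ADVISORIES.get(name, set())]

deps = [("legacy-xml-parser", "1.0.3"), ("web-framework", "4.2.1")]
print(vulnerable_deps(deps))  # [('legacy-xml-parser', '1.0.3')]
```

Running this kind of check inside the same scan as SAST means open source risk shows up alongside first-party code findings instead of in a separate tool.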

The Bottom Line

Find a partner, not a tool. Use a combination of solid technology and managed services. This will ensure you can scale your application security efforts, get full value from what you pay for, and know the total cost of ownership up front.

Want a deeper dive? We have a great webinar, “Applying Security to the Twelve-Factor App” with our Chief Scientist, Eric Sheridan, and product manager, Sandeep Potdar.

Tags: directed remediation, false positives, positional analysis technology, sca, sentinel source, software composition analysis, static analysis, threat research center, trc, twelve factor app