WhiteHat Sentinel has assessed well over 12,000 websites for vulnerabilities across 500 companies. For context, getting to our first 1,000 websites took four years. Today, we’re onboarding at least 1,000 per month.
The infrastructure’s concurrent scan average is roughly 2,100, with peaks reaching 3,374. These vulnerability scans currently generate 256 million HTTP requests per month. That traffic crosses redundant 1 Gbps Internet connections and has uncovered nearly 100,000 separate website vulnerabilities between 2006 and 2012. Collectively we index billions of URLs annually. Think Googlebot, but logged in. To top it off, we log each and every HTTP request/response pair, with full headers, for every scan, on every website.
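A quick back-of-the-envelope calculation shows what that volume means as sustained load. The request rate follows directly from the figures above; the per-request log size is a placeholder assumption, not a number from our systems:

```python
# Back-of-the-envelope load estimate from the figures above.
# The 8 KB per logged request/response pair is an assumed placeholder.

REQUESTS_PER_MONTH = 256_000_000
SECONDS_PER_MONTH = 30 * 24 * 3600          # ~30-day month
BYTES_PER_LOGGED_PAIR = 8 * 1024            # assumption, not from the source

avg_rps = REQUESTS_PER_MONTH / SECONDS_PER_MONTH
log_tb_per_month = REQUESTS_PER_MONTH * BYTES_PER_LOGGED_PAIR / 1024**4

print(f"average request rate: {avg_rps:.0f} req/s")
print(f"raw log volume at 8 KB/pair: {log_tb_per_month:.1f} TB/month")
```

Even at a modest assumed log size, full request/response capture adds terabytes of write volume per month on top of the scan traffic itself.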
As you can see, mass scanning websites for vulnerabilities is highly disk-intensive. That’s why Sentinel’s infrastructure has 220TB of clustered storage arrays, plus an additional 32TB of virtual shared storage. This storage is split among 12 master databases and 12 standby databases (one standby per master for full fault tolerance), each consuming about 20GB per week. On top of that, 2TB of new data is written to the NFS cluster every week.
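Putting those growth figures together gives a rough sense of the storage runway. This is a simplified model using only the numbers above; it ignores replication, RAID overhead, compression, and retention policies, so real consumption will differ:

```python
# Rough storage-growth model from the figures above.
# Ignores replication/RAID overhead, compression, and data retention.

CLUSTERED_TB = 220
NFS_GROWTH_TB_PER_WEEK = 2
DB_COUNT = 12 + 12                  # masters + standbys
DB_GROWTH_GB_PER_WEEK = 20

db_growth_tb = DB_COUNT * DB_GROWTH_GB_PER_WEEK / 1024
total_growth_tb = NFS_GROWTH_TB_PER_WEEK + db_growth_tb
weeks_of_runway = CLUSTERED_TB / total_growth_tb

print(f"total growth: {total_growth_tb:.2f} TB/week")
print(f"runway on 220 TB: {weeks_of_runway:.0f} weeks")
```

At roughly 2.5 TB of combined growth per week, even 220TB of clustered storage gets consumed in under two years, which is why capacity expansion is continuous.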
We also have heavy server requirements. While we recommend Sentinel customers scan their websites continuously to minimize coverage gaps, current schedules are weighted toward starting on Thursday or Friday, running over the weekend, and pausing or completing before the Monday e-commerce rush. This is of course local time for the customer, and we provide services for the entire planet! Monday morning is typically when customers analyze their most recent Sentinel vulnerability findings, integrate our results into their bug-tracking systems, and generate customized reports for the week’s meetings.
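That “Thursday start, done by Monday, in the customer’s local time” pattern can be sketched as a small scheduling helper. Everything here is illustrative, not our scheduler: the 18:00 kickoff hour is an assumption, and `zoneinfo` requires Python 3.9+:

```python
# Sketch: find the next Thursday-evening scan start in a customer's
# local timezone. The 18:00 start hour is an assumed value.

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

THURSDAY = 3                # Monday == 0 in datetime.weekday()
START_HOUR = 18             # assumed local kickoff time, not from the source

def next_scan_start(now_utc: datetime, customer_tz: str) -> datetime:
    """Next Thursday at START_HOUR in the customer's local timezone."""
    local_now = now_utc.astimezone(ZoneInfo(customer_tz))
    days_ahead = (THURSDAY - local_now.weekday()) % 7
    candidate = (local_now + timedelta(days=days_ahead)).replace(
        hour=START_HOUR, minute=0, second=0, microsecond=0)
    if candidate <= local_now:          # this week's window already passed
        candidate += timedelta(days=7)
    return candidate

# Example: a customer in Tokyo, evaluated from a fixed UTC instant (a Tuesday).
now = datetime(2012, 6, 5, 12, 0, tzinfo=ZoneInfo("UTC"))
print(next_scan_start(now, "Asia/Tokyo"))
```

Computing the window in the customer’s IANA timezone rather than in UTC is what keeps the “finish before the Monday rush” promise true worldwide.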
For efficiency, Sentinel’s infrastructure must be smart enough to automatically provision Scan Servers and Reporting Servers. To accomplish this we leverage virtualization on top of several clusters of blade chassis, which lets us control resource allocation between multiple scanning instances and load-balanced front-end and back-end reporting Web servers. As new scans kick off, as defined by their schedules, Scan Servers dynamically appear to handle the load. We’ve had as many as 64 Scan Servers running at once. As scans taper off, unnecessary Scan Servers vanish, freeing up their CPU and memory for the Reporting Servers. When we need additional server capacity, we add more blades or an entirely new blade chassis.
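The grow/shrink behavior described above can be sketched as a simple pool rebalancer. This is a minimal illustration, not our provisioning code: the `ScanPool` name and the per-server packing density are hypothetical, while the 64-server cap matches the peak mentioned above:

```python
# Minimal sketch of an elastic Scan Server pool. All names are
# illustrative; SCANS_PER_SERVER is an assumed packing density.

MAX_SCAN_SERVERS = 64        # peak observed in the text
SCANS_PER_SERVER = 40        # assumption, not from the source

class ScanPool:
    def __init__(self):
        self.servers = 0     # currently provisioned Scan Servers

    def rebalance(self, active_scans: int) -> int:
        """Grow or shrink the pool to match the current scan load."""
        needed = -(-active_scans // SCANS_PER_SERVER)   # ceiling division
        self.servers = min(max(needed, 0), MAX_SCAN_SERVERS)
        return self.servers  # freed CPU/memory goes to Reporting Servers

pool = ScanPool()
print(pool.rebalance(2100))   # average concurrency
print(pool.rebalance(3374))   # observed peak, capped at 64
print(pool.rebalance(0))      # weekend tail ends, pool drains
```

The key design point is that capacity is a single shared budget: every Scan Server released after the weekend is capacity the Reporting Servers can claim for Monday morning.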
Next we could describe all the various networking gear (routers, switches, and firewalls) that binds everything together. The reality is we’re not comfortable sharing that information publicly. What we can say is that the entire system passed a BITS/ISO 27002 Shared Assessment compliance audit. Beyond that, you’ll need to sign a non-disclosure agreement.
It’s safe to say the Sentinel infrastructure is rather sophisticated and contains a lot of moving parts. All told, our IT team monitors 162 hosts and over 1,300 services in production. They keep a close eye on network, CPU, and memory utilization, uptime, latency, and more, ensuring everything runs smoothly 24 hours a day, 7 days a week, 365 days a year. With rare exception, Sentinel’s entire infrastructure is redundant. Pull any network cable, push any power button, and the system keeps hacking away, so to speak.
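The kind of host/service rollup such monitoring performs can be sketched in a few lines. The structure and thresholds here are illustrative assumptions, not our monitoring configuration:

```python
# Sketch of a monitoring rollup across hosts and services.
# Check structure and alert thresholds are assumed for illustration.

CPU_LIMIT = 0.90       # assumed utilization alert thresholds
MEM_LIMIT = 0.85

def service_ok(check: dict) -> bool:
    """A service passes when it is up and within resource thresholds."""
    return (check["up"]
            and check["cpu"] < CPU_LIMIT
            and check["mem"] < MEM_LIMIT)

def rollup(checks: list[dict]) -> dict:
    """Summarize all checks into a total count and a failing list."""
    failing = [c["name"] for c in checks if not service_ok(c)]
    return {"total": len(checks), "failing": failing}

checks = [
    {"name": "scan-01/httpd",   "up": True,  "cpu": 0.42, "mem": 0.51},
    {"name": "report-02/pgsql", "up": True,  "cpu": 0.95, "mem": 0.60},
    {"name": "nfs-03/export",   "up": False, "cpu": 0.10, "mem": 0.20},
]
print(rollup(checks))
```

Scaled up to 162 hosts and 1,300+ services, the same idea applies: evaluate every check continuously and surface only the ones that breach a threshold.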
All of this heavy metal is connected via dual 10Gb Ethernet backplanes and housed in 5 fully utilized 42U racks (expanding into racks 6 and 7 shortly). Since the data we’re responsible for is highly sensitive, to say the least, the racks are physically located in an SSAE 16 SOC 1 certified, and soon to be FedRAMP certified, state-of-the-art colocation facility. At the colo, security guards are always onsite. Then there are digital video recorders, false entrances, vehicle blockades, bulletproof glass and walls, unmarked buildings, and mantraps authenticating only one person at a time. Access to our cage requires an appointment, a government-issued ID, and a biometric scan, and only then do they hand over the key.
Building the Sentinel infrastructure has taken us years, millions and millions of dollars, countless all-nighters, and precious hair follicles. It is something we’re extremely proud of and confident in. Nothing else like it, or even close to it, exists, and it’s always getting better, always being improved upon. When your mission is scanning every website on the Internet for vulnerabilities, making them measurably more secure, such a physical infrastructure is just one of the things you need. When we say “scalable,” this is what we mean.