August 23, 2012
I have worked for and with technology-focused companies for the past 15 years. I’m a huge believer that technological advances (or even just new ways of using existing technology) are making our lives demonstrably better. There are stops and starts along the way, but as a society, we are using technology to improve our businesses and our lives. I truly believe this.
But I also believe that we, at times, treat ‘technology’ as our savior: it fixes all problems, it solves all puzzles. Everything we don’t really understand (but have been forced to address, either at work or in life) seems like it can be dealt with by some technical product. This view is reinforced by the fact that, in the world of IT at least, a technical product exists for almost anything for which there is even the perception of a need.
This is especially true when you are talking about a discipline like information security. Infrastructure design and configuration, application security, regulatory compliance, policy development, privacy, disaster recovery, database security, vendor management… all of these areas might fall under ‘information security’. At the very least, those in charge of information security have some input into all of these areas. There’s a lot to know, or at least understand, here, and many professionals within our community are asked to stretch beyond their personal knowledge on a daily basis in order to address the needs of the business.
Beyond the complexity of the required knowledge, there is also a well-understood desire for standardization and progress. This is especially true if you are working for a large, politically complex organization. If you are working with disparate groups or with product/program managers who are under a lot of time pressure and working within a defined budget, you want to present a ‘solution’ that is standardized, well-structured, easily understood, and represents progress for the security program – even if it’s not everything that should be done. Ultimately, it may be more effective to get something in place that becomes an accepted practice, even if it’s not perfect.
I’ve been thinking quite a bit about this over the last couple of weeks. Several large organizations have recently come to NetSPI and asked us to re-assess environments and applications that were recently assessed as part of their standard internal process (which involved entirely automated tools and assessment-on-demand type services). They came to us because someone on the internal security team was concerned that they were falsely assuming all was well after ‘clean’ results from these expensive technology solutions.
Sadly, we were not able to provide them with any such assurance – in each case our team identified vulnerabilities that were critical in nature and that provided administrator access to apparently ‘secure’ environments and applications. These were not zero-day issues – in some cases they weren’t even hard to identify. In two instances, the technologies used in the solution under review simply weren’t supported by the on-demand assessment service in their internal process (not that our client was informed of this by the vendor). At the end of the day, the fully automated and on-demand assessment solutions just didn’t find critical issues that our clients needed to know about in order to reduce risk.
My point in relating this information isn’t to bash the use of technology or automated solutions in assessing technical security. Automation is a key part of making security more efficient, and standardization helps to promote adoption and understanding internally – both good things. My point is to recognize that technology isn’t perfect, and that information security has characteristics that make putting all of your assessment ‘eggs’ in a single basket provided by an automation vendor a very risky proposition for an organization that is actually looking to manage risk and exposure effectively.
A recent study (Performance of automated network vulnerability scanning at remediating security issues) looked specifically at the performance of automated network vulnerability scanners. Across the breadth of tools tested, only slightly more than half of the vulnerabilities known to exist in the test environment were identified (and remediation guidelines presented) by the scanners.
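To put a detection rate of “slightly more than half” in perspective, here’s a back-of-the-envelope sketch. The counts below are invented round numbers for illustration, not figures from the study:

```python
# Hypothetical numbers for illustration only -- these counts are
# invented round figures, not data from the study cited above.
known_vulns = 100          # vulnerabilities known to exist in the test environment
detected_by_scanner = 55   # roughly "slightly more than half"

detection_rate = detected_by_scanner / known_vulns
missed = known_vulns - detected_by_scanner

print(f"Detection rate: {detection_rate:.0%}")        # Detection rate: 55%
print(f"Vulnerabilities left unreported: {missed}")   # Vulnerabilities left unreported: 45
```

In other words, an organization relying on the scan report alone would be unaware of nearly half of its known exposure – and that gap is invisible precisely because the report looks complete.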
(As a side note, the same study hits the tools pretty hard on the usefulness of the reporting that is generated – something that we’ve had issues with for years and why we created our own reporting structure and content as part of CorrelatedVM™.)
An interesting quote from the study regarding vulnerability scanning with automated tools: ‘…there are issues with the method: manual effort is needed to reach complete accuracy and the remediation guidelines are oftentimes very cumbersome to study.’ This certainly supports NetSPI’s approach and the methodology that we follow with clients.
While this study was focused specifically on leading network vulnerability scanning tools, another study (Why Johnny Can’t Pentest: An Analysis of Black-box Web Vulnerability Scanners) found a very similar situation with web vulnerability scanners. The researchers in this case found that while certain kinds of vulnerabilities are detected quite effectively, ‘there are whole classes of vulnerabilities that are not well-understood and cannot be detected by any of the scanners.’ (their emphasis)
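To make that concrete, one well-known example of such a class is a stored (second-order) flaw, where the malicious input and its effect are separated across requests. A black-box scanner fuzzing each request in isolation sees only benign responses. Here is a minimal, contrived sketch – every name in it is invented for illustration, not taken from the paper:

```python
# Contrived sketch of a second-order (stored) flaw -- an example of the
# kind of vulnerability class that single-request black-box fuzzing can
# miss. All names here are invented for illustration.

comments = []  # toy datastore standing in for a database

def post_comment(text):
    # Request 1: the input is stored verbatim. The response to THIS
    # request looks completely benign, so a scanner fuzzing this
    # endpoint in isolation observes nothing suspicious.
    comments.append(text)
    return "Thanks for your comment!"

def render_comments():
    # Request 2 (possibly much later, on a different page): the stored
    # input is echoed without encoding -- the actual injection point.
    return "<ul>" + "".join(f"<li>{c}</li>" for c in comments) + "</ul>"

# The scanner's probe gets a clean, boring response...
print(post_comment("<script>alert(1)</script>"))   # Thanks for your comment!
# ...while the payload only fires when a different page is rendered later.
print(render_comments())
```

The point isn’t that this particular bug is exotic – a human tester finds it quickly – but that correlating input on one page with output on another requires an understanding of the application’s workflow that purely automated, request-by-request scanning often lacks.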
My point in all of this is that automated vulnerability scanning is certainly useful and, with large environments or applications, absolutely necessary (we use some of these tools in our own assessment process) – but don’t be lulled into a false sense of security. If this is all that you are doing to identify and address potential vulnerabilities within your network or critical application environments, then you have a problem.