I’ve always been a firm believer in incorporating manual testing into any security assessment; after all, a human is the best judge of an application’s output, and best able to truly understand how an application is supposed to function. So after attending Darren Challey’s (GE) presentation at the 2009 OWASP AppSec conference, I was encouraged that someone had actually measured the value of manual testing – and it justified my belief! According to Darren, no single application assessment or code review product could find more than about 35% of the total vulnerabilities GE could find with a manual process. That alone should encourage anyone serious about eradicating vulnerabilities in their applications to step it up a notch. I would not want to be the person certifying an application for public consumption with only 35% of its security issues identified!
To understand why manual testing is so critical, let’s break down some of the reasons assessment tools have limitations. Network scanners identify vulnerabilities largely by fingerprinting the remote OS and applications; accuracy drops if that footprint is inaccurate or deliberately masked. Application scanners must interpret application output; if an application uses custom messaging, what is the scanner supposed to conclude? Code review products will never accurately interpret code comments, identify customized backdoors, or follow application functionality that appears orphaned. Keep in mind, too, that an assessment product will only report on something if the vendor has written a check or signature for it; consider how few vulnerability signature authors exist compared to the number of hackers identifying new exploits.
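To make the application scanner limitation concrete, here is a minimal sketch (with hypothetical function names and error strings) of how custom error messaging defeats a signature-based check. The scanner only flags responses containing well-known error text; the application's custom handler hides the root cause, so the automated check stays silent while a human tester would notice the behavioral difference.

```python
# Hypothetical sketch: a custom error handler masks a SQL failure from
# a signature-based scanner. None of these names come from a real tool.

KNOWN_ERROR_SIGNATURES = ["SQL syntax", "ORA-01756", "Unclosed quotation mark"]

def app_response(user_input: str) -> str:
    """Simulated endpoint: an injected quote breaks the (simulated) query,
    but the custom error page swallows the detail."""
    try:
        if "'" in user_input:
            raise RuntimeError("SQL syntax error near '''")
        return "Order status: shipped"
    except RuntimeError:
        # Custom messaging: no database error text ever reaches the client.
        return "Oops! Something went wrong. (Ref: ERR-1042)"

def scanner_flags(response: str) -> bool:
    """Naive check, as a signature-driven scanner might perform it."""
    return any(sig in response for sig in KNOWN_ERROR_SIGNATURES)

# The injection attempt triggers an error, but the scanner sees nothing:
print(scanner_flags(app_response("1001'")))   # False
```

A human tester, by contrast, would compare the `ERR-1042` response against the normal one and recognize that the quote character changed the application's behavior.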
Automated testing has a very important role in security assessments: these tools help us identify a large swath of mainstream issues efficiently, and manual testing can be expensive and time consuming. However, fixing vulnerabilities after an application or system is in production is also expensive and time consuming. According to the Systems Sciences Institute at IBM, a bug fixed in production or maintenance costs roughly 100 times more than one fixed at design time, and the cost of a breach rises annually as well. Adding comprehensive manual testing to your assessment criteria has a real ROI and, more importantly, could improve your detection accuracy by 60% or more!
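The back-of-the-envelope math behind that ROI claim can be sketched as follows. The vulnerability count and per-fix cost below are hypothetical assumptions for illustration; only the 35% detection rate (the GE data) and the 100x multiplier (the IBM figure) come from the text above.

```python
# Hypothetical ROI sketch. Assumed inputs: 40 total vulnerabilities and
# a $500 design-stage fix cost. From the article: tools find ~35% of
# issues; a production fix costs ~100x a design-time fix (IBM figure).
DESIGN_FIX_COST = 500     # assumption, dollars per vulnerability
PROD_MULTIPLIER = 100     # IBM Systems Sciences Institute figure
TOOL_DETECTION_RATE = 0.35  # GE data cited above

total_vulns = 40          # assumption
found_by_tools = round(total_vulns * TOOL_DETECTION_RATE)   # 14
missed_without_manual = total_vulns - found_by_tools        # 26

# Cost of the missed issues if they surface only in production,
# versus catching them now with manual testing:
design_cost = missed_without_manual * DESIGN_FIX_COST
production_cost = design_cost * PROD_MULTIPLIER
print(f"Fixed via manual testing now: ${design_cost:,}")       # $13,000
print(f"Fixed later in production:    ${production_cost:,}")   # $1,300,000
```

Even with modest assumed numbers, the 100x multiplier dominates: the 26 tool-missed vulnerabilities cost two orders of magnitude more once the application is live.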