Manual vs. Automated Testing
I’ve always been a firm believer in incorporating manual testing into any security assessment; after all, a human is the best judge of application output, and best able to understand how an application is truly supposed to function. So after attending Darren Challey’s (GE) presentation at the 2009 OWASP AppSec conference, I was encouraged that someone had actually measured the value of manual testing, and the numbers backed up my belief. According to Darren, no single application assessment or code review product could find more than about 35% of the total vulnerabilities GE could find with a manual process. That alone should encourage anyone serious about eradicating vulnerabilities in their applications to step it up a notch. I would not want to be the person certifying an application for public consumption with only about a third of its security issues identified!
To understand why manual testing is so critical, let’s break down some of the limitations of assessment tools. Network scanners base their findings largely on remote OS and application footprints; accuracy drops when that footprint is inaccurate or masqueraded. Application scanners must interpret application output; if an application uses custom messaging, what is the scanner supposed to conclude? Code review products will never accurately interpret code comments, identify customized backdoors, or follow application functionality that appears orphaned. Keep in mind, too, that an assessment product will only report on something if its vendor has written a check or signature for it; consider how few vulnerability-signature authors exist compared to the number of hackers identifying new exploits.
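To make the signature limitation concrete, here is a minimal sketch of how a signature-based check can miss a real flaw. Everything in it is invented for illustration: the function name, the signature list, and the error messages are hypothetical, not taken from any actual scanner.

```python
# Hypothetical sketch: why signature-based scanning misses custom output.
# A scanner can only flag output that matches a signature its vendor wrote.

KNOWN_SQLI_SIGNATURES = [
    "you have an error in your sql syntax",   # stock MySQL error text
    "unclosed quotation mark",                # stock SQL Server error text
    "ora-01756",                              # stock Oracle error code
]

def scanner_flags_sqli(response_body: str) -> bool:
    """Return True if the response matches any known error signature."""
    body = response_body.lower()
    return any(sig in body for sig in KNOWN_SQLI_SIGNATURES)

# A default error page leaks a stock database message: the scanner catches it.
stock_error = "You have an error in your SQL syntax near ''1'''"
assert scanner_flags_sqli(stock_error)

# The same injectable parameter behind a custom error page: no signature
# match, so no finding -- even though the vulnerability is still there.
custom_error = "Oops! Something went wrong. Please contact support."
assert not scanner_flags_sqli(custom_error)
```

A human tester, by contrast, would notice the behavioral difference between the two responses and keep probing, which is precisely the gap manual testing fills.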
Automated testing has an important role in security assessments: these tools identify a large swath of mainstream issues efficiently, while manual testing can be expensive and time consuming. However, fixing vulnerabilities after an application or system is in production is also costly and slow. According to the Systems Sciences Institute at IBM, a bug fixed in production or maintenance costs roughly 100 times more than one fixed at design time, and the cost of a breach rises every year. Adding comprehensive manual testing to your assessment criteria does have an ROI, and more importantly, it could improve your detection accuracy by 60% or more!