
What Not to Do When Ingesting and Prioritizing Vulnerability Data for Remediation

I should have known better.

Eleven-some thousand findings, grinding inexorably from scanner output into CSV format. It was too late; the scanner was on a mission to dump megabytes of data into a spreadsheet, and there was nothing I could do to cancel it.

As I sat there watching the progress counter slowly creep upward, I questioned my life choices up until that point. I’ve been a security practitioner my entire adult life. I’ve (legally) stolen troves of data in many forms. I’ve discovered untold thousands of vulnerabilities in my penetration testing days, most of which didn’t amount to much: inconsequential findings that did not correlate to any meaningful risk to the organization I was testing. I’ve always given more weight to the vulnerabilities I knew would net the golden ring, whether that was unauthorized access to sensitive data, privileged access to a network or system, or whatever prize the vulnerability du jour led to.

And yet there I was, wondering what had made me even look for that many vulnerabilities. For some reason, I had enabled all vulnerability checks in the scanner configuration. The scanner categorized most of the findings as “information”: usually mundane tidbits of data better suited to asset inventories than vulnerability management. Of those 11,000 findings, maybe 25 were categorized as high risk, and perhaps a few hundred as medium risk. After some threat modeling and other considerations, it turned out there were maybe five relevant vulnerabilities that required prioritized action. All those informational findings? No need to worry about those.

Except one. And man, it was a killer.

It was a simple thing, really. The scanner identified something my team and I had taken great pains to disable long ago. I was confident – arrogantly so! – that it was disabled, so I didn’t bother checking the scanner output to see if it was suddenly active again.

I think you can see where this is going.

It wasn’t until a later internal audit that I discovered my mistake: I had not propagated my vulnerability management strategy widely enough to encompass a critical process in our security program framework, namely periodically validating everything that could have the most adverse effect on the business. Thankfully, it was caught internally, but let’s be honest: nobody enjoys internal auditors finding anything at all, much less something significant.

To be fair, how do you sift through 11,000 findings to determine which are important? You don’t. At least, not with spreadsheets, arguably the most common method of tracking vulnerabilities. Spreadsheets are the devil. Dumping vulnerability data into them leads to headaches and provides none of the tooling needed to manipulate and correlate the data into meaningful outcomes for managing the vulnerabilities. Besides, the entire approach is inefficient and ultimately unnecessary.
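
Even the mechanical first cut is easier in a few lines of script than in a spreadsheet. Here is a minimal sketch, assuming a hypothetical CSV export named scan_export.csv with illustrative column names (severity, finding, host); real export formats vary by scanner vendor:

```python
import csv
from collections import Counter

# Severities worth immediate triage; everything else is parked for
# asset-inventory or trending purposes rather than remediation.
ACTIONABLE = {"critical", "high"}

findings = []
with open("scan_export.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        # Column names ("severity", "finding") are illustrative only.
        if row["severity"].strip().lower() in ACTIONABLE:
            findings.append(row)

# Collapse thousands of rows into a short, ranked worklist by finding name.
worklist = Counter(row["finding"] for row in findings)
for name, count in worklist.most_common(10):
    print(f"{count:5d}  {name}")
```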

A Scanner Is Not the Equivalent of a Vulnerability Management Program

The truth is, many organizations consider vulnerability management to be running a scanner with all the checks turned on, and then addressing the high-risk findings. In my experience, this bottom-up approach presents a few problems:

  • Scanner policy configurations are not one-size-fits-all. When set to scan for all possible technology vulnerabilities, the scanner can produce an enormous amount of noise in which meaningful vulnerabilities may be missed or ignored. This “spray and pray” method creates more confusion and eventually apathy toward purposeful vulnerability analysis.
  • Similar vulnerabilities can pose drastically different risks. A discovered open share on a file server containing HR data may be categorized by a scanner as medium risk, but the actual risk to the business is high or even critical. A discovered open share on a print controller containing fonts, or no files at all, may also be categorized as medium risk but is in fact a low risk to the business. Without the proper context, an organization may treat these two findings as equal and expend the same time and effort (cost) in addressing both when they do not merit equal treatment (see the scoring sketch after this list).
  • Measured improvements in security maturity are an expensive undertaking. The costs in money, time, and effort can skyrocket if guardrails aren’t applied to focus the process on specific goals; otherwise, it becomes a continuous game of catch-up every time a vulnerability scan is run.
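
To make the context point concrete, here is a minimal scoring sketch. The asset names, criticality weights, and severity mapping are all hypothetical placeholders; in practice the criticality data would come from an asset inventory or CMDB rather than a hard-coded dict:

```python
# Hypothetical asset-context lookup.
ASSET_CRITICALITY = {
    "hr-fileserver-01": 3.0,     # open share holds HR data
    "print-controller-02": 0.5,  # open share holds fonts, or nothing
}

# Map the scanner's own severity label to a numeric base score.
SEVERITY_SCORE = {"low": 1.0, "medium": 2.0, "high": 3.0, "critical": 4.0}

def business_risk(asset: str, scanner_severity: str) -> float:
    """Scale the scanner's severity by how much the asset matters."""
    return SEVERITY_SCORE[scanner_severity] * ASSET_CRITICALITY.get(asset, 1.0)

# The same "medium" open-share finding lands very differently in the queue:
print(business_risk("hr-fileserver-01", "medium"))     # 6.0 -> treat as high
print(business_risk("print-controller-02", "medium"))  # 1.0 -> low priority
```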

The key is to understand the risks most likely to disrupt the business from meeting its objectives, identify the threats that would cause and amplify those risks, and select the controls most appropriate for managing those threats. The controls should then be regularly measured and audited to ensure they are implemented correctly and are effective in protecting the organization.
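
A control that was implemented once is not the same as a control that is still working. As a minimal sketch of that last step (with entirely hypothetical control names and revalidation intervals), a control register can carry a last-validated date so that anything overdue for re-testing surfaces automatically:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Control:
    name: str
    mitigates: str       # the business risk the control addresses
    last_validated: date

# Hypothetical control register; the point is that each control carries a
# validation date, not just an "implemented" checkbox.
controls = [
    Control("Legacy protocol disabled fleet-wide", "lateral movement", date(2023, 1, 10)),
    Control("HR share ACLs restricted", "HR data exposure", date(2024, 5, 2)),
]

REVALIDATE_EVERY = timedelta(days=90)  # assumed audit cadence

overdue = [c for c in controls
           if date.today() - c.last_validated > REVALIDATE_EVERY]
for c in overdue:
    print(f"Re-test: {c.name} (mitigates {c.mitigates})")
```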

In the next blog in this vulnerability management series, we will look at how to align vulnerability management goals with the organization’s business objectives and present considerations for maturing vulnerability management processes into a risk-based program strategy.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their security posture and readiness.
