Physical artifacts are amazing little (okay, sometimes big) things that give us insight into how earlier civilizations lived, worked, and played. These rediscovered relics provide information about people and time periods that we wouldn’t otherwise have. Virtual artifacts are rather similar, just less tangible. They run the gamut from computer-generated artwork and family photographs to other critical files denoting and cataloging our (virtual) lives. They also include forgotten or discarded files that were never deleted (and of course the true digital archaeologist knows how to dig even deeper, recovering files that were deleted but not securely). As such, virtual artifacts provide keen insight into a system and its owner, including files that we probably would have preferred never see the light of day again.

So why should we concern ourselves with these little remnants in our organization’s computer systems? The obvious concern is hackers (both internal and external), but virtual artifacts can affect your compliance efforts even without hackers as part of the equation. Depending upon the quantity of information stored in the files (data dumps from databases, debug logs, etc.), you may face breach notification issues with significant consequences. These files may also undermine all the scoping efforts performed to date, particularly for PCI: if such files remain on a file server that is discovered during an assessment, your cardholder data environment just ballooned beyond the comfort level. During ISO reviews, these artifacts may be as helpful to your (re)certification case as a hostile witness. Alongside these are internal policy violations, which may compromise sensitive internal information (employee information such as payroll, etc.). So how do we combat these virtual artifacts within our organization? In essence, where do we start to dig within our virtual landscape?
As unfavorable as it may seem, you start at the system most likely to contain such files and just keep going. There are tools that can help automate this process, but first, think like an attacker; NetSPI’s Assessment Team does just that during penetration tests. They look for unprotected and residual data (the files that are just “left out there”), including sensitive data (PII, PHI, cardholder data, passwords, etc.), through generic file system searches. While not overly glamorous, sometimes the simplest method is the best. They also scour multiple systems at once with spider or crawler tools, and even look at databases and their output; on that note, Scott Sutherland has a new blog post that includes finding potentially sensitive information within SQL databases. They find where programmers are leaving their output files, debug logs, and the like. Sometimes the most nondescript system holds the file you least want to see the light of day.

So how often should you be performing these internal reviews? It partly depends on your organization’s propensity to leave virtual golden idols lying around and on how effective your defenses and controls are. If movies have taught us anything, it is that a truly daring individual can overcome most controls if the gains are substantial enough. The best defense is to have guidelines for employees (especially those in positions that generate, or even have the ability to generate, such files) to securely delete files that are no longer needed (i.e., don’t store the golden idols on pedestals where the sunlight gleams off them like a beacon). For a more realistic example, an application owner or custodian should ensure that application logs containing sensitive information are properly secured behind active access controls, that temporary logs are deleted as soon as they are no longer needed, and that the passwords to the system are secured (encrypted).
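To make the “generic file system search” concrete, here is a minimal sketch of the idea in Python. This is not NetSPI’s tooling; the patterns below are illustrative assumptions (a real assessment would use far more thorough signatures, with validation such as Luhn checks on candidate card numbers, and would handle binary formats and archives).

```python
import os
import re

# Hypothetical patterns for sensitive-looking data; illustrative only.
PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password": re.compile(r"(?i)password\s*[=:]\s*\S+"),
}

def scan_tree(root):
    """Walk a directory tree and report files containing sensitive-looking data."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # errors="ignore" lets us skim binary-ish files without crashing.
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings

if __name__ == "__main__":
    for path, label in scan_tree("."):
        print(f"{label}: {path}")
```

Even a simple script like this, pointed at a file share, will often surface forgotten debug logs and data dumps long before an assessor (or attacker) does.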
Some may respond that the Data Loss Prevention (DLP) tool will catch these, so we are good to go. However, some organizations implement a DLP tool focusing on only one aspect (network, storage, or end-point), and each of these components can be overcome through various means. Blowguns (storage controls), weight-monitoring pedestals (end-point controls), and giant boulders sealing the entrance (network controls) can all be bypassed by careful and skilled virtual archaeologists. It’s not uncommon for a stray file to compromise an organization’s compliance efforts. By reviewing your environment proactively, you also help make the case that your organization has performed the necessary due diligence should an incident occur. But then, the point is to find those files first, leaving nothing for the tomb raiders.