Dark Reading: Abusing Kerberos for Local Privilege Escalation

On August 9, NetSPI Head of Adversarial R&D Nick Landers was featured in the Dark Reading article called Abusing Kerberos for Local Privilege Escalation. Read the preview below or view it online.


As the main authentication protocol for Windows enterprise networks, Kerberos has long been a favored hacking playground for security researchers and cybercriminals alike. While the focus has been on attacking Kerberos authentication to carry out remote exploits and aid in lateral movement across the network, new research explores how Kerberos can also be abused to great effect in carrying out a variety of local privilege escalation (LPE) attacks.

At the Black Hat USA conference this week in Las Vegas, James Forshaw, security researcher for Google Project Zero, and Nick Landers, head of adversarial R&D for NetSPI, plan to take the security discussion beyond the Kerberoasting and Golden/Silver ticket attack discussions that have dominated Kerberos security research in recent years. In the session “Elevating Kerberos to the Next Level,” Forshaw and Landers will explore authentication bypasses, sandbox escapes, and arbitrary code execution in privileged processes.

“James and I have both spent a lot of our time digging into Windows internals, and Kerberos is fundamental to network authentication between Windows systems. However, most of the existing research and tooling I’ve done focuses on remote exploitation — ignoring attack surfaces that exist on just a local machine,” says Landers, who explained why the pair decided to dig deeper into design flaws in the way Kerberos does local authentication. “Through this, we’ve discovered many interesting flaws — some fixed and some not — that we’re excited to share on Wednesday, along with the tooling we’ve built and knowledge we’ve gained over the last several months.”

The tooling will help others in the security research community to inspect and manipulate Kerberos on local systems to build on the pair’s research. The duo will also offer up some important detection and configuration advice to help security practitioners mitigate the risk of the flaws that they’ll present.

You can read the full article on Dark Reading!


Database Trends and Applications: NetSPI’s Latest Open-Source Tools Confront Information Security Issues

On August 9, NetSPI Senior Director Scott Sutherland was featured in the Database Trends and Applications article called NetSPI’s Latest Open-Source Tools Confront Information Security Issues. Read the preview below or view it online.


NetSPI, an enterprise penetration testing and attack surface management company, is releasing two new open-source tools for identity and access management (IAM) and security operations center (SOC) groups. These tools, PowerHuntShares and PowerHunt, will help information security teams discover vulnerable network shares and improve detections overall.

PowerHuntShares aims to alleviate the pains of data exposure, privilege escalation, and ransomware attacks in company systems caused by excessive privileges. The tool inventories, analyzes, and reports excessive permissions assigned to SMB shares on Active Directory domain-joined computers.

PowerHunt is a modular threat hunting framework that locates risks across target environments and identifies target-specific anomalies and outliers. Detection is based on artifacts from prevalent MITRE ATT&CK techniques; PowerHunt automates their collection using PowerShell remoting and performs initial analysis. It also produces easy-to-consume .csv files for further triage and analysis using other tools and processes.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott Sutherland, senior director at NetSPI. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

For more information, please visit


VentureBeat: NetSPI rolls out 2 new open-source pen-testing tools at Black Hat

On August 9, NetSPI Senior Director Scott Sutherland was featured in the VentureBeat article called NetSPI rolls out 2 new open-source pen-testing tools at Black Hat. Read the preview below or view it online.


Preventing and mitigating cyberattacks is a day-to-day — sometimes hour-to-hour — massive endeavor for enterprises. New, more advanced techniques are revealed constantly, especially with the rise in ransomware-as-a-service, crime syndicates and cybercrime commoditization. Likewise, statistics are seemingly endless, with a regular churn of new, updated reports and research studies revealing worsening conditions. 

According to Fortune Business Insights, the worldwide information security market will reach around $376 billion by 2029. And IBM research revealed that the average cost of a data breach is $4.35 million.

The harsh truth is that many organizations are exposed due to common software, hardware or organizational process vulnerabilities — and 93% of all networks are open to breaches, according to another recent report.

Cybersecurity must therefore be a team effort, said Scott Sutherland, senior director at NetSPI, which specializes in enterprise penetration testing and attack-surface management. 

New open-source discovery and remediation tools

The company today announced the release of two new open-source tools for the information security community: PowerHuntShares and PowerHunt. Sutherland is demoing both at Black Hat USA this week. 

These new tools are aimed at helping defense, identity and access management (IAM) and security operations center (SOC) teams discover vulnerable network shares and improve detections, said Sutherland. 

They have been developed — and released in an open-source capacity — to “help ensure our penetration testers and the IT community can more effectively identify and remediate excessive share permissions that are being abused by bad actors like ransomware groups,” said Sutherland. 

He added, “They can be used as part of a regular quarterly cadence, but the hope is they’ll be a starting point for companies that lacked awareness around these issues before the tools were released.” 

Vulnerabilities revealed (by the good guys)

The new PowerHuntShares capability inventories, analyzes and reports excessive privilege assigned to server message block (SMB) shares on Microsoft’s Active Directory (AD) domain-joined computers. 

SMB allows applications on a computer to read and write to files and to request services from server programs in a computer network.

NetSPI’s new tool helps address risks of excessive share permissions in AD environments that can lead to data exposure, privilege escalation and ransomware attacks within enterprise environments, explained Sutherland. 

“PowerHuntShares is focused on identifying shares configured with excessive permissions and providing data insight to understand how they are related to each other, when they were introduced into the environment, who owns them and how exploitable they are,” said Sutherland. 

For instance, according to a recent study from cybersecurity company ExtraHop, SMB was the most prevalent protocol exposed in many industries: 34 out of 10,000 devices in financial services; seven out of 10,000 devices in healthcare; and five out of 10,000 devices in state, local and education (SLED).

You can read the full article at VentureBeat!


The Current State of Cyber Insurance

It’s no secret that data breaches are costly. IBM’s annual Cost of a Data Breach report illustrates this well:  

  • The average cost of a data breach in 2021 was $4.35 million. 
  • The average cost of a ransomware attack, not including the cost of the ransom, was $4.54 million in 2021.
  • 60 percent of organizations’ breaches led to increases in prices passed on to customers. 

Given the significant costs associated with data breaches, organizations are increasingly looking to cyber insurance to help protect their businesses against financial losses from a cyber attack. In fact, in IBM’s report, “insurance protection” was a key factor that lowered the average total cost of a data breach.  

Yet, cybersecurity insurance is still considered an emerging space, one that is notoriously difficult to navigate. 

For insights on the topic, we recently sat down with industry experts Ethan Harrington, Founder and Principal at 221b Consulting, and Mary Roop, Consultant at 221b Consulting, to discuss the current state of cyber insurance and get answers to some of our burning questions. Continue reading for highlights from the discussion.

What’s going on in the cyber insurance market? 

Ethan Harrington: The market is terrible, and many of the issues we’ve started to experience have surfaced just within the last few years. Last year was a historical year, and not for good reason. We saw a 300-plus percent increase in ransomware. We also saw our clients experience triple-digit increases in their cyber insurance premiums. 

On average, a company categorized as having “good” risk levels may see a 15 to 20 percent increase in premiums, and those at the “questionable” risk level or that have had claims experience may see another three-digit percentage increase. 

Why is this happening? Market corrections. The insurance marketplace is global, and all of these insurers are writing more than cyber coverage. When they have a year where auto liability coverage is bad, they’re typically going to try to make up some of that premium in other places because they have to make money. In 2019 and during COVID-19, auto liability and general liability were extremely stressed, along with other claims completely unrelated to cyber. So, we knew that there was going to be a potential correction. 

But what we saw last year was a complete market shift. We’ve never seen anything like this before. We’re concerned that what we’re seeing right now is going to perpetuate for many more years and are unsure if coverages are ever going to return to what they were and how the associated premium will be impacted.  

As cyber insurance matures, is it becoming yet another regulation or standard to comply with? 

Ethan: Yes and no. Yes, because it is another party that is keenly interested in what organizations are doing to not only harden their defenses and protect their financials but also protect Personally Identifiable Information (PII) or data from a potential ransomware attack that could cause business interruption. 

No, because most insurance carriers understand that there are several golden standards to adhere to, whether it’s the National Institute of Standards and Technology (NIST) or the International Organization for Standardization (ISO). If you can document that you follow one or a combination of them, then I think that most would understand it. 

Insurers are starting to layer on more requirements beyond what NIST or ISO would indicate as guidance – and they’re asking questions specific to CISOs. They’re starting to ask questions about cyber resiliency. In general, most regulatory frameworks that organizations follow focus on preventative actions. Now, carriers are focusing on reactive responses to cyber attacks, looking at what you are doing to limit the potential impact if you do have to file a claim. 

There’s more scrutiny involved in cyber insurance today, and it’s different from what other regulators require.  

Who typically manages the cyber insurance process? 

According to the webinar attendees, here is the breakdown of how cyber insurance is managed at their respective organizations, many of which came from financial institutions: 

  • 42% risk management  
  • 25% finance 
  • 25% information security  
  • 8% general counsel/legal 

Mary Roop: Whoever runs risk management typically controls the placement, but it truly is a partnership between the person responsible for placing the insurance policies, the information security team, the privacy team within legal, and the team responsible for Payment Card Industry (PCI) compliance.  

These teams need to work together to ensure an understanding of the cyber hygiene and the data incident response within your organization. This creates a holistic picture with complete information useful in the robust cyber insurance application and underwriting process. 

How has ransomware played a role in the cyber insurance market?  

Ethan: Ransomware decimated the entire insurance industry from a cyber perspective. In 2021, there was a 300-plus percent increase in ransomware attacks. Ransomware used to be a quick way for adversaries to grab cash, but they’ve become more intelligent, conducting background checks into businesses to determine what their financials look like to identify the most realistic ransom amount to ask for. 

Ransomware is not going away anytime soon, and the cyber insurance market is responding to that. Now, we are starting to see sub-limits within insurance policies specific to ransomware, separate retentions as it applies to ransomware, and different changes in waiting periods (eight hours then vs. 24-48 hours now). But I expect that’ll start to lessen, and some of those policies will return to what they were before. 

Want to improve your ransomware prevention and detection? Explore NetSPI’s ransomware attack simulation services. 

How have cybersecurity insurance questionnaires evolved? 

Ethan: Fifteen years ago, none of the insurers had any expertise in cybersecurity. Many insurance companies recognized that they did not understand cybersecurity and hired third parties to come in and ask the questions on their behalf.  

That has changed. Many insurance carriers are now hiring technical people who have worked as consultants in the cyber space or at managed security service providers, because they understand the market much better. Now, insurance companies are teaching them insurance and underwriting rather than outsourcing that expertise.

How do you navigate situations where providers require specific vendors for your solutions and controls? 

Mary: If your cyber insurance carrier isn’t already requesting this within the application, we do recommend getting pre-approval on your data incident providers. They may be included on that pre-approved list already, and if not, they’re going to have to be vetted extensively by those providers.  

This process is lengthy, but it is important to undertake before starting your renewal strategy. Go meet up with your legal team to determine the outside counsel that you can use to help advocate for your vendor choices. Carriers want to understand vendor credibility if they’re not familiar with them. 

Getting ahead of this process is important because you don’t want any surprises when a data incident occurs. Like when your carrier says, “We’re not going to approve this claim because you do not use an approved vendor.” If you are proactive about this, you can go to the leaders of the respective departments and come up with a solution before it’s too late. 

There has been talk about possibly monitoring clients’ cyber behavior and adjusting insurance premiums accordingly. How might we see a program like this play out? 

Ethan: We don’t like insurance companies constantly monitoring and doing scans of environments. It looks bad for the insurance industry because we all know that there’s going to be weaknesses that can be found if you look close enough.  

If an insurance company is constantly scanning your system, it is possible that they’re going to come back to you and say, “We need you to fix this.” At some point, the CISO is going to say, “I don’t have any more risk management practices that I can apply to protect us against that.” Security teams can do everything they can, but if employees/personnel make a negligent mistake or are heavily targeted, they can cause a massive claim to occur. 

We’re putting the CISO in a difficult position where they’re trying to manage the board, protect their critical assets, and now all of a sudden, they also need to keep an insurance company happy.  

Some scans delve into the depths of systems to find vendors and clients that you’ve referenced and how they could affect your insurance. Underwriters, especially in financial services, are looking at the kind of brand reputation or loss of business income that might be impacted if there was a data security incident. It’s becoming exceedingly difficult for underwriters to try to figure this out. 

Have you seen any companies go under because they’ve failed to secure cyber insurance due to poor IT security controls? 

Ethan: Thus far, no, I have not seen anybody that has actually gone under because they didn’t buy cyber insurance. But I anticipate it is going to happen, especially with the triple-digit increases in premiums. 

We are seeing more and more companies that are not buying or cannot obtain cyber insurance, and it will come back to bite them in some capacity. It’s likely that we will see organizations going under as a result of the rising financial costs associated with breaches today. 

For the full conversation and more in-depth insights from Ethan, Mary, and Norman, watch the on-demand webinar.


NetSPI Releases Two Open-Source Tools for the Information Security Community

The tools help defense teams discover vulnerable network shares and identify adversary behaviors.

Minneapolis, MN – NetSPI, the leader in enterprise penetration testing and attack surface management, today unveiled two new open-source tools for the information security community: PowerHuntShares and PowerHunt.

These new adversary simulation tools were developed by NetSPI’s Senior Director, Scott Sutherland, to help defense, identity and access management (IAM), and security operations center (SOC) teams discover vulnerable network shares and improve detections. 

  • PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers. This capability helps address the risks of excessive share permissions in Active Directory environments that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments. 
  • PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. PowerHunt automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. It can also output easy-to-consume .csv files so that additional triage and analysis can be done using other tools and processes. 

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.” 

To see PowerHuntShares in action and explore the risks of excessive share permissions, read Scott’s blog, or register for our upcoming webinar, How to Evaluate Active Directory SMB Shares at Scale. For those attending Black Hat on August 10-11, request a meeting with Scott at NetSPI booth #1687. 

NetSPI’s global penetration testing team has developed several open-source tools, including popular penetration testing tools PowerUpSQL and MicroBurst. Learn more about NetSPI’s commitment to open-source tool development on the company’s tool repository.

About NetSPI  

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over one million assets to find four million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.  

Media Contacts: 
Tori Norris, NetSPI
(630) 258-0277  

Inkhouse for NetSPI
(774) 451-5142 


Attacking and Remediating Excessive Network Share Permissions in Active Directory Environments


In this blog, I’ll explain how to quickly inventory, exploit, and remediate network shares configured with excessive permissions at scale in Active Directory environments. Excessive share permissions represent a risk that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments. So, I’ll also be exploring why network shares configured with excessive permissions are still plaguing most environments after 20 years of mainstream vulnerability management and penetration testing.

Finally, I’ll share a new open-source tool called PowerHuntShares that can help streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments. This content is also available in a presentation format here. Or, if you’d like to hear me talk about this topic, check out our webinar, How to Evaluate Active Directory SMB Shares at Scale.

This should be interesting to people responsible for managing network share access in Active Directory environments (Identity and access management/IAM teams) and the red team/penetration testers tasked with exploiting that access. 

TLDR; We can leverage Active Directory to help create an inventory of systems and shares. Shares configured with excessive permissions can lead to remote code execution (RCE) in a variety of ways, remediation efforts can be expedited through simple data grouping techniques, and malicious share scanning can be detected with a few common event IDs and a little correlation (always easier said than done).

Table of Contents: 

The Problem(s)
Network Share Permissions Inheritance Blind Spots
Network Share Inventory
Network Share Exploitation
Network Share Remediation
Introducing PowerHuntShares
Wrap Up

The Problem(s) 

If only it were just one problem. I don’t know a penetration tester that doesn’t have a war story involving unauthorized network share access. In the real world, that story typically ends with the deployment of ransomware and double extortion. That’s why it’s important we try to understand some of the root causes behind this issue. Below is a summary of the root causes that often lead to massive network share exposure in most Active Directory environments. 

Broken Asset Management 

Tracking live systems in enterprise environments is difficult, and tracking an ever-changing inventory of shares and their owners is even more difficult. Even if the Identity and Access Management (IAM) team finds a network share through discovery, it raises the questions:

  1. Who owns it?
  2. What applications or business processes does it support?
  3. Can we remove high risk Access Control Entries (ACE)?
  4. Can we remove the share altogether?

Most of those questions can be answered if you have a functioning Configuration Management Database (CMDB). Unfortunately, not everyone does.

Broken Vulnerability Management 

Many vulnerability management programs were never built to identify network share configurations that provide unauthorized access to authenticated domain users. Much of their focus has been on identifying classic vulnerabilities (missing patches, weak passwords, and application issues) and prioritizing efforts around vulnerabilities that don’t require authentication, which is of course not all bad.

However, based on my observations, the industry has only taken a deep interest in the Active Directory ecosystem in the last five years. This seems to be largely due to increased exposure and awareness of Active Directory (AD) attacks which are heavily dependent on configurations and not missing patches.

I’m also not saying IAM teams haven’t been working hard to do their jobs, but in many cases, they get bogged down in what equates to group management and forget to (or don’t have time to) look at the actual assets that global/common groups have access to. That is a deep well, but today’s focus is on the network shares.

Penetration testers have always known shares are a risk, but implementing, managing, and evaluating least privilege in Active Directory environments is a non-trivial challenge. Even with increased interest in the security community, very few solutions can effectively inventory and evaluate share access for an entire Active Directory domain (or multiple domains). 

Based on my experience, very few organizations perform authenticated vulnerability scans to begin with. Even those that do seem to lack findings for common excessive privileges and inherited permissions, as well as the distilled summary data that most IAM teams need to make good decisions. There has been an overreliance on those types of tools for a long time, because many companies have the impression that they provide more coverage of network share permissions than they actually do. 

In short, good asset inventory and attack surface management paves the way for better vulnerability management coverage – and many companies aren’t quite there yet. 

Not Considering Segmentation Boundaries 

Most large environments have host, network, and Active Directory domain boundaries that need to be considered when performing any type of authenticated scanning or agent deployment. Companies trying to accurately inventory and evaluate network shares often miss things because they do not consider the boundaries isolating their assets. Make sure to work within those boundaries when evaluating assets. 

The Cloud is Here!

The cloud is here, and it supports all kinds of fantastic file storage mediums, but that doesn’t mean that on-premises network shares disappear. Companies need to make sure they are still looking backward as they continue to look forward regarding security controls on file shares. For many companies, it may be the better part of a decade before they can migrate the bulk of their file storage infrastructure into their favorite floating mass of condensed water vapor – you know, the cloud. 😜

Misunderstanding NTFS and Share Permissions 

There are a lot of bad practices related to share permission management that have gotten absorbed into IT culture over the years simply because people don’t understand how they work. One of the biggest contributors to excessive share permissions is privilege inheritance through native nested group memberships. This issue is not limited to network shares either. We have been abusing the same privilege inheritance issues for over a decade to get access to SQL Server Instances. In the next sections, I’ll provide an overview of the issue and how it can be exploited in the context of network shares.

Network Share Permissions Inheritance Blind Spots 

A network share is just a medium for making local files available to remote users on the network, but two sets of permissions control a remote user’s access to the shared files. To understand the privilege inheritance problem, it helps to do a quick refresher on how NTFS and share permissions work together on Windows systems. Let’s explore the basics. 

NTFS Permissions 

  • Used to control access to the local NTFS file system 
  • Can affect local and remote users 

Share Permissions 

  • Used to control access to shared files and folders 
  • Only affects remote users 

In short, from a remote user perspective, network share permissions (remote) are reviewed first, then NTFS permissions (local) are reviewed second, but the most restrictive permission always wins regardless. Below is a simple example showing that John has Full Control permissions on the share, but only Read permissions on the associated local folder. Most restrictive wins, so John is only provided read access to the files made available remotely through the shared folder.

A diagram of NTFS Permissions and Share Permissions showcasing that the most restrictive permission wins.
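A quick way to see both permission sets for a given share is with the built-in cmdlets. This is a sketch; the share name "Apps" and its path are hypothetical placeholders:

```powershell
# Share-level permissions (apply to remote users only)
Get-SmbShareAccess -Name "Apps"

# NTFS permissions on the underlying folder (apply locally and remotely)
(Get-Acl -Path "C:\Shares\Apps").Access |
    Select-Object IdentityReference, FileSystemRights, AccessControlType

# Compare the two result sets manually: the effective remote permission is
# the more restrictive of the two, so Full Control on the share combined
# with Read on NTFS yields Read for remote users.
```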

So those are the basics. The big idea being that the most restrictive ACL wins. However, there are some nuances that have to do with local groups that inherit Domain Groups. To get our heads around that, let’s touch briefly on the affected local groups. 


Everyone

The Everyone group provides all authenticated and anonymous users with access in most configurations. This group is overused in many environments and often results in excessive privilege. 


Builtin\Users

New local users are added to this group by default. When the system is not joined to a domain, it operates as you would expect it to. 

Authenticated Users

This group is nested in the Builtin\Users group. When a system is not joined to the domain, it doesn’t do much in the way of influencing access. However, when a system is joined to an Active Directory domain, Authenticated Users implicitly includes the “Domain Users” and “Domain Computers” groups. For example, an IT administrator may think they’re only providing remote share access to the Builtin\Users group, when in fact they are giving it to everyone on the domain. Below is a diagram to help illustrate this scenario.

Builtin\Users group includes Domain Users when domain joined.
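On a domain-joined host, you can see this nesting for yourself by enumerating the local Users group. This is a sketch using the built-in LocalAccounts module; output varies by environment:

```powershell
# List members of Builtin\Users. On a domain-joined system this typically
# includes NT AUTHORITY\Authenticated Users and/or the domain's
# "Domain Users" group, which is how share access silently widens.
Get-LocalGroupMember -Group "Users" |
    Select-Object Name, ObjectClass, PrincipalSource

# Authenticated Users cannot be expanded locally, but any ACE granted to
# Builtin\Users is effectively granted to every account in the domain.
```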

The lesson here is that a small misunderstanding around local and domain group relationships can lead to unauthorized access and potential risk. The next section will cover how to inventory shares and their Access-Control Lists (ACLs) so we can target and remediate them.

Network Share Inventory 

As it turns out, getting a quick inventory of your domain computers and associated shares isn’t that hard thanks to several native and open-source tools. The trick is to grab enough information to answer those who, what, where, when, and how questions needed for remediation efforts.
The discovery of shares and permissions boils down to a few basic steps: 

  1. Query Active Directory via Lightweight Directory Access Protocol (LDAP) to get a list of domain computers. PowerShell commands like Get-AdComputer (Active Directory PowerShell Module) and Get-DomainComputer (PowerSploit) can help a lot there.
  2. Confirm connectivity to those computers on TCP port 445. Nmap is a free and easy-to-use tool for this purpose. There are also several open-source TCP port scanning scripts out there if you want to stick with PowerShell.
  3. Query for shares, share permissions, and other information using your preferred method. PowerShell tools like Get-SMBShare, Get-SmbShareAccess, Get-ACL, and Get-ObjectAcl (PowerSploit) are quite helpful.
  4. Other information that will help remediation efforts later includes the folder owner, file count, file listing, file listing hash, and computer IP address. You may also find some of that information in your company’s CMDB. PowerShell commands like Get-ChildItem and Resolve-DnsName can also help gather some of that information.
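Stitched together, the four steps above might look like the following sketch. It assumes the ActiveDirectory and SmbShare modules are available and that PowerShell remoting is enabled; the output file name is a placeholder:

```powershell
# 1. Query Active Directory via LDAP for a list of domain computers
$computers = Get-ADComputer -Filter * | Select-Object -ExpandProperty DNSHostName

$results = foreach ($computer in $computers) {
    # 2. Confirm SMB connectivity on TCP port 445
    if (-not (Test-NetConnection -ComputerName $computer -Port 445 -InformationLevel Quiet)) { continue }

    # 3. Query shares and share permissions on the remote host
    Invoke-Command -ComputerName $computer -ScriptBlock {
        foreach ($share in (Get-SmbShare | Where-Object Path)) {
            # 4. Collect extra context (owner, path) that helps remediation later
            [PSCustomObject]@{
                Computer = $env:COMPUTERNAME
                Share    = $share.Name
                Path     = $share.Path
                Owner    = (Get-Acl -Path $share.Path -ErrorAction SilentlyContinue).Owner
                Access   = (Get-SmbShareAccess -Name $share.Name |
                            ForEach-Object { "$($_.AccountName):$($_.AccessRight)" }) -join '; '
            }
        }
    }
}
$results | Export-Csv -Path .\share-inventory.csv -NoTypeInformation
```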

PowerHuntShares can be used to automate the tasks above (covered in the last section), but regardless of what you use for discovery, understanding how unauthorized share access can be abused will help your team prioritize remediation efforts.

Network Share Exploitation 

Network shares configured with excessive permissions can be exploited in several ways, but the nature of the share and specific share permissions will ultimately dictate which attacks can be executed. Below, I’ve provided an overview of some of the most common attacks that leverage read and write access to shares to help get you started. 

Read Access Abuse 

Ransomware operators and other threat actors often leverage excessive read permissions on shares to access sensitive data like Personally Identifiable Information (PII) or intellectual property (source code, engineering designs, investment strategies, proprietary formulas, acquisition information, etc.) that they can exploit, sell, or use to extort your company. Additionally, we have found during penetration tests that passwords are commonly stored in cleartext and can be used to log into databases and servers. This means that, in some cases, read access to a share can end in remote code execution (RCE).

Below is a simple example of how excessive read access to a network share can result in RCE: 

  1. The attacker compromises a domain user.
  2. The attacker identifies a shared folder for a web root, code backup, or DevOps directory.
  3. The attacker identifies passwords (often database connection strings) stored in cleartext.
  4. The attacker uses the database password to connect to the database server.
  5. The attacker uses native database functionality to obtain local administrative privileges on the database server’s operating system.
  6. The attacker leverages the shared database service account to access other database servers. 

Below is a simple illustration of that process: 

A 6-step process of how excessive read access to a network share can result in remote code execution (RCE).
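As a hedged illustration of steps 4 and 5 above, the sketch below connects to SQL Server with recovered credentials and runs an OS command through xp_cmdshell. The server name, credentials, and the availability of xp_cmdshell are all assumptions for illustration; it requires the SqlServer PowerShell module and sysadmin rights on the instance.

```powershell
# Hypothetical example only: server, username, and password are
# placeholders standing in for cleartext credentials found on a share.
# Requires: Install-Module SqlServer

# 4. Connect to the database server with the recovered credentials
$conn = @{
    ServerInstance = 'db1.demo.local'
    Username       = 'webapp'
    Password       = 'RecoveredPassword123'
}

# 5. Use native database functionality (xp_cmdshell) to execute OS
#    commands as the database service account
Invoke-Sqlcmd @conn -Query "EXEC master..xp_cmdshell 'whoami'"
```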

Write Access Abuse 

Write access provides all the benefits of read access with the bonus of being able to add, remove, modify, and encrypt files (as ransomware threat actors do). Write access also offers more potential to turn share access into RCE. Below are ten of the more common RCE options: 

  1. Write a web shell to a web root folder, which can be accessed via the web server.
  2. Replace or modify application EXE and DLL files to include a backdoor.
  3. Write EXE or DLL files to unquoted paths used by applications and services.
  4. Write a DLL to application folders to perform DLL hijacking. You can use Koppeling, written by NetSPI’s very own Director of Research Nick Landers.
  5. Write a DLL and config file to application folders to perform appdomain hijacking for .NET applications.
  6. Write an executable or script to the “All Users” Startup folder to launch them at the next logon.
  7. Modify files executed by scheduled tasks.
  8. Modify the PowerShell startup profile to include a backdoor.
  9. Modify Microsoft Office templates to include a backdoor.
  10. Write a malicious LNK file to capture or relay the NetNTLM hashes. 
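As a small defender-oriented sketch of item 3 above, unquoted service paths containing spaces can be spotted with a common WMI one-liner; this is an illustrative check, not part of the tooling described in this post.

```powershell
# List services whose image path is unquoted and contains a space,
# which makes them candidates for EXE planting if the path is writable.
Get-CimInstance Win32_Service |
    Where-Object { $_.PathName -notmatch '^"' -and $_.PathName -match ' ' } |
    Select-Object Name, PathName
```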

You may have noticed that many of the techniques I listed are also commonly used for persistence and lateral movement, which is a great reminder that old techniques can have more than one use case. 

Below is a simple diagram that attempts to illustrate the basic web shell example.

  1. The attacker compromises a domain user.
  2. The attacker scans for shares, finds a wwwroot directory, and uploads a web shell. The wwwroot directory stores all the files used by the web application hosted on the target IIS server. So, you can think of the web shell as something that extends the functionality of the published web application.
  3. Using a standard web browser, the attacker can now access the uploaded web shell file hosted by the target IIS web server.
  4. The attacker uses the web shell access to execute commands on the operating system as the web server service account.
  5. The web server service account may have additional privileges to access other resources on the network.
A 5-step diagram showing RCE using a web shell.

Below is another simplified diagram showing the generic steps that can be used to execute the attacks from my top 10 list. Pay attention to the C$ share being abused. The C$ share is a default hidden share in Windows that should not be accessible to standard domain users. It maps to the C: drive, which typically includes all the files on the system. Unfortunately, DevOps mistakes, application deployments, and single-user misconfigurations accidentally (or intentionally) make the C$ share available to all domain users in more environments than you might think. During our penetration tests, we perform full SMB share audits of domain-joined systems, and we end up with write access to a C$ share more than half the time.

A simplified diagram based on the list of 10 common remote code execution (RCE) options.

Network Share Remediation 

Tracking down system owners, applications, and valid business cases during excessive share remediation efforts can be a huge pain for IAM teams. For a large business, it can mean sorting through hundreds of thousands of share ACLs. So having ways to group and prioritize shares during that effort can be a huge time saver. 

I’ve found that the trick to successful grouping is collecting the right data. To determine what data to collect, I ask the standard who, what, where, when, and how questions, and then work out where each answer can be sourced. 

What shares are exposed? 

  • Share Name: Sometimes the share name alone can indicate the type of data exposed, including high-risk shares like C$, ADMIN$, and wwwroot.
  • Share File Count: The file count adds useful context; for example, shares with no files can often be deprioritized when you are trying to address high-risk shares first.
  • Directory List: Similar to the share name, the folders and files in a shared directory can often tell you a lot about context.
  • Directory List Hash: This is simply a hash of the directory listing. While not a hard requirement, it makes identifying and comparing identical directory listings a little easier. 

Who has access to them? 

  • Share ACL: This will show what access users have at the share level and can be filtered for known high-risk groups or large internal groups.
  • NTFS ACL: This will show what access users have at the file-system level and can be filtered for known high-risk groups or large internal groups. 

When were they created? 

  • Folder Creation Date: Grouping or clustering creation dates on a timeline can reveal trends that can be tied to business units, applications, and processes that may have introduced excessive share privileges in the past. 

Who created them? 

  • Folder Owner: The folder owner can sometimes lead you to the department or business unit that owns the system, application, or process that created/uses the share.
  • Hostname: Hostname can indicate location and ownership if standardized naming conventions are used. 

Where are they? 

  • Computer Name: The name of the computer hosting the share can often reveal a lot of information, like department and location, if a standardized naming convention is used.
  • IP Address: Similar to computer names, subnets are also commonly allocated to computers that do specific things. In many environments, that allocation is documented in Active Directory and can be cross referenced. 

If we collect all of that information during discovery, we can use it to perform grouping based on share name, owner, subnet, folder list, and folder list hash so we can identify large chunks of related shares that can be remediated at once. Don’t want to write the code for that yourself? I wrote PowerHuntShares to help you out.
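The grouping described above can be done in a few lines once the inventory exists. Below is a hedged sketch that assumes a CSV inventory with ShareName, ComputerName, and FileListHash columns; the column names are illustrative, not an exact schema.

```powershell
# Import the share inventory and group shares by identical directory
# listings so related shares can be remediated as one batch.
$inventory = Import-Csv .\share-inventory.csv

$inventory | Group-Object FileListHash |
    Sort-Object Count -Descending |
    Select-Object Count,
        @{ n = 'ExampleShare'; e = { $_.Group[0].ShareName } },
        @{ n = 'Computers';    e = { ($_.Group.ComputerName | Sort-Object -Unique) -join ',' } }
```

The same pattern works for grouping by share name, owner, or subnet; just change the property passed to Group-Object.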

Introducing PowerHuntShares 

PowerHuntShares is designed to automatically inventory, analyze, and report excessive privilege assigned to SMB shares on Active Directory domain joined computers. It is intended to be used by IAM and other security teams to gain a better understanding of their SMB Share attack surface and provide data insights to help group and prioritize share remediation efforts. Below is a quick guide to PowerHuntShares setup, execution (collection & analysis), and reporting. 


1. Download the project from GitHub: https://github.com/NetSPI/PowerHuntShares

2. From a non-domain system you can load it with the following command:

runas /netonly /user:domain\user PowerShell.exe 
Set-ExecutionPolicy bypass -scope process
Import-Module Invoke-HuntSMBShares.ps1 

Alternatively, you can load it directly from the internet using the following PowerShell commands: 

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
IEX (New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1")


The Invoke-HuntSMBShares collection function wraps a few modified functions from PowerView and Invoke-Parallel. The modifications grab additional information, automate common task sequences, and generate summary data for the reports. Regardless, a big shout out for the nice work done by Warren F. and Will Schroeder (however long ago). Below are some command examples. 

Run from a domain joined system:

Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test

Run from a non-domain joined system:

runas /netonly /user:domain\user PowerShell.exe
Invoke-HuntSMBShares -Threads 100 -RunSpaceTimeOut 10 `
-OutputDirectory c:\folder\ `
-Credential domain\user
 This function automates the following tasks:

 o Determine current computer's domain
 o Enumerate domain computers
 o Filter for computers that respond to ping requests
 o Filter for computers that have TCP 445 open and accessible
 o Enumerate SMB shares
 o Enumerate SMB share permissions
 o Identify shares with potentially excessive privileges
 o Identify shares that provide reads & write access
 o Identify shares that are high risk
 o Identify common share owners, names, & directory listings
 o Generate creation, last written, & last accessed timelines
 o Generate html summary report and detailed csv files

 Note: This can take hours to run in large environments.
[*][03/01/2021 09:35] Scan Start
[*][03/01/2021 09:35] Output Directory: c:\temp\smbshares\SmbShareHunt-03012021093504
[*][03/01/2021 09:35] Successful connection to domain controller: dc1.demo.local
[*][03/01/2021 09:35] Performing LDAP query for computers associated with the demo.local domain
[*][03/01/2021 09:35] - 245 computers found
[*][03/01/2021 09:35] Pinging 245 computers
[*][03/01/2021 09:35] - 55 computers responded to ping requests.
[*][03/01/2021 09:35] Checking if TCP Port 445 is open on 55 computers
[*][03/01/2021 09:36] - 49 computers have TCP port 445 open.
[*][03/01/2021 09:36] Getting a list of SMB shares from 49 computers
[*][03/01/2021 09:36] - 217 SMB shares were found.
[*][03/01/2021 09:36] Getting share permissions from 217 SMB shares
[*][03/01/2021 09:37] - 374 share permissions were enumerated.
[*][03/01/2021 09:37] Getting directory listings from 33 SMB shares
[*][03/01/2021 09:37] - Targeting up to 3 nested directory levels
[*][03/01/2021 09:37] - 563 files and folders were enumerated.
[*][03/01/2021 09:37] Identifying potentially excessive share permissions
[*][03/01/2021 09:37] - 33 potentially excessive privileges were found across 12 systems.
[*][03/01/2021 09:37] Scan Complete
[*][03/01/2021 09:37] Analysis Start
[*][03/01/2021 09:37] - 14 shares can be read across 12 systems.
[*][03/01/2021 09:37] - 1 shares can be written to across 1 systems.
[*][03/01/2021 09:37] - 46 shares are considered non-default across 32 systems.
[*][03/01/2021 09:37] - 0 shares are considered high risk across 0 systems
[*][03/01/2021 09:37] - Identified top 5 owners of excessive shares.
[*][03/01/2021 09:37] - Identified top 5 share groups.
[*][03/01/2021 09:37] - Identified top 5 share names.
[*][03/01/2021 09:37] - Identified shares created in last 90 days.
[*][03/01/2021 09:37] - Identified shares accessed in last 90 days.
[*][03/01/2021 09:37] - Identified shares modified in last 90 days.
[*][03/01/2021 09:37] Analysis Complete
[*][03/01/2021 09:37] Domain: demo.local
[*][03/01/2021 09:37] Start time: 03/01/2021 09:35:04
[*][03/01/2021 09:37] End time: 03/01/2021 09:37:27
[*][03/01/2021 09:37] Run time: 00:02:23.2759086
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] COMPUTER SUMMARY
[*][03/01/2021 09:37] - 245 domain computers found.
[*][03/01/2021 09:37] - 55 (22.45%) domain computers responded to ping.
[*][03/01/2021 09:37] - 49 (20.00%) domain computers had TCP port 445 accessible.
[*][03/01/2021 09:37] - 32 (13.06%) domain computers had shares that were non-default.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares with potentially excessive privileges.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.41%) domain computers had shares that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) domain computers had shares that are HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE SUMMARY
[*][03/01/2021 09:37] - 217 shares were found. We expect a minimum of 98 shares
[*][03/01/2021 09:37]   because 49 systems had open ports and there are typically two default shares.
[*][03/01/2021 09:37] - 46 (21.20%) shares across 32 systems were non-default.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems are configured with 33 potentially excessive ACLs.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems allowed READ access.
[*][03/01/2021 09:37] - 1 (0.46%) shares across 1 systems allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) shares across 0 systems are considered HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE ACL SUMMARY
[*][03/01/2021 09:37] - 374 ACLs were found.
[*][03/01/2021 09:37] - 374 (100.00%) ACLs were associated with non-default shares.
[*][03/01/2021 09:37] - 33 (8.82%) ACLs were found to be potentially excessive.
[*][03/01/2021 09:37] - 32 (8.56%) ACLs were found that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.27%) ACLs were found that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) ACLs were found that are associated with HIGH RISK share names.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] - The 5 most common share names are:
[*][03/01/2021 09:37] - 9 of 14 (64.29%) discovered shares are associated with the top 5 share names.
[*][03/01/2021 09:37]   - 4 backup
[*][03/01/2021 09:37]   - 2 ssms
[*][03/01/2021 09:37]   - 1 test2
[*][03/01/2021 09:37]   - 1 test1
[*][03/01/2021 09:37]   - 1 users
[*] ----------------------------------------------- 


PowerHuntShares will inventory SMB share ACLs configured with “excessive privileges” and highlight “high risk” ACLs. Below is how those are defined in this context.

Excessive Privileges 

Excessive read and write share permissions have been defined as any network share ACL containing an explicit ACE (Access Control Entry) for the “Everyone”, “Authenticated Users”, “BUILTIN\Users”, “Domain Users”, or “Domain Computers” groups. All of these groups end up granting domain users access to the affected shares, either directly or through group inheritance.

High Risk Shares 

In the context of this report, high-risk shares have been defined as shares that provide unauthorized remote access to a system or application. By default, that includes the wwwroot, inetpub, c, and c$ shares. However, additional high-risk exposures may exist that are not called out by default.


The script will produce an HTML summary report and detailed csv data files.

HTML Report 

The HTML report should have links to all the content. Below is a quick screenshot of the dashboard. It includes summary data at the computer, share, and share ACL level. It also has a fun share creation timeline so you can identify those share creation clusters mentioned earlier. It was my first attempt at generating that type of HTML/CSS with PowerShell, so while it could be better, at least it is a functional first try. 😊 It also includes data grouping summaries in the “data insights” section. 
Note: The data displayed in the creation timeline chart seems to be trustworthy, but the last accessed/modified timeline charts seem to be a little less dependable. I believe it has something to do with how they are used by the OS, but that is a research project for another day. 

A screenshot of the Powerhunt Shares dashboard. Includes summary data at the computer, share, and share ACL level.
CSV Files

The Invoke-HuntSMBShares script will generate all kinds of .csv files, but the primary file of interest will be the “Inventory-Excessive-Privileges.csv” file. It should contain all the data discussed earlier in this blog and can be a good source of data for additional offline analysis.

A detailed screenshot of the Inventory-Excessive-Privileges.csv generated by the Invoke-HuntSMBShares script.

PowerShell can be used to import the .csv files and do additional analysis on the spot, which can be handy from both the blue and red team perspectives.
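As one hedged example of that offline analysis, the snippet below filters the exported ACLs for write access and groups them by share name. The property names (FileSystemRights, ShareName) are assumptions based on the fields described above, not a guaranteed schema.

```powershell
# Import the PowerHuntShares output and summarize writable shares.
$acls = Import-Csv .\Inventory-Excessive-Privileges.csv

$acls | Where-Object { $_.FileSystemRights -match 'Write' } |
    Group-Object ShareName |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```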

A screenshot detailing how PowerShell can be used to import .csv files for additional analysis.

Wrap Up

This was a fun blog to write, and we covered a lot of ground, so below is a quick recap:

  • IAM and red teams can leverage Active Directory to help create an inventory of systems, shares, and share permissions.
  • Remediation efforts can be expedited through simple data grouping techniques if the correct information is collected when creating your share inventory.
  • The BUILTIN\Users group implicitly includes domain users on domain-joined systems through a group inheritance chain.
  • Shares configured with excessive permissions can lead to RCE in various ways.
  • Windows event IDs can be used to identify authenticated scanning (540, 4624, 680, 4625) and share access (5140) happening in your environment.
  • PowerHuntShares is an open-source tool that can be used to get you started.
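As a small sketch of the event ID bullet above, defenders can pull recent share-access events (5140) from the Security log with Get-WinEvent. This assumes "Audit File Share" auditing is enabled and the shell is elevated; otherwise no matching events will exist.

```powershell
# Pull the 100 most recent network share access events (5140) and show
# the first line of each message for quick triage.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5140 } -MaxEvents 100 |
    Select-Object TimeCreated, Id,
        @{ n = 'Detail'; e = { ($_.Message -split "`r`n")[0] } }
```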

In the long term, my hope is to rewrite PowerHuntShares in C# to improve performance and remove some of the bugs. Hopefully, the information shared in this blog helped generate some awareness of the issues surrounding excessive permissions assigned to SMB shares in Active Directory environments. Or at least serves as a place to start digging into the solutions. 

Remember, share audits should be done on a regular cadence so you can identify and remediate high-risk share permissions before they become a threat. It is part of good IT and offensive security hygiene, just like penetration testing, adversary simulation, and red team operations.

For more on this topic, watch NetSPI’s webinar, How to Evaluate Active Directory SMB Shares at Scale.

Good luck and happy hunting!


SDxCentral: Decentralization Haunts Security, Cloud Transitions

On August 8, NetSPI Senior Director Thomas Elling was featured in an article in SDxCentral called Decentralization Haunts Security, Cloud Transitions. Read the preview below or view it online.


Cloud security has become an imperative for organizations today. However, when looking at cybersecurity and specifically cloud transformation, cloud environments are incredibly decentralized, industry experts argued during an Inkhouse Black Hat panel last week. 

Security responsibilities have become diluted across organizations, with different business units all having to consider security, even though those units often lack deep IT or security expertise.

The New Normal

Hybrid work is the new normal for many global workforces. With more work happening through online spaces rather than face to face, cloud adoption is on the rise. Netskope saw that organizations have increased their number of cloud applications by 80% on average since the beginning of the pandemic, according to Canzanese.

As businesses accelerate their cloud transitions, the next phase is a necessary increase in security budgets.

“Large institutions, inevitably, are going to have to increase their security budgets,” said Immuta CEO and co-founder Matthew Carroll. “Even in a recent recession, I think that budgets from the security side are going to have to go up because inevitably they’re going to have to move data to the cloud, and these organizations have to mature their security posture to be able to do that.”

Maturing business security runs up against the cybersecurity talent shortage, a shortage that will continue into next year.

“We’re gonna see that talent shortage continuing to next year, and budget-wise you’re going to need to figure out where you can merge technology and talent,” said Thomas Elling, senior director of cloud pentesting practice at NetSPI. “You’re going to need cybersecurity professionals to understand and improve how they can use their tools to monitor and alert in the cloud and then go beyond that.”

You can read the full article at SDxCentral!


NetSPI Recognized in the 2022 Gartner® Hype Cycle for Security Operations

Minneapolis, Minnesota - NetSPI, a market leader in penetration testing and attack surface management, today announced it has been named a Sample Vendor for Penetration Testing as a Service (PTaaS) in the 2022 Gartner® Hype Cycle for Security Operations (SecOps).

The Gartner Hype Cycle for SecOps includes entries across the SecOps space that “aim to help security and risk management leaders strategize and deliver effective response and remediation.”  

We believe our inclusion in this year’s report validates NetSPI’s PTaaS model.

NetSPI’s Resolve™ platform delivers core benefits in three areas: 

  • Hybrid automated and manual testing approach: NetSPI leverages a combination of automation and human pentesters to increase the efficiency and effectiveness of the results. With automation, NetSPI alleviates many of the mundane vulnerability management tasks for organizations—enabling more manual pentesting to find and fix business-critical vulnerabilities. 
  • Real-time validation and faster remediation: NetSPI’s PTaaS model delivers a platform that enables faster scheduling and execution, and real-time communications with testers and visibility of test results. By providing access to real-time findings, NetSPI enables earlier remediation of vulnerabilities. 
  • Support for teams with limited in-house security experts: NetSPI provides customized and tailored guidance throughout the life cycle of each assessment to support internal teams facing the pressures of the security skill gap.  

“To us, this acknowledgment by Gartner further cements our approach to delivering innovative vulnerability and risk management solutions to today’s top enterprises,” said Travis Hoyt, CTO at NetSPI. “Traditional penetration testing is dead. PTaaS allows organizations to remediate faster, receive support from expert pentesters, and implement a strategic approach to offensive security.” 

According to Gartner, “the adoption of remote work, and increased use of mobile devices and cloud services have not slowed over the last 12 months. This has led to expanded requirements for organizations to track risk and threats to a wider set of digital assets. With the expansion of digital business functions and third-party-managed assets, security and risk management leaders must reevaluate how their business-critical environments change security strategy and tooling.” The report also mentions that “pentesting is foundational in a security program and mandated by various compliance standards. PTaaS enables organizations to elevate their security posture through continual assessment, and integrates validation earlier in the AppDev cycle by giving access to real-time findings delivered through the platform, therefore enabling faster treatment of vulnerabilities.” 

Learn more about NetSPI’s PTaaS solutions here.

Gartner, Hype Cycle for Security Operations, 2022, Andrew Davies, 5 July, 2022.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. GARTNER and HYPE CYCLE are the registered trademarks of Gartner Inc., and/or its affiliates in the U.S and/or internationally and have been used herein with permission. All rights reserved.

About NetSPI 

NetSPI is the leader in penetration testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn. 

Media Contacts:
Tori Norris, NetSPI
(630) 258-0277

Inkhouse for NetSPI


Media Alert: NetSPI at Black Hat 2022 and DEF CON 30

Penetration Testing as a Service (PTaaS) leader marks its presence with various speaking sessions, open source tool releases, and a happy hour event during this year’s Black Hat and DEF CON conferences.

Minneapolis, MN and Las Vegas - NetSPI, the leader in enterprise security testing and attack surface management, will be participating in several speaking sessions and activities during Black Hat 2022 and DEF CON, taking place at the Mandalay Bay Expo Hall in Las Vegas starting on August 30. NetSPI is located at Booth #1687 on the Mandalay Bay Trade Floor. 

With over 20 years of experience, NetSPI’s team of over 200 global pentesters is highly skilled in manual pentesting and laser-focused on excellence. During the events, these company experts will inform attendees on the vulnerabilities and escalating threats targeting enterprises, as well as share insights on how businesses can mature their security programs and empower their workforces.  

NetSPI speaking sessions during Black Hat and DEF CON include: 

  • On August 10 at 10:20am PT, Nick Landers, Director of Research at NetSPI, will present at Black Hat alongside James Forshaw, Security Researcher at Google Project Zero, in a talk titled: “Elevating Kerberos to the Next Level.” In this talk, Nick and James will conduct a deep dive into the inner workings of Kerberos as it applies to local authentication and some of the unusual behaviors to be found within. They will also describe the Kerberos security issues they’ve discovered, including authentication bypasses, sandbox escapes and arbitrary code execution in privileged processes.
  • On August 12 at 10:10am PT, Karl Fosaaen, Senior Director at NetSPI, will present at the DEF CON Cloud Village in a talk titled: “Automating Insecurity in Azure.” In this talk, Karl will go over how Automation Accounts function within Azure, how attackers can abuse built-in functionality to gain access to credentials, privileged identities, and sensitive information, and present a deep dive on four vulnerabilities from the last year that all apply to Azure Automation Accounts.
  • On August 12 and 13, Melissa Miller, Managing Security Consultant at NetSPI, will present at the DEF CON Girls Hack Village.
    • On August 12 at 5pm PT, in her talk “Imposter Syndrome: The Silent Killer of Motivation,” Melissa will discuss the characteristics of a healthy work environment and steps toward updating your environment to make it right for you, along with how to realistically identify your strengths and weaknesses and use that information to pursue and achieve your career goals.
    • On August 13 at 1:30pm PT at the Hacking Diversity panel, Melissa will discuss how the industry can increase diversity in cybersecurity. 

During Black Hat, Scott Sutherland, Senior Director at NetSPI, will be revealing two new open source tools for security operations centers. The new tools are designed to help teams hunt for artifacts and anomalies associated with common “known bad” behaviors, and help teams inventory, naturally group, and prioritize the triage/remediation of excessive privileges assigned to SMB shares hosted across Active Directory computers. 

For more information or to book a meeting with one of NetSPI’s experts at Black Hat or DEF CON, please click here.

You can also join NetSPI for their Black Hat happy hour co-hosted by Adaptive Shield and Armis on August 10 at 5 PM PT at the Foundation Room Las Vegas, located on the 63rd floor of Mandalay Bay. Register your spot today.

About NetSPI 

NetSPI is the leader in penetration testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn

Media Contact:
Inkhouse for NetSPI


The Intersection of Cybersecurity Technology and Talent

NetSPI CEO Aaron Shilts recently wrote an article that centered around this powerful statement: Technology cannot solve our greatest cybersecurity challenges. People can.  

As Head of Product, this statement gave me a critical opportunity to pause and reflect on my team’s purpose and ask, “What is the true intent of our technology innovation?” 

The answer was abundantly clear: Technology should empower people and maximize the value of human creativity, experience, and ingenuity. It should enable people to do more, with less. 

But neither technology nor people can be a force multiplier on their own. It all comes back to the intersection of the two. Data is just data unless you can derive intelligence from it; tools are just tools unless you can leverage them to deliver outcomes. Shelfware has never made anyone secure. 

Cybersecurity Technology Pitfalls 

Today, security programs face a dilemma: there are not enough people to tackle their greatest challenges, yet technology alone has not delivered the level of efficacy needed to close the gap. Without people, technology cannot: 

🚫 Understand unique organizational needs 

Company infrastructures are distinct. While many organizations have the same technical security controls or operate in the same industry, the ways the controls are implemented and operationalized, and the context of each infrastructure can differ greatly. Additionally, risk profiles and tolerance vary. External pressures may be different, driving additional bifurcation in how they approach a specific problem. Technology alone cannot identify these nuances and adjust. 

🚫 Continuously manage and operationalize itself 

Tools need to be run. The process of evaluating, implementing, and operationalizing technology requires humans. This process often takes focus away from defending against cyber attacks. When we have limited resources, we need to make sure they are focused on the right aspects of the greater mission.  

🚫 Support security programs in a cost-efficient way 

The security industry is crowded with technology vendors offering a wide range of solutions. Research platform CyberDB has compiled a list of cybersecurity vendors which includes 3,500 companies – just in the US. It has become difficult for security leaders to effectively implement supportive technologies in a cost-efficient way due to redundant functionality, gaps in coverage, and other challenges that come with the crowded market. 

The Spectrum of Cybersecurity Tools 

To truly understand the value of the intersection of technology and talent, it’s important to define the opposite ends of the spectrum – from traditional services/consulting firms to standalone technology platforms. 

  • Traditional Services/Consulting Firms:
    • Expectations: A comfortable and trusting relationship with specific resources; easy to procure; professional services contracts are well understood; processes are easy to onboard and manage
    • Reality: Slow to scale; only as good as the consultant assigned; not maximizing the value; expensive; time consuming
  • Standalone Technology Platforms:
    • Expectations: All-in-one solution to a problem; use existing resources to manage the platform; low touch management
    • Reality: Lacks efficacy; purchased technologies do not meet expectations; requires dedicated resources to manage; opaque (“trust us it works”); operates without context specific to your business needs and risk profile 

So, how do you get the best of both worlds? 

Platform Driven, Human Delivered 

The solution to effectively execute the industry’s security missions with limited human capital lies within the combination of technology and talent. Together, they can be a force multiplier for the industry. 

At NetSPI we call this “platform driven, human delivered.” In our approach, we use technology to maximize human value by focusing human effort on the right assets, at the right time. 

We “automate the automatable.” In other words, we leverage automation to handle mundane and repetitive tasks that take up valuable time for a human to perform. Take our three core services for example: 

Penetration Testing as a Service (PTaaS) 

The following features in Resolve™, our PTaaS platform, help ensure our global pentesting team spends more time focused on higher severity issues like authentication, session management, and replicating real attacker behavior during our engagements. 

  1. Processing scans on behalf of the pentesters. Using our correlation engine, we consolidate disparate scan outputs into a single finding.
  2. Enriching findings with additional dimensions of data, such as Risk Scoring, to help better prioritize remediation.
  3. Report generation. Our consultants do all their testing within a process management workflow that allows them to generate a report at any point in the engagement.
  4. Process management. Delivering quality and consistency through workflow and process management automation, quality assurance, and communication. Adding automated components to these functions allows the pentesters to be more creative in their approaches and spend time finding higher severity issues. 
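The correlation step above can be illustrated with a minimal sketch. The field names (`scanner`, `asset`, `vuln_id`, `severity`) and the merge rule (group by asset and vulnerability, keep the highest reported severity) are illustrative assumptions, not NetSPI's actual engine:

```python
from collections import defaultdict

def correlate_findings(scan_outputs):
    """Merge raw scanner results that describe the same issue on the
    same asset into one consolidated finding. All keys are illustrative."""
    merged = defaultdict(lambda: {"sources": [], "severity": 0})
    for finding in scan_outputs:
        key = (finding["asset"], finding["vuln_id"])
        entry = merged[key]
        entry["sources"].append(finding["scanner"])
        # Keep the highest severity any scanner reported for this issue.
        entry["severity"] = max(entry["severity"], finding["severity"])
    return dict(merged)

raw = [
    {"scanner": "nessus", "asset": "10.0.0.5", "vuln_id": "CVE-2021-44228", "severity": 9},
    {"scanner": "burp",   "asset": "10.0.0.5", "vuln_id": "CVE-2021-44228", "severity": 10},
    {"scanner": "nessus", "asset": "10.0.0.7", "vuln_id": "CVE-2019-0708", "severity": 8},
]
findings = correlate_findings(raw)
# The two scanner hits on 10.0.0.5 collapse into one finding with both sources.
```

The point of the sketch is the deduplication: a pentester reviews two consolidated findings instead of three overlapping scanner alerts.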

Attack Surface Management 

The following features of our attack surface management solution combine the power of technology and talent by:  

  1. Leveraging the cloud. We’ve taken our tools and techniques from over 20 years of external network penetration testing and are now utilizing the advancements in cloud technology to effectively scale that IP / knowledge capital.
  2. Continuous monitoring. We leverage technology to continuously monitor clients’ known assets and ensure they are free from critical issues, while also providing visibility into the aspects of their attack surface they are unaware of.
  3. Using human input to determine signal vs. noise. In tandem, we utilize our human experts to parse and manage that data to extract “the signal from the noise” to help organizations understand what’s at risk and which exposures to prioritize.
  4. Making all the data available to clients in the platform so they can use it for analytics and pattern identification. 
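The signal-vs-noise triage described above can be sketched as a simple filter: automation surfaces every exposure, and human analysts label the benign ones so only prioritized signal remains. The field names and the `noise_labels` mechanism are assumptions for illustration only:

```python
def triage(exposures, noise_labels):
    """Drop exposures a human analyst has marked as noise, then rank
    the remaining signal by severity. All fields are illustrative."""
    signal = [e for e in exposures if e["fingerprint"] not in noise_labels]
    return sorted(signal, key=lambda e: e["severity"], reverse=True)

exposures = [
    {"fingerprint": "open-8080-dev", "asset": "dev.example.com", "severity": 3},
    {"fingerprint": "exposed-rdp",   "asset": "vpn.example.com", "severity": 9},
    {"fingerprint": "expired-cert",  "asset": "www.example.com", "severity": 5},
]
# An analyst has reviewed the dev port and marked it as expected behavior.
noise = {"open-8080-dev"}
prioritized = triage(exposures, noise)
# The exposed RDP service rises to the top of the remediation queue.
```

The design point is that the expensive human judgment (the `noise` set) is captured once and then applied automatically to every future monitoring pass.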

Breach & Attack Simulation  

On average, NetSPI clients identify roughly 15% of the attack techniques we run in their environments – this includes security programs that have spent millions on controls. We automate the automatable by: 

  1. Connecting the execution of attacks in client environments with a NetSPI expert to help prioritize findings and provide context on how you benchmark against industry peers.
  2. Automating attack plays that map back to the MITRE ATT&CK framework, paired with human expertise, to help make informed prioritization decisions about the attack techniques most relevant to your business.
  3. Tracking ongoing improvements, or reductions, in detection capabilities over time to empower defense teams to make the case for additional resources and shore up their defenses.  
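The detection-rate tracking in the third point can be sketched as follows. The ATT&CK technique IDs and quarterly results here are made-up illustrations (the ~15% figure in the article is an average across clients, not this data):

```python
def detection_rate(results):
    """Fraction of executed attack plays that the client's controls
    detected. `results` maps an ATT&CK technique ID to a detected flag."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

# Hypothetical results from two consecutive quarterly simulations.
q1 = {"T1003": False, "T1059": True, "T1021": False, "T1566": False}
q2 = {"T1003": True,  "T1059": True, "T1021": False, "T1566": False}

improvement = detection_rate(q2) - detection_rate(q1)
# Detection coverage rose from 25% to 50% quarter over quarter.
```

A running series of these rates is exactly the kind of trend line a defense team can bring to leadership to justify additional detection investment.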

Becoming a Force Multiplier in Offensive Security 

As an industry, we need to take a step back and evaluate, “what do we need to do to protect ourselves?” What are our priorities? 

From an offensive security perspective, our clients need to identify all assets, identify vulnerabilities on those assets, and remediate them. No single person or tool can achieve these goals alone. But together? The opportunity for success is exponential. 

After all, technology cannot solve our greatest cybersecurity challenges. People and technology can. 

Want to experience “platform driven, human delivered” offensive security solutions? Contact us.

Discover why security operations teams choose NetSPI.