
Burp Suite Extension: AWS Signer 2.0 Release

Update April 20, 2022: The updated version of the AWS Signer extension is now available on the BApp Store. It can be installed or updated within Burp Suite through the Extender tab. Alternatively, the extension can be downloaded from the BApp Store here and installed manually.

+ + +

The AWS Signer extension enhances Burp Suite’s functionality for manipulating API requests sent to AWS services. As requests pass through the proxy, the extension signs (or re-signs) them using user-supplied credentials and the AWS SigV4 algorithm. This allows the user to easily modify and replay a request in Burp Suite while ensuring the AWS service accepts it. Eligible requests are automatically identified by the presence of the X-Amz-Date and Authorization headers.
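
As a rough sketch of that detection (illustrative only; this is not the extension’s actual code, and the header values below are placeholders), a request is eligible when both headers are present and the Authorization header uses the SigV4 scheme:

```python
def is_sigv4_request(headers):
    """Heuristic check for a SigV4-signed AWS request.

    Illustrative sketch only, not the extension's real detection logic.
    """
    names = {name.lower() for name in headers}
    auth = next((value for name, value in headers.items()
                 if name.lower() == "authorization"), "")
    return "x-amz-date" in names and auth.startswith("AWS4-HMAC-SHA256")

# Example request headers (values are placeholders)
headers = {
    "Host": "s3.us-east-1.amazonaws.com",
    "X-Amz-Date": "20220420T120000Z",
    "Authorization": (
        "AWS4-HMAC-SHA256 "
        "Credential=AKIAIOSFODNN7EXAMPLE/20220420/us-east-1/s3/aws4_request, "
        "SignedHeaders=host;x-amz-date, Signature=abc123"
    ),
}
```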

The extension was initially released by NetSPI’s Eric Gruber in October 2017 and has been maintained by NetSPI’s Andrey Rainchik. The extension has served as a valuable tool on hundreds of penetration tests over the years. 

Today, I’m releasing version 2.0 which brings functional and usability improvements to the extension. An introduction to the enhancements is provided below. For more detailed information on how to use the extension, please see the updated documentation on GitHub.

The New Burp Suite Extension Interface

The most obvious difference upon loading the new version is that the extension’s UI tab in Burp Suite looks very different. All the key functionality from the original version of the extension remains.

This is the AWS Signer extension UI tab in Burp Suite.

At the top of the tab, we have “Global Settings,” which controls extension-wide behavior. The user can enable or disable the extension entirely through a checkbox, and can select a profile to use for signing all requests via the “Always Sign With” dropdown menu. If set, all eligible requests will be signed with the selected profile’s credentials. Speaking of profiles…

Introducing Profile Management

A profile represents a collection of settings for signing requests. As with the previous version of AWS Signer, a profile can specify which region and service should be used when signing the request. Alternatively, that information can be extracted from the request itself via the Authorization header. The new version of the extension introduces import and export functionality for profiles.
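
For illustration, the region and service live in the credential scope of a SigV4 Authorization header, so extracting them can be sketched like this (a hypothetical helper, not the extension’s implementation):

```python
import re

def scope_from_authorization(auth_header):
    """Extract (region, service) from a SigV4 Authorization header.

    The credential scope has the form:
        Credential=<access-key>/<date>/<region>/<service>/aws4_request
    Illustrative sketch, not the extension's actual parsing code.
    """
    match = re.search(
        r"Credential=[^/]+/\d{8}/([^/]+)/([^/]+)/aws4_request",
        auth_header)
    if not match:
        return None
    return match.group(1), match.group(2)
```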

You Can Now Import Profiles from Multiple Sources

Upon clicking the Import button, a pop-up window will appear to guide the user through the import process. 

You can import profiles within Burp Suite using Auto, File, Env, and Clipboard.

Profiles can be imported from a variety of sources using the buttons at the top of the pop-up window:

  • Auto: Automatically sources profiles from default credential files (as used by the AWS CLI), the clipboard, and the following environment variables:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_SESSION_TOKEN
  • File: Allows the user to specify which file to load profiles from. This is useful for importing previously exported profiles.
  • Env: Attempts to import a profile based on the standardized AWS CLI environment variables listed above.
  • Clipboard: Attempts to automatically recognize and import a profile based on credentials currently copied and held in the user’s clipboard. 
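
The environment-variable import described above can be sketched roughly as follows (the `profile_from_env` helper and the returned dictionary shape are hypothetical; only the three variable names come from the text):

```python
import os

def profile_from_env(env=None):
    """Build a credential profile from the standard AWS CLI environment
    variables. Hypothetical sketch of the Env import behavior."""
    env = env if env is not None else os.environ
    access_key = env.get("AWS_ACCESS_KEY_ID")
    secret_key = env.get("AWS_SECRET_ACCESS_KEY")
    if not access_key or not secret_key:
        return None  # nothing usable to import
    return {
        "access_key": access_key,
        "secret_key": secret_key,
        "session_token": env.get("AWS_SESSION_TOKEN"),  # optional
    }
```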

After sourcing the profiles, the user can select which ones to bring into the extension using the checkboxes beside each profile. 

Once the profiles are imported and configured by the user, all of the profiles can be easily exported to a file via the Export button. If working with a teammate, the exported profiles could be shared between multiple testers to reduce configuration time. This also allows for easily storing and restoring the extension’s configuration without including sensitive credentials within the project’s Burp Suite configuration. 

New: Profile Types

A key improvement in the latest version of the extension is the introduction of multiple profile types. Each profile type uses a different method for sourcing the credentials used to sign requests. There are currently three profile types, each of which is described below.

Static Credentials Profile

This is the profile type that previous users of the extension will be most familiar with. The user simply provides an access key, a secret key, and (optionally) a session token. When this profile is used to sign a request, these credentials are used to generate the signature.

AssumeRole Profile

The AssumeRole profile type allows the user to sign requests with the credentials that were returned after assuming an IAM role. The user must provide at least the following:

  1. The role ARN of the IAM role to assume.
  2. A profile that provides credentials to assume the role. This is referred to as the “Assumer Profile.”

When signing a request with an AssumeRole profile, the extension will first call the AssumeRole API using the Assumer Profile to obtain an access key, secret key, and session token for the role. Using a profile (rather than static credentials) allows the extension to handle complex chaining of multiple profiles to fetch the necessary credentials.

After retrieving the credentials, the extension will cache and reuse them to avoid continuously invoking the AssumeRole API. This is configurable through the Duration setting. The user may also provide a session policy to be applied when assuming the role. A session policy is an IAM policy, which can be used to further restrict the IAM permissions granted to a temporary session for a role. Session policies are useful for testing and confirming intended behavior with a specific policy because they are applied immediately upon the AssumeRole call with zero propagation delay and can be quickly modified. This eliminates the frustrating delays waiting for the eventual consistency of IAM permissions.
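
The Duration-based caching can be sketched roughly as follows (the `CachedCredentials` class is hypothetical; `fetch` stands in for the sts:AssumeRole call):

```python
import time

class CachedCredentials:
    """Cache temporary credentials and refresh them only after the
    configured duration elapses. Illustrative sketch of Duration-based
    caching; not the extension's actual implementation."""

    def __init__(self, fetch, duration_seconds, clock=time.monotonic):
        self._fetch = fetch          # e.g. a call to sts:AssumeRole
        self._duration = duration_seconds
        self._clock = clock
        self._creds = None
        self._expires_at = 0.0

    def get(self):
        now = self._clock()
        if self._creds is None or now >= self._expires_at:
            self._creds = self._fetch()
            self._expires_at = now + self._duration
        return self._creds
```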

New AssumeRole profile in the AWS Signer.

Command Profile

The final profile type allows the user to specify an OS command or external program that supplies the signing credentials. The command is executed using either cmd (Windows) or sh (non-Windows), and the extension attempts to parse the credentials from the command’s stdout output. The output does not have a set format; credential extraction is based on pattern matching, and the extension is designed to recognize valid credentials in a variety of output formats. As with the AssumeRole profile, the returned credentials are cached and reused where possible, and the user can configure their lifetime through the Duration field. For ease of testing, the UI also displays the latest credentials returned after clicking the “Test Profile Credentials” button.
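
A loose sketch of that kind of pattern matching (the regular expressions and helper below are illustrative assumptions; the extension’s actual matching rules are not documented here):

```python
import re

# Access keys commonly start with AKIA (long-term) or ASIA (temporary).
ACCESS_KEY_RE = re.compile(r"\b((?:AKIA|ASIA)[0-9A-Z]{16})\b")
# Secret keys are 40 base64-ish characters, usually labeled in the output.
SECRET_KEY_RE = re.compile(
    r"(?:secret[_ ]?(?:access[_ ]?)?key)[\"':= ]+([A-Za-z0-9/+=]{40})",
    re.IGNORECASE)

def credentials_from_output(stdout):
    """Pull an access key and secret key out of free-form command output."""
    access = ACCESS_KEY_RE.search(stdout)
    secret = SECRET_KEY_RE.search(stdout)
    if not (access and secret):
        return None
    return {"access_key": access.group(1), "secret_key": secret.group(1)}
```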

The Command Profile type allows the user to provide an OS command or external program to provide the signing credentials.

Signing Improvements

The AWS Signer extension should properly sign well-formatted requests to any AWS service using SigV4. In older versions of the extension, some S3 API requests were not handled properly and would be rejected by the S3 service. The actual signing process is delegated to the AWS Java SDK, which should provide robust signing support for a wide variety of services and inputs. 
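
For reference, the signing the SDK performs boils down to a chain of HMAC-SHA256 operations; a minimal Python sketch of the key derivation and final signature, following the scheme described in AWS documentation (the secret key below is the well-known AWS documentation example value):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the SigV4 signing key via chained HMAC-SHA256, as described
    in AWS documentation. Sketch of what the SDK does internally."""
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

def sigv4_signature(signing_key, string_to_sign):
    """Final signature: hex-encoded HMAC-SHA256 of the string to sign."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```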

The extension also provides experimental support for SigV4a. At the time of writing, there is minimal published information about the SigV4a signing process. However, this functionality is available in the AWS Java SDK. As such, the AWS Signer extension attempts to recognize SigV4a requests and perform the proper signing process via the SDK. 

This is functional for the currently available multi-region access points in S3, which require SigV4a. However, as more information becomes available and as more services adopt this signing process, the extension may not handle all use cases.

Summary of Changes in Burp Suite Extension: AWS Signer 2.0

I sincerely hope you enjoy the changes in version 2.0 of the AWS Signer extension for Burp Suite. NetSPI will coordinate with PortSwigger to ensure this update is made available in the BApp Store as soon as possible. In the meantime, the update is available from the Releases page of the project’s GitHub repository. Please submit all bug reports and feature requests as GitHub issues and we’ll address them as we’re able.

Want to pentest some of the most prominent and innovative organizations in the world? NetSPI is hiring! Visit our careers page to explore open security consultant roles.


VMblog: Don’t Forget to Celebrate World Backup Day 2022 – Hear From Industry Experts

On March 31, 2022, Florindo Gallicchio was featured in the VMblog article, Don’t Forget to Celebrate World Backup Day 2022 – Hear From Industry Experts. Preview the article below, or read the full article online.

+ + +

What is World Backup Day?

World Backup Day is celebrated on March 31st as a yearly reminder for both organizations and individuals to back up their files and to improve security across their devices and solutions. It’s the day to prevent data loss.

Even though backing up data should be common knowledge:

  • 21% of people have never made a backup
  • 113 phones are lost or stolen every minute
  • 29% of data loss cases are caused by accident
  • 30% of all computers are already infected by malware

Hopefully this day will make everyone think twice about their situation and educate themselves on the various options available so that they can get things backed up. A backup is only as good as your ability to recover the data, so make a recovery plan part of your backup strategy. Be prepared to recover an entire system, a folder or collection of folders, and a single file. World Backup Day should raise awareness and remind all of us to back things up. No matter how secure or safe you feel about your data, know that it’s important to back up your files.

Don’t take my word for it. Hear from some of the leading industry experts in the backup and disaster recovery industry for more commentary and expertise:

Florindo Gallicchio, Managing Director, Head of Strategic Solutions at NetSPI

“This World Backup Day, it’s time to acknowledge how critical data backup has become, especially since many ransomware strains attempt to delete backup files, as we witnessed with Ryuk. Most businesses are faced with two significant risks when it comes to backups: the theft and public disclosure of sensitive data, and the disruption of critical business functions. If either of these risks occur, organizations could endure devastating consequences. To make sure that doesn’t happen, organizations need to proactively put strategies in place to bolster protection against these threat actors.

One way to do this is by ensuring that backups with all of the organization’s critical data are routinely, completely, and securely assessed – as this is a necessary step in recovering from a possible ransomware attack. These backups should be encrypted so that sensitive data is not disclosed and stored in such a way that an organization can recover its data in a timely manner, as this is necessary to minimize disruption to business operations. Additionally, organizations should regularly revisit and test disaster recovery and business continuity plans to validate that ransomware and other threats won’t impact the integrity of any backups.

Finally, any highly important, sensitive data should be stored on an entirely separate network from the internal network. That way, if ransomware targets the desktop network, it cannot spread to the critical systems and cause complete chaos. While this is a long-term, and challenging strategy, it’s well worth the time and investment for organizations to counter the continuous risk of critical data loss.”


Security Magazine: Reduce Data Breach Risk on World Backup Day 2022

On March 31, 2022, Florindo Gallicchio was featured in the Security Magazine article, Reduce Data Breach Risk on World Backup Day 2022. Preview the article below, or read the full article online.

+ + +

Backing up enterprise data can reduce the effects of ransomware or a data breach. After experiencing a ransomware attack, 83% of victims paid to restore their data — preparing for cyberattacks in advance by conducting frequent data backups may help reduce these costs.

Spreading awareness about data backups is the mission of World Backup Day, which began in 2011. For World Backup Day 2022, Security magazine spoke with security leaders about the importance of enterprise cybersecurity.

Florindo Gallicchio, Managing Director, Head of Strategic Solutions at NetSPI:

This World Backup Day, it’s time to acknowledge how critical data backup has become, especially since many ransomware strains attempt to delete backup files, as we witnessed with Ryuk. Most businesses are faced with two significant risks when it comes to backups: the theft and public disclosure of sensitive data, and the disruption of critical business functions. If either of these risks occur, organizations could endure devastating consequences. To make sure that doesn’t happen, organizations need to proactively put strategies in place to bolster protection against these threat actors.

One way to do this is by ensuring that backups with all of the organization’s critical data are routinely, completely, and securely assessed –– as this is a necessary step in recovering from a possible ransomware attack. These backups should be encrypted so that sensitive data is not disclosed and stored in such a way that an organization can recover its data in a timely manner, as this is necessary to minimize disruption to business operations. Additionally, organizations should regularly revisit and test disaster recovery and business continuity plans to validate that ransomware and other threats won’t impact the integrity of any backups. 

Finally, any highly important, sensitive data should be stored on an entirely separate network from the internal network. That way, if ransomware targets the desktop network, it cannot spread to the critical systems and cause complete chaos. While this is a long-term and challenging strategy, it’s well worth the time and investment for organizations to counter the continuous risk of critical data loss.


BetaNews: World Backup Day Highlights the Importance of Keeping Your Data Safe

On March 31, 2022, Florindo Gallicchio was featured in the BetaNews article, World Backup Day Highlights the Importance of Keeping Your Data Safe. Preview the article below, or read the full article online.

+ + +

Today is World Backup Day, which is a good opportunity to remind you that you only have a couple of days left to get your hands on some free backup software courtesy of our AOMEI giveaway.

It’s also an opportunity to look at the continued importance of backups even in the modern world of clouds and SaaS applications. A new report from Crucial highlights the ongoing cost of data breaches, which has risen 9.8 percent from 2020 to 2021.

There are also plenty of hints and tips on offer to help you make your backups effective and painless to carry out.

It’s important to regularly review your backup strategy says Florindo Gallicchio, managing director and head of strategic solutions at NetSPI, “One way to do this is by ensuring that backups with all of the organization’s critical data are routinely, completely, and securely assessed — as this is a necessary step in recovering from a possible ransomware attack. These backups should be encrypted so that sensitive data is not disclosed and stored in such a way that an organization can recover its data in a timely manner, as this is necessary to minimize disruption to business operations. Additionally, organizations should regularly revisit and test disaster recovery and business continuity plans to validate that ransomware and other threats won’t impact the integrity of any backups.”


Infosecurity Magazine: #WorldBackupDay: 5 Backup Tips to Retain Critical Data Following a Ransomware Attack

On March 31, 2022, Florindo Gallicchio was featured in the Infosecurity article, #WorldBackupDay: 5 Backup Tips to Retain Critical Data Following a Ransomware Attack. Preview the article below, or read the full article online.

+ + +

The phrase “data is the new oil” offers a dramatic yet apt description of the importance of information flows to modern businesses in an increasingly digitized world. Sadly, cyber-criminals are highly aware of this fact, leading to surging cases of data breaches in the past few years.

This is why the annual World Backup Day campaign on March 31 is taking on increasing relevance, sending a timely reminder of the need for backups and best practices surrounding implementation. ExtraHop’s Costlow added: “This World Backup Day should be a call for all organizations to examine how their backup and recovery plan weaves into their overall security strategy to ensure they are protected in the event of a ransomware attack.”

Another important approach is to encrypt data stored in backups. Florindo Gallicchio, managing director, head of strategic solutions at NetSPI, elucidated this approach: “These backups should be encrypted so that sensitive data is not disclosed and stored in such a way that an organization can recover its data in a timely manner, as this is necessary to minimize disruption to business operations. Additionally, organizations should regularly revisit and test disaster recovery and business continuity plans to validate that ransomware and other threats won’t impact the integrity of any backups.”


VentureBeat: What’s Happening in the Attack Surface Market: Mitigating Threats in the Cloud Era

On March 30, 2022, NetSPI was featured in the VentureBeat article, What’s Happening in the Attack Surface Market: Mitigating Threats in the Cloud Era. Preview the article below, or read the full article online.

+ + +

For an increasing number of organizations, the explosion in attack surfaces has reached unmanageable levels amid the COVID-19 pandemic and the widespread adoption of cloud services. In fact, research shows seven in 10 organizations have been compromised by an unknown or unmanaged asset. 

As remote working has grown more popular during the pandemic, infrastructure that sprawls across on-premises and cloud environments has expanded enterprise attack surfaces to the point where they can’t be secured through traditional IT security approaches alone.

NetSPI Brings Penetration Testing to the ASM Market 

As the need for ASM solutions increases, many security vendors are beginning to move into the space. One such provider is NetSPI, a penetration testing-as-a-service provider that has raised $100 million in funding to date and last month launched a new ASM tool that incorporates human penetration testing.

NetSPI’s solution automatically scans attack surface assets and alerts users to high-risk exposure, while NetSPI’s internal team evaluates the risk posed by discovered issues and provides the organization with guidance on how to remediate them.

The use of human penetration testing is unique in the market, and enables organizations to benefit from automated asset scanning alongside the rich risk insights of an experienced penetration testing team, who can identify what threats a risk poses in a way that automated solutions cannot.


How to Build a Layered Approach to Application Security Testing

Siloes still exist in cybersecurity, where related functions and activities operate asynchronously with other parts of the organization. This is especially true with application security.

Various tests occur throughout the software development life cycle (SDLC), but they often lack context or are not in sync with other security activities, leaving organizations with gaps in coverage and a narrow view of their AppSec program.  

To help change the way we approach application security testing today, three AppSec experts came together to discuss this topic in the webinar, Application Security In Depth: Understanding The Three Layers Of AppSec Testing. In this blog, we’ll share key takeaways from the discussion, which features Moshe Zioni, VP of Security Research at Apiiro; Nabil Hannan, Managing Director at NetSPI; and Samir Sherif, CISO at Imperva.

Why Context is Key During Application Security Testing 

Contextual data is important. It helps organizations understand their SDLC through a broad lens and assists in prioritization of workflows and next steps. Not all vulnerabilities identified will be fixed immediately, and context is key to remediating those that pose the highest risk to the business first and fastest. 

Moshe shares the following five different contextual triggers security leaders should pay close attention to in the SDLC. 

Five Contextual Triggers to Leverage in the SDLC 

  1. Design: At the design stage, prioritize according to what threat model sessions you’d like to have. If there are several designs going through an agile development life cycle, prioritize them by balancing the capacity you have as security practitioners against the actual deployment. This stage is also important for triggering a contextual compliance review. If something is required for compliance and you didn’t prepare for it, it will be costly and difficult to go back and implement.
  2. Branch: After a pull request, you should have context around the code itself. First, analyze the code, either through a review or an automatic tool that enriches the data about the code. Through this context point, you can get multiple triggers according to workflows, how lean you want to get, and what priority you assign to the commit itself. If a commit is highly prioritized, for example because it touches sensitive data or comes from a new developer, these context points create a weighting system that helps automate the risk questionnaire and code governance. Once the automation is developed, you’ll have cadence and governance rules for when to trigger each point instead of triggering everything.
  3. Repository: At the repository level, you gain context about the repository itself: what business impact the application will have, what information passes through it, and who the customer is. These points provide a coherent view of what needs to be done to secure your application, especially if you need to follow compliance rules. The repository is not to be overlooked and should have its own triggers and workflows.
  4. CI/CD: The last point of the coding journey is the CI/CD system, or any integration and deployment processes. CI/CD is fluent, so there will be cycles going on throughout the organization. There should also be a lean and safe process for the CI/CD itself. Integrity and provenance for the CI/CD are important to have in terms of automation – as well as putting in place integration for integrity checks across the CI/CD life cycle. 
  5. Production: Before production, you should have another set of eyes look at the information for anything that looks suspicious. 

Along with the context points and material changes, Moshe explains that “all of this comes together to create a complete picture and mission, which is an ongoing cycle that doesn’t disrupt and interrupt the deployment process but gives you confidence on what kind of design and code you’re going to push to the cloud.”

Best Practices for Application Penetration Testing and Secure Code Review 

Many different application security testing activities are completed throughout the SDLC, but penetration testing and secure code review are two of the most common and effective.  

A larger concern, however, is that organizations struggle to optimize the results of these due to a lack of clarity on the results they want to achieve. Below are five best practices organizations can implement to optimize these tests. 

Five Best Practices for Application Penetration Testing 

  1. Determine your business objectives. Organizations need to have a clear understanding of their business objectives and how they will make money. This will aid in building a proper application security roadmap and help organizations allocate resources and identify which areas to focus on. 
  2. Contextualize the vulnerabilities. Don’t just perform a security test; fix the vulnerabilities identified. This means understanding the vulnerabilities, contextualizing them based on the business risks, and figuring out which ones to remediate first.
  3. Acquire buy-in from finance and risk leadership. Gaining support from finance, the Chief Compliance Officer, and other risk leaders and partners will enable organizations to perform testing on a regular cadence with the appropriate resources and budget for testing. 
  4. Perform proper threat modeling and design level analysis. Then, utilize the results to determine new and creative ways that attackers may be trying to gain access to company-wide assets or software that can’t be derived from regular pentesting. 
  5. Invest in continuous pentesting. Point-in-time testing is no longer sufficient if organizations want to protect their software and assets. Instead, it’s time to invest in continuous pentesting to keep up with the rate of change organizations face today. 

One of the earliest times to detect a vulnerability is when the code is being written. Nabil shared this advice on how to start, “From a secure code review perspective, make sure you start aligning different tooling technology and code review activities with your software development cadence so that they are in lockstep in how they’re performed.” 

Here are six additional best practices for secure code review. 

Six Best Practices When Performing a Secure Code Review 

  1. Don’t get complacent. Organizations should be rotating the people who are reviewing source code over time, so everyone is immersed in devising creative ways to discover and fix vulnerabilities.   
  2. Build a methodology for code review. Create a champions program where developers are being trained to write secure code from the get-go. Then reward them for their efforts.  
  3. Transparency is key. Similar to the pentesting best practice above, organizations need to make sure they’re involving folks in leadership and other areas. This means explaining the need for security testing at the code level and how tooling, manual reviews, and automation are helpful with the development process and help build the software securely. 
  4. Prioritize onboarding and scan frequency. Organizations should be testing the right assets, the right applications, and at the right frequency and key timeframe. 
  5. Provide the proper training. Determine how to deal with the different bugs and vulnerabilities that were discovered. This is where it’s important that developers are equipped with the right training and education to fix these vulnerabilities. Another thing to consider is gamifying training so that folks can consume remediation guidance in bite-sized pieces.
  6. Measure and improve. Aim for continuous improvement. To accomplish this, organizations need to ensure they’re capturing key metrics and evaluating remediation rates. Are there vulnerabilities that keep recurring? Are developers writing better quality code over time? Are they able to abstract out certain security controls and put them into a secure development framework to help reduce the cost, time, and effort it takes to fix the vulnerabilities?


Solutions to Consider in the Implementation Journey 

In application security, risk is one of the key drivers in delivering effective solutions for your application security program. “At the end of the day, it’s really about risk. How you manage risk and how you manage resiliency for your solutions. Not only from the AppSec perspective but also from the perspective of running your business and supporting the business that you’re in today,” shared Samir.  

Samir explained that the three biggest drivers for security testing include: 

  • How well am I protecting customer data? 
  • How effectively am I building resilience for the technologies that I am providing as a service to customers? 
  • How well do all the different capabilities from infrastructure security to monitoring solutions interplay with each other in application security? 

What matters most in application security? According to Samir, there isn’t a single solution; we need a comprehensive view across the whole environment. Here, Samir shares examples of solution capabilities he recommends security teams implement, especially if you are selling or servicing solutions to your customers.

  • Awareness and Education
  • In-App Protection
  • Advanced Solutions
  • Code Analysis
  • Perimeter Protection
  • Proactive Solutions 

Awareness, education, and code analysis will continue to evolve. Adversaries are always changing the game when it comes to finding vulnerabilities given the popularity of third-party and open-source components. There is always a new need to look at different capabilities based on this risk context. Solutions that are not only advanced but practical will be increasingly important.  

Samir continued, “Shift left-to-right is critical.” To measure the application security program, organizations need to look at the SDLC from one end to the other. From different contexts – how they develop and train their engineers to what they’re seeing on the infrastructure side with solutions that provide visibility into how they’re deploying and the types of attack patterns that target their applications.  

Understanding the interplay between these capabilities will help organizations understand what to address and prioritize to drive the effectiveness of their application security program.

A Layered Approach to Application Security Testing  

Using the strategies discussed in this blog post and in the webinar, you’ll be able to implement a layered approach to AppSec that will help you build a world-class AppSec program. It starts with learning how to incorporate a risk context across the SDLC, then determining the key timeframes to implement application security testing and understanding how your solution capabilities interplay with one another.

Listen to the full conversation on building a world-class, layered application security program.

Cyber Attacks on Ukraine Signal Need for Heightened Security

Four Tips to Proactively Improve Your Security Posture

Is cyber warfare in your crisis management plan? If not, it’s time to revisit your incident response plans and get proactive with your security as tensions rise in Eastern Europe. 

Recently, several Ukrainian government and bank websites were offline as a result of a massive distributed denial-of-service (DDoS) attack. Shortly following these attacks, a new “wiper” malware targeting Ukrainian organizations was discovered on hundreds of machines to erase data from targeted systems.  

Experts believe both security incidents were carried out by Russian cybercriminals or nation-state hackers, creating a new digital warfare environment that affects organizations worldwide.  

Now, on the heels of the Biden administration issuing new sanctions against Russian banks, the U.S. government is advising public and private organizations to heighten cybersecurity vigilance related to ransomware attacks carried out by the newly identified wiper malware. In fact, New York recently issued an “ultra high alert” as the state faces increased risk of nation-state sponsored cyber attacks.  

As cybercrime escalates and tensions mount, business leaders can take the following four steps to bolster security measures and remain better protected against potential risk: 

1. Evaluate Your Current Security Posture

Before implementing any new initiatives or overhauling existing measures, it’s important to evaluate the organization’s current security posture. This means taking a closer look at its attack surface, customer environments, vendor relationships, and other partnerships to understand an organization’s true exposure to malicious actors.  

Businesses that have proactively developed an incident response playbook are best prepared to evaluate their position, and large organizations likely have policies that cover geopolitical unrest. However, with the threats still unclear, even late adopters can allocate resources to strengthen their security posture in weeks or even days. 

2. Refer to CISA’s Shields Up Initiative  

The Cybersecurity and Infrastructure Security Agency (CISA) recently launched Shields Up, a free resource that features new services, the latest threat research, recommendations for business leaders, and actions to protect critical assets.  

Whether an IT security professional, or a top C-suite leader, all roles within an organization should familiarize themselves with Shields Up and the actionable advice recommended by CISA.  

Such advice includes reducing the likelihood of a damaging cyber intrusion; taking steps to quickly detect a potential intrusion; ensuring that the organization is prepared to respond if an intrusion occurs; and maximizing the organization’s resilience to a destructive cyber incident. 

3. Prioritize Proactive Offensive Security Measures

Proactive cybersecurity testing is often an afterthought for business leaders evaluating breach preparedness. In reality, enterprise security testing tools and penetration testing services that strengthen an organization’s cybersecurity posture from the outset should be a top priority, now more than ever.  

While many tend to focus on the physical disruption nation-state attacks can cause, popular cybercriminal tactics like distributed denial-of-service and ransomware can be mitigated through proactive offensive security activities like Penetration Testing as a Service (PTaaS), red team, breach and attack simulation, or attack surface management. 

4. Understand that Security is Everyone’s Responsibility

The weakest link within any organization is its employees. Everyone working for, or with, the business should understand that security is everyone’s business – from the CEO to the seasonal intern, and even the third-party contractor.  

For this reason, organizations should implement frequent, hands-on security training, and regularly test the effectiveness of such training with simulated attacks to determine if more work needs to be done. After all, it only takes one accidental click on a malicious link to cripple an entire organization and its assets. 

During times of unrest, cybercrime skyrockets as individuals become distracted and increasingly vulnerable. It’s important to remain vigilant while the current attacks continue, even if an organization does not directly work with Ukraine or Russia.

Connect with Team NetSPI to learn more about our testing capabilities.

Abusing Azure Hybrid Workers for Privilege Escalation – Part 1

On the NetSPI blog, we often focus on Azure Automation Accounts. They offer a fair amount of attack surface area during cloud penetration tests and are a great source for privilege escalation opportunities.  

During one of our recent Azure penetration testing assessments, we ran into an environment that was using Automation Account Hybrid Workers to run automation runbooks on virtual machines. Hybrid Workers are an alternative to the traditional Azure Automation Account container environment for runbook execution. Outside of the “normal” runbook execution environment, automation runbooks need access to additional credentials to interact with Azure resources. This can lead to a potential privilege escalation scenario that we will cover in this blog.

TL;DR

Azure Hybrid Workers can be configured to use Automation Account “Run as” accounts, which can expose the credentials to anyone with local administrator access to the Hybrid Worker. Since “Run as” accounts are typically subscription contributors, this can lead to privilege escalation from multiple Azure Role-Based Access Control (RBAC) roles.

What are Azure Hybrid Workers?

For those that need more computing resources (CPU, RAM, Disk, Time) to run their Automation Account runbooks, there is an option to add Hybrid Workers to an Automation Account. These Hybrid Workers can be Azure Virtual Machines (VMs) or Arc-enabled servers, and they allow for additional computing flexibility over the normal limitations of the Automation Account hosted environment. Typically, I’ve seen Hybrid Workers as Windows-based Azure VMs, as that’s the easiest way to integrate with the Automation Account runbooks. 


In this article, we’re going to focus on instances where the Hybrid Workers are Windows VMs in Azure. This is the most common configuration that we run into, and Linux VMs in Azure can’t be configured to use the “Run as” certificates, which are the target of this blog.

The easiest way to identify Automation Accounts that use Hybrid Workers is to look at the “Hybrid worker groups” section of an Automation Account in the portal. We will be focusing on the “User” groups, versus the “System” groups for this post. 


Additionally, you can use the Az PowerShell cmdlets to identify the Hybrid Worker groups, or you can enumerate the VMs that have the “HybridWorkerExtension” VM extension installed. I’ve found this last method is the most reliable for finding potentially vulnerable VMs to attack.
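As a rough sketch of that last method (the filtering logic here is our own illustration, not a published tool; `Get-AzVM` and `Get-AzVMExtension` are standard Az cmdlets, and the extension name matches what we have seen in test environments), enumeration might look like this:

```powershell
# Hypothetical sketch: list running Windows VMs that have the Hybrid Worker
# extension installed. Assumes an authenticated Az session (Connect-AzAccount).
Get-AzVM -Status | Where-Object { $_.PowerState -eq "VM running" } | ForEach-Object {
    # Pull the extensions for each VM; ignore VMs we can't read
    $ext = Get-AzVMExtension -ResourceGroupName $_.ResourceGroupName `
        -VMName $_.Name -ErrorAction SilentlyContinue
    if ($ext.Name -contains "HybridWorkerExtension") {
        # Candidate Hybrid Worker found
        $_ | Select-Object Name, ResourceGroupName
    }
}
```

Enumerating by extension catches workers even when you lack read access to the Automation Account itself, which is why it tends to be the most reliable approach.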

Additional Azure Automation Accounts Research:

Running Jobs on the Workers

To run jobs on the Hybrid Worker group, you can modify the “Run settings” in any of your runbook execution options (Schedules, Webhook, Test Pane) to “Run on” the Hybrid Worker group.


When the runbook code is executed on the Hybrid Worker, it runs as the “NT AUTHORITY\SYSTEM” account on Windows, or “root” on Linux. If an Azure AD user has a role with Automation Account permissions but no VM permissions (e.g., Automation Contributor), this could allow them to gain privileged access to VMs.
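For illustration, the same “Run on” selection can be made from PowerShell when starting a job; the resource names below are placeholders, but `-RunOn` is the `Start-AzAutomationRunbook` parameter that targets a Hybrid Worker group:

```powershell
# Hypothetical sketch: start an existing runbook on a Hybrid Worker group
# instead of the default Automation Account sandbox. All names are placeholders.
Start-AzAutomationRunbook -ResourceGroupName "ExampleRG" `
    -AutomationAccountName "ExampleAutomationAccount" `
    -Name "ExampleRunbook" `
    -RunOn "ExampleHybridWorkerGroup"
```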

We will go over this in greater detail in part two of this blog, but Hybrid Workers utilize an undocumented internal API to poll for information about the Automation Account (Runbooks, Credentials, Jobs). As part of this, the Hybrid Workers are not supposed to have direct access to the certificates that are used as part of the traditional “Run As” process. As you will see in the following blog, this isn’t totally true.

To make up for the lack of immediate access to the “Run as” credentials, Microsoft recommends exporting the “Run as” certificate from the Automation Account and installing it on each Hybrid Worker in the worker group. Once installed, the “Run as” credential can then be referenced by the runbook to authenticate as the app registration.

If you have access to an Automation Account, keep an eye out for any lingering “Export-RunAsCertificateToHybridWorker” runbooks that may indicate the usage of the “Run as” certificates on the Hybrid Workers.


The issue with installing these “Run As” certificates on the Hybrid Workers is that anyone with local administrator access to the Hybrid Worker can extract the credential and use it to authenticate as the “Run as” account. Given that “Run as” accounts are typically configured with the Contributor role at the subscription scope, this could result in privilege escalation.

Extracting “Run As” Credentials from Hybrid Workers

There are two main ways of accessing Windows VMs in Azure: direct authentication (local or domain accounts) and platform-level command execution (VM Run Command in Azure). Since there are countless ways that someone could gain access to credentials with local administrator rights, we won’t be covering standard Windows authentication. Instead, we will briefly cover the multiple Azure RBAC roles that allow for various ways of command execution on Azure VMs.

Affected Roles:

Where noted above (VM Extension Rights), the VM Extension command execution method comes from the following NetSPI blog: Attacking Azure with Custom Script Extensions.

Since the above roles are not the full Contributor role on the subscription, it is possible for someone with one of the above roles to extract the “Run as” credentials from the VM (see below) to escalate to a subscription Contributor. This is a somewhat similar escalation path to the one that we previously called out for the Log Analytics Contributor role.

Exporting the Certificate from the Worker

As a local administrator on the Hybrid Worker VM, it’s fairly simple to export the certificate. With Remote Desktop Protocol (RDP) access, we can just manually go into the certificate manager (certmgr), find the “Run as” certificate, and export it to a pfx file.


At this point we can copy the file from the Hybrid Worker to use for authentication on another system. Since this is a bit tedious to do at scale, we’ve automated the whole process with a PowerShell script.
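A minimal sketch of the manual export, run as local administrator on the worker (the subject filter and export password are placeholders; `Export-PfxCertificate` will only succeed if the private key was installed as exportable):

```powershell
# Hypothetical sketch: locate a certificate with a private key in the machine
# store and export it to a PFX file. Filter/password values are illustrative.
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.HasPrivateKey } |
    Select-Object -First 1

# Password to protect the exported PFX (placeholder value)
$pfxPwd = ConvertTo-SecureString "ExportPassword123!" -AsPlainText -Force

# Fails if the key was marked non-exportable at install time
$cert | Export-PfxCertificate -FilePath "C:\temp\runas.pfx" -Password $pfxPwd
```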

Automating the Process

The following script is in the MicroBurst repository under the “Az” folder:

https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzHybridWorkerExtraction.ps1

This script will enumerate any running Windows virtual machines configured with the Hybrid Worker extension and will then run commands on the VMs (via Invoke-AzVMRunCommand) to export the available private certificates. Assuming the Hybrid Worker is only configured with one exportable private certificate, this will return the certificate as a Base64 string in the run command output.

PS C:\temp\hybrid> Invoke-AzHybridWorkerExtraction -Verbose
VERBOSE: Logged In as kfosaaen@notarealdomain.com
VERBOSE: Getting a list of Hybrid Worker VMs
VERBOSE: 	Running extraction script on the HWTest virtual machine
VERBOSE: 		Looking for the attached App Registration... This may take a while in larger environments
VERBOSE: 			Writing the AuthAs script
VERBOSE: 		Use the C:\temp\HybridWorkers\AuthAsNetSPI_tester_[REDACTED].ps1 script to authenticate as the NetSPI_sQ[REDACTED]g= App Registration
VERBOSE: 	Script Execution on HWTest Completed
VERBOSE: Run as Credential Dumping Activities Have Completed

The script will then write this Base64 certificate data to a file and use the resulting certificate thumbprint to match against App Registration credentials in Azure AD. This will allow the script to find the App Registration Client ID that is needed to authenticate with the exported certificate.

Finally, this will create an “AuthAs” script (noted in the output) that can be used to authenticate as the “Run as” account, with the exported private certificate.

PS C:\temp\hybrid> ls | select Name, Length
Name                                        Length
----                                        ------
AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1   1018
NetSPI_tester_[Redacted_Sub_ID].pfx         2615

This script can be run with any RBAC role that has VM “Run Command” rights on the Hybrid Workers to extract out the “Run as” credentials.

Authenticating as the “Run As” Account

Now that we have the certificate, we can use the generated script to authenticate to the subscription as the “Run As” account. This is very similar to what we do with exporting credentials in the Get-AzPasswords function, so this may look familiar.

PS C:\temp\hybrid> .\AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1
   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint                                Subject
----------                                -------
BDD023EC342FE04CC1C0613499F9FF63111631BB  DC=NetSPI_tester_[Redacted_Sub_ID]

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud], [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp\hybrid> (Get-AzContext).Account                                                                               
Id                    : 52[REDACTED]57
Type                  : ServicePrincipal
Tenants               : {47[REDACTED]35}
Credential            :
TenantMap             : {}
CertificateThumbprint : BDD023EC342FE04CC1C0613499F9FF63111631BB
ExtendedProperties    : {[Subscriptions, d4[REDACTED]b2], [Tenants, 47[REDACTED]35], [CertificateThumbprint, BDD023EC342FE04CC1C0613499F9FF63111631BB]}
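The core steps of a generated “AuthAs” script can be approximated as follows; `$appId` and `$tenantId` stand in for the App Registration client ID and tenant ID that the extraction script recovers from Azure AD:

```powershell
# Hypothetical sketch of what an "AuthAs" script does: import the exported PFX
# into the local machine store, then authenticate as the service principal.
# $appId, $tenantId, and the PFX password are placeholders.
$pfxPwd = ConvertTo-SecureString "ExportPassword123!" -AsPlainText -Force
$cert = Import-PfxCertificate -FilePath ".\runas.pfx" `
    -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPwd

# Authenticate as the App Registration using the certificate thumbprint
Connect-AzAccount -ServicePrincipal `
    -ApplicationId $appId `
    -Tenant $tenantId `
    -CertificateThumbprint $cert.Thumbprint
```

Once connected, `Get-AzContext` should show a ServicePrincipal account type, as in the output above.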

Alternative Options

Finally, any user with the ability to run commands as “NT AUTHORITY\SYSTEM” on the Hybrid Workers is also able to assume the authenticated Azure context that results from authenticating (Connect-AzAccount) to Azure while running a job as a Hybrid Worker. 

This would result in users being able to run Az PowerShell module functions as the “Run as” account via the Azure “Run command” and “Extension” features that are available to many of the roles listed above. Assuming the “Connect-AzAccount” function was previously used with a runbook, an attacker could just use the run command feature to run other Az module functions with the “Run as” context.

Additionally, since the certificate is installed on the VM, a user could just use the certificate to directly authenticate from the Hybrid Worker, if there was no active login context.

Summary

In conjunction with the issues outlined in part two of this blog, we submitted our findings to MSRC.

Since this issue ultimately relies on an Azure administrator giving a user access to specific VMs (the Hybrid Workers), it’s considered a user misconfiguration issue. Microsoft has updated their documentation to reflect the potential impact of installing the “Run as” certificate on the VMs. Additionally, you could also modify the certificate installation process to mark the certificates as “non-exportable” to help protect them.


We would recommend against using “Run as” accounts for Automation Accounts and instead switch to using managed identities on the Hybrid Worker VMs.

Stay tuned to the NetSPI technical blog for the second half of this series that will outline how we were able to use a Reader role account to extract credentials and certificates from Automation Accounts. In subscriptions where Run As accounts were in use, this resulted in a Reader to Contributor privilege escalation.

Prior Work

While we were working on these blogs, the Azsec blog put out the “Laterally move by abusing Log Analytics Agent and Automation Hybrid worker” post that outlines some similar techniques to what we’ve outlined above. Read the post to see how they make use of Log Analytics to gain access to the Hybrid Worker groups.


Live from SecureWorld Boston 2022

The NetSPI team ventured from Minneapolis to Boston to attend the 2022 SecureWorld cybersecurity conference. For the first time in over two years, we saw plenty of smiling faces in the halls of the Hynes Convention Center as Boston lifted its COVID-19 mask mandate just days before the event.

While NetSPI has a growing satellite team in Boston, including managing director Nabil Hannan, a frequent contributor to the NetSPI Executive blog, it was the first or second visit to Boston for many of us. Needless to say, we took full advantage of the oysters, lobster rolls, and Italian food. Boston is truly a special city, which was made evident by the local security community members that we connected with at SecureWorld and at our post-event happy hour at Eataly.

If you skim through the SecureWorld Boston 2022 agenda you’ll quickly recognize common themes: the cybersecurity skills and diversity gap, ransomware prevention, application security, and more.

But the session descriptions can only tell you so much. Here are four key narratives to take away from the event.

Security Awareness Training Takes Center Stage

In nearly every session we attended, security awareness training was referenced in some capacity. In many cases it was the final – and most actionable – recommendation provided.

How should you prepare for the increased cybersecurity risk amid Russia’s attack on Ukraine? Security awareness training. How can you prevent ransomware? Security awareness training. How do you secure the cloud? You guessed it… security awareness training.

In one session, titled A Whole Lotta BS (Behavioral Science) About Cybersecurity, Lisa Plaggemier, Executive Director at the National Cybersecurity Alliance, analyzed the results from a study that benchmarks the current state of security awareness and training.

The most shocking statistics from the report?

  • Only 22% of respondents always report phishing emails to their email platform. 
  • 28% of respondents do not know how to report phishing emails. 
  • Only 12% of respondents use a password management platform, which Lisa attributed to the lack of trust within the industry due to the breaches they experienced early on. 
  • 48% of respondents have never heard of multi-factor authentication (MFA).

Lisa went on to explain that capability, opportunity, and motivation are necessary to get someone to form a new habit. And when it comes to cybersecurity hygiene, motivation is the hardest to achieve.

A question from an audience member validated this concept. They asked, “How can I motivate my employees to report phishing attempts more regularly?” Lisa and the audience chimed in with actionable recommendations, including: 

  • Give people validation. Have an automated response that thanks employees for successfully identifying a phishing attempt. People want validation and a simple automated “thank you” note goes a long way. 
  • Gamify your social engineering assessments and reward success. An audience member implemented a program where employees receive points for properly reporting a phishing attempt. They can then cash in those points to purchase from the company store. 
  • Include HR in your conversations around social engineering. They will have great ideas as to what motivates people in the workplace and can help set policies for those who repeatedly fail phishing assessments.

These were timely suggestions as phishing attempts are not only more frequent, but also more successful. The State of the Phish report from Proofpoint found that 83% of organizations experienced a successful email-based phishing attack in 2021, versus 57% in 2020.

It’s no surprise that security awareness was top of mind for Boston’s security leaders at SecureWorld.

Security Decisions Should Never Be Made in a Silo

One of the most engaging sessions was called Congratulations on CISO, Now What? Bill Bowman, CISO at Emburse, spoke directly to newer CISOs about how to set a solid foundation for success in the demanding role.

He began his talk with an overview of the OODA Loop: Observation, Orientation, Decision, and Action. Where are your crown jewels? How does your business make money? What exactly did you inherit? What security framework are you using? These were a few of the questions he urged CISOs to answer as they get started.

“You are the brakes that make your company go faster,” explained Bill. Yes, security may cause friction to business processes, but it also adds immense value. And it’s up to the CISO to showcase and communicate that value they bring to the table.

He went on to explain why security decisions cannot be made in a silo. “The bad guys always collaborate, and the good guys don’t,” Bill said as he urged the CISOs in the room to establish security decision making teams within their organizations.

He continued with actionable advice for building those teams, specifically a policy review board and an infosec committee. Bill suggested that the policy review board should consist of thought leaders with clout, along with legal teams, while the infosec committee should include not only the person the CISO reports to, but also the technical security engineering team: those in charge of penetration testing, vulnerability scanning, and bug bounty programs. These are the people who understand where your security program stands and how to improve it, tactically.

Bill ended the session with insights around security awareness and the need to understand what types of security content people are interested in.

Delivering content that truly interests them, whether that’s through monthly meetings or an internal newsletter, allows you to seamlessly connect your security program to your employees. Plus, he dug deeper into how to establish metrics that matter, including vulnerability management metrics, and the need to prepare for a crisis early on and establish relationships with law enforcement contacts in the event of a breach. If you ever get a chance to hear Bill speak, don’t miss it!

Cloud Security is a Shared Responsibility

One narrative that resonated across all of the cloud security sessions was the Shared Responsibility Model: the concept that cloud security is the responsibility of both the cloud providers (AWS, Azure, Google Cloud Platform) and the organizations that use their technology.

At NetSPI, we practice this through our cloud penetration testing services. We help our customers identify cloud platform misconfigurations and fix vulnerabilities on the end-user side. Every organization must take responsibility for their own cloud security.

In an early panel, speakers discussed how the shift to fully remote and hybrid workforce models has increased the urgency to improve cloud security. When the moderator asked the audience, “Who has a formal work-from-home policy in place for their employees?” only a few hands were raised in a crowd of at least 50, a shocking result.

They continued to speak about the long-term technological impact of COVID-19, how expectations have changed, and why cloud services have become much more valuable today. Their final words of wisdom? If you’re migrating to the cloud, take the time to do things properly the first time. Silo your technologies, work with your data scientists, and leverage cloud pentesting and bug bounty programs to find security flaws before bad actors do.

It’s Necessary to Pause Before Reacting to a Crisis

With all eyes on Ukraine and the threat of cyberwarfare looming across the globe, panelists from an afternoon keynote session, Live from Ukraine: How Does Your Crisis Management Playbook Stack up During a Real-World Conflict?, explained why organizations need to pause before reacting to a crisis situation.

Why? Empathy.

DataRobot CISO Andy Smeaton joined the panel live from Poland, where he was helping Ukrainians find safety and aiding in humanitarian efforts. He was joined by Esmond Kane, CISO at Steward Health Care; Selva Vinothe Mahimaidas, CISO at Houghton Mifflin Harcourt; and Eric Gauthier, VP of Infrastructure & Security at Emsi Burning Glass.


“Plans are essentially useless, but planning is essential,” one of the panelists quoted. They discussed tabletop exercises and provided some recommendations on how to run an effective session and what to do when a real crisis, such as a breach, happens. Advice included:

  1. Review your incident response plan. Refresh your understanding of the content and revisit the corresponding notes from the last tabletop for hidden gems and key insights.
  2. Tabletop exercises should focus on outcomes.
  3. Start small. Book an hour to start and grow and improve the sessions from there.
  4. Use what you have sitting in front of you. Evaluate the security controls you already have and understand how to use them to your advantage versus buying new technologies to fill a gap.
  5. Ask yourself, is this your crisis?
  6. Include security and IT teams in the planning phase. Let them participate and exercise judgment. It’s important to have a critical view of your playbook.
  7. Take a deep breath. The more you project calmness and control, the better you will respond.
  8. Be realistic. Smaller security teams will take more time, resources, and money compared to larger teams.

It’s challenging to prioritize planning for something that might not happen, but it’s vitally important. The most difficult factor of crisis planning and response according to the panelists? The human factor. There’s no playbook for how to deal with your emotions.

The panelists went on to explain that CISOs must practice empathy as employees grapple with the impact of the war. Humans are imperfect and they will fall victim to misinformation and phishing attempts. It’s up to security leaders to hit pause, reflect, and lean on their team to make thoughtful decisions together to not only protect their business and their job, but also their people. 

A Successful Happy Hour with Team NetSPI

Following the event on Wednesday, NetSPI hosted a happy hour at Eataly just next door to the convention center. We continued the conversations around security leadership, increasing threats, penetration testing services, and more while we enjoyed themed cocktails (appropriately named “NetSPIked” and “Netgroni”) and delicious Italian bites.


Want to connect with us at a security event or NetSPI happy hour?

Visit our events page to find out where NetSPI is heading next!

