Last week Karl Fosaaen described in his blog the various trials and tribulations we went through at a hardware level in building a dedicated GPU cracking server. This week I will be doing a complete walkthrough for installing all the software that we use on our box. This includes the operating system, video drivers, oclHashcat-plus, and John the Ripper. Because we have AMD video cards, the driver installation and compiling John the Ripper sections will be tailored for AMD, sorry Nvidia users.
Installing the OS:
For an operating system, either Linux or Windows will work, but for a headless server, Linux is the best choice. The only downside with Linux is that video card driver support, especially for AMD, somewhat lags behind its Windows counterpart. The good news is that both AMD and Nvidia have been improving their Linux drivers in recent years.
Any Linux distribution will do, but for our server, we opted for Ubuntu 12.10 64-Bit server edition to do the most minimal setup. Much of the information for the next few sections is from the hashcat wiki.
To start off, download the Ubuntu 12.10 server edition ISO from Ubuntu. We don’t have a CD drive on our server, so we had to copy the ISO to a flash drive. YUMI and UNetbootin make this process painless on Windows and Linux, respectively. Otherwise, the ISO can be burned to a disc.
Boot up the Ubuntu image, choose your language, and select Install Ubuntu Server.
Navigate through the installation options and select your preferences. For most people, the defaults should be sufficient. Then create your user when the dialog comes up. When the installation reaches the “Partition Disks” section, either manually set up the partitions (if you know what you’re doing) or just use the “Guided – use entire disk” option. We chose not to use LVM on our box, but that choice is up to you.
After you are done partitioning your hard drive, write the changes to the disk. If you have an HTTP proxy, enter the information when the dialog appears. If not, then just continue. Next, select whether you would like automatic updates enabled. We opted not to, but it’s entirely up to you. When the software selection appears, select OpenSSH server by navigating to it with the arrow keys and pressing spacebar to select the option.
None of the other packages are required unless you need them. Press enter to install the software. When the installation is finished, install GRUB to the master boot record and reboot. You should now be booted into your new Ubuntu server!
Setting Up Ubuntu:
Before we install the video drivers, we have to set up our Ubuntu server with X11. This is because the AMD drivers require X11 to interact with the video cards to obtain fan speeds and GPU temperatures, which are very important to monitor while cracking away.
To begin, ssh into your server and update Ubuntu with the following command:
sudo apt-get update && sudo apt-get upgrade
After Ubuntu has updated, we will need to install a minimal X11 environment that our user can automatically log into when the server is rebooted. This ensures the X server is always running, which in turn allows continuous cracking without any hiccups.
To keep it simple, a lightweight window manager is recommended. Openbox, fluxbox, and blackbox are three simple, lightweight window managers that we can use. You are by no means restricted to a window manager; if you want GNOME, Xfce, or KDE, those can be installed too. For this installation, we will install fluxbox with lightdm as the display manager. To install these, run the following command:
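Package names below assume Ubuntu 12.10’s default repositories; the xorg metapackage pulls in a minimal X11 stack alongside fluxbox and lightdm:

```shell
# Install a minimal X11 stack, the fluxbox window manager,
# and the lightdm display manager
sudo apt-get install xorg fluxbox lightdm
```

To have lightdm log your user in automatically after a reboot, an autologin-user=&lt;username&gt; line can be added under the [SeatDefaults] section of /etc/lightdm/lightdm.conf.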
This should install all the necessary packages for an X11 environment to run. Now that we have an X11 environment installed, we need to let applications from the console know which display we are using. To do this, we set the DISPLAY variable to our current display. The format for the DISPLAY variable is hostname:display. For a local instance, the hostname can be omitted. The default display is usually going to be 0. Run the command below to set your current display to 0.
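Assuming the default display number, that looks like:

```shell
# Tell console applications which X display to use (display 0 assumed)
export DISPLAY=:0

# Persist the setting for future logins
echo 'export DISPLAY=:0' >> ~/.bashrc
```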
Add the above command to your .bashrc to make it persistent whenever your user logs in. I have run into many issues because I did not have this set, so make sure your .bashrc is set up with the correct display location.
Now that our X11 environment is setup, we can install the AMD drivers.
Installing AMD Drivers:
To begin installing the AMD drivers, we need to install some prerequisites. First install unzip with the following command
sudo apt-get install unzip
Next, we need to install the dependencies for fglrx, the proprietary AMD Linux driver in Ubuntu’s repositories. The only difference between fglrx and AMD’s Catalyst drivers is that the latter is newer, but they both require the same dependencies. Run the following command to install the fglrx dependencies:
sudo apt-get build-dep fglrx
If the fglrx dependencies are not installed, the AMD driver installation will fail with an fglrx dependency error.
Installing oclHashcat-plus:
oclHashcat-plus comes in a 7z archive, so we need to install p7zip to extract it.
sudo apt-get install p7zip
Run p7zip with the -d flag to extract a 7z file.
p7zip -d oclHashcat-plus-0.14.7z
Navigate to the newly extracted oclHashcat-plus directory and run one of the example scripts to test the cracking process.
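For example, with oclHashcat-plus 0.14 the extracted directory looks roughly like this; script names vary between releases, so list the directory first:

```shell
cd oclHashcat-plus-0.14
ls *.sh            # see which example scripts this release ships with
./oclExample0.sh   # run one of the bundled example attacks
```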
If all goes well, you should see your cards loading up and the hash getting cracked! If not all of your cards are recognized, make sure that your xorg.conf was created properly; try regenerating it with amdconfig and rebooting.
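If the xorg.conf does need regenerating, amdconfig (AMD’s name for aticonfig) can write a fresh configuration covering every detected adapter. This assumes the Catalyst driver installed correctly:

```shell
# Write an xorg.conf entry for every detected AMD GPU, overwriting
# any existing configuration, then reboot so it takes effect
sudo amdconfig --adapter=all --initial -f
sudo reboot
```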
Next, we will install John the Ripper with OpenCL support.
Installing John the Ripper
Like oclHashcat-plus, John also supports cracking hashes on GPUs, but it must be compiled with the options to do so. Much of the information here is taken from the john GPU wiki (https://openwall.info/wiki/john/GPU).
First, install the libssl-dev package from apt-get so that John compiles correctly.
sudo apt-get install libssl-dev
Navigate to the john src directory. Compile john with OpenCL for either 32 bit or 64 bit with
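In the jumbo source trees of that era, the OpenCL build targets were named by architecture; assuming a similar tree, the two builds would be:

```shell
# 32-bit build with OpenCL support
make linux-x86-opencl

# 64-bit build with OpenCL support
make linux-x86-64-opencl
```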
respectively. John can also be compiled with CUDA support if you have Nvidia cards; the information on how to do that is located on their wiki.
If you get openssl headers not found during compilation, install the libssl-dev package.
Navigate back to the run directory and your newly compiled john binary should be there. You can test that John can use your GPUs by running a test command.
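A quick sanity check, assuming an OpenCL format such as raw-md5-opencl was built in:

```shell
cd ../run
./john --list=formats | grep -i opencl   # confirm OpenCL formats are present
./john --test --format=raw-md5-opencl    # benchmark one format on the GPU
```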
This guide details one of many possible setups for a GPU cracking server. When all is done, our cracking server built with these specifications works very well. In Karl’s blog, he describes common ways to obtain hashes to crack on Windows, Linux, and web applications.
File upload vulnerabilities and web shells are not a novelty when talking about web application security. It’s not rare to see a web shell result in a full compromise of the web server. For example, Metasploit can generate uploadable web payloads that can initiate Metasploit shells. It’s also not that rare that the same web server hosts multiple web applications, all with their own back-end database connectivity.
I thought it would be nice to know how much data we can gain access to by simply uploading a web shell to a web server if we decided to take a step back and chose not to completely compromise it. This really becomes more practical when you’re testing an application in a QA environment and you want to show the client that access to a random QA application may grant you direct access to databases used by other applications, even critical production databases.
To simplify the process I rewrote an existing .aspx web shell and included PowerShell functionality to allow for database connectivity to create a new CmdSql.aspx web shell. Keep in mind that the shell only works on IIS servers that allow .aspx execution, PowerShell has to be available on the web server, and the current PowerShell code only allows connectivity to MSSQL servers. Not perfect, but nice enough for me.
It’s worth noting that the CmdSql shell can help in escalating an attack in tightly configured environments. If ingress and egress filtering are properly configured, normal Metasploit bind or reverse shells may not work. And if ingress filtering from the web server limits traffic to database communication, attacking databases may provide the means to escalate the attack into the internal network.
CmdSql.aspx Script Overview
The CmdSql.aspx web shell supports three different functions: OS command execution, web.config parsing, and SQL query execution. Below is an overview of the functionality and a basic screen shot.
OS Command Execution
This is really the core definition of a web shell I guess. Apart from the obvious, the command execution can be used to locate the web directories (such as C:\inetpub) and thus make locating web.configs faster for the next step. Below is a basic example screenshot.
Web.config Parsing
For the sake of CmdSql.aspx, the main function of web.config is to store the database connection strings. There can be multiple connection strings for an application, and there can be multiple web.configs per server. The connection strings can be either clear text or encrypted. Either way, they are needed for arbitrary SQL query execution.
CmdSql.aspx looks for all web.config files in the provided directory and extracts all the connection strings. If a connection string is encrypted, aspnet_regiis is first used to decrypt the configuration file (in a temp folder). Aspnet_regiis is a .NET tool typically used to encrypt web.configs; CmdSql attempts to find the newest version of the tool to decrypt the web.config. No key or other decryption information has to be provided to aspnet_regiis, just the file location. I haven’t done comprehensive testing or research to determine what permissions are needed to run the program, but it seems to always work on my test systems. I decided to use aspnet_regiis even though the WebAdministration snap-in could probably be used and would be “cleaner”; I just wasn’t sure whether it’s installed with IIS by default or is otherwise common. Below is a basic example screenshot.
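For reference, the decryption step CmdSql.aspx automates boils down to a single aspnet_regiis call. The application path below is a hypothetical example, and the framework version directory varies by server:

```
:: From the matching .NET Framework directory, decrypt the
:: connectionStrings section of the web.config in the given directory
cd %WINDIR%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis -pdf "connectionStrings" "C:\inetpub\wwwroot\ExampleApp"
```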
SQL Query Execution
Now that the web.configs are successfully parsed (hopefully) and the connection strings are extracted, they can be popped into a text box in the web shell and arbitrary SQL queries can be executed on the targeted database server. Below is a basic screen shot example.
Many times during our mobile application penetration tests, we find that applications are vulnerable to man-in-the-middle (MITM) attacks. Certificate pinning is one part of the answer to MITM attacks in a mobile application. For those who do not know about certificate pinning, this is not pinning your CISSP certificate to the wall.
What is it?
Certificate pinning is hardcoding or storing the information for digital certificates/public keys in a mobile application. Since the predefined certificates are used for secure communication, all others will fail, even if the user trusted other certificates.
In a mobile application, the application knows which servers it will connect to, so it can check for those specific certificates. A browser cannot implement certificate pinning, since it is designed for general-purpose communication.
What happens during an SSL Connection?
When an application sees an SSL certificate from a server, it should verify two things:
The certificate is signed by a trusted root certificate authority (CA)
The server’s name (via DNS) matches the Common Name (CN) presented in the SSL certificate
In the case where these do not match, the application (or browser) throws up a warning and lets the user decide what to do. In many cases, the general user population will not understand the warning and just decide to accept the invalid certificate.
What are we trying to do by certificate pinning?
The idea is to prevent a man-in-the-middle attack, in which the attacker gets into the middle of the conversation between a client and a server. The attacker could be simply eavesdropping on the conversation, or could be changing the data as it moves between the client and the server.
An attacker who gains control of a user’s operating system can install trusted root Certificate Authorities. These root CAs will be able to sign new certificates, which will satisfy SSL validation procedures. Certificate pinning prevents this by ensuring a specific server public key is used to initiate secured traffic.
How do we implement certificate pinning?
Distribute the server’s public key with the application. Any time the application begins an SSL exchange with the server, validate that the certificate the server presents contains the same public key that shipped with the app. This takes the CA system out of the equation, and assuming it is the correct certificate, the names necessarily match.
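One common variant pins a SHA-256 hash of the server’s DER-encoded public key rather than the whole certificate. As a sketch of how such a pin value is computed (using a locally generated throwaway key here; against a live server you would pull the key out of the certificate returned by openssl s_client):

```shell
# Generate a throwaway RSA key to stand in for the server's key pair
openssl genrsa -out server.key 2048 2>/dev/null

# Hash the DER-encoded public key and base64-encode it; this is the
# pin value that would be hardcoded into the mobile application
PIN=$(openssl rsa -in server.key -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary | base64)
echo "$PIN"
```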
Is there a way to break certificate pinning?
An attacker would have to decompile the application, change the code, rebuild it and redeploy the application. Another option would be to run the application in a debugger.
For Android, you can obfuscate your code. You can also check to see if the application is running in a debugger. Code signing will also make it more difficult for an attacker to create an unauthorized patch for your application.
A question came up about a PCI audit that was performed for one of our customers. They just finished their PCI audit and passed. I am now working with them on a new software application, and there is a vulnerability in their application that was ranked as high severity. This was discovered during an application penetration test back in 2011, but was accepted by the company as a business risk, and the finding was marked closed because of that acceptance. The client wanted to include this same functionality within a new application, resulting in the new application containing the vulnerability.
The QSA who performed their last PCI audit should not have passed them because this vulnerability is in violation of Requirement 6.5.6. The requirement states:
Prevent common coding vulnerabilities in software development processes, to include all “High” vulnerabilities identified in the vulnerability identification process (as defined in PCI DSS Requirement 6.2).
Please note, according to PCI Requirement 6.2, a CVSS score of 4.0 or above is considered a “High” risk vulnerability.
Because of this vulnerability and because the company has not fixed it, they could be fined by their bank. Furthermore, this vulnerability could pose financial liability and reputation risk for the company. If customers find out about this vulnerability, they may question the company’s ability as a trusted vendor.
So why did the previous QSA pass them? Without discussing this with the QSA, one can assume that since the issue was closed, it was considered fixed. You have to remember that when an auditor is performing an audit, they are presented with a lot of information. It is a lot like trying to drink from a fire hose. Something like this vulnerability could have been missed; it was one finding out of many, or the auditor may have assumed that since the finding was closed, it had been remediated. Another reason may be the way an auditor interprets the PCI requirements. This person may not have understood the requirement and made the wrong interpretation. In many cases, one auditor’s interpretation may differ from another’s.
It does not really matter now why the company passed their audit even though they did not fix the vulnerability. The issue now is that they need to fix it before moving forward.
This winter, we decided to create our own dedicated GPU cracking solution to use for our assessments. It was quite the process, but we now have a fully functional hash cracking machine that tears through NTLMs at roughly 25 billion hashes per second (See below). While attempting to build this, we learned a lot about pushing the limits of consumer-grade hardware.
We set out to build a cracking rig with four high end video cards (AMD Radeon HD 7950) to run oclHashcat. We also wanted this solution to be rack mountable, so that it would be easy to store in our data center. As it turns out, there are not a ton of video card friendly server cases. We were only able to find a few GPU cracking friendly cases, but most of them cost more than the rest of our cracking hardware combined. If you have the money to spend, we would recommend going with the special case to save yourself from other issues, but this isn’t really an option for everyone. The reason why we recommend this is that the cards themselves do not take well to being lined up all together on a standard ATX motherboard. The fans tend to stick out further than they should and end up hitting the next card in the row. On top of that, the cramped conditions lead to overheating cards and cracking jobs stopping. The specialized cases have enough space to avoid these issues, making it easier to set up a box.
We opted for an “open air” configuration for our cracking box. This was primarily driven by trying to mimic the setups of bitcoin mining rigs that we had seen online. I will say that this is not the prettiest option for housing all of these cards. However, it is one of the most efficient ways to space the cards out for cooling. With the “open air” setup, we’re able to connect riser cables to two of the cards and keep the other two cards down on the board. These riser cables can have their own problems. We ended up opting for one (16x to 1x) riser cable and a different (16x to 16x) riser cable that has some modifications for voltage. The 16x to 16x cable has a 12 volt molex adapter soldered to the 12 volt pins on the riser slot.
While this looks a little hackish, it actually works quite well. We had to do this to supplement the voltage from the motherboard, as it was unable to pull proper voltage for all four cards (with two riser cables). I should also mention that there is some crafty engineering taking place to suspend the two cards above the board. This was accomplished with several zip ties and a modified piece of wire-mesh shelving.
I should also note that this whole rig is tied down (with stand-offs) to an old rack mount shelf. All in all, this setup works quite well. We can have all four cards running at full speed and the hottest card will top out at 85° Celsius. We’re very aware of the fact that this looks insane. It’s hopefully a temporary solution. Eventually, we’re looking at securing a single rail to the rack to screw the cards into.
As for performance, here are our current averages for hash cracking (oclHashcat in brute-force mode):
MD5 – ~16,000 M/s
NTLM – ~25,500 M/s
SHA1 – ~7,900 M/s
5 Tips for Building Your Own
So if you’re planning on putting together your own GPU cracking rig, here are some steps that you may want to take to make it easier.