
Through the Attacker's Lens: What Is Visible on Your Perimeter?

The Internet is a hacker’s playground. When hackers look for targets to attack, they typically start with the weakest link they can find on a network’s perimeter – something they can easily exploit. Once they find a target worth trying to breach, if the level of effort becomes too high or the target is sufficiently protected, they simply move on to the next target.

The most common type of attacker is the “Script Kiddie.” These are typically inexperienced actors who simply reuse code or scripts posted online to replicate a hack, or download software like Metasploit and run scans against systems to find something that breaks. This underscores the need to make sure the perimeter of your network – and what’s visible to the outside world – is properly protected, so that even inadvertent scanning by Script Kiddies or tools being run against the network doesn’t end up causing issues.
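It helps to look at your own perimeter the same way an opportunistic scanner would. The sketch below is a minimal, hypothetical Python example using only the standard library – the hostnames and port list are placeholders, and it should only be pointed at assets you own and are authorized to test. A real engagement would use a dedicated scanner such as Nmap; this just shows how little effort it takes to see which services answer.

```python
import socket

# Hypothetical example: hosts and ports are placeholders.
# Only probe assets you own and are authorized to test.
PERIMETER_HOSTS = ["www.example.com", "vpn.example.com"]
COMMON_PORTS = [21, 22, 23, 80, 443, 3389, 8080, 8443]

def check_open_ports(host, ports, timeout=2.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

if __name__ == "__main__":
    for host in PERIMETER_HOSTS:
        print(host, check_open_ports(host, COMMON_PORTS))
```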

Testing the External Network

In most cases, the focus and energy go to an organization’s externally facing network perimeter. Almost all the organizations we work with either focus on, or run automated scanners against, their external-facing network on a regular basis. This is an important starting point: network-level vulnerabilities are usually easy to detect, and there are many tools at attackers’ disposal for discovering them. It’s very common for attackers to regularly scan Internet address space to find vulnerabilities and decide which assets they will try to exploit first.

Organizations have processes or automation that regularly scan the external network looking for vulnerabilities. Depending on the industry an organization falls into, regulatory pressures may also require a regular cadence of security testing.

The Challenge of Getting an Inventory of All Web Assets

Large organizations that rapidly create and deploy software commonly struggle to maintain an up-to-date inventory of all web applications exposed to the Internet at any given time. This is due to the dynamic way organizations work: business drivers require web applications to be deployed and updated regularly. One common example is organizations that rely heavily on web applications to support their business needs, particularly for marketing, where new micro-sites are deployed on a regular basis. Not only are new sites deployed regularly, but with today’s adoption of DevOps culture and Continuous Integration (CI) / Continuous Deployment (CD) practices across so many software engineering teams, almost all applications receive code changes on an ongoing basis.

With web applications on the perimeter being exposed and updated all the time, organizations need to regularly scan their perimeter to discover which applications are truly exposed, and whether they have any vulnerabilities that are easily visible to attackers running scanning tools.
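One simplified way to approach this discovery problem is to resolve candidate hostnames and inspect the TLS certificates they serve, since a certificate’s subjectAltName entries often reveal sibling sites. The sketch below is a minimal, hypothetical Python example (the domains are placeholders, and this is only one narrow technique, not a full attack-surface inventory): it only inventories assets you own.

```python
import socket
import ssl

# Hypothetical example: domains are placeholders – enumerate only assets you own.
CANDIDATES = ["www.example.com", "shop.example.com", "promo.example.com"]

def live_web_hosts(candidates, port=443, timeout=3.0):
    """Return hostnames that resolve and answer on HTTPS, plus any extra
    hostnames advertised in their certificates' subjectAltName entries."""
    inventory = set()
    context = ssl.create_default_context()
    for host in candidates:
        try:
            socket.gethostbyname(host)  # does the name resolve at all?
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    inventory.add(host)
                    cert = tls.getpeercert()
                    # subjectAltName often lists other sites on the same cert
                    for kind, name in cert.get("subjectAltName", ()):
                        if kind == "DNS":
                            inventory.add(name)
        except OSError:
            continue  # not resolvable, not listening, or handshake failed
    return sorted(inventory)

if __name__ == "__main__":
    print("\n".join(live_web_hosts(CANDIDATES)))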

Typically, organizations do have governance and processes for regular testing of applications in non-production or production-like UAT environments, but testing applications in production often doesn’t happen. Authenticated security testing in production may not always be feasible, depending on the nature and business functionality of a web application, but unauthenticated security scanning with Dynamic Application Security Testing (DAST) tools can be done easily. After all, the hackers are going to do it anyway, so you might as well proactively perform these scans and learn ahead of time what will be visible to them.

The Need for Unauthenticated DAST Scanning Against Web Applications in Production

Given these challenges, organizations should seriously consider performing unauthenticated DAST scans against all their web applications in production on a periodic basis, to ensure that vulnerabilities don’t make it through the SDLC into production.

At NetSPI, we typically use multiple DAST tools for our assessments, leveraging both open source and commercial scanners.
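As one illustration of what an unauthenticated scan can look like with an open source tool, the sketch below assumes a locally running OWASP ZAP instance on port 8080 and its Python API client (python-owasp-zap-v2.4); the target URL is a placeholder, and exact API details may vary between ZAP versions. This is not NetSPI’s methodology, just a minimal starting point for an unauthenticated spider and active scan.

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

# Hypothetical example: assumes ZAP is already running locally on port 8080
# and that you are authorized to scan the target.
TARGET = "https://staging.example.com"
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

zap.urlopen(TARGET)                      # seed the site tree

spider_id = zap.spider.scan(TARGET)      # unauthenticated crawl
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)         # unauthenticated active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```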

What Does NetSPI’s Assessment Data Tell Us?

Drawing on our experience in web application assessments, we analyzed the last 10,000 vulnerabilities identified in web application assessments; the most common issues are Security Misconfiguration (28%) and Sensitive Data Exposure (23%). A full breakdown of the data is in the graphs below.

The fact that Security Misconfiguration is the most common vulnerability we find highlights how easily these misconfigurations can slip into production, and why it’s so important to periodically test all web applications in production for common vulnerabilities.
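Many Security Misconfiguration symptoms – missing security headers or version-revealing server banners, for example – can be spotted with a single unauthenticated request. The sketch below is a minimal, hypothetical check using only the Python standard library; the target URL and header list are placeholders, and a real DAST tool covers far more than this.

```python
import urllib.request

# Hypothetical example: the target URL is a placeholder – check only sites you own.
TARGET = "https://www.example.com/"
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def header_check(url):
    """Flag common misconfiguration symptoms visible without credentials."""
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req, timeout=10) as resp:
        headers = resp.headers
        findings = []
        missing = [h for h in EXPECTED_HEADERS if h not in headers]
        if missing:
            findings.append("missing security headers: " + ", ".join(missing))
        server = headers.get("Server", "")
        if any(ch.isdigit() for ch in server):
            findings.append("server banner leaks version info: " + server)
        return findings or ["no obvious header misconfigurations"]

if __name__ == "__main__":
    for finding in header_check(TARGET):
        print(finding)
```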

Key Takeaways

  1. Testing your external network is a good start and a common practice in most organizations.
  2. Keeping an up-to-date inventory of all web applications and Internet facing assets is challenging.
  3. Companies should periodically perform unauthenticated DAST assessments against all web applications in production.
  4. Data from thousands of web application assessments shows that Security Misconfiguration and Sensitive Data Exposure are the most common types of vulnerabilities found in web applications.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
