
The CyberWire: Cody Chamberlain on Breach Communication

On June 30, 2022, NetSPI Head of Product Cody Chamberlain was featured on the CyberWire Daily podcast. Read the summary below or listen to episode 1610 online (starts at 14:37).

+++

  • The two pillars of breach communications: There are things you have to do and things you should do when responding to clients. Empathy and transparency will be key in communicating with them.
  • Plan the work, work the plan: Building the incident response plan, knowing who to work with, and trusting the process will give you the confidence you need, so emotions are less likely to take over.
  • Empathize with clients: Being transparent with clients will address their needs and ease their worries.

Addressing Application Security Challenges in the SDLC

In recent years, more organizations have adopted the “shift left” mentality. This concept moves application security testing earlier in the software development life cycle (SDLC) versus the traditional method of implementing security testing after deployment of the application.  

By shifting left, an organization can detect application vulnerabilities early and remediate them, saving time and money, and ultimately not delaying the release of the application.  

But not everything comes wrapped in a beautiful bow. In application security, I've witnessed that shifting left comes with its fair share of trouble – two challenges in particular: 

  • Overworked and understaffed teams
  • Friction between application security engineers and development teams 

During his time at Microsoft, Idan Plotnik, co-founder and CEO at Apiiro, experienced these two roadblocks and created an application security testing tool that addressed both. I recently had the opportunity to sit down with him to discuss the concept of shift left and other application security challenges.  

Continue reading for highlights from our conversation including contextual pentesting, open-source security, and tips on how a business can better prepare for remote code execution vulnerabilities like Log4Shell. For more, listen to the full episode on the Agent of Influence podcast.  

Why is it important to get more context on how software has changed and apply that to pentesting? 

Idan Plotnik: One of the biggest challenges we are hearing is that organizations want to run pentests more than once throughout the development life cycle but are unsure of what and when to test. You don't want to spend the valuable time of the pentester, the development team, and the application security engineer running prioritization or scoping calls for every release. You want to identify the crown jewels that introduce risk to the application. You want to identify these features as early as possible and then alert your pentesting partner so they can start pentesting early on and with the right focus. 

It’s a win-win situation.  

On one hand, you reduce costs for your engineers because you're not bombarding them with questions about what's changed in the current release, when and where it is in the code, what the URLs for these APIs are, etc.  

On the other hand, you’re reducing the costs of the pentest team because you’re allowing them to focus on the most critical assets in every release.  

Nabil Hannan: The traditional way of pentesting includes a full deep dive test on an application. Typically, the cadence we’ve been seeing is annual testing or annual requirements that are driven by some sort of compliance pressure or regulatory need.  

I think everybody understands why it would be valuable to test an application multiple times, and not just once a year, especially if it’s going through changes multiple times in a year. 

Now, the challenge is that doing these tests can often be expensive because of the human element. That's why I want to highlight that contextual testing allows the pentester to focus only on the areas where change has occurred.  

Idan: When you move to agile, you have changes daily. You need to differentiate between changes that are not risky to the organization or to the business, versus the ones that introduce a potential risk to the business. 

It can be an API that exposes PII (Personally Identifiable Information). It can be an authorization logic change. It can be a module that is responsible for transferring money in a trading system.  

These are the changes that you need to automatically identify. This is part of the technology that we developed at Apiiro to help the pentester become much more contextual and focused on the risky areas of the code. With the same budget that you have today, you can much more efficiently reduce the risks.  

Learn more about the partnership between NetSPI and Apiiro. 

Why is open-source software risk so important, and how do people need to think about it? 

Idan: You can’t look at open source as one dimension in application security. You must take into consideration the application code, the infrastructure code, the open-source code, and the cloud infrastructure that the application will eventually run on.  

We recently built the Dependency Combobulator. Dependency confusion is one of the most dangerous attack vectors today. It occurs when you're using an internal dependency without a proper naming convention and an attacker publishes a package with the same name to a public package manager.  

When your build system can't reach your internal artifact repository or package manager, it will automatically fall back to the public package manager on the internet. It then fetches the malicious dependency with the malicious code, which is a huge problem for organizations.  
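To make that failure mode concrete, here is a small illustrative sketch – not Apiiro's Dependency Combobulator and not any real package manager's resolver – of why the fallback is dangerous: many resolvers merge candidates from every configured index and install the highest version, so an attacker's inflated public version can shadow the internal package. The package name and versions below are invented.

```python
# Simplified model of dependency-confusion risk: a naive resolver merges
# candidates from all configured indexes and picks the highest version,
# so a malicious public package can shadow an internal one.

def resolve(package, internal_index, public_index):
    """Return the (source, version) pair a naive resolver would install."""
    candidates = []
    for version in internal_index.get(package, []):
        candidates.append(("internal", version))
    for version in public_index.get(package, []):
        candidates.append(("public", version))
    if not candidates:
        raise LookupError(f"no candidates for {package}")
    # Highest version wins, regardless of which index it came from.
    return max(candidates, key=lambda c: c[1])

internal = {"acme-billing-utils": [(1, 4, 0)]}
# Attacker publishes the same name to the public index with an inflated version.
public = {"acme-billing-utils": [(99, 0, 0)]}

source, version = resolve("acme-billing-utils", internal, public)
print(source, version)  # the malicious public package is selected
```

Mitigations amount to never letting the resolver make that choice: pin internal packages to a private index, reserve their names publicly, or verify provenance before install.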

The person who discovered the dependency confusion attack suddenly received HTTP requests from within Microsoft, Apple, Google, and other enterprises because he had found some internal package names while browsing a few websites. He just wanted to test the concept by publishing packages with the same names to the public repository. 

This is why we need to help the community and provide them with an open-source framework that they can extend, so that they can run it from their CLI or CI/CD pipeline for every internal dependency. Contributing to the open-source community is an important initiative.  

What can organizations do to be better prepared for similar vulnerabilities to Log4Shell? 

In December 2021, Log4Shell sent security teams into a frenzy ahead of the holiday season. Idan and I discussed four simple steps organizations can take to mitigate the next remote code execution (RCE) vulnerability: 

  1. Inventory. Inventory and identify where the vulnerable components are.
  2. Protection. Protect yourself or your software from being attacked and exploited by attackers from the outside.
  3. Prevention. Prevent developers from doing something or getting access to the affected software to make additional changes until you know how to deal with the critical issue.
  4. Remediation. If you do not have that initial inventory that is automated and happening systemically across your organization and all the different software that is being developed, you cannot get to this step.  
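The first step, inventory, is the most amenable to automation. As a rough illustration – not a complete solution, since real inventories must also inspect jar metadata and shaded or fat jars – a script can walk a file tree and flag log4j-core jars below a patched baseline. The filename convention and the 2.17.1 threshold are assumptions made for this sketch:

```python
# Step 1 (inventory) sketched in code: walk a directory tree and flag
# log4j-core jars older than a patched baseline. Matching on filenames
# like "log4j-core-2.14.1.jar" is an assumption about packaging
# conventions; production scanners also inspect archives and metadata.
import os
import re

VULN_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
FIXED = (2, 17, 1)  # treated here as the patched baseline

def find_vulnerable_log4j(root):
    """Return paths of log4j-core jars with a version below FIXED."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            m = VULN_PATTERN.search(name)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running a scan like this continuously, rather than once during an incident, is what makes the remediation step in the list above reachable at all.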

For the full conversation and additional insights on application security, listen to episode 39 of the Agent of Influence podcast.

Listen to Agent of Influence, Episode 39 with Idan Plotnik now

Digital Journal: Effective Data Backups can Provide Ransomware Protection

On June 30, 2022, NetSPI Head of Strategic Solutions Florindo Gallicchio was featured in Digital Journal in an article called Effective Data Backups Can Provide Ransomware Protection. Read the preview below or view it online.

+++

To understand the key aspects of data capture, preservation and backup, Digital Journal caught up with Florindo Gallicchio, Managing Director, Head of Strategic Solutions at NetSPI.

According to Gallicchio now is the “time to acknowledge how critical data backup has become, especially since many ransomware strains attempt to delete backup files, as we witnessed with Ryuk.”

Ryuk is a type of ransomware used in targeted attacks, where the threat actors make sure that essential files are encrypted. It is especially directed towards larger business units. Ryuk ransomware is derived primarily from the popular Hermes commodity ransomware that has been widely available on the dark web and hacker forums prior to 2018.

Gallicchio says there are two key focal points that need to be understood within the corporate world. These are defined as: “Most businesses are faced with two significant risks when it comes to backups: the theft and public disclosure of sensitive data, and the disruption of critical business functions. If either of these risks occur, organizations could endure devastating consequences. To make sure that doesn’t happen, organizations need to proactively put strategies in place to bolster protection against these threat actors.”

Read the full article online.


Solutions Review: Four Ways to Elevate Your Penetration Testing Program

On June 24, 2022, NetSPI Managing Director Nabil Hannan published an article in Solutions Review called Four Ways to Elevate Your Penetration Testing Program. Read the preview below or view it online.

+++

Let’s set the scene. For years, organizations have undergone compliance-based penetration testing (pentesting), meaning they only audit their systems for security vulnerabilities when mandated to do so by regulatory bodies. However, this “check-the-box” mindset that’s centered around point-in-time testing is leaving organizations at risk for potential exploitation.

From August to October 2021 alone, a total of 7,064 new Common Vulnerabilities and Exposures (CVE) numbers were registered – all of which could go undetected if a business does not have an established proactive security posture.

With malicious actors continuously evolving and maturing their attack techniques, organizations must leave this outdated mindset behind and take the necessary steps to develop a comprehensive, always-on penetration testing program. Here’s a look at how this can be accomplished.

Adopt an ‘as-a-Service’ Model

Traditional pentesting programs operate under a guiding principle: organizations only need to test their assets a few times a year to properly protect their business from potential vulnerabilities. During this engagement, a pentester performs an assessment over a specified period and then provides a static report outlining all of the found vulnerabilities. While once deemed the status quo, this traditional model has many inefficiencies.

With threats increasing, organizations must take a proactive approach to their security posture. Technology-enabled as-a-Service models overhaul traditional pentesting programs by creating always-on visibility into corporate systems. For an as-a-Service model to succeed, the engagement should allow organizations to view their testing results in real-time, orchestrate faster remediation, and perform always-on continuous testing.

This hyperfocus on transparency from both parties will drive clear communication, with the pentesters available to address any questions or concerns in real-time – instead of just delivering a static report that isn't actionable. Additionally, it allows teams to truly understand the vulnerabilities within their systems so they can begin remediation before the end of the pentesting engagement.

Lastly, when working in an as-a-Service model, pentesters can help organizations become more efficient with their security processes, as they work as an extension of the internal team and can lend their industry expertise to help strengthen their clients’ security posture.

Read the full article online here.


A Strategic Approach to Automotive Security

In recent years, we have witnessed many technological advancements in the automotive industry. From advanced driver assistance systems to connected mobile apps and digital keys, this technology brings convenience to our lives, but with it a new set of risks.  

In an industry where a single mistake can put lives on the line, it's imperative that we take a strategic approach to automotive cybersecurity to ensure the functional safety of each vehicle, and related systems and applications, before malicious actors can cause harm. 

Standards such as ISO 26262-1:2018 and ISO/SAE 21434 were specifically designed to address functional safety and threats to automotive security. For example, ISO/SAE 21434 features the Threat Analysis and Risk Assessment (TARA). TARA breaks down threats in a system so engineers can calculate the risk any one component poses to the entire system and mitigate those threats in the design phase.  

Even so, the technological design of a standard car contains multiple electronic control units (ECUs), with some luxury models containing well over a hundred. Together, these systems can contain more than 100 million lines of code, much of it in systems that deal with the outside world such as the Automotive Head Unit (AHU) or Telematics Box (T-BOX).  

Every access point and added feature in the design makes it more difficult to identify vulnerabilities and serves as a potential vector for bad actors. So, what can original equipment manufacturers (OEMs), suppliers, vendors, and others do to strengthen their security efforts to keep pace with innovation? 

In this blog, I will discuss emerging automotive security risks and break down four automotive security best practices to help you improve your cybersecurity programs. 

Five Automotive Security Risks 

Misconfiguration in Automotive Software 

Most ECUs run on some form of operating system. The device may run QNX, a modified version of Android, or another specialized Real Time Operating System. Some of the ECUs found in today's complex automotive systems require the full processing power of an operating system. Gone are the days when all of the automotive systems in the vehicle ran bare metal code on weak processors.  

However, with every added feature, parsed protocol, or secure bootloader configuration, there is a chance for some misconfiguration or coding error to slip past development teams. This is especially true for software that interacts with the user or the outside world.  

The software for any installable application – a USB or Bluetooth driver, or even the Digital Audio Broadcasting (DAB) data that shows song information on the radio – can, and has, allowed aspects of the vehicle's systems to crash or become compromised. This matters more the closer the Controller Area Network (CAN) bus or other diagnostic protocol segment sits to the telematics unit, because that proximity gives an attacker potential access to the system over a range of wireless media.  

Attack on Automotive Hardware 

The automotive hardware that makes up a vehicle consists not only of the ECUs but also the sensors and the Hardware Security Modules (HSMs). All of these devices, including those added to increase the security of the vehicle, contribute to the attack surface of the car. The attack surface includes any system that an attacker with physical or remote access could attack, extract data from, tamper with, or spoof in order to explore a vulnerability in the automobile.   

One area that is often a concern is how the system handles sensitive information at rest. There are few ways to stop an attacker from extracting the non-volatile memory of an AHU and dumping or modifying the file system if it has not been encrypted and signed. Furthermore, self-driving autonomous vehicles are only as safe as their sensors are reliable.  

If an attacker extracts information from an HSM, spoofs a sensor, glitches a debug port, or compromises the integrity of any part of the system, the consequences can be severe. In automotive security, this threat is even more imminent because the architectural secrets gained from one vehicle could lead to attacks on many (i.e., an entire fleet). Extracted firmware keys, Joint Test Action Group (JTAG) passwords, and network access to other cars are only some of the attacks that could follow. 

Unauthorized Access in Debugging Ports 

Debugging ports are serial interfaces that are active during the development of a piece of hardware and software and are then typically disabled when the system goes to production. This is not the case in automotive: the data must remain retrievable in case there is a failure in the device, so that the root cause can be addressed.  

These interfaces should still be protected via a password key that restricts access to the device, unpopulated header locations, or software configuration. However, they won’t prevent attackers from attempting to access them. If attackers gain access to these interfaces, they could gain developer-level access to the device with the ability to implement their own malicious “features” into the ECU. 

Breach Points in the Network 

Automotive systems have several networks today. Where we used to have only the CAN bus, which is still used today, we now also use FlexRay, LIN, DoCAN, DoIP, and several others – each with its own method of transmitting data and diagnostics.  

While these networks should be segmented, breach points remain. Encryption methodologies have been developed to prevent an attacker who gains access to one device from using it to breach others. This adds a layer of complexity to the processing of network traffic, especially when considering repaired or aftermarket parts and new key exchanges.  

Containers  

ECUs control the electronic functions of a car and now incorporate their own operating systems, but that is just the first step. More advanced ECUs run their own containers, which allow for quicker development time and an added layer of protection.  

Some of these containers are typically used for machine learning implementations that perform the autonomous or assisted driving features prominent in many modern cars – a growing focus of attackers in recent years. Containers are normally thought to be somewhat secure, but every year there are more reports of these systems being misconfigured. They can be modified and, in some cases, used to perform privilege escalation. 

Four Automotive Security Best Practices 

Let's dive into automotive security best practices and what you can do to improve the overall security of your automotive program. Here are four best practices to consider. 

Familiarize Yourself with the Automotive Threat Analysis and Risk Assessment Method (TARA)  

ISO standards, published by the International Organization for Standardization, are sets of standards internationally agreed upon by experts. The processes they describe cover a range of activities and need to be addressed multiple times throughout the product lifecycle.  

TARA specifically recommends several threat modeling methods that can apply to automotive systems: EVITA, TVRA, OCTAVE, the HEAVENS security model, attack trees, and SW, to name a few. You can find more detail in ISO/SAE 21434, but note that these are only recommendations – the methodologies are adaptable. Other methodologies exist for assessing automotive security, but no matter which method you use, make sure it provides good coverage and documentation of the areas where there are automotive security concerns. 
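As a concrete taste of one of those methods, here is a toy attack-tree evaluation. The scoring convention (OR nodes take the most feasible child, AND nodes the least feasible, since every step must succeed) and the scores and tree shape themselves are invented for illustration, not prescribed by ISO/SAE 21434:

```python
# A toy attack-tree evaluation, one of the threat modeling methods named
# above. OR nodes take the easiest child (max attacker feasibility);
# AND nodes require every child to succeed (min). All scores here are
# made up for illustration.

def feasibility(node):
    """node = ("LEAF", score) | ("OR", [children]) | ("AND", [children])"""
    kind = node[0]
    if kind == "LEAF":
        return node[1]
    scores = [feasibility(child) for child in node[1]]
    return max(scores) if kind == "OR" else min(scores)

unlock_vehicle = ("OR", [
    ("AND", [                      # relay attack needs both steps
        ("LEAF", 0.6),             # capture key-fob signal
        ("LEAF", 0.5),             # replay it within range
    ]),
    ("LEAF", 0.2),                 # exploit telematics unit remotely
])

print(feasibility(unlock_vehicle))  # 0.5: the relay path dominates
```

Even this toy version shows the value of the method: it tells engineers which branch to harden first (here, the key-fob relay path) rather than spreading effort evenly.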

Review Code at Standard Intervals During Code Review to Reduce Errors 

Engineers don't like to spend longer than necessary on a secure code review, as most programmers enjoy writing code more than reading it. However, reviewing code at standard intervals reduces the number of errors that show up in production systems. Additionally, it may be beneficial to employ some form of automation to discover more hidden errors. 

Incorporate Fuzz Testing in the Quality Assurance Process 

Fuzz testing is a strategy for discovering bugs that other software testing methodologies can't. Due to the speed and sophistication of input mutators, this brute-force method of bug hunting can be effective at finding flaws in network protocols, file handlers, and similar data parsers. While it takes time to create fuzzing software suited to the needs of the system, done right, that investment will save more time in the long run. 
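A minimal mutation fuzzer illustrates the idea. This sketch is deliberately naive – far simpler than the protocol-aware fuzzers used against CAN or DoIP stacks – and simply flips random bits in a valid seed input, recording any input that makes the parser fail in an unexpected way:

```python
# A minimal mutation fuzzer sketch: flip random bits in a valid seed
# input and feed the result to the parser under test. Inputs the parser
# rejects cleanly are expected; any other exception is a potential bug.
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Return a copy of seed with n_flips random bit flips applied."""
    data = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)
    return bytes(data)

def fuzz(parser, seed: bytes, iterations: int = 1000):
    """Run the parser on mutated inputs; return inputs that crashed it."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            parser(case)
        except ValueError:
            pass  # clean rejection of malformed input: expected
        except Exception:
            crashes.append(case)  # unexpected failure: a bug to triage
    return crashes
```

In practice the mutator would be seeded with captured bus traffic, and the "parser" would be the ECU's real protocol handler observed over a debug or network interface.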

Check Your Cellular Connections 

There are a few areas where traditional networks and automotive systems intersect. Where these intersections exist, ensure the system is fuzzed in every possible configuration. One often overlooked area is the use of a cellular test station to connect to the automotive T-BOX. This is important because, under normal circumstances, telecoms tend to act as makeshift firewalls for mobile devices. But you can't always depend on them, especially when attackers can create their own cellular test systems.  

The Importance of Automotive Penetration Testing  

It takes skill to design and implement the features in our vehicles today. Engineers have a certain method of thinking about problems and rarely think maliciously about the devices they are working on.  

They may know areas where effort has been lacking, or bypasses that are not covered in detail in the architecture diagram. But when it comes to knowing the best way to combine these issues into a chained attack, engineers and quality assurance personnel fall short. It is one thing to know where a flaw is, but another to understand how to apply the correct pressure to the flaw and determine the true strength or vulnerability of the vehicle. 

In addition, the technologies and techniques used in attacking systems change faster than the technologies they attack. Keeping up to date with the newest attack methodologies requires a person’s full-time attention.  

That is why I find it is typically better to have a group of experts already dedicated to keeping abreast of the newest methodologies, who have developed their own tools to evaluate your unique automotive security requirements. You can invest in your own penetration testing team or seek out penetration testing services, like NetSPI's automotive pentesting.

Automotive environments are complex cyber-physical systems, but there are many opportunities to improve your security maturity. This blog and the automotive cybersecurity best practices shared within are just scratching the surface of this unique threat landscape. If you’d like to continue the conversation with me and dig deeper, don’t hesitate to reach out at www.netspi.com/contact-us.


NetSPI Expands Global Footprint with Strategic Leadership Appointments in EMEA

Security industry leaders join NetSPI’s EMEA team to fuel growth and meet increased demand for pentesting services in EMEA.

Minneapolis, MN – NetSPI, the leader in enterprise penetration testing and attack surface management, today announced the expansion of its global footprint in Europe, Middle East, and Africa (EMEA) to meet growing international demand for its offensive security solutions.  

“NetSPI’s technology-powered services and customer-first focus has solidified the company’s leading position within the North American offensive security industry,” said KKR’s Paul Harragan, a London-based investor in NetSPI. “The team’s specialised skill set, tech acumen and white glove delivery model will resonate with the European market and should drive continued growth and expansion as the team develops and delivers critical offensive security solutions.” 

“We’ve experienced a record volume of demand from EMEA organisations needing to improve their security posture through a proven, holistic approach to pentesting, and now, we’re well positioned to deliver this in the region,” said Aaron Shilts, CEO, NetSPI. “We’ve hired a team of extremely talented, energising security leaders who align with our customer-first approach to business. Establishing our EMEA beachhead with this incredible group will ensure NetSPI is destined for accelerated growth and continued success in the region.” 

The company has appointed security industry veterans Steve Bakewell, Steve Armstrong, and Eric Graves to strategically lead NetSPI’s EMEA team and drive further growth in the region. Bakewell joins NetSPI as Managing Director of EMEA and brings over 23 years of experience in cybersecurity and risk management across organisations including Central Government & Defence and Royal Bank of Scotland, as well as with security vendors such as CipherCloud, RiskIQ and Citrix. 

“The pentesting space is highly competitive in the UK, but vendors in the region simply do not have the pedigree that NetSPI has,” Bakewell said. “NetSPI already provides its penetration testing services to nine out of the top 10 U.S. banks and many of the Fortune 500 – I’m looking forward to the opportunity to serve end users in EMEA during a time when security is high on the business agenda.” 

Bakewell will work closely with Armstrong, who has been appointed Regional Vice President for EMEA. Armstrong has two decades of experience in sales and security, spanning companies including Bitglass, CyCognito and Avira. Graves will work alongside Armstrong as NetSPI’s Regional Sales Director for EMEA, leveraging his extensive experience in cybersecurity sales for organisations such as Pentera, TrendMicro and Spok, to meet global demand and provide NetSPI’s award-winning pentesting solutions to EMEA customers. The three leaders will work closely alongside Shilts and oversee NetSPI’s growing team in EMEA. 

NetSPI will be at InfoSecurity Europe from June 21-23, 2022 at ExCel London. Participate in a live demo and meet the company’s security experts at Stand M-12. For more information or to schedule a meeting with NetSPI at InfoSecurity Europe, please click here.

 

About NetSPI 

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Media Contacts:
Tori Norris, NetSPI 
victoria.norris@netspi.com
(630) 258-0277 

Jessica Bettencourt, Inkhouse for NetSPI  
netspi@inkhouse.com
(774) 451-5142


TechTarget: How to Address Security Risks in GPS-enabled Devices

On June 21, 2022, NetSPI Managing Director Nabil Hannan published this article on TechTarget called How to Address Security Risks in GPS-enabled Devices. Read the preview below or view it online.

+++

Trendy consumer gadgets are reaching the market at an expedited rate in today’s world, and the next new viral product is right around the corner. While these innovations aim to make consumers’ lives easier and more efficient, the rapid development of these products often creates security risks for users — especially as hackers and malicious actors get more creative.

When commercial drones were brought to market as recreational tools in 2013, for example, consumers jumped at the chance to use them for a wide range of personal purposes, from photography to flying practice. Many security risks emerged, however, and it became clear that drones can be used maliciously to do anything from tracking and monitoring to causing physical harm and societal disruption.

GPS-enabled devices are now experiencing the same growing pains.

The Current Threat Environment

GPS-enabled devices have been on the market for a while, but consumer use has boomed in recent years. The newest device making waves is Apple’s AirTag — a small device that tracks personal items such as keys, wallets and backpacks.

With an affordable price tag, consumers have jumped at the opportunity to keep track of their belongings more easily. As adoption has grown, however, so have security and privacy concerns. Malicious actors can easily slip these devices into peoples’ belongings and track them.

While the risk to consumers is clear, businesses and influential figures can also be targeted. GPS-enabled devices can be used to track day-to-day business movements and identify exploitable weak points.

Apple has remediated some of these risks by releasing a personal safety guide outlining the steps users should take if they find an unknown AirTag or suspect someone has gained access to their product. Yet these risks highlight a broader problem with GPS-enabled devices. Threat modeling in the design phase of tech development must evolve to uncover emerging security risks — before consumers get their hands on the devices.

Read the full article online.


TechRound: Meet Steve Bakewell, EMEA Managing Director at Penetration Testing and Attack Surface Management Provider: NetSPI

On June 20, 2022, NetSPI EMEA Managing Director Steve Bakewell was featured in this TechRound interview called Meet Steve Bakewell, EMEA Managing Director at Penetration Testing and Attack Surface Management Provider: NetSPI. Read the preview below or view it online.

+++

What is Penetration Testing as a Service (PTaaS)?

Businesses are always-on, and as security should enable the business, it needs to be aligned. PTaaS is the model NetSPI has chosen to deliver our portfolio of penetration testing services in an iterative and programmatic manner. Powered by NetSPI’s Resolve platform, customers can orchestrate and manage their penetration testing program at a cadence that suits their operational tempo – whether that’s scoping and prioritizing tests, communicating directly with NetSPI’s expert team of penetration testers, accessing real-time results during the test, or integrating with service management and GRC tooling to get the right data in front of the right people for faster decision-making. PTaaS enables customers to mature their security testing program and move towards continuous security improvement.

Why is penetration testing so critical in the current era of hybrid work?

I’ve just come from that world in my previous role and saw the impact the pandemic had on remote working. Overall, the pandemic and hybrid working enforced a level of change in a short period of time, forcing businesses to find ways to continue operating efficiently and effectively. However, hybrid working increases risks on both sides: on the client side, remote workers create new entry points to the corporate network, particularly where they use non-corporate devices; on the server side, we’re seeing a substantial increase in the attack surface, including the increased take-up of cloud services. From a security testing perspective, there is a substantial amount of ground to cover, from network to cloud, as well as Attack Surface Management (ASM). Penetration testing, managed through a platform like Resolve, provides a way to help organizations reduce the risks of hybrid working by enabling an increased level of testing in a frictionless way.

Read the full article online here.


NetSPI Named a Top Minnesota Workplace and Honored for its Cultural Excellence

The company is recognized for its innovation, culture, and leadership by The Star Tribune and the Top Workplaces Program.

Minneapolis, MN – NetSPI, the leader in penetration testing and attack surface management, recently won two Top Workplaces awards – Top 200 Workplaces in Minnesota and the Cultural Excellence Awards – recognizing the company’s forward-looking innovation, team-first culture, and dedicated leadership team.  

Top Workplaces recognizes the most progressive companies in Minnesota based on employee opinions, measuring engagement, organizational health, and satisfaction, and the Cultural Excellence Awards highlight the company’s advancement in three key areas: 

  • Innovation: Celebrates organizations that have embedded innovation into their culture and create an environment where new ideas come from all employees.
  • Purpose & Values: Celebrates organizations that have embedded their mission and values into their culture and work efficiently to bring them to life.
  • Leadership: Celebrates organizations whose leaders inspire confidence in their employees and in the direction of the company.  

“We prioritize fostering an environment that ensures every team member feels valued, heard, and supported,” said Heather Crosley, Director of People Operations at NetSPI. “These two recognitions prove that our dedication to our culture is resonating across our workforce, and I want to thank our team for making NetSPI a great place to work.” 

These recognitions come during a year of rapid growth and innovation for NetSPI, as the company brought on more than 90 new employees this year already. NetSPI’s strong recruiting and retention initiatives and flexible company culture drive the development of new mission-critical services, with the company recently announcing the launch of its new attack surface management service, as well as enhancements to its breach & attack simulation offering. NetSPI is also expanding its global presence, building on its current momentum to serve the EMEA region.  

“Retaining top talent is more important than ever in today’s evolving cybersecurity threat environment,” said Aaron Shilts, CEO of NetSPI. “Our workforce consistently exceeds expectations, and our team-first culture is a driving force of that success. We are honored to be recognized by the Star Tribune and Top Workplaces.” 

The results of the Star Tribune Top Workplaces are based on survey information collected by Energage, an independent company specializing in employee engagement and retention. The analysis includes responses from employees at Minnesota public, private, and nonprofit organizations. Earlier this year, NetSPI was recognized as a 2022 National Top Workplace.

NetSPI is hiring. Visit www.netspi.com/careers to view open roles and apply.

  

About NetSPI 

NetSPI is the leader in penetration testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn

Media Contacts:
Tori Norris, NetSPI 
victoria.norris@netspi.com
(630) 258-0277 

Inkhouse for NetSPI 
netspi@inkhouse.com

Back

Painting a Threat Detection Landscape

The MITRE ATT&CK Evaluations is a small-scale demonstration that shows how a tool or Endpoint Detection and Response (EDR) product would detect and prevent adversary behavior. On their own, the evaluations paint an intriguing picture, but they have some issues and require a security program that understands itself to fully benefit from the findings. We can expand this output to answer a set of questions that have been asked of security operations centers (SOCs) for years. 

  • How do you measure the threat detection efficacy and overall coverage of a SOC or Incident Response (IR) program?
  • How can we tell when coverage of a technique is sufficient?
  • Are there security products that are not pulling their weight?
  • How can we best prioritize our security dollars and man-hours?
  • How do you determine a meaningful operational direction that avoids the Sisyphean task of chasing the malware of the week?
  • How do I conceptualize the engagement area that a SOC is meant to operate in, i.e., how do I paint a reliable picture of my threat detection landscape? 

From here, we will cover the promise of the MITRE ATT&CK methodology and its shortcomings. We will also discuss the philosophy of threat detection and identify gaps within the MITRE ATT&CK Framework to help answer these questions in a data-driven manner. 

MITRE ATT&CK Evaluations: A Model to Start With 

This year is the fourth time MITRE has run the evaluations. In a nutshell, the evaluations are a combination of a purple team exercise and a science experiment. They place many security products in identical environments and then execute the same procedures against all of them.  

The goal of the MITRE ATT&CK Evaluations is to determine if the activity from the procedure was observed or detected. From those data points, MITRE assembles a list of visible tactics and techniques. 
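To make that rollup concrete, here is a minimal sketch (with illustrative names and data, not MITRE's actual schema) of how per-procedure observations might be aggregated into the per-technique visibility lists the evaluations report:

```python
from dataclasses import dataclass

@dataclass
class ProcedureResult:
    """One data point from an evaluation run (illustrative fields)."""
    tactic: str
    technique: str
    procedure: str
    observed: bool   # telemetry for the activity was visible
    detected: bool   # an analytic or alert actually fired

results = [
    ProcedureResult("Defense Evasion", "Modify Registry",
                    "SafeDllSearchMode registry write",
                    observed=True, detected=False),
    ProcedureResult("Persistence", "Hijack Execution Flow",
                    "DLL search order hijack",
                    observed=True, detected=True),
]

# Roll individual data points up into per-technique summaries,
# the same shape of result the evaluations publish.
visible_techniques = {r.technique for r in results if r.observed}
detected_techniques = {r.technique for r in results if r.detected}
```

Note how the summary already discards information: a technique counts as "visible" even when its only tested procedure went undetected.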

Unfortunately, the output of this test paints a low-resolution picture that is easy to manipulate and misinterpret. For example, look at how vendors interpreted the results from the evaluations: many declared victory, claiming 100% coverage, 100% prevention, top-tier finishes, and so on.  

When you investigate the data, it is obvious that some of this pomp resulted from the limited number of techniques chosen for the MITRE ATT&CK Evaluations. Despite this, the fact that some vendors got 100% coverage of the chosen techniques is still impressive, right? Doesn’t that imply that a consumer would not have to worry about those techniques? 

Couldn’t one just find another vendor that covers the other techniques and brick-by-brick assembles 100% MITRE ATT&CK coverage? GG everyone, security is solved. Everyone go home. 

So, what’s the problem? The picture painted by the evaluations is not completely accurate because they are extrapolating coverage of a single procedure to mean complete coverage of an entire tactic or technique. Just as there are often many techniques to a tactic, there are also often many procedures to a technique.

If we look at the MITRE ATT&CK Framework and the methodology of the evaluations, we can understand this result. The evaluations first create an adversary emulation plan, execute the chosen procedures against a security product, and then record what was observed using the objects of the MITRE ATT&CK Framework. It is a small-scale snapshot of what is possible, not an overall evaluation of product effectiveness. Additionally, the results are limited by the MITRE ATT&CK Framework’s structure, which MITRE has recently taken steps to fix by adding detection objects. 

The MITRE ATT&CK Framework has been revolutionary for cybersecurity, as it gives defenders a common lexicon and acts as a knowledge base of Tactics, Techniques, and Procedures (TTP) used by threat actors. The MITRE ATT&CK Model shows how the different objects in the ATT&CK Framework relate to each other. However, the reader will notice that within the model there is no object for procedures.

The Complex Role of Procedures in the MITRE ATT&CK Framework 

Logically, procedures can be seen as a component piece of technique/sub-technique and, as we will see, they are crucial for helping us to measure and understand threat detection. While tactics and techniques are important, they should not be the individual strokes of our painting. Coverage of a single procedure is usually not analogous to complete coverage of a technique. Making this assumption will lead us to paint a low-resolution landscape. 

Procedures from the perspective of the MITRE ATT&CK Framework are double-edged. While they are the raw methods used to implement the techniques, they also frequently change and are manifold. The MITRE ATT&CK Framework currently consists of 188 techniques and 379 sub-techniques. Within a majority of those techniques and sub-techniques exist multiple procedures.  

To complicate things, those procedures themselves may exist in a many-to-many relationship to the techniques. Comprehensive tracking of procedures would be a herculean effort, especially without a solid argument as to why they should be tracked or how they are individually relevant.

Using an example from MITRE’s Cyber Analytic Repository, we can see how a single procedure, the registry addition below, exists in two techniques and three tactics:

reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager" /v SafeDllSearchMode /d 0

Technique: Hijack Execution Flow
Sub-technique(s): DLL Search Order Hijacking
Tactic(s): Persistence, Privilege Escalation, Defense Evasion

Technique: Modify Registry
Sub-technique(s): N/A
Tactic(s): Defense Evasion
Reference: https://car.mitre.org/analytics/CAR-2021-11-001/
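The many-to-many relationship above can be sketched as a small catalog entry. The technique and tactic values come from the example; the catalog structure itself is a hypothetical illustration:

```python
# The single registry-write procedure from the example above, mapped to
# its techniques and tactics. The catalog structure is hypothetical.
PROCEDURE = (
    r'reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control'
    r'\Session Manager" /v SafeDllSearchMode /d 0'
)

TECHNIQUE_MAP = {
    "Hijack Execution Flow": ["Persistence", "Privilege Escalation",
                              "Defense Evasion"],
    "Modify Registry": ["Defense Evasion"],
}

# One procedure touches two techniques and three distinct tactics,
# which is why one-procedure-per-technique coverage claims lose detail.
distinct_tactics = sorted({t for ts in TECHNIQUE_MAP.values() for t in ts})
```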

Without considering the importance of procedures, general misunderstandings about visibility and detection coverage occur. This directly affects the decisions made at the different layers of a security organization. We begin to recognize this problem at the tactical level and can see how the misunderstanding propagates through the other levels and affects decisions.

Risk at the Three Levels of Threat Intelligence 

At a tactical level, a detection engineer may realize that for a specific technique identified by the MITRE ATT&CK Framework, coverage is inconsistent, or that complete coverage is impossible within an environment.  

Due to limited visibility, techniques may be complicated to cover or may need multiple detections to cover completely. Scale this problem by 188 techniques and 379 sub-techniques and it becomes obvious that a landscape assembled from one procedure per technique is at best incomplete. Moreover, it becomes difficult to trust that vendors completely cover the techniques they claim to cover.  

Consider that many of these techniques require specialized knowledge to understand all possible procedures. You need to do further research to orchestrate detections in a way that offers comprehensive coverage.  

Given these complications and the scope of the problem, we can extrapolate how the different layers of a security organization may use a threat detection landscape and see how one based solely on tactics and techniques may lead to less favorable outcomes.

Strategic Level: A Chief Information Security Officer (CISO) may ask for a coverage map overlaid against the MITRE ATT&CK Framework. This might be used in an attempt to plan product changes and acquisitions or add additional headcount to various operations groups. 

RISKS: The CISO may end up purchasing products that do not increase detection coverage or, conversely, provide duplicate coverage. They may end up asking for initiatives that are unlikely to pan out or assign staff to less effective groups.

Operational Level: A Security Operations Center (SOC) Manager may try to use MITRE ATT&CK Framework coverage to plan red/purple/hunt operations or help direct the focus of their detection engineering or triage teams.

RISKS: An incomplete picture at this level causes ineffective and inefficient operations and research. At this level, it can cause leaders to drive their analysts and engineers to focus on low-return operations or ignore areas with gaps. 

Tactical Level: Analysts and engineers may try to use the MITRE ATT&CK Framework to find additional sources of rule logic, coverage information, and relevant but previously unknown procedures.  

RISKS: A low-resolution understanding, or a policy of equating procedure coverage with technique coverage, may cause an analyst to be blind to gaps or overestimate coverage. These assumptions may also cause them to ignore areas that could be improved or place misplaced faith in their systems. This can also occur during investigation or triage, leading to false negatives during response. 

Tests like the MITRE ATT&CK Evaluations exacerbate this problem if marketing is allowed to drive the conversation. Security vendors are required to focus on high-fidelity alerting. If they implement default rules that are not high fidelity, they run into issues with preventing the legitimate business activity of their clients.  

I have heard many anecdotes that follow this pattern: a researcher is attempting to do something that is blocked by a well-known security control only to attempt the same activity a week later and the activity is not blocked. 

This is not to say that the MITRE ATT&CK Evaluations do not hold promises. The evaluations are enticing and the scientific method they follow is sound. The larger promise behind the evaluations is the ability to accurately measure coverage and paint a full threat detection landscape. Like any painting, it is a composition of individual strokes on a canvas, and the quality of the painting depends on each stroke.

Enhancing the Model 

Enter the idea of capability abstraction. Jared Atkinson, a researcher who is consistently advancing the philosophy of detection engineering, wrote a fantastic blog on this. In short, we need to identify the root artifacts that are generated when a technique is executed, ideally via any procedure. The artifacts identified here can be considered a “base condition” for detection. If we then focus our detections around that identified “base condition”, we will create efficient detections that have maximum visibility on the target behavior. 

Unfortunately, this is not possible in all cases and is even less possible in a real environment where visibility is limited. To help visualize and work around this problem, Jared has expanded the idea to trace all possible execution paths and artifacts for a single technique.

This could be used on a technique-by-technique basis to identify a set of visible base conditions that provide complete coverage. While comprehensive, this may not be entirely practical or sustainable on a large scale. So how else are we to measure our coverage? 

In detection engineering, a goal is to identify an abnormality generated by attacker activity in your environment. Being able to programmatically identify this typically leads to high-fidelity detection. If we take this goal and focus on a base condition, we can begin to create comprehensive, durable detections that will be nigh impossible for an attacker to evade, much less know about in advance.  

Unfortunately, without the knowledge of a graph similar to the one above and logging of a perfect base condition, how do we achieve maximal coverage? The answer to this uncertainty is that we must test many procedures and paint the picture through their aggregation. Jared compares this to the limit of a function, which I think is apt. 
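A minimal sketch of that aggregation idea, with illustrative data: approximate each technique's coverage as the fraction of tested procedures that were detected, an estimate that approaches the true value as more procedures are tested.

```python
from collections import defaultdict

def technique_coverage(results):
    """Approximate per-technique coverage as the fraction of tested
    procedures that were detected. As more procedures are tested, the
    estimate approaches true coverage, like the limit of a function."""
    tested = defaultdict(int)
    detected = defaultdict(int)
    for technique, was_detected in results:
        tested[technique] += 1
        detected[technique] += int(was_detected)
    return {t: detected[t] / tested[t] for t in tested}

# Illustrative data: two procedures tested for one technique, one
# procedure for another. A single-procedure test would have reported
# "Modify Registry" as fully covered.
coverage = technique_coverage([
    ("Modify Registry", True),
    ("Modify Registry", False),
    ("Hijack Execution Flow", True),
])
```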

This method only touches on how to identify and classify one technique. With 188 techniques and 379 sub-techniques, this activity must scale for us to paint our entire threat detection landscape.  

In practice, detection is not binary; detection is a process. An organization’s coverage depends on where it is within that process, so we need to measure that process to paint our detection landscape. 

Measuring the Model 

Detection is generally carried out in the following consecutive steps:

4-step graphic of the threat detection pipeline.

Each step in the pipeline is a piece of metadata that should be tracked alongside procedures to paint our landscape. These pieces of data tell us where we do or do not have visibility and where our technology, people, and processes (TPPs) fail or are incomplete.

Logging: Generally, logs must be collected and aggregated to identify malicious activity. This is not only important from a forensic perspective, but also for creating, validating, and updating baselines. 

Detection: Detections can then be derived from the log aggregations. Detections are typically separated by fidelity levels, which then feed alerts. 

Alerting: Alerts are any event, detection, or correlation that requires triage and may warrant a more in-depth response action. Events at this level can still be somewhat voluminous but are typically deemed impactful enough to require some human triage and response. 

Response: Response is where technology is handed off to people and processes. Triage, investigation, escalation, and eviction of the adversary occur within a response. Response is usually executed by a security operations or incident response team. The response actions vary depending on the internal playbooks of the company. 

Prevention: This sits somewhat outside the threat detection pipeline. Activities can be, and often are, prevented without further alerting or response. Prevention may occur without events being logged. Ideally, preventions should be logged to feed into the detection and alerting steps of the pipeline. 
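One way to track this metadata is a simple per-procedure record of how far activity progressed through the pipeline. The structure and field names below are a hypothetical sketch, not a prescribed format:

```python
# Hypothetical per-procedure record of how far activity progressed
# through the detection pipeline; prevention is tracked alongside
# because it sits outside the consecutive steps.
PIPELINE = ("logged", "detected", "alerted", "responded")

procedure_status = {
    "SafeDllSearchMode registry write": {
        "logged": True, "detected": True,
        "alerted": False, "responded": False,
        "prevented": False,
    },
}

def pipeline_depth(status: dict) -> int:
    """Count how many consecutive pipeline stages the activity made it
    through; a drop-off mid-pipeline marks exactly where coverage
    breaks down (here, a detection that never produced an alert)."""
    depth = 0
    for stage in PIPELINE:
        if not status.get(stage):
            break
        depth += 1
    return depth
```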

Paint the Rest of the Landscape 

By assembling these individual data points for several procedures, we can confidently approximate a coverage level for an individual technique. We can also identify commonalities and create categories of detection to cover as many of the base conditions as our visibility allows. Once many techniques are aggregated in this way, we can begin to confidently paint our threat detection landscape with all the nuance observed at the tactical level. A great man once said, “We derive much value by putting data into buckets” (Robert M. Lee), and it is no different here. 

By looking at what data sources provide logging, detection, and prevention we can get a true sense of product efficacy. By looking at coverage over the different phases of the kill chain, we can start to prioritize choke points, detective efforts, emulation, deception, and research. By cataloging areas where prevention or detection are not carried forward to the alerting or response phases, we can better evaluate gaps, more accurately evaluate security products, and more efficiently spend budget or hours fixing those gaps with breach and attack simulation or similar tests. 

The different levels (strategic, operational, tactical) drive each other. Apart from auditing, this feedback is the primary benefit of metrics, which can be problematic if the correct ones aren’t chosen. This collection bias is a vicious cycle especially if based on a low-resolution understanding of the threat detection landscape.  

As teams get larger and the set of operations a security team performs gets more diverse, leadership becomes more difficult; feedback is essential to providing a unified direction and set of directives that enable a set of disparate teams to work together effectively. 

The data derived here is also useful in many other ways: 

  • Red teams and purple teams
    • Able to plan more effective tests
    • Focus on known or suspected gaps
    • Generate telemetry in known problem areas for hunting and detection engineering 
  • Threat Intelligence teams
    • Able to focus collection efforts on problematic TTPs 
    • Easily evaluate the actionability of their intelligence
  • Threat Hunting teams
    • Able to focus on hunting more effectively
    • Easily find coverage gaps
  • Detection Engineering teams
    • Able to identify low-hanging fruit
    • Target choke-point tactics in the kill chain
    • Work more effectively in a decentralized manner
  • SOC analysts
    • Will have better situational awareness
    • Documentation to validate assumptions against
  • New personnel to the environment
    • Resource for immediate orientation
    • Resource for a broad understanding of this area of operations
  • SOC Managers
    • Effectively direct and engage these subordinate teams
    • Communicate on a shared picture
  • CISOs
    • Have confidence in their detection coverage
    • Understand the effect of and plan for product/resource changes more effectively
    • Orchestrate cross-team efforts more effectively

The pipeline that turns an activity on a system into an event that is responded to by the security team can be long and complicated.

Common pitfalls in threat detection.

There are many steps in threat detection and each one must be followed for most techniques. Technique coverage can often only be approximated after attempting and cataloging the differences among many procedures. Knowledge of your coverage is your map of the battlefield, and influences your decisions and directives and thus the activity that occurs at the strategic, operational, and tactical levels. 

If you are relying on vendor coverage without further extension or customization then you are susceptible to anyone who can defeat that vendor’s security product. By having a map, doing analysis, and designing behavior-based threat detections you are creating a delta that will make sure you’re not the slowest man running from the bear.  

Currently, NetSPI offers this under its Breach and Attack Simulation Services. Working collaboratively as a purple team and executing many procedures across the MITRE ATT&CK Framework, we develop capability abstractions and identify base conditions for threat detection, visibility gaps, and areas in the detection pipeline where an earlier stage is present but not carried forward.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
