
Q&A: Doug Brush Talks Incident Response, DJI Drones, and Mental Health in Cybersecurity

When the term “reality check” is used, it’s intended to get someone to recognize the truth about a situation. In a fast-moving industry like cybersecurity, reality checks from its leaders are necessary. Thinking pragmatically about the solutions to our biggest challenges helps drive the industry forward. 

I recently sat down with Splunk global advisory CISO Doug Brush on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management. During our conversation he shared three major cybersecurity reality checks: 

  • Most tools out there are going to be part of the picture, but they’re not going to solve everything. The slow progression of incident response today is not a technology problem.
  • Chinese-based organizations, such as DJI Drones and TikTok, have much in common with the Bay Area tech community. We have a lot to learn from them.
  • A top-down mentality must be applied to mental health in cybersecurity. Prioritizing mental health should be adopted at the C-Suite level.

Read on for highlights from our conversation around the evolution of incident response, security practices at Chinese-based organizations, mental health in cybersecurity, and more. You can listen to the full episode online here, or wherever you listen to podcasts.

Nabil: You’ve done many different incident response investigations. How has that evolved over time – or over the course of your career?

Doug: I wish incident response had evolved more. I would say it’s a slow evolution. Early on, it was a very manual process to parse things. Say if you’re doing dead box forensics, or even memory forensics to a large degree, there weren’t tools that could automate some of those processes. 

By no means is a tool an answer to all problems, but it’s going to help build efficiencies if you understand the process. We had to deconstruct things in hex editors; it was a very manual process and took a very, very long time. Now, you can script and automate a lot of those steps, and these tools can build databases – so that’s gotten better.

When I see things like the SolarWinds incident, we focus on the TTPs around how somebody gets in. And once they get in, they move laterally, escalate privileges, build backdoors, take over the domain, grab other accounts, and build this persistence mechanism. We’ve been tracking this since 2006/2007. There’s nothing new about it. And that’s the frustrating part to me. While we think some of the technologies have evolved to allow us to be more efficient, some of the root things that we should be looking for, we are not. I think there needs to be a greater focus on detection and response and on building our response capabilities, as opposed to treating them as an afterthought behind defense.

Nabil: Is there a reason why that hasn’t happened yet, or why it’s taking so long?

Doug: It’s hard. And it’s not a technology problem. I work for a technology vendor, so I would like to say, “we’re the best in the world and we can stop everything, detect anything,” but that’s not the reality. Most of the tools out there are going to be part of the picture, but they’re not going to solve everything. 

When you look at the entire security operations, it’s going to be people, process, then technology. Technology is only a small percentage, it’s not your entire program. We get really excited about cool, new shiny objects. We all go to Black Hat and RSA, and we all pat ourselves on the back that all these new things are coming out. The reality is that we’re solving the same problems we saw 30 years ago. We don’t have good asset inventory. We don’t have visibility of our environments.

Nabil: Let’s shift gears to talk about a topic that I quite enjoy. I would love to learn about your work with various Chinese-based organizations – DJI Drones and TikTok. In particular, what do you think about the privacy and security concerns that people bring up about using their technology?

Doug: It gets overly politicized at times. Inevitably, the Chinese government has their agenda, and I would add the blanket statement that there are also a lot of Chinese companies that don’t necessarily align with how the Chinese government operates. Some of these companies I’ve talked to have said, “You folks in the U.S. think we’re the enemy and think we’re stealing all this data. But we’re just a startup.”

The thing that surprised me most in Shenzhen was that the tech center reminded me of the Bay Area. It was very westernized and had a startup vibe with many young professionals. That’s the fallacy that we have: they’re against us. We don’t realize how much we have in common. They have a distrust of their government, just as we have a distrust of our own government. They have a “trust but verify” mentality more than we appreciate. They have built out some documented and thoughtful programs when it comes to governance and organization.

In reality these companies are trying to create cool products just as we are. The reason DJI Drones became so popular is because they work really well. They built a vertically integrated manufacturing process where they weren’t using third parties – they had control over their supply chain. They manage third party risk well in advance. There are a lot of things that these organizations do that allow them to be competitive in the capitalistic and development space that we need to learn from. 

We have to change this mindset that, because you’re in a specific country, you have to share the viewpoints of whatever the loudest political party is at that time. We need to try to look at things in a more pragmatic and realistic way.

Nabil: You’re a big advocate for mental health. It’s a huge issue and an area of focus today in the security industry, especially due to things like staffing shortages and burnout. What advice do you have for security leaders when addressing mental health?

Doug: Yeah, it’s a tough one, there’s no doubt about it. The last few years have been particularly tough, but it’s an issue that’s been coming up for a long time that we don’t talk about enough. First of all, we need to have honest and frank discussions about it. There was a Nominet study in 2019 that looked at global cybersecurity professionals. 91% of the CISOs surveyed said that they were suffering from stress, and 60% felt really disconnected from their work role. In the U.S., almost 90% of CISOs have never taken a two-week break from their job. And a lot of them feel that a breach is inevitable in their environment. 

We talk about top-down security and top-down leadership, and that should go for mental health too. It has to be something that is adopted at the board and C-Suite level. Leaders should recognize that they’re only as good as the people that are working for them… when those people are at their best. Humans aren’t batteries; you can’t just rotate through them. The cost of acquiring a good cybersecurity professional right now is very high, CISOs are even harder to find, and you don’t want to be churning through these people. Continuously hiring people, training them, and getting them onboarded increases costs and reduces efficiencies. We need to change this idea of how we hire. 

I would say it’s changed since I started in consulting. It was very easy to continue this idea that you had to work 80-90 hours a week. More of the folks that I’ve hired in the past decade or so have focused on balancing mental health. We shouldn’t expect someone to work overtime each week if we want the best from them. Happier staff results in better work, more efficiencies, higher employee retention – which, in turn, results in happier customers and more top line revenue.

When people feel their best, they perform at their best. This idea that it’s mental health versus the business is a false zero-sum game. If we construct that understanding from the leadership level down and appreciate the fact that you can do more to retain your employees by giving them a better self-care environment, they’re going to be better employees for you. Investing in employee health, mental health, and wellbeing is non-negotiable. 

Nabil: Can you also share a little bit about the neurodiversity initiatives you’re supporting at Splunk?

Doug: The mental health aspect is just one part of the neurodiversity journey. When we talk about diversity in the workplace, it should also include neurological differences like autism, ADHD, mood disorders, and others. These have historically been viewed negatively, but they’re just natural variations in the human genome. These folks have exceptional abilities alongside what has traditionally been viewed as a “disability.” Recognizing that it’s not something that needs to be fixed is a shift that needs to be adopted and supported. 

Instead of saying, “thou shalt think like we do,” it’s this idea that a diverse mental environment is going to give you more candidates, and probably a better output. When I’ve had a diverse staff and we all get in the room, I don’t get affinity bias. My greatest fear is that I’m going to build my own echo chamber of people telling me what I want to hear. We need diversity in thought to drive better output for our customers. You’ll find that you get a better outcome overall when you bring a lot of different people to the table.

For more, listen to episode 33 of Agent of Influence. Or, connect with Doug on LinkedIn or Twitter, or listen to his podcast, Cyber Security Interviews.

Listen Now to Episode 33 of Agent of Influence with Doug Brush - The Evolution of Incident Response, Lessons Learned from Chinese-Based Tech Companies, Mental Health, and More

Illogical Apps – Exploring and Exploiting Azure Logic Apps

When we’re doing Azure cloud penetration tests, every single one is different. Each environment is set up differently and uses different services in tandem with one another to achieve different goals. For that reason, we’re constantly looking for new ways to abuse different Azure services. In this blog post, I’ll talk about the work I’ve done with abusing Azure Logic Apps. I’ll walk through how to obtain sensitive information as a user with the Reader role and how to identify/abuse API Connection hijack scenarios as a Contributor.

What are Logic Apps?

In Azure, Logic Apps are similar to Microsoft’s “Power Automate” family, which my colleague Karl Fosaaen examined in his blog, Bypassing External Mail Forwarding Restrictions with Power Automate. A Logic App is a way to write low-code or no-code workflows for performing automated tasks.

Here’s an example workflow:

  • Every day at 5:30PM
  • Check the “reports” inbox for new emails
  • If any emails have “HELLO” in the subject line
  • Respond with “WORLD!”

In order to perform many useful actions, like accessing an email inbox, the Logic App will need to be authenticated to the attached service. There are a few ways to do this, and we’ll look at some potential abuse mechanisms for these connections.

As a Reader

Testing an Azure subscription with a Reader account is a fairly common scenario. A lot of abuse scenarios for the Reader role stem from finding credentials scattered throughout various Azure services, and Logic Apps are no different. Most Logic App actions expose input parameters where users supply arguments, like a URL or a file name. In some cases, these inputs include authentication details. There are many different actions that can be used in a Logic App, but for this example we’ll look at the “HTTP Request” action.

Below is an HTTP Request action in the Logic App designer, with “Authentication” enabled. You can see there are several fields that may be interesting to an attacker. Once these fields are populated and saved, any Reader can dump them out of the Logic App definition.

HTTP Request action in the Logic App designer, with “Authentication” enabled

As a tester, I wanted a generic way to dump these out automatically. This is pretty easy with a few lines of PowerShell.

$allLogicApps = Get-AzLogicApp
foreach($app in $allLogicApps){
    $appName = $app.Name.ToString()
    # App definition is returned as a Newtonsoft object, so convert it once to work with it in PowerShell
    $definition = $app.Definition.ToString() | ConvertFrom-Json
    $actions = ($definition | Select-Object actions).actions
    # Each action is exposed as a NoteProperty on the actions object
    $noteProperties = Get-Member -InputObject $actions | Where-Object {$_.MemberType -eq "NoteProperty"}
    foreach($note in $noteProperties){
        $noteName = $note.Name
        # Action inputs are where URLs, headers, and credentials tend to live
        $inputs = $actions.$noteName.inputs
        Write-Output "$appName - $noteName"
        Write-Output $inputs
    }
    # Parameters supplied to the Logic App may also contain sensitive values
    $params = $app.Definition.parameters
    Write-Output $params
}

The above snippet provides the raw definition of the Logic App, all of the inputs to any action, and any parameters provided to the Logic App. Looking at the inputs and parameters should help distill out most credentials and sensitive information but grepping through the raw definition will cover any corner cases. I’ve also added this to the Get-AzDomainInfo script in the MicroBurst toolkit. You can see the results below.

Get-AzDomainInfo script in the MicroBurst toolkit

For something like the basic authentication or raw authentication headers, you may be able to gain access to an externally facing web application and escalate from there. Alternatively, you may be able to use the OAuth secret to authenticate to Azure AD as a service principal. This may offer more severe privilege escalation opportunities.
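
As a rough sketch of that second path, the recovered tenant ID, client ID, and OAuth secret can be plugged into Connect-AzAccount to sign in as the service principal and see what it can reach. The placeholder values below are assumptions standing in for whatever was pulled from the Logic App definition.

$tenantId = "<tenant-id-from-definition>"
$clientId = "<client-id-from-definition>"
$secret   = ConvertTo-SecureString "<oauth-secret-from-definition>" -AsPlainText -Force
$cred     = New-Object System.Management.Automation.PSCredential($clientId, $secret)

# Authenticate to Azure AD as the service principal and enumerate what it can see
Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant $tenantId
Get-AzResource | Select-Object Name, ResourceType, ResourceGroupName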

Two other things to look out for as a Reader are the “Run History” and “Versions” tabs. 

The Run History tab contains a history of all previous runs of the Logic App. This includes the definition of the Logic App and any inputs/outputs to actions. In my experience, there is a tendency to leak sensitive information here. For example, below is a screenshot of the Run History entry for a Logic App that dumps all secrets from a Key Vault.

This is a screenshot of the Run History entry for a Logic App that dumps all secrets from a key vault.

While a Logic App that dumps all secrets is unrealistic, a Logic App fetching a secret that is then used to access another service is fairly common. After all, Key Vaults are (theoretically) where secrets should be fetched from. By default, all actions will display their output in Run History, including sensitive actions like getting secrets from a Key Vault. Some actions that seem interesting are actually benign (fetching a file from SharePoint, for example, doesn’t leak the raw file), but others can be a gold mine. 
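
If you would rather not click through each run in the portal, run history is also exposed through the Az module. The sketch below (resource names are hypothetical) lists each action in every run along with links to its captured inputs and outputs.

$rg  = "target-rg"
$app = "target-logic-app"

foreach($run in (Get-AzLogicAppRunHistory -ResourceGroupName $rg -Name $app)){
    # Each action in a run keeps links to the inputs/outputs captured at runtime
    Get-AzLogicAppRunAction -ResourceGroupName $rg -Name $app -RunName $run.Name |
        Select-Object @{n='Run';e={$run.Name}}, Name,
                      @{n='InputsLink';e={$_.InputsLink.Uri}},
                      @{n='OutputsLink';e={$_.OutputsLink.Uri}}
}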

The Versions tab contains a history of all previous definitions for the Logic App. This can be especially useful for an attacker since there is no way to remove a version from here. A common phenomenon across any sort of development life cycle is that an application will initially have hardcoded credentials, which are later removed for one reason or another. In this case, we can simply go back to the start of the Logic App versions and start looking for removed secrets. 

The Versions tab contains a history of all previous definitions for the Logic App.

It’s worth noting that this is largely the same information as the Run History tab and is actually less useful because it does not contain inputs/outputs obtained at runtime, but it does include versions that were never run. So, if a developer committed secrets in a Logic App and then removed them without running the Logic App, we can find that definition in the Versions tab. 
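
One way to pull those versions programmatically is the ARM REST API, which the Az module can call via Invoke-AzRestMethod. Here is a rough sketch with placeholder IDs; each returned version carries its own full definition, so removed secrets live on here.

$sub = "<subscription-id>"
$rg  = "<resource-group>"
$app = "<logic-app-name>"

# List prior workflow versions for the target Logic App
$path = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Logic/workflows/$app/versions?api-version=2016-06-01"
$versions = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json
$versions.value | Select-Object name, @{n='createdTime';e={$_.properties.createdTime}}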

I’ve added a small function, Get-AzLogicAppUnusedVersions, to MicroBurst which will take a target Logic App and identify any versions that were created but not used in a run. This may help to identify which versions you can find in the Run History tab, and which are only in the Versions tab.

Interested in MicroBurst? Check out NetSPI’s other open source tools

Testing Azure with Contributor Rights

As with all things Azure, if you have Contributor rights, then things become more interesting. 

Another way to provide Logic Apps with authentication is by using API Connections. Each API Connection pertains to a certain Azure service, such as Blob Storage or Key Vault, or to a third-party service like SendGrid. In certain cases, we can reuse these API Connections to tease out some, perhaps unintended, functionality. For example, a legitimate user creates an API Connection to a Key Vault so their Logic App can fetch a secret that is then used to access another service. However, if we create our own Logic App and attach that same API Connection, we can perform related actions in the context of the user who created the connection. 

Here’s how the scenario that I described above would work.

  1. An administrator creates the Encrypt-My-Data-Logic-App and gives it an API connection to the Totally-Secure-Key-Vault
  2. A Logic App Contributor creates a new Logic App with that API connection 
  3. The new Logic App will list all secrets in the Key Vault and dump them out
  4. The attacker fetches the dumped secrets from the Logic App output and then deletes the app

To be clear, this isn’t really breaking the Azure or Logic Apps permissions model. When the API Connection is created, it is granted access to an explicit resource. But it’s very possible that a user will grant access to a resource without knowing that they are exposing additional functionality to other Contributors. At least from what I have seen, it is not made evident to users that API Connections can be reused in this manner.

For the example above, you may be saying “So what, a Contributor can just dump the passwords anyways.” You would be correct, but to perform this attack you only need two permissions: Microsoft.Web/connections/* and Microsoft.Logic/*. This can come into play for custom roles, users with the Logic App Contributor role, or users with Contributor scoped to a Resource Group.
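
With any of those permission sets, a useful first step is simply taking inventory of which API Connections are within reach and which service each one is tied to. Something like the following sketch will do it:

# Inventory the API Connections visible to the current account, and the API each one is tied to
Get-AzResource -ResourceType "Microsoft.Web/connections" -ExpandProperties |
    Select-Object Name, ResourceGroupName, @{n='Api';e={$_.Properties.api.name}}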

For example: an attacker has Contributor over the “DevEnvironment” Resource Group. For one reason or another, an administrator creates an API Connection to the “ProdPasswords” Key Vault. The ProdPasswords vault is in the “ProdEnvironment” Resource Group, but the API Connection is in the DevEnvironment RG. Since the attacker has Contributor over the API Connection, they can create a new Logic App with this API Connection and dump the ProdPasswords vault. This scenario is a bit contrived but in essence you may be able to access resources that you normally could not.

For other types of connections, the possibilities for abuse become less RBAC-specific. Let’s say there’s a connection to Azure Active Directory (AAD) for listing out the members of an AD group. Maybe the creator of the connection wants to check an email to see if the sender is a member of the “C-SUITE VERY IMPORTANT” group and mark the email as high priority. Assuming this user has AAD privileges, we could hijack this API Connection to add a new user to AAD. Unfortunately, we can’t assign any Azure subscription RBAC permissions since there is no Logic App action for this, but it could be useful for establishing persistence. 

Situationally, if the subscription in question has an Azure Role Assignment for an AAD group, then this does enable us to add our account (or our newly created user) to that group. For example, if the “Security Team” AAD group has Owner rights on a subscription, you can add your account to the “Security Team” group and you are now an Owner. 

  1. The Administrator user creates and authorizes the User-Lookup-Logic-App
  2. An attacker with Contributor creates a new Logic App with this connection
  3. The new Logic App adds the “attacker” user to Azure AD and adds it to the “Security Team” group
  4. The attacker deletes the Logic App
  5. The attacker authenticates to AAD with the newly added account, which now has Owner rights on the subscription from the “Security Team” group

Ryan Hausknecht (@haus3c) discussed the above scenario in a blog about PowerZure. He mentions that he chose not to implement this into PowerZure due to the sheer number of potential actions. As a result, there is no silver bullet abuse technique. However, I wanted a way to make this as plug-and-play as possible. 

API Hijacking in Practice 

Here is a high-level overview of programmatically hijacking an API Connection.

  1. In your own Azure tenant, create a Logic App (LA) replicating the functionality that you want to achieve and place the definition into a file. (This step is manual)
  2. Get the details of the target API Connection
  3. Plug the connection details and the manually created definition into a generic LA template
  4. Create a new LA with your malicious definition
  5. Retrieve the callback URL for the LA and trigger it to run
  6. Retrieve any output or errors
  7. Delete the LA

Since this is a scenario-specific attack, I’d like to walk through an example. I’ll show how to create a definition to exploit the Key Vault connection as I described earlier. I’ve published the contents which result from following these steps, so you can skip this if you’d prefer to just use the existing tooling.

First, you’ll need to have a Key Vault set up. Put at least one secret and one key in there, so that you have targets for testing. 

You’ll want to create a Logic App definition that looks as follows.

Creating a Logic App definition

If you run this, it should list out the secrets from the Key Vault which you can view within the portal. However, if we want to fetch the results via PowerShell, we’ll need to take one more step.

Select the “Logic app code view” window. Right now, the “outputs” object should be empty. Change it to the following, where “secrets_array” is the name of the variable you used earlier. 

"outputs": {
    "result": {
        "type": "Array",
        "value": "@variables('secrets_array')"
    }
}

Now you can get the output of that workflow from the command line as follows:

$history = (Get-AzLogicAppRunHistory -ResourceGroupName "main_rg" -Name "hijackable")[0]; $history.Outputs.result.Value.ToString()

You can see this in action in the example below.

Automation!

So, above is how you would create the Logic App within your own tenant. Automating the process of deploying this definition into the target subscription is as simple as replacing some strings in a template and then calling the standard Az PowerShell functions. I’ve rolled all of this into a MicroBurst script, Invoke-APIConnectionHijack, which is ultimately a wrapper around the Az PowerShell module. The main function is automating away some of the formatting nonsense that I had to fight with when doing this manually. I’ve also placed the above Key Vault dumping script here, which can be used as a template for future development.

The script does the following:

  • Fetches the details of a target connection
  • Fetches the new Logic App definition
  • Formats the above information to work with the Az PowerShell module
  • Creates a new Logic App using the updated template
  • Runs the new Logic App, waits until it has completed, fetches the output, and then deletes the Logic App

To validate that this works, I’ve got a user with just the Logic App Contributor role in my subscription. So, this user cannot dump out key vault keys.

I’ve also changed the normal Logic App definition to a workflow that just runs “Encrypt data with key”, to represent a somewhat normal abuse scenario. And then…

PS C:\Tools\microburst\Misc\LogicApps> Get-AzKeyVault
 
PS C:\Tools\microburst\Misc\LogicApps> Invoke-APIConnectorHijack -connectionName "keyvault" -definitionPath .\logic-app-keyvault-dump-payload.json -logicAppRg "Logic-App-Template"
Creating the HrjFDGvgXyxdtWo logic app...
Created the new logic app...
Called the manual trigger endpoint...
Output from Logic App run:
[
  {
    "value": "test-secret-value",
    "name": "test-secret",
    "version": "c1a95beef1e640a0af844761e1a842cf",
    "contentType": null,
    "isEnabled": true,
    "createdTime": "2021-07-07T19:24:06Z",
    "lastUpdatedTime": "2021-07-07T19:24:06Z",
    "validityStartTime": null,
    "validityEndTime": null
  }
]
Successfully cleaned up Logic App 

And there you have it! A successful API Connection hijack.

Detection/Defenses for your Azure Environment

Generally, one of the best hardening measures for any Azure environment is having a good grip on who has what rights, and where they are applied. The Contributor role will continue to provide plenty of opportunities for privilege escalation, but this is a good reminder that even service-specific roles like Logic App Contributor should be treated with caution. These roles can provide unintended access to other connected services. Users should always be provided with the most granular, least privileged access wherever possible. This is much easier said than done, but this is the best way to make my life as an attacker more frustrating.

To defend against the leakage of secrets in Logic App definitions, you can fetch the secrets at run time using a Key Vault API connection. I know this seems a bit counter intuitive given the subject of this blog, but this will prevent readers from being able to obtain cleartext credentials in the definition.

To prevent the leakage of secrets in the inputs/outputs to actions in the Run History tab, you can enable the “Secure Input/Output” setting for any given action. This will prevent the input/output to the action from showing up in that run’s results. You can see this below.

Unfortunately, there isn’t a switch to flip for preventing the abuse of API Connections. 

I often find it helpful to think of Azure permissions relationships using the same graph logic as BloodHound (and its Azure ingestor, AzureHound). When we create an API Connection, we are also creating a link to every Contributor or Logic App Contributor in the subscription. This is how it would look in a graph.

In essence, when we create an API Connection with write permissions to AAD, then we are transitively giving all Contributors that permission.

In my opinion the best way to prevent API Connection abuse is to know that any API Connection that you create can be used by any Contributor on the subscription. By acknowledging this, we have a justification to start assigning users more granular permissions. Two examples of this may look like:

  • A custom role that provides users with almost normal Contributor rights, but without the Microsoft.Web/connections/* permission (a sketch of this follows below). 
  • Assigning a Contributor at the Resource Group level.
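
For the first option, one way to build that custom role is to clone the built-in Contributor role and carve the connections permission out via NotActions. Here is a rough sketch; the subscription ID is a placeholder.

# Clone the built-in Contributor role and strip out API Connection rights
$role = Get-AzRoleDefinition -Name "Contributor"
$role.Id = $null
$role.IsCustom = $true
$role.Name = "Contributor - No API Connections"
$role.Description = "Contributor rights, minus the ability to use or manage API Connections"
$role.NotActions.Add("Microsoft.Web/connections/*")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role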

Conclusion

While the attack surface offered by Logic Apps is specific to each environment, hopefully you find this general guidance useful while evaluating your Logic Apps. To provide a brief recap:

  • Readers can read out any sensitive inputs (JWTs, OAuth details) or outputs (Key Vault secrets, results from HTTP requests) from current/previous versions and previous runs
  • Defenders can partially prevent this by using secure inputs/outputs in their Logic App definitions
  • Contributors can create a new Logic App to perform any actions associated with a given API Connection
  • Defenders must be aware of who has Contributor rights to sensitive API Connections

Work with NetSPI on your next Azure cloud penetration test or application penetration testing. Contact us: www.netspi.com/contact-us.


Retention is Key to Overcoming Today’s Cybersecurity Hiring Challenges

The cybersecurity talent shortage has troubled organizations for years. Whether you work for a large enterprise, financial institution, a services firm, or anything in between, it’s likely that you have cybersecurity roles listed on your website today that have yet to be filled. With the reported increase in cybercrime and sophistication of attacks, the demand for cybersecurity talent is at an industry high.

ISACA’s State of Cybersecurity 2020 reports that 62% of IT and security leaders surveyed say their cybersecurity teams are understaffed. Plus, those who took longer to fill cybersecurity positions reported more cyber attacks.

The (ISC)2 Cybersecurity Workforce Study 2020 reports nearly identical numbers: 64% of the cybersecurity professionals surveyed reported staff shortages. The study also revealed that the Global cybersecurity workforce gap – the difference between the number of skilled professionals that organizations need to protect their critical assets and the actual capacity available to take on this work – was 3.12 million in 2020.  

In other words, we’ve got our work cut out for us – and it’s clear that, as an industry, we need to reimagine our hiring efforts to keep pace with the demand. In this blog, we’ll discuss how the events of the past year have reshaped hiring, creative recruitment ideas, hiring challenges, and why employee retention is key. Bonus: NetSPI’s COO shares advice to help narrow the talent gap while essential cybersecurity positions remain unfilled.

The COVID-19 pandemic has reshaped hiring

The COVID-19 pandemic has caused rapid change in the way we work – and the way we hire. Largely, it has given organizations more flexibility to find the best talent. Depending on the position, recruitment efforts are no longer bound by geography, and we have the ability to source the best talent regardless of where they live.

Tsedal Neeley, a professor at Harvard Business School and author of the book Remote Work Revolution: Succeeding From Anywhere, communicates this point perfectly in a recent NPR article: “We have changed. Work has changed. The way we think about time and space has changed. Workers now crave the flexibility given to them in the pandemic — which had previously been unattainable.”

Aligned with this sentiment, it is a job seekers market. We’re in the midst of what many are calling “The Great Resignation.” More than ever, employers need to put their best foot forward, offer flexible work options, and understand that work must accommodate life – not the other way around.

7 creative approaches to recruiting

Ask any organization about their primary recruiting channels and the standard answers you will likely receive include popular job sites (LinkedIn, Indeed, etc.), job fairs, and staffing firms. However, there are many creative ways to step outside of those bounds and get creative about where you source your cybersecurity talent. In the name of information sharing, here are seven ways NetSPI has discovered some of its best talent:

  1. GitHub: Are there open source tools that your technical team use on a regular basis? Look to see who has contributed to those tools for potential candidates. 
  2. Twitter: The security community is very active on Twitter. Explore hashtags specific to your organization (#penetrationtesting) or follow along with topical dialogues and threads. Often, you’ll come across a public thread of candidates actively looking for cybersecurity work.
  3. NetSPI University (NetSPI U): NetSPI U is a full-time, paid training program that focuses on pentesting. It’s geared toward entry level security professionals that want to get started in the space. The result? Qualified technical talent who understand our technologies, processes, and vision. More on this later.
  4. Meetups: Continued engagement with security organizations such as OWASP, DEFCON, and others have been instrumental in growing our network of candidates.
  5. Referral Program: In 2021 alone, we have hired 20 employees as a result of our employee referral program. We trust our employees to recommend the best talent in their network.
  6. Re-Hires: Given the flexibility of remote work, we were able to reengage with many past NetSPI employees who had moved to areas without a NetSPI office.
  7. Investing in Internal Recruiters: While staffing firms are certainly helpful, we believe that investing in internal recruitment resources produces the best results. Because we’re boots-on-the-ground at NetSPI each day, we understand the organization’s standards and know exactly what to look for, from technical skills to culture fit.

Cybersecurity hiring challenges

The fact that the cybersecurity industry has a 0% unemployment rate could be seen as the industry’s greatest challenge, but it’s also an opportunity. An opportunity for employers to recognize that the hiring process is a two-way street and, just as employees need to communicate why they would be the best fit for a given position, employers must effectively communicate their strengths, perks, and get competitive about what they have to offer. Ultimately, it’s increasing the urgency for organizations to create a positive candidate experience for applicants.

Another major challenge specific to cybersecurity is the high level of standards that are set for technical security professionals. There is little room for error in this industry as one mistake could result in a detrimental outcome. We take pride in the high-quality work every NetSPI team member produces and it’s important – and an ongoing challenge – to find technical talent that is in alignment.

The security industry has countless disciplines. Cloud security, application security, network security – the list goes on – not to mention the subcategories within each discipline… AWS, mobile apps, IoT, etc. There are many different areas of security that people can specialize in, but we’ve noticed a lack of candidates with a strong focus in a niche area. While there are benefits to having a well-rounded candidate, it is particularly difficult to hire for a specific cybersecurity need today.

Finally, the barrier to entry is very high in the security industry. That’s why we developed and continue to invest in NetSPI U, our training program for entry-level penetration testers. Often, the requirements on job descriptions are too stringent to open the door to new candidates who have a passion for cybersecurity but perhaps do not have a traditional security background or education. NetSPI U features hands-on labs and opportunities to shadow some of the most brilliant minds in security, giving entry-level security professionals the foundation to jump start their careers.

Learn more about NetSPI U and apply for our next NetSPI U class online here.

Employee retention should be the priority

Above all, investing in the growth of your existing team and retaining top talent is key to closing the talent gap. If you invest in your current workforce, people will be more apt to want to join your team.

Employee retention remains a priority for NetSPI, and it shows: we achieved an industry-high retention rate of 92% in 2020. Pulling from our 2021 Top Workplaces survey results, here are the top five reasons people stay with us.

  • Collaboration: Collaboration is one of NetSPI’s brand pillars. One quote from the Top Workplaces survey summarizes this well: “The people I work with are dedicated to making sure everyone succeeds to the best of their ability – and enjoys working along the way!” 
  • Fun: We take pride in the relationships our employees have with one another. The ability to create an environment where employees enjoy their work is essential to the success of our organization.
  • Talent: The caliber of people our employees get to learn from and work with is a huge draw for NetSPI. Top talent attracts top talent.
  • Growth: We don’t give our employees a ceiling for growth. We cross-train and hire within when possible. We believe in promoting people who can do the job, and do not base growth on years of experience. An indicative quote from the Top Workplaces survey: “Career growth is always visible and in reach.”
  • Innovation: Something we do well is creating technologies that solve common problems not only for our clients, but also for our services team. The more we can automate redundant, time-consuming tasks, the more time our consultants can focus on finding the vulnerabilities that tools cannot.

Beyond hiring: 3 alternative actions that can help narrow the cybersecurity talent gap

It is important to note that today’s cybersecurity talent ‘crisis’ does not have an overnight solution. However, there are some things organizations can do in the interim to address the growing demand for talent today. We spoke with NetSPI Chief Operating Officer (COO) Charles Horton to learn what steps organizations can take to narrow the gap. Here’s what he had to say:

Automation is a critical tool for narrowing the cybersecurity talent gap. Without automating the more tedious, administrative tasks, security practitioners will not have time to focus on the strategic work at hand – or professional development activities. Keeping talent focused on the more creative, strategic work they enjoy and allowing tools to do the administrative work will keep teams engaged. Also, automation helps keep the team interested in their work and gives them opportunities to build their skills as they support key security initiatives. 

Cybersecurity leaders continue to be challenged by filling roles that require candidates with mid- to senior-level experience – and entry level job openings continue to be in high demand. Companies will need to do more with fewer people. To accomplish this, organizations can adopt program-level or ‘as a service’ partnerships with third parties that provide dedicated support and capacity to keep key security initiatives running as an extension of the internal security team. This allows security teams to be agile and flexible with their talent as needs arise.   

Companies that think outside of the box can take a creative approach to narrowing the cybersecurity talent gap. This would require building training programs delivered internally, through external partners, or a blend of both, combined with targeting candidates who may not be an exact match from a skills perspective but have a core set of skills that can be leveraged to accelerate training. These hiring and training tactics could yield a skilled cybersecurity workforce where opportunities were not initially visible. 

If there’s one thing we love most about the cybersecurity industry, it’s the willingness to share big ideas and new concepts with one another. If you have recommendations for closing the talent gap or are curious about a career at NetSPI, reach out to jobs@netspi.com.

Love Where You Work – NetSPI is Hiring!

Cyber Defense Magazine: Align Business Logic with Vulnerability Management to Mature Your Security Program

NetSPI managing director Florindo Gallicchio wrote an article that was featured in the August 2021 issue of Cyber Defense Magazine:

There’s no doubt about it: attack surfaces grow and evolve around the clock. With network configurations, new tools and applications, and third-party integrations coming online constantly, the potential for unidentified security gaps grows with them. The fact is that cyberattacks can affect your business and are, unfortunately, more prevalent than natural disasters and extreme weather events. And we know from our own NetSPI research that nearly 70 percent of security leaders are concerned about network vulnerabilities after implementing new security tools.

Prevention is key to a mature cybersecurity program. In fact, according to a recent Ponemon Institute study, when cybersecurity attacks are prevented, organizations can save resources, costs, damages, time, and reputation. Yet, companies still may think they are protected by buying the latest cybersecurity technologies or just by working to change team behaviors that pose the most risk (i.e., using stronger passwords, avoiding phishing scams, etc.). While there is a place in a security program for these and other security measures, time and budget constraints create major barriers. Therefore, it is critical that an organization’s vulnerability management program is strongly built on a strategy that is risk-based and business aligned.

Florindo’s article can be found on pg. 74 of the August Issue of Cyber Defense Magazine. Download the issue online here: https://cyberdefensemagazine.tradepub.com/free/w_cyba125/.


Financial Services Security Veteran Travis Hoyt Joins NetSPI as CTO

As CTO, Travis will drive penetration testing, adversary simulation, and attack surface management product strategy to support clients and services teams.

Minneapolis, Minnesota  –  NetSPI, the leader in enterprise penetration testing and attack surface management, today announced Travis Hoyt as its new Chief Technology Officer (CTO). In his new role, Travis is responsible for enhancing and expanding NetSPI’s technology-enabled services portfolio.

Travis brings over 20 years of cybersecurity leadership experience to NetSPI, previously leading security programs for major financial institutions, including Bank of America and TIAA, where he focused on application security and technology-enabled control transformation. Embracing innovation, he has built and patented two technologies from scratch – a vulnerability assessment and management platform and a posture management solution – well ahead of the market.

“The client perspective and spirit of innovation Travis adds to our team is invaluable to our business and the success of our clients,” said Aaron Shilts, President and CEO at NetSPI. “Travis has a track record of bringing the vision, design, and execution of technologies to life. With his leadership, we are eager to continue disrupting the historically-stagnant pentesting and vulnerability management space.”

“The quality of the NetSPI team and their reputation for innovation is unmatched in the penetration testing industry,” said Travis. “As CTO I’m excited to provide immediate input into the product roadmap and help the team recognize what we need to do to provide the most value to our clients. Looking to the future, I’m eager to start exploring the next generation architecture that will drive the industry forward.”

Connect with Travis Hoyt on LinkedIn, or learn more about NetSPI’s penetration testing services.

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Media Contact:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277


Greatest Moments from Black Hat 2021 and DEF CON 29

At the beginning of the month, the NetSPI team ventured out to Las Vegas for the highly anticipated Black Hat USA and DEF CON 29 cybersecurity conferences. Given the hybrid nature of the events this year, the crowd was much thinner and the halls of Mandalay Bay much quieter – reports mention that Black Hat attendance was one-fourth of a typical year’s attendance pre-pandemic. 

While quieter than usual, there were still many opportunities to connect with one another face-to-face – or rather, mask-to-mask. I sat down with my colleagues who attended the conferences – both in-person and virtually – to get their take on what went down at the events this year. After all, “What happens in Vegas… gets posted on the NetSPI blog.” Right? From keynotes to topics/themes to hacks, read on for five of the greatest moments from Team NetSPI’s time in Las Vegas.

1. Call for collaboration: The Joint Cyber Defense Collaborative

Jen Easterly, the newly appointed head of the Cybersecurity and Infrastructure Security Agency (CISA), used her platform at Black Hat to build trust and personal relationships with the private sector. During her talk she noted her plans to continue the work that former CISA head Chris Krebs started, specifically around building relationships between CISA, the private sector, and government. 

Secretary of Homeland Security Alejandro Mayorkas delivered the final keynote at Black Hat which echoed much of Easterly’s call for collaboration. He took to the virtual stage to recruit security professionals to work for DHS and to talk about the need to diversify the workforce. He cited two specific ways hiring private sector professionals at DHS could increase collaboration: acting as a bridge between the hacker community and DHS as well as mentorship.

Both keynotes highlighted the Joint Cyber Defense Collaborative, a new CISA initiative that plans to “bring together public and private sector entities to unify deliberate and crisis action planning while coordinating the integrated execution of these plans.”

Read more about the keynote speeches online at SC Media.

2. Caution around supply chain attacks

Supply chain attacks are just getting started, warned Corellium COO Matt Tait during his keynote speech. He cited the exploitation of zero-day vulnerabilities as the driver for the increase in software supply chain attacks. Since 2014, the number of zero-day vulnerabilities detected “in the wild” has increased 236 percent. 

The road to securing the supply chain is not going to be easy, Tait cautioned. But he did share his thoughts on two critical steps we can take to get started: improvements to bug bounty programs and Certificate Transparency. Contrary to the keynotes from Easterly and Mayorkas, Tait suggested that platform vendors hold most of the responsibility for securing the supply chain, and that government intervention or regulation will not do much to address the problem.

For more, read Channel Futures and The Daily Swig’s coverage of the keynote.

3. Ransomware policy panel

The panel on ransomware policy solutions at DEF CON was a highlight for the NetSPI team. It featured co-chair of the Ransomware Task Force Chris Painter, security researcher Robert Graham, and lawyer Elizabeth Wharton.

They discussed the varying aspects and challenges of handling a ransomware attack. (Hint: it’s not as cut-and-dry as banning ransom payments). The panel debated the role of cybersecurity insurance, whether to pay a ransom, the need to understand the granular details of an attack, and more.

Robert Graham pointed out that the true problem with ransomware is that organizations aren’t looking at how the ransomware is getting into the systems, they’re focusing more on whether their recovery efforts are hardened. He brings up a great point and highlights a problem that NetSPI is helping to solve with its new Ransomware Attack Simulation service.

Info Security Magazine wrote a detailed recap of the panel – check it out.

4. Team NetSPI at DEF CON

We may be biased but learning from colleagues at DEF CON was certainly a “greatest moment” from the conference. This year, Portland-based director Karl Fosaaen and our newest NetSPI director Chad Rikansrud presented at the conference. 

Karl is one of the foremost experts on Azure penetration testing. His presentation at the DEF CON Cloud Village focused on Azure password extraction. In the talk, he showcased how to use the password extraction functionality in MicroBurst, a toolkit he created that contains tools for attacking different layers of an Azure tenant. He also walked through a real example of how it was used to find a critical issue in the Azure permissions model that resulted in a fix from Microsoft. For those that missed Karl’s talk, register for his upcoming webinar: Azure Pentesting: Extracting All the Azure Passwords.

During Chad’s talk, he and container security expert Ian Coldwater told the story of the first mainframe container breakout. They became the first people on the planet to escape a container on a mainframe, and they explained how they did it. Watch on YouTube: Crossover Episode: The Real-Life Story of the First Mainframe Container Breakout.

5. More DEF CON talks worth watching

Our services team looks forward to meeting up at DEF CON each year. And while the annual NetSPI happy hour on the Las Vegas Strip is likely everyone’s top moment of the weekend, there were plenty of interesting talks held during the conference. Here are five talks worth watching on-demand if you didn’t catch them at the show:

  1. New Phishing Attacks Exploiting OAuth Authentication Flows – Jenko Hwong, Researcher at Netskope
    Overview: This talk details OAuth authentication flow for phishing and abusing refresh tokens to pivot and avoid audit log entries.
  2. Offensive Golang Bonanza: Writing Golang Malware – Ben Kurtz, Host of the Hack the Planet Podcast
    Overview: This talk breaks down why Golang is so useful for malware with a detailed tour through the available components used for exploitation, EDR and NIDS evasion, and post-exploitation, by one of the main authors of the core components.
  3. Hacking G Suite: The Power of Dark Apps Script Magic – Matthew Bryant, Red Team at Snapchat
    Overview: This talk delves into the dark art of utilizing Apps Script to exploit G Suite (AKA Google Workspace).
  4. Bundles of Joy: Breaking MacOS via Subverted Applications Bundles – Patrick Wardle, Creator of Objective-See
    Overview: This session provides an easy way to bypass all of Mac’s native malware protections. For a summary of the bypass, view the slide at 24:50.
  5. Hacking Humans with AI as a Service – Eugene Lim, Glenice Tan, Tan Kee Hock
    Overview: They present the “nuts and bolts” of an AIaaS phishing pipeline that was successfully deployed in multiple authorized phishing campaigns.

The conversations around collaboration, securing the supply chain, ransomware, and more were invaluable. As were the opportunities for those who were able to meet safely in person. Whether you were there in person, attended virtually, or simply kept an eye on the announcements and news coming out of the event, it’s great to feel a sense of community in the security space once again.

Join Team NetSPI at Black Hat and DEF CON next year – we’re hiring!


The Secret to a Successful Risk-Based Vulnerability Management Program: Risk Scoring

Gartner anticipates that, by 2022, organizations that use a risk-based vulnerability management process will experience 80% fewer breaches. So, how can an organization make this shift and achieve a risk-based vulnerability management program? Two words: Risk scoring.

Leveraging risk scores for remediation prioritization and quantifying risk allows companies to prioritize budgets and resource allocation and focus on the security activities that could have the greatest impact to their business. And the idea of incorporating risk scoring intelligence to make the shift to a risk-based vulnerability management program is evolving. 

Through the collaboration of NetSPI’s development, engineering, and product teams, we’ve uncovered an accurate, data-driven methodology to calculate both aggregate and vulnerability risk scores using the data available from our penetration testing and vulnerability management platform, Resolve™. Let’s dig deeper.

What is risk scoring? 

In its most abstract form, risk is “the effect of uncertainty on objectives involving exposure to danger.” At its foundation, cybersecurity risk is ultimately a function of (threat x vulnerability). While the definitions are helpful, it is important to look at your security program with a new lens and assess how your organization quantifies its risk – and whether it is even important to do so. Simply put, the answer is yes. Quantifying and measuring cybersecurity risk is one of the most important components of a successful risk-based vulnerability management program.

The evolution of risk-based vulnerability management

Vulnerability incident resolution used to be reactive. Companies would wait for something to be exploited, then fix it. As IT systems became more integral to business operations, the need to be proactive in cyber defense became evident. Many tools have been developed that can quickly provide a list of vulnerabilities, but companies were soon overwhelmed and overloaded with the number of identified vulnerabilities, without direction or priority assigned for remediation. 

The introduction of Governance, Risk, and Compliance (GRC) software that could correlate all vulnerabilities aligned to business controls and identify the “true risks” to the company allowed some prioritization of risk. This management activity was done entirely through technology, without human touch, and lacked real-world controls and exceptions. As a result, the technologies were complicated, difficult to implement, and required extensive customization. The latest vulnerability management market entrants are touting their ability to use AI to try and predict an exploit before it ever happens. But organizations are spending a lot of money on this technology, and exploitation is hard to predict. The use of AI and other automated tools opaquely calculates the likelihood of a vulnerability exploit and offers limited customization to the companies using the technology. 

Today, the gold standard is a risk-based vulnerability management program: one where vulnerability remediation efforts are prioritized based on the true risk each vulnerability presents to your specific organization, as opposed to a program that focuses purely on compliance “check the box” activities, or one that is so overwhelmed it remediates vulnerabilities ad hoc as they show up rather than appropriately prioritizing them.

For more insights, watch our webinar: The Evolution of Risk-Based Vulnerability Management.

How to use your risk score metrics to help find, prioritize, and fix vulnerabilities

Risk scoring allows companies to manage their evolving attack surface in a way they could not before. The first step is to develop a customized risk lifecycle that will be the foundation on which risk data is generated. This includes identifying both the external and internal threats and vulnerabilities, as well as the assets that could be attacked. The decision then must be made on the best course of treatment, with options including mitigating, transferring, or accepting the risk. 

Here are the seven factors that impact how risk scores are determined in our Resolve™ platform:

  • Impact – If this vulnerability were to be exploited, how severe would its impact be?
  • Likelihood – How likely is it that an attacker can and will attack this space? 
  • Environmental Modifiers – Think broadly about the asset and the environment in which the vulnerability is located.
  • Temporal Modifiers – Focuses on exploit code maturity, confidence, and remediation requirements. Temporal modifiers bring your risk score to life.
  • Industry Comparisons – How does your risk compare to other organizations or peers in your sector? 
  • Threat Actors – Are threat actors actively exploiting vulnerabilities present in your environment? 
  • Remediation Risk – Using the remediation SLAs available through PTaaS, all vulnerabilities are automatically assigned customizable due dates. Use remediation risk to determine your aggregates that require attention from a compliance perspective.

Vulnerability risk scoring is particularly beneficial for remediation prioritization because it lets you weigh each vulnerability’s risk against the cost of resolution. If a vulnerability is deemed high severity but the impact on your business would be low if it were exploited, the risk score will be on the lower side, and it may not be worth spending the money to fix it first. And vice versa.
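
As a purely illustrative sketch (generic arithmetic, not NetSPI’s actual scoring model), weighing each vulnerability’s risk score against its estimated remediation cost might look like this:

# Hypothetical findings: risk scores and relative remediation effort
$vulns = @(
    [pscustomobject]@{ Name = "Exposed admin portal"; Risk = 9.1; RemediationCost = 2 }
    [pscustomobject]@{ Name = "Weak TLS cipher";      Risk = 3.2; RemediationCost = 1 }
    [pscustomobject]@{ Name = "Legacy ERP injection"; Risk = 8.7; RemediationCost = 8 }
)

# A simple priority metric: risk reduced per unit of remediation effort
$vulns | Select-Object Name, Risk, RemediationCost,
    @{ n = 'Priority'; e = { [math]::Round($_.Risk / $_.RemediationCost, 2) } } |
    Sort-Object Priority -Descending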

When it comes time to put your risk score to use, here are a few remediation considerations to keep in mind:

  • Prioritize – Prioritization is the most difficult part. Companies today can effectively identify vulnerabilities through penetration testing services, but how do they figure out which ones to fix first? What are the true risks to the business? The answer varies from organization to organization. 
  • Evaluate – Organizations must understand the efficacy of their risk-mitigating controls. Manual pentesting and vulnerability scans still need to be done to validate that your efforts are working as intended. 
  • Utilize the Data – Once you have a risk score, use it to validate and drive decisions around resource allocation, remediation prioritization, spend validation, risk tracking over time, industry benchmarking, and more (see the sketch after this list).
  • Effectiveness – Are you on track to remediate your vulnerabilities before any threat materializes? Are your vulnerability and aggregate risk scores improving over time?
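
To make the last two points concrete, here is a small, hypothetical Python sketch that tracks an aggregate risk score over time and flags findings at risk of missing their remediation due dates. The data shapes, dates, and the seven-day warning window are assumptions for illustration, not output from any real platform.

```python
from datetime import date

# Hypothetical monthly aggregate risk scores and open findings
monthly_aggregate_risk = {"2021-05": 740, "2021-06": 690, "2021-07": 655}

open_findings = [
    {"name": "Exposed admin panel", "due": date(2021, 8, 15)},
    {"name": "Outdated OpenSSL", "due": date(2021, 9, 30)},
]

# Effectiveness: is the aggregate score trending down over time?
scores = list(monthly_aggregate_risk.values())
trend = "improving" if scores[-1] < scores[0] else "not improving"
print(f"Aggregate risk is {trend}: {scores}")

# Remediation risk: flag anything within a week of (or past) its SLA due date
today = date(2021, 8, 10)
for finding in open_findings:
    days_left = (finding["due"] - today).days
    if days_left <= 7:
        print(f"At risk of missing SLA: {finding['name']} ({days_left} days left)")
```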

We see it every day. Companies face an immense number of vulnerabilities that humans have to manually sift through to assess and prioritize. Having a risk-based vulnerability management program in place allows organizations to identify, prioritize, and remediate risks, saving time, headaches, and – perhaps most importantly – dollars in the end. 


NetSPI Adds Risk Scoring to its Penetration Testing and Vulnerability Management Platform

As a part of a risk-based vulnerability management program, organizations can leverage NetSPI’s risk scoring for industry benchmarking, prioritization of security activities, and more.

Minneapolis, Minnesota – NetSPI, the leader in enterprise penetration testing and attack surface management, today announced the addition of risk scoring to its Resolve™ penetration testing and vulnerability management platform. In conjunction with Penetration Testing as a Service (PTaaS), NetSPI’s risk scoring intelligence helps its clients prioritize, manage, and remediate the vulnerabilities that present the greatest risk to their business. 

NetSPI’s new risk scoring capabilities dynamically integrate into PTaaS to provide both a granular vulnerability risk score as well as an aggregate risk score for an organization and its projects, assets, applications, and networks. Risk scoring is only available to NetSPI clients that leverage its penetration testing services.

The risk scores serve as a quantitative metric for risk reduction over time, cybersecurity spend validation, resource allocation, and industry benchmarking. NetSPI’s risk score enables organizations to incorporate business context and the respective threat landscape to accurately prioritize remediation of vulnerabilities.

“There are varying approaches to assigning vulnerability severity, but risk today extends far beyond individual vulnerabilities,” said Jake Reynolds, Head of Product at NetSPI. “The key is to recognize the risks most likely to disrupt the business, identify the threats that would increase those risks, and prioritize the most appropriate mitigations to protect your organization from those threats. NetSPI’s risk scoring does just that.”

According to Gartner[i], organizations with a risk-based vulnerability management program are expected to experience 80% fewer breaches.

“Reactive cybersecurity is a thing of the past. Security leaders must get proactive and take a risk-based approach to stay ahead of today’s adversaries,” said NetSPI President and CEO Aaron Shilts. “Our risk scores enable NetSPI clients to make proactive security decisions based on their unique risk factors. In other words, it allows them to confidently allocate budget and resources to the vulnerabilities that matter most.”

Learn more about PTaaS online here or contact us for a demo of NetSPI’s penetration testing and vulnerability management platform, Resolve™.


[i] Gartner, Forecast Analysis: Risk-Based Vulnerability Management, Worldwide, Dale Gardner, 2019.

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

Media Contact:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277


NetSPI named a Minne Inno Blazer Award winner

On August 5, 2021, NetSPI was named Minne Inno’s Blazer Award winner for the High Tech Company category:

After honoring 50 companies as Inno on Fire honorees, Minne Inno — the Business Journal’s news outlet focused on the startup scene — presents this year’s Blazer Award winners. The Blazer winners were selected from the 50 Fire honorees by a panel of judges who chose one company from each category that is lighting its industry on fire.

High Tech Company

NetSPI

NetSPI doubled down on talent and grew its team over the past year.

Earlier this summer, the Minneapolis-based cybersecurity firm added a ransomware attack simulation to its portfolio of penetration testing services.

“It was a good time for us, because we were already in the middle of disrupting an already stale industry,” Shilts said. “We moved fast, we over communicated, but more than anything, we just focused on taking care of our customers.”

Moving forward, NetSPI has plans to keep disrupting the industry without compromising quality.

“Cyber is still fast moving and very innovative, but when you’re really a disruptor and changing the way people consume a service, that gets everybody excited,” Shilts said.

To learn more, read the full article here:
https://www.bizjournals.com/twincities/inno/stories/inno-on-fire/2021/08/05/blazer-awards.html


4 Cybersecurity Trends to Watch: DDoS, 5G, Staffing and More

Last year was an interesting year for cybersecurity. As the pandemic caused chaos, adversaries capitalized on it. In the first half of 2020, Neustar saw a 151 percent increase in distributed denial-of-service (DDoS) attacks. While we saw the bulk of those attacks in May, things didn’t slow down in the second half of the year. In fact, DDoS attacks continued to grow, peaking in September, when Neustar mitigated over 3,100 attacks in that month alone. Neustar mitigated over 25,000 DDoS attacks in 2020, and even now, in 2021, we are still seeing attacks at a higher rate than before the pandemic. 

As things begin to “normalize,” we are seeing a handful of cybersecurity trends emerge as a direct result of the pandemic. From my discussion with NetSPI’s Nabil Hannan on the Agent of Influence podcast, here are four cybersecurity trends to watch throughout the second half of 2021 and beyond.

Ransom-related DDoS attacks:

One of the most interesting findings from our DDoS research was the reemergence of ransom-related DDoS (RDDoS) attacks, in which the targeted organization receives a ransom note claiming that if they don’t pay, the adversary will attack their infrastructure. The technique has been around since the 90s, but it came back into vogue late last year. 

By now, we are all aware that ransomware is running rampant. You see attacks like JBS and Colonial Pipeline making headlines, but we often do not hear about other attacks, like RDDoS attacks, that are taking place. What is shocking to me is the number of emergency mitigations my team is handling and the frequency of those attacks – we are addressing RDDoS attack threats at a rate of nearly five per week.

The increase in ransomware and RDDoS attacks brings up the dilemma of whether organizations should pay the ransom or not. My take is that a company should make the business decision that makes sense for them, their board, their industry, and their consumers. After all, they are the ones who understand how their business model is structured. 

While I believe paying the ransom puts a target on your back, it is a question that doesn’t have a straightforward answer. The adversaries are great at what they do and they’re only getting better. It’s up to organizations to shift their mindset to proactively think about security – and it shouldn’t take a disaster to make this shift. Now that it has gone mainstream, everyone has to make sure they’re prepared for a ransomware attack. Investments made in cybersecurity today will pay dividends when an attack occurs in the near future.

Managing 5G security:

The growth of 5G is another area to watch. I see it drastically changing the cybersecurity landscape, specifically how all devices are connected, and interconnected, to the world around them. 5G offers nearly 100x the bandwidth of prior generations, and it allows devices to connect directly to the network. Regardless of what you do for cybersecurity protection today, every threat gets amplified when you add 5G into the equation.

One of the biggest challenges in cybersecurity is knowing who is connecting to your infrastructure. This becomes even more critical with 5G because many devices in the future will not be behind a firewall, a VPN, or NAT (Network Address Translation). I anticipate we will start to see two shifts in the attack landscape as a result of 5G. First, more attacks will be launched from mobile and other small devices. Second, attacks will target personal islands of security instead of larger infrastructures.

Cybersecurity staffing:

Studies say that the pandemic has accelerated the way we work by six to 10 years. In other words, there was an expectation that in six to 10 years we would be in the type of hybrid work environment we now find ourselves in – working from anywhere, and at any time. That shift has expanded our attack surface and increased our need for more robust cybersecurity efforts.

One thing that keeps me up at night is the fact that the cybersecurity unemployment rate is 0%. This year, experts anticipate 3.5 million open cybersecurity jobs – enough to fill 50 NFL stadiums. As security professionals, we have to be right 100% of the time and the attacker only has to be right once. For that very reason it is worrisome to think about what we are going to do when these attacks start to really ramp up and scale and we cannot fill enough security roles to keep up. 

Too many security tools: 

There is not a single tool out there that can check all the boxes. If a tool claims it can do 17 great things, it’s more likely that it does 17 things, none of them great. It then becomes a lot of plug-and-play, and we end up buying tools based on marketing hype rather than the true reason we should implement something into our infrastructure. 

As people become more reliant on the marketing promises, it opens more avenues of exposure. A tool is only as good as the process it is meant to augment or facilitate. As cybersecurity professionals, it is important that we focus on vigilant processes to ensure we are protected from threats. We have to continuously stay updated on our current patch levels, be diligent with account management, and focus on protecting what is critical to our business first and build from there. Hardened processes augmented by appropriate tools will help keep your infrastructure, business, and employees secure.  

There is a lot to consider as we move into the second half of the year and uncover the lasting technological impacts of the COVID-19 pandemic. Listen to episode 31 of Agent of Influence to hear more cybersecurity trends and insights.

Listen to Agent of Influence, Episode 31 with Michael Kaczmarek now
