Month: July 2020
Let’s face it. The chefs in our lives were right when preaching the “clean as you go” philosophy while cooking. Keeping counters and utensils washed and put back in place helps thwart the influx of bacteria and the spread of cross contamination that could make us sick. Shouldn’t that same philosophy apply to cyber security, too? Foregoing a “clean as you go” program and conducting a penetration test just once each year may check a compliance box, but it ultimately proves unsuccessful when it comes to protecting your network and assets from the potential “bacteria” that can enter at any time.
Systems and applications in any organization become alarmingly vulnerable if monitored under a one-and-done scenario. An ongoing and continuous vulnerability management program or penetration testing program is an important guard against the potential threat to your technology assets that hackers pose nearly every second of the day. In fact, a University of Maryland study says that hackers attack every 39 seconds (on average 2,244 times a day). Think of how vulnerable your technology assets are in this environment if only penetration tested once a year.
As an aid to help put structure around a continuous penetration testing program, here are four core considerations that should be a key part of an always-on security program.
1. Prevent Breaches with an ‘Always On’ Testing Mentality
There’s no doubt about it: attack surfaces grow and evolve around the clock. With network configurations, new tools and applications, and third-party integrations coming online constantly, an atmosphere is created that is ripe for unidentified security gaps. This white paper points to the fact that cyber-attacks can affect your business and are almost as prevalent as natural disasters and extreme weather events. And we know from our own NetSPI research that nearly 70 percent of CISOs and security leaders are concerned about network vulnerabilities after implementing new security tools.
And those CISOs’ concerns are valid: take the recent announcement from the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA). CISA published security advice for organizations that may have rushed out Office 365 deployments to support remote working during the coronavirus pandemic. A ZDNet article says that CISA warns it continues to see organizations that have failed to implement security best practices for their Office 365 implementation. CISA is concerned that hurried deployments may have led to important security configuration oversights that could be exploited by attackers. With continuous penetration testing in place, security leaders can identify high risk vulnerabilities in real-time and close those security gaps faster.
2. Automation Is a Tool; Human Logic Is Critical
Good pentesters use automated scanning tools (ideally from many different sources) and run frequent vulnerability discovery and assessment scans as part of the overall pentesting process. Vulnerability scanning is generally considered an addition to manual, deep-dive pentests conducted by an ethical hacker. Manual pentesting leverages the findings from automated vulnerability and risk assessment scanning tools to pick critical targets for experienced human pentesters to 1) verify as high-fidelity rather than chasing false positives, and then 2) consider exploiting as possible incremental steps in an effort to eventually gain privileged access somewhere important on the network.
Purely automated tools and highly automated testing activities cannot adequately test the business logic baked into an application. While some tools claim to perform complete testing, no automated technology solution on the market today can perform true business logic testing. The process requires the human element that goes well beyond the capabilities of even the most sophisticated automated tools.
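To make the distinction concrete, here is a hedged, toy illustration of the kind of business-logic flaw a signature-based scanner tends to miss. The checkout function and the abuse case are entirely hypothetical; the point is that a human tester reasons about the application's intent and tries inputs the scanner considers "valid."

```python
# Hypothetical checkout logic with a business-logic flaw: it accepts any
# numeric quantity and never rejects negatives, letting an attacker lower
# their total. Every request is syntactically valid, so a signature-based
# scanner sees nothing wrong; a human tester probes the abuse case.

def order_total(items):
    """items: list of (unit_price, quantity) tuples."""
    total = 0.0
    for unit_price, quantity in items:
        # Flaw: no check that quantity >= 0
        total += unit_price * quantity
    return total

# A manual tester's abuse case: add a negative-quantity line item.
legit = order_total([(50.0, 2)])               # 100.0
abused = order_total([(50.0, 2), (50.0, -1)])  # 50.0, price manipulated
```

Spotting and exploiting this requires understanding what the application is supposed to do, which is exactly the human element described above.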
3. Penetration Testing Reports Don’t Have to Be Mundane
We can all agree that there isn’t much enjoyment in reading pages and pages of pentesting data presented in static Excel or PDF documents. Now picture the mountain of paperwork a once-a-year penetration testing report generates. Gulp! Much like many of us consume the daily news headlines, so too should CISOs view the daily “headlines” of their vulnerability management program through the display of live pentest report results.
Under this scenario, less time is spent analyzing penetration testing report data, freeing valuable time for the important work of remediation. Insist on the following pentest report deliverables in your penetration testing program:
- Actionable, consumable discovery results to automatically correlate and normalize all of the data collected from multiple open source and proprietary tools.
- High quality documentation and reports related to all work delivered, including step-by-step screen-capture details and tester commentary for every successful manual attack.
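The first deliverable above, correlating and normalizing findings from multiple tools, can be sketched in a few lines. The tool names, field names, and severity mappings here are purely illustrative, not any specific product's output format:

```python
# Minimal sketch: map findings from two hypothetical scanners onto a
# common schema, then deduplicate overlapping reports of the same
# vulnerability on the same host so testers triage each issue once.

def normalize(tool, finding):
    """Translate a tool-specific finding dict into a common schema."""
    if tool == "scanner_a":
        return {"host": finding["ip"], "vuln": finding["plugin"],
                "severity": finding["sev"].lower()}
    if tool == "scanner_b":
        sev_map = {"1": "low", "2": "medium", "3": "high"}
        return {"host": finding["target"], "vuln": finding["check_id"],
                "severity": sev_map[finding["risk"]]}
    raise ValueError(f"unknown tool: {tool}")

def correlate(raw_findings):
    """Deduplicate normalized findings on (host, vuln)."""
    seen = {}
    for tool, finding in raw_findings:
        f = normalize(tool, finding)
        seen[(f["host"], f["vuln"])] = f  # duplicates collapse to one entry
    return list(seen.values())

results = correlate([
    ("scanner_a", {"ip": "10.0.0.5", "plugin": "CVE-2020-0601", "sev": "High"}),
    ("scanner_b", {"target": "10.0.0.5", "check_id": "CVE-2020-0601", "risk": "3"}),
])
# Two tool reports collapse into a single high-severity finding.
```

In a real program the schema would also carry evidence, timestamps, and remediation status, but the correlate-then-dedupe shape is the core of the deliverable.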
4. Stay Ahead of the Attacks Through Remediation
To stay ahead of attacks that come every 39 seconds, it’s important to enable fast and continuous remediation efforts to keep threat actors at bay. This goes hand in hand with testing, analyzing, and reporting: if you’re not continuously testing for vulnerabilities, it’s highly probable that issues remain unresolved. Layer these remediation best practices into your pentesting program:
- Industry standard and expert specific mitigation recommendations for all identified vulnerabilities.
- Traceability and archiving of all of the work done to make each subsequent round of testing for your organization more efficient and effective.
Factoring these considerations—always on testing, manual testing, real-time reporting, and remediation—into the planning and design of penetration testing programs will significantly minimize the risk of damage or disruption that could occur in an organization, and dramatically boost the security of your cyber assets.
NetSPI Brings Scale, Agility, and Speed to Static Application Security Testing and Secure Code Review
Cloud Security: What is it Really, Driving Forces Behind the Transition, and How to Get Started
In a recent episode of Agent of Influence, I talked with Mike Rothman, President of DisruptOps. Mike is a 25-year security veteran, specializing in the sexy aspects of security, such as protecting networks, protecting endpoints, security management, compliance, and helping clients navigate a secure evolution in their path to full cloud adoption. In addition to his role at DisruptOps, Mike is Analyst & President of Securosis. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.
The Evolving Perception of the Cyber Security Industry
Mike shared the evolution of the cyber security industry from his mom’s perspective – twenty years ago, his mom had no idea what he did – “something with computers.” Today, though, as cyber security and data breaches have made headline news, he can point to that as being what he does – helping companies prevent similar breaches.
Cyber security has become much more visible and has entered the common vernacular. A lot of people used to complain that nobody takes the industry seriously, nobody cares about what we’re doing, and they marginalize everything that we’re talking about. But that has really flipped, because now nobody’s marginalizing anything about security. We have to show up in front of the board and talk about why we’re not keeping pace with the attackers and why we’re not protecting customer data to the degree that we need to. Security has become extremely visible in recent years.
To show this evolution of the industry, Mike noted he’s been to 23 of the last 24 RSA Conferences. When he first started going to the show, it was held in a hotel on top of Nob Hill in San Francisco, and there were about 500 people in attendance, most of whom were very technical. Now the conference has become a huge staple of the industry, with 35,000-40,000 people attending each year. (Read our key takeaways from this year’s RSA Conference.)
As many guests on the Agent of Influence podcast have noted, the security industry is always evolving; there’s always a new challenge or a new type of methodology that’s being adopted. However, at the same time, there are also a lot of parallels of things that don’t change. For example, a lot of the new vulnerabilities and things that are being identified today are ultimately still the same type of vulnerabilities we’ve been finding for the longest time – there’s still injection attacks, they just might be a different type of injection attack. I personally enjoy looking at things that are recurring and are the same, but just look and feel different in the security space, which makes it interesting.
What Does Cloud Security Really Mean?
Mike started to specialize in cloud security because, he says, he just got lucky. A friend of his, Jim Reavis, founded the Cloud Security Alliance and wanted to offer a certification in cloud security, but he had no way to train people so they could obtain the certification. Jim approached Mike and Rich Mogull to see if they could build the training curriculum for him. As Mike and Rich considered this offer, they realized they A) knew nothing about cloud and B) knew nothing about training!
That was 10 years ago, and as they say… the rest is history. Mike and Rich have been teaching cloud security for the past 10 years, including at the Black Hat Conference for the past five years and advising large customers about how to move their traditional data center operations into the cloud while protecting customer data and taking advantage of a number of the unique characteristics of the cloud. They’ve also founded a company called DisruptOps, which originated from research Mike did with Securosis that they spun out into a separate company to do cloud security automation and cloud security operations.
As Mike says, 10 years ago, nobody really knew what the cloud was, but over time, people started to realize that with the cloud, you get a lot more agility and a lot more flexibility in terms of how you can provision, and both scale up and contract your infrastructure, giving you the ability to do things that you could never do in your own data center. But as with most things that have tremendous upside, there’s also a downside. When you start to program your infrastructure, you end up having a lot of application code that’s representative of your infrastructure, and as we all know – defects happen.
One of the core essential characteristics of the cloud is broad network access, which means you need to be able to access these resources from wherever you are. But, if you screw up an access control policy, everybody can get to your resources, and that’s how a lot of cloud breaches happen today – somebody screws up an access control policy to a storage bucket that is somewhere within a cloud provider.
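The storage-bucket scenario Mike describes usually comes down to one bad policy statement. As a hedged sketch, here is a simplified check for a policy that grants access to every principal; the JSON shape mirrors AWS-style bucket policies, but this is an illustration, not a complete policy evaluator:

```python
import json

# Simplified sketch: flag a storage bucket policy that allows access to
# any principal ("*"). Real cloud policy evaluation also weighs Deny
# statements, conditions, and account-level public-access settings.

def is_publicly_readable(policy_json):
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if stmt.get("Effect") == "Allow" and wildcard:
            return True
    return False

public_policy = json.dumps({
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::example-bucket/*"}]
})
# A wildcard principal on an Allow statement is exactly the "everybody
# can get to your resources" misconfiguration behind many cloud breaches.
```

Running a check like this continuously against every bucket policy is a small example of the "always on" testing mentality applied to cloud configuration.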
Data Security and the Cloud
Mike’s aim with DisruptOps is to get cyber security leaders and organizations to think about how they can start using architecture as the security control moving forward. By that, he means you can build an application stack that totally isolates your data layer from your compute layer from your presentation layer.
These are things you can’t do in your data center because of lateral movement. Once you compromise one thing in the data center, in a lot of cases, you’ve compromised everything in the data center. However, in the cloud, if you do the right thing from an isolation standpoint and an account boundary standpoint, you don’t have those same issues.
Mike encourages people to think more expansively about what things like a programmable infrastructure, isolation by definition, and default deny on all of your access policies for things that you put into the cloud would allow you to do. A lot of these constructs are kind of foreign to people who grew up in data center land. You really must think differently if you want to set things up optimally for the cloud, as opposed to just retrofitting what you’ve been doing for many years to fit the cloud.
Driving Forces Behind Moving from Traditional Data Centers to the Cloud
- Speed – Back in the day, it would take three to four weeks to get a new server ordered, shipped, set up in the rack, installed with an operating system, etc. Today, if you have your AWS free tier application, you can have a new server using almost any operating system in one minute. So, in one minute, you have unbounded compute, unbounded storage, and could set up a Class B IP network with one API call. This is just not possible in the data center. So there’s obviously a huge speed aspect of being able to do things and provision new things in the cloud quickly.
- Cost – Depending on how you do it, you can actually save a lot of money because you’re not using the resources that you had to build out in order to satisfy your peak usage; you can just expand your infrastructure as you need to and contract it when you’re not using those resources. If you’re able to auto scale and scale up and scale down and you build things using microservices and a lot of platform services that you don’t have to build and run all the time in your environment, you can really build a much more cost effective environment in order to run a lot of your technology operations.
However, Mike said, if you do it wrong, that is, taking stuff you already paid for and depreciated in your data center and moving it into the cloud, it becomes a fiasco. If you’re not ready to move to the cloud, you end up paying by the minute for resources you’ve already paid for and depreciated.
- Agility – If you have an attack in one of your technology stacks, you just move it out, quarantine it, build a new one, and move your sessions over there. Unless you want to have totally replicable data centers, you can’t do this in a data center.
There are many reasons to take advantage of the capabilities of the cloud: architecture, agility, cost, global reach, elasticity to scale up and down, and more.
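The cost point above can be sketched with back-of-the-envelope arithmetic. All numbers here are hypothetical: an on-prem estate sized for peak demand versus an elastic fleet that tracks average load, at an assumed rate per server-hour:

```python
# Illustrative cost comparison: on-prem capacity must be provisioned for
# peak usage and runs 24/7, while an auto-scaled cloud fleet bills only
# for the servers actually in use. All figures are invented assumptions.

HOURS_PER_MONTH = 730
RATE = 0.10              # assumed $/server-hour

peak_capacity = 100      # on-prem: sized for peak, always powered on
average_demand = 30      # cloud: fleet auto-scales with actual load

fixed_cost = peak_capacity * HOURS_PER_MONTH * RATE     # 7300.0
elastic_cost = average_demand * HOURS_PER_MONTH * RATE  # 2190.0
savings = fixed_cost - elastic_cost                     # 5110.0 per month
```

The same arithmetic run in reverse illustrates Mike's warning: pay the hourly rate on already-depreciated workloads lifted into the cloud unchanged, and the "savings" turn negative.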
Resources to Get Started in the Cloud
Mike recommended the below resources and tools for people looking to learn more about the cloud:
- Read The Phoenix Project by Gene Kim, which Mike considers the manifesto of DevOps. Regardless of whether your organization is in the cloud or moving to the cloud, we’re undergoing a cultural transformation on the part of IT that looks a lot like DevOps. Some organizations will embrace the cloud in some ways, and other organizations will embrace it in others. The Phoenix Project will give you an idea in the form of a parable about what is possible. For example, what is a broken environment and how can you embrace some of these concepts and fix your environment? This gives you context for where things are going and what the optimal state looks like over time.
- Go to aws.amazon.com and sign up for an account in their free tier for a year and start playing around with it by setting up servers and networks, peering between things, sending data, accessing things via the API, logging into the console, and doing things like setting up identity access management policies on those resources. Playing around like this will allow you to get a feel for the granularity of what you can do in the cloud and how it’s different from how you manage your on-prem resources. Without having a basic understanding of how the most fundamental things work in the cloud, moving to the cloud will be really challenging. It is hard to understand how you need to change your security practice to embrace the cloud when you don’t know what the cloud is.
- Mike also plugged their basic cloud training courses, which give both hands-on capabilities and the background needed to pass the Certificate of Cloud Security Knowledge certification. You’ll be able to both talk the language of the cloud and play around with it.
To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.
$8.19 million. That’s the average loss U.S. organizations face each year due to the damages of cyber security attacks, according to a Ponemon Institute study. More worrisome is the fact that the average time it took to identify and contain a breach was 279 days, a number that is growing. Cyber security and IT teams continue to feel unprepared in the event of a breach and struggle to keep pace with the ever-evolving threat landscape. Maintaining an always-on mentality, prioritizing vulnerability testing to faster remediation, and understanding the implications of an alert in an organization’s asset management platform are key to staying ahead. But in the long-term, also having a deep contextual knowledge of business operations as a whole should be considered fundamental to preparing and defending against escalating threats.
In 1988, Robert Morris created what has been widely acknowledged as the first internet worm, which spread so aggressively and quickly that it disrupted much of the early internet. While the Morris Worm was the impetus for putting in place coordinated systems and incident teams to deal with cyberattacks, it wasn’t until the Target breach in 2013, in which information from 40 million credit and debit cards was stolen, that corporate leaders began to fully understand that all levels of an organization must understand the potential threat of breaches and that ad hoc support of cyber security initiatives was no longer sufficient. Rather, all-encompassing programs of prevention, monitoring, and remediation must be in place.
Bringing Context to Incident Response
Incident response teams today must have full knowledge of the ecosystem and what systems need protecting (and the data residing within) to have a more comprehensive approach to protecting their organizations from cyber security threats. They can do so by adding context to incident response. Currently, if there is a threat event that occurs, the analyst has to synthesize the environment that they’re trying to defend before action can take place. But if they don’t have the contextual knowledge of their organization—what application supports what infrastructure, which impacts what business process and value stream—then that incident responder is already behind.
Security teams should understand what they are reacting to, how to recreate the view, and the ecosystem they are trying to protect, so they can act right away rather than reverse engineer the situation, which may be too late to do anyway. In that case, the threat actor may be able to move faster than the incident responder. Easily said, but as apps are decomposed, the ecosystem becomes even more distributed, making the context even harder for incident response managers to understand. With more and more applications offered in containers, in the cloud (or cloud native), or delivered serverless through functions-as-a-service platforms, incident responders now need to understand the contextual challenge of the threats. It is critical that incident responders understand what type of threat they are responding to and what it is they are trying to protect in the larger business sense. Helping to create context is going to be an emerging challenge that the industry and community need to address in the future.
Creating Better Asset Management Platforms to Improve Incident Response
When creating asset management platforms, I recommend that CISOs work with their teams to base that development on context around the business and the technology. When the platform isn’t so rigidly defined in the context of an application, we start to connect the infrastructure to the business processes and value streams. It is then that you can truly start to be a counselor to senior leadership and articulate the business impact of any given threat. Through contextualization, you’ll immediately know when you have the asset data and the association, and whether a threat is of lesser importance (and you don’t need to wake up the CEO!). Or vice versa, when a high-fidelity threat is hitting the flagship application that underpins an entire business process. That is when it warrants executive leadership attention, but now you will also be in a position to provide remediation solutions.
Some areas I’ve explored while developing asset management platforms revolve around visualization. I’m looking at the integration between logging and monitoring capabilities and the data they generate through asset management tools, but also other solutions like cloud and container monitoring platforms and the telemetry they provide. Then I’m looking at the visualization tools that are out there that can create these views. Picture this asset management platform chronology:
- Data comes up through logging and monitoring capability
- Incident Responder quickly determines it is a problem
- Through the functionality of the asset management platform, the backend stitches together all that data and pulls up a visualization tool that maps the internal or cloud environment, showing the team that this alert is associated with a particular container, which is part of a particular ecosystem/value stream that is talking to these specific databases
- Incident Responders quickly react to visual cues, improved through real-time contextual awareness, so they can more quickly appreciate the danger and immediately take on real action to thwart the threat
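The stitching step in the chronology above can be sketched as a context lookup: an alert arrives carrying only a container ID, and the platform enriches it with the application, value stream, and databases it touches. All names and data here are hypothetical placeholders:

```python
# Illustrative sketch of alert enrichment: the raw alert knows only a
# container ID; joining it against an asset-context inventory tells the
# responder what business capability is actually at risk.

ASSET_CONTEXT = {
    "container-7f3a": {
        "application": "payments-api",
        "value_stream": "online checkout",
        "databases": ["orders-db", "customers-db"],
    },
}

def enrich_alert(alert):
    """Attach business context so the responder sees impact, not just an ID."""
    context = ASSET_CONTEXT.get(alert["container_id"], {})
    return {**alert, **context}

alert = {"container_id": "container-7f3a", "signal": "anomalous egress"}
enriched = enrich_alert(alert)
# The enriched alert now names the value stream at risk, so the responder
# knows instantly whether this touches a flagship business process.
```

In practice the inventory would be fed by CMDB, cloud, and container telemetry rather than a static dict, but the join from technical identifier to business context is the essential move.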
That is a future state that positions incident responders as a force to be reckoned with against the ever-evolving threat landscape.
Improving Your Standing in Incident Response
In addition to investing in understanding the context of your incident response plans, I offer the following advice to improve incident responders’ professional standing:
- Become Invaluable as Subject Matter Experts—Understand the ecosystem of your organization, the context in which threats may occur and the consequences on the business values streams so you can quickly synthesize the information to give the broader team – even the C-Suite – insights and counsel.
- Always Remain Curious, Even Suspicious—Have your radar always on so that, for example, if a new threat comes out, which may or may not even impact your environment but may be within your vertical market, you can preemptively guard against them.
- Understand the Threat and its Potential Impact—Be readily able to ascertain if there is a concern in your environment through volume metrics (i.e., how much of that problem do we have?) and through risk quantification (i.e., threat W is against X so not a concern, but threat Y is against Z so it is a big concern).
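The volume-metrics and risk-quantification point above can be reduced to a toy calculation. The scoring model and all values are invented for illustration: a threat only matters if the asset it targets exists in your environment, and the concern scales with how many instances you run:

```python
# Toy risk quantification for the "threat W vs. threat Y" example:
# concern = impact x volume, where volume is how many instances of the
# targeted asset exist in our estate. Assets and scores are hypothetical.

ENVIRONMENT = {"X": 0, "Z": 12}  # asset -> instance count we actually run

def concern(target, impact):
    """Zero if the targeted asset isn't deployed; otherwise impact x volume."""
    volume = ENVIRONMENT.get(target, 0)
    return impact * volume

low = concern("X", impact=9)   # threat W targets X: none deployed, score 0
high = concern("Z", impact=9)  # threat Y targets Z: 12 instances, score 108
```

Even a crude score like this lets a responder answer "how much of that problem do we have?" before deciding whether to escalate.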
There is real opportunity to improve real-time contextual awareness so incident responders can more quickly appreciate what they have and immediately act on it rather than waste time making inferences about the environment. To be sure, incident response plans are ever evolving, and some plans are undoubtedly better than others. It boils down to whether the incident responders are executing on the plan and have an appropriate contextual appreciation of the environment, the ecosystem, the business value streams, and the stakeholders involved to get the right people to the table to best defend against adversaries.
On July 9, 2020, NetSPI Managing Director Nabil Hannan was featured in Dark Reading.
Google “pen testing return on investment (ROI)” and you will find a lot of repetitive advice on how to best communicate the value of a pen-testing engagement. Evaluate the costs of noncompliance penalties, measure the impact of a breach against the cost of a pentest engagement, reduce time to remediation, to name a few. While all of these measurements are important, pen testing provides value beyond compliance and breach prevention, even through a financial lens. Let’s explore the critical steps to successfully define and communicate ROI for security testing.
Read the full article here.
Depending on the industry an organization is in, there are a multitude of specific, acronym-heavy rules, regulations, and frameworks which must be adhered to, especially for industries with extremely sensitive and valuable data, including healthcare, banking, and energy. For many years, these compliance-first frameworks – HIPAA for healthcare, PCI-DSS for credit card handling, and NERC-CIP for energy companies, to name a few – were the structure around which IT leaders managed their security programs. To further complicate things, there are multiple compliance-based frameworks that overlap and even others that are specific to the states in which an organization does business, like CCPA. A common example of cyber security compliance? Once a year (typically) organizations are required to have an outside, third party evaluate their programs. Voilà! An organization is secure, right? Not always.
In my opinion, building your security program around a framework for compliance, ensures an organization is compliant, but doesn’t necessarily make it secure. In fact, if you’re simply implementing a security strategy to check a box, it’s likely that your systems are vulnerable to cyber adversaries. While security is foundational in these compliance-based frameworks, historically it was deemphasized for a period of time. But things are changing – specifically, the way we think about security is shifting away from a compliance-first mindset. Big data breaches got the attention of Boards of Directors from a financial (read: fines, lawsuits) and reputational loss standpoint. From a technology standpoint, there’s no longer an inside and outside of the organization and just defending perimeters with firewalls is no longer adequate. And, one more example, with a move away from a waterfall release of applications to a more agile development philosophy, it makes business sense to elevate the frequency of vulnerability assessments, even moving to a continuous, ongoing monitoring of internet-facing attack landscapes to more adequately protect against unauthorized access to an organization’s intellectual property.
Organizations that have a more mature technology footprint are surely interested in doing everything they possibly can to find and fix vulnerabilities. And even in a mature scenario, there’s ample opportunity to put in place an action-based framework that ties up to an organization’s controls and security framework. Consider this: research firm Gartner found that between 2014 and 2018 approximately 41 percent of clients had either not selected a framework or had developed their own ad hoc framework. Its research goes on to show that failure to select any framework, or building one from scratch, can lead to security programs that:
- Have critical control gaps and therefore don’t address current and emerging threats in line with stakeholder expectations.
- Place undue burden on technical and security teams.
- Waste precious funding on security controls that don’t move the needle on the organization’s risk profile.
How can we begin to administer a security-based framework? Quite simply, just begin. It doesn’t have to be perfect from the get-go. Consider it a work in progress. After all, the threat actors, technology assets, and detective controls are constantly changing. Thus, you will need to constantly change and adapt your continuous, always-on security and vulnerability management program. Here are some best practices to help you begin implementing your security-based framework changeover.
- Evaluate the landscape: Determine whether there has been a security framework or controls catalog developed for your specific industry sector. The NIST Cybersecurity Framework is a good place to start. But what happens when there is no industry-specific or government-mandated security framework and control catalog? In this case, security capability maturity and team capacity and capability become the key inputs in selecting your security control framework and control catalog. (Source: Gartner)
- Engage with organizational leadership outside of technology: Develop a scrum planning team with legal, risk, and front-line business unit representatives to help identify discrete regulatory or legislative obligations that need consideration.
- Audit your internal and external environment: Identify the contextual factors that could influence your selection of security framework and control.
- Invest in your people: Admit to technology fatigue and that some significant investments aren’t optimized to meet set objectives or are redundant. Instead, invest in a people-first pentesting team that can approach security from the eyes of an attacker.
- Develop a plan based on continuous improvement: Combine manual and automated pentesting to produce real-time, actionable results, allowing security teams to remediate vulnerabilities faster, better understand their security posture, and perform more comprehensive testing throughout the year.
Remember: Just because an organization’s cyber security program is compliant, doesn’t mean it is secure. If an organization approaches its security programs from a security-first mindset, most likely it will comply with the necessary compliance rules and regulations. I see compliance as a subset of security, not the other way around.