Month: January 2021
If your organization is currently leveraging the cloud, there’s a good chance you are using either Amazon Web Services (AWS) or Microsoft Azure. Together, these two platforms make up 51% of the market share for cloud service providers. Given the way many cloud adoption programs operate, you might be using both. No matter which platform you’re on, it is important to note that each cloud provider has its own security considerations.
First, we should cover some background around cloud computing and security. With traditional on-premise models, security teams have access to established tools, technologies, and methodologies for dealing with security events in the environment. The cloud, on the other hand, has relatively fewer security tools, resources, and established procedures available, as well as an overall higher probability that data will be exposed if a mistake is made.
As organizations migrate their resources from on-premise environments to the cloud, significant “technical debt” may also accrue, meaning there may be a lack of understanding around the technical aspects and security risks of the cloud environment. Nevertheless, organizations continue to migrate to the cloud, as its benefits often outweigh potential security concerns. Among the top reasons for cloud adoption are access to data from anywhere, disaster recovery, flexibility, and relieving IT staff workloads. These benefits, among others, are why organizations pay and trust cloud providers to host and manage their data and applications – but should they rely on the providers for security?
While both AWS and Azure certainly have robust cloud computing security efforts in place, it is important to understand that cloud security is a shared responsibility among providers and organizations. While cloud providers will provide underlying security for the platform infrastructure, the users of the platform still need to securely configure cloud services. This is where cloud pentesting becomes critical to organizations using the cloud.
Cloud Penetration Testing 101
Cloud penetration testing is used to identify security gaps in cloud infrastructures and provide actionable guidance for remediating the vulnerabilities to improve an organization’s overall cloud security posture and achieve compliance [read: 4 Reasons You Need Cloud Penetration Testing]. Testing can differ between cloud platforms and knowledge of the nuances can help your organization reach cloud security maturity.
There are three main components to NetSPI’s cloud pentesting methodology:
- Internal Testing: Testing the internal networks and services, much like you would an on-premise data center or on-premise network for internal virtual network vulnerabilities.
- External Testing: Testing any services that may be exposed to the Internet: services that are fully run and operated by the cloud provider, like Azure App Services, or any network services that may be externally exposed through virtual machines or firewalls.
- Configuration Review: An analysis of the services that are being used in a specific cloud provider to identify misconfigurations, enumerate available services and the network architecture, and learn how everything is being implemented inside of the environment. Notably, configuration review informs internal and external pentesting engagements.
For an introduction to cloud pentesting watch this webinar: Intro to Cloud Infrastructure Penetration Testing.
AWS versus Azure Cloud Pentesting
From an external and internal network pentest perspective, AWS and Azure are fundamentally similar. Some may argue that one or the other is slightly more likely to have external issues arise, but where AWS penetration testing and Azure penetration testing differ greatly is in the configuration review process. Given that they are two separate platforms, they have different approaches to service configuration.
Let’s start with Azure. As part of the migration to Azure, the on-premise Microsoft network, users, and groups (commonly tied to Office 365) are all transitioned to Azure Active Directory. As this happens, it can create situations where users from the on-premise environment are given direct, or indirect, rights to resources in the cloud. Whether or not users and administrators are aware, these accounts are now targets for attackers, as an attacker might have an easier time going after a non-administrative account from the internet.
While AWS can integrate (or federate) directly with Active Directory, AWS has its own Identity and Access Management (IAM) platform. The IAM system in AWS can be complicated, and if administrators are not careful, they can easily grant exploitable permissions to IAM users through policies and roles. A common target for privilege escalation in AWS is EC2 instances that are configured with excessively permissioned roles. If an attacker can gain access to the EC2 instance, they can use native AWS technology to escalate their privileges in the account.
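To illustrate what an excessively permissioned role can look like, here is a minimal, hypothetical sketch that flags wildcard statements in an IAM policy document. The policy shown and the “wildcards are risky” rule are simplified assumptions for illustration, not NetSPI’s actual review methodology:

```python
def overly_permissive(policy: dict) -> list:
    """Return the statements in an IAM policy document that allow
    wildcard actions ("*" or "service:*") on any resource ("*")."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # IAM allows both a single string and a list for these fields.
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if "*" in resources and any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(stmt)
    return findings

# Example: a role policy like this attached to an EC2 instance profile
# is a classic privilege escalation target.
risky = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "iam:*", "Resource": "*"}],
}
print(len(overly_permissive(risky)))  # → 1
```

A real review would also consider roles, trust relationships, and permission boundaries, but even this naive check catches the most commonly abused pattern.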
Each of the cloud platform’s vulnerabilities can be correlated with the way the identity and authorization policies are applied to the different applications and services hosted in the cloud. NetSPI’s goal during a cloud penetration test is to identify these vulnerabilities and show how these issues could be practically exploited in a cloud environment.
Regardless of the platform, investing time to understand your chosen cloud provider and its architecture will help security teams avoid “technical debt”, and be better prepared to efficiently find and fix vulnerabilities in any of the services specific to each cloud provider. Look for an experienced penetration testing company like NetSPI to test your Azure, AWS, or other cloud infrastructures as part of internal testing, external testing, and configuration review.
On January 22, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget:
When you hear the term “pen testing,” what do you envision? A web app test done with a dynamic scanning tool? A test done by a human being who’s digging deep to replicate what an attacker would do in the real world?
What about the term “network pen testing?” An automated discovery of your network infrastructure resulting in a pages-long report on what assets you have? A real-life person examining how your network is architected in order to flesh out vulnerabilities?
Depending on who you ask, each of the responses above could be right. And therein lies the conundrum. There’s no standardized lexicon in the cybersecurity world and it’s causing confusion among independent and organizational security professionals alike.
For organizations, the challenge is using the right terminology so they can seek out and price comparable services to meet their security needs, as well as understand exactly what they’re consuming from the security professionals they engage. For cybersecurity professionals, the hurdle lies in understanding just what an organization needs and expects to accomplish its security goals. And, if your industry is compliance-focused, regulatory drivers will also determine what type of assessments your company must perform, making it critical that you get your terminology right.
Read the full article here: https://searchsecurity.techtarget.com/post/Standardize-cybersecurity-terms-to-get-everyone-correct-service
A semi-critical vulnerability was uncovered in the popular container orchestration platform Kubernetes last month: CVE-2020-8554.
I say “semi-critical” because it scores a paltry 6.3 on the Common Vulnerability Scoring System (CVSS). But two things make this vulnerability interesting and worth studying: first, it affects all versions of Kubernetes. Second, it cannot be patched. Whether you have Kubernetes in your wheelhouse or not, you do not want vulnerabilities that cannot be patched, particularly ones that affect all versions of an application.
In this article, I will explore CVE-2020-8554: how it happened, how it was found, and the lessons we can all learn from it.
How the Kubernetes MitM vulnerability happens:
It cannot be patched because it is not an implementation bug, meaning there were no mistakes made in code implementation. It happened because Kubernetes allows any tenant in a multi-tenant cluster, with certain control over their own routing, to reroute the traffic of any other tenants on that cluster.
Kubernetes provides a matryoshka nesting doll of abstraction layers. You can have one or many “clusters,” inside of which are one or many “nodes.” Each node maps to a computer (physical or virtual) running one or many “pods” of one or many “containers.” Inside each container live the software components that comprise your application. Each layer of abstraction has its own scope of policy, or configuration. Additionally, Kubernetes has configuration for “namespaces,” which cut across layers and are useful for isolating the different tenants of your application.
A traditional network will have infrastructure services like Domain Name System (DNS), Address Resolution Protocol (ARP), or Network Address Translation (NAT) to ensure that client requests find their way to the servers they need. With Kubernetes, client requests must find their way to the software running in the appropriate container (inside the pod, inside the node, inside the cluster). This process can be cumbersome. Kubernetes lets you manage these routes with configuration it calls “services.” You can set up load balancing services and external IP services, which function like the physical load balancers and NAT translation that happen at your network’s edge.
The MitM vulnerability can exist here because these services are configured at the pod layer, but you can have pods with different tenants alongside one another.
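As a rough illustration (not a working exploit), the pattern at the heart of CVE-2020-8554 is a Service that claims an external IP address. Below it is modeled as a plain Python dict alongside a naive detection helper; the names, namespace, and IP address are invented for the example:

```python
# Sketch of the service pattern behind CVE-2020-8554: a tenant creates
# a Service claiming an external IP, and the cluster will route other
# tenants' traffic destined for that IP through the tenant's pods.
malicious_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "interceptor", "namespace": "tenant-b"},
    "spec": {
        "selector": {"app": "attacker-pod"},
        "ports": [{"port": 443, "targetPort": 8443}],
        # The dangerous field: any tenant with rights to create
        # services can claim an IP that another tenant relies on.
        "externalIPs": ["203.0.113.10"],
    },
}

def services_claiming_external_ips(services: list) -> list:
    """Naive detection sketch: list (namespace, name, IPs) for every
    service that sets spec.externalIPs, so a human can review them."""
    return [
        (s["metadata"]["namespace"], s["metadata"]["name"], s["spec"]["externalIPs"])
        for s in services
        if s.get("spec", {}).get("externalIPs")
    ]

print(services_claiming_external_ips([malicious_service]))
```

Admission controllers that restrict or audit the `externalIPs` field take essentially this approach, rejecting or flagging such manifests before they reach the cluster.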
No one is to blame for the vulnerability. It is a result of two decisions that unknowingly created a gap: 1) to let Kubernetes users configure services in a certain way, and 2) to let clusters have multiple tenants. No one could foresee the security vulnerability that these two requirements would create when taken together.
Why not put a limit on the configurations?
Why not have single-tenant clusters? Or why not prevent tenants from altering these services? Many organizations impose such limits, but doing so caps the scalability they can achieve. For those that keep multi-tenant clusters, there are third-party solutions to help prevent and detect exploits of this kind of vulnerability.
How the Kubernetes MitM vulnerability was found:
“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’” — Isaac Asimov
Architecture flaws are notoriously hard to identify. Finding them typically requires a deep understanding of the security implications of a lengthy set of decisions. This points to the importance of manual penetration testing. Gifted penetration testers or analysts who can perform manual protocol analysis or threat modeling are essential in finding vulnerabilities that tools cannot.
In this case, it was a builder who found the vulnerability. As they write in their blog post on the discovery, Etienne Champetier was deploying a Kubernetes cluster for a client when something that should’ve worked failed. A workaround that should’ve worked also failed. Finally, something that never should’ve worked was successful. Etienne identified the security implications of the problems and reported them to the Kubernetes security team.
Lessons learned from the Kubernetes MitM vulnerability:
No form of static, dynamic, or interactive scanner could have found this flaw. I can’t help but reflect on our industry’s reliance on lightning-fast scanning of applications to keep defects from hitting production. Lightweight security testing via scanners is a valuable tool. A vital one, in fact. But it is not the whole toolbox. Only a skilled technologist willing to get elbows-deep in the technology could have discovered this flaw. Manual security testing of applications, performed by people who think like real-world adversaries, remains critical.
Learn more about NetSPI’s approach to application penetration testing.
Starting, or even refining, a cyber security program can be daunting. Because a security program is as individual as an organization and must be built around business objectives and unique security aspirations, there’s no one-size-fits-all solution, and the number of tools and services available can overwhelm. The good news is that, if you’re about to embark on a security journey, the following activities will set you on the right path.
Define Your S-SDLC Governance
No matter what security techniques you end up using, you must start by defining your Secure Software Development Lifecycle (S-SDLC) governance security gates and incorporate them into your SDLC. For each gate definition, make sure you collect information needed to determine whether a component passes or fails before the software can advance to the next phase of development. For example, before promoting your application from the coding phase, you might want to do a static analysis scan. If that scan reveals a critical vulnerability, you’ll want to prevent your application from being promoted to the testing phase. Instead, report that vulnerability to the development team, who then must resolve the problem to the degree that the piece of software passes the next static analysis scan without revealing any more critical vulnerabilities. Only then do you advance your application to the testing phase.
For governance rules to be effective, you have to build a collaborative culture within your development organization and communicate and evangelize about these processes. Make sure everyone involved is aware of, and understands, the expectations to which they’re being held.
Secure Design Review
Secure Design Review (SDR) is a broad term with many different definitions. It can refer to high level, pen and paper exercises to see if there are common issues with the application being developed. It can also mean a deep analysis complete with full-blown threat models. Or anything in between. Regardless of your approach, SDR allows your organization to catch vulnerabilities at the design level and adopt better security controls. SDR allows organizations to start adopting a culture of security by focusing on developing secure-by-design frameworks or libraries that create opportunities to efficiently implement reusable security features as appropriate. A positive outcome? Peace of mind.
Penetration Testing and Security Testing as Part of QA
Penetration testing to assess internal and external infrastructures, often (but not exclusively) driven by governance or compliance regulations, is one of the most common activities in cyber security programs. Note: It often requires expertise that you might not have in-house as you get your security efforts underway. Fortunately, there are plenty of firms out there that are really good at it, and outsourcing may be your best option – especially for assets that meet mission-critical risk thresholds. Ultimately, penetration testing’s biggest value for your new security program is that it will reveal just how secure the SDLC you defined in the previous steps really is.
Security testing is also typically performed by outside experts. However, if you have a group internally who’s already doing some sort of testing – like functional testing or QA testing – it’s easy to introduce basic concepts that allow them to test for vulnerabilities. For example, when your QA testers are building test cases, encourage them to adopt techniques like consistently building edge and boundary test cases. At a bare minimum, this will assess your application from an input validation and output encoding perspective.
If you’re doing pentesting, look at the results and build test cases based on them into your QA workflow as well. As an example, verbose error messages should be examined. How many times have you tried to log into an app, mistyped the password, and received an error message along the lines of: “Your user ID is right, but your password is wrong.” A message like that confirms to an attacker which user IDs are valid, letting them focus a brute-force attack on the passwords for known-good accounts.
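A QA test case derived from that finding might look like the following hypothetical sketch, which simply checks that the login responses for a valid and an invalid user ID are indistinguishable:

```python
# Sketch of a pentest-derived QA check: the error shown for a valid
# username must match the error shown for an invalid one, so the
# response cannot be used to enumerate accounts.
def error_is_generic(resp_valid_user: str, resp_invalid_user: str) -> bool:
    return resp_valid_user == resp_invalid_user

# A uniform message passes; the verbose pair below fails.
print(error_is_generic("Invalid username or password.",
                       "Invalid username or password."))  # → True
print(error_is_generic("Your password is wrong.",
                       "No such user ID."))               # → False
```

In a real suite, the two responses would come from actual login attempts against a test instance (and the comparison would also cover response timing and status codes), but the assertion stays the same.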
Interactive Application Security Testing (IAST) is quickly gaining popularity and is a rising star among application security testing and discovery techniques. Because it is instrumented into the running application on the server side, it can report issues that are truly exploitable, which means the IAST tool reports little to no false positives.
Create a Threat and Vulnerability Management Process
To improve your risk posture, your organization should create a threat and vulnerability management process – in other words, a process to measure the rate at which you’re identifying vulnerabilities and the rate at which they’re being addressed. Next, create a centralized system to manage the vulnerabilities themselves – and build metrics to be sure you’re getting the right business insights into your program.
Before we go further, let’s clarify what metrics and measurements are, as there can be a lot of confusion around what each term means. Measurement is a fact or number used to quantify something. A metric is usually a combination of measurements, frequently a ratio, that provides business intelligence. For example, “I had three cups of coffee today” is just a measurement. My blood/caffeine ratio, however, would be a metric. The fact that I had three cups of coffee today doesn’t tell me much. But the amount of caffeine in my blood tells me something that might be important.
To take the example a step further, people sometimes will take raw data, such as the number of vulnerabilities found, and use that to measure their success. Wrong! You have to build key performance indicators (KPIs) and key risk indicators (KRIs) that are based on your business risks. Use your KPIs and KRIs to develop metrics that will guide you in your application security journey.
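Continuing the idea, a hypothetical security metric built from two raw measurements might look like this; the numbers are invented for illustration:

```python
# Two measurements -> one metric. "Found" and "fixed" counts alone say
# little; their ratio tells you whether remediation keeps pace with
# discovery, which is a KRI-style signal leadership can act on.
def remediation_rate(found: int, fixed: int) -> float:
    """Fraction of identified vulnerabilities that have been addressed."""
    return 0.0 if found == 0 else fixed / found

# Measurements: 40 vulnerabilities found this quarter, 30 remediated.
print(f"{remediation_rate(40, 30):.0%}")  # → 75%
```

Tracked over time and sliced by severity or business unit, a ratio like this becomes one of the KPIs/KRIs the next paragraph describes.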
Initially, you might be able to only build metrics on coverage, such as the percentage of your applications portfolio that is currently being tested. Over time you can build more mature metrics to determine things like holistic policy compliance and later, look at effectiveness metrics for things like penetration testing and secure code review. Lastly, when you are heavily focused on remediation and reducing risk, you can build metrics around remediation itself, such as how quickly identified vulnerabilities are resolved.
Develop Application Security Standards
Chances are, you have security policies that you need to adhere to, whether established internally, by regulatory bodies or even customers. It is important to unify them to build application security standards applicable to your business and SDLC practices. Then, enforce them with automation whenever possible. For example, you might want to customize static analysis or dynamic analysis tools so they understand what your standards are. These tools will trigger an alert when a certain security standard isn’t being met.
Various automation tools and techniques are available that can improve the quality and security of the software that you’re implementing, including:
- SAST – Static application security testing
- DAST – Dynamic application security testing
- IAST – Interactive application security testing
- RASP – Runtime application self-protection
- SCA – Software composition analysis
For a deeper dive into these tools, check out this Cyber Defense Magazine article, starting on pg. 65.
Identify and Inventory Open Source Risk
Open source code is everywhere. It’s convenient, replicable, and efficient to use. Many developers employ it. With open source code, however, you need to maintain a heightened awareness of possible security risks.
Maintain an inventory of all open source code that you’re using throughout your organization. While those components might not pose a risk today, or be known to contain vulnerabilities, a zero-day vulnerability could be discovered in a particular component at any time. The moment that happens, you need to know: 1) whether you’re using the vulnerable component, and 2) where you’re using it and whether your software is now exploitable. You’ll also want to track any possible licensing conflicts as early as possible to avoid legal headaches.
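A minimal sketch of such an inventory lookup, with invented component and application names, might look like:

```python
# Hypothetical component inventory: when a zero day lands in, say,
# "examplelib 1.2", answer the two questions above immediately:
# are we using it, and where?
inventory = {
    # (component, version): applications/services that ship it
    ("examplelib", "1.2"): ["billing-api", "customer-portal"],
    ("otherlib", "3.1"): ["internal-dashboard"],
}

def affected_apps(component: str, version: str) -> list:
    """Where (if anywhere) a newly vulnerable component is deployed."""
    return inventory.get((component, version), [])

print(affected_apps("examplelib", "1.2"))  # → ['billing-api', 'customer-portal']
print(affected_apps("examplelib", "9.9"))  # → []
```

In practice this inventory is produced by a software composition analysis (SCA) tool rather than maintained by hand, but the query it must answer is exactly this one.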
The Longest Journey…
… begins with the first step. If the application security journey you’re about to embark on feels like the epic trek of a lifetime, don’t worry. These six security activities will start you on solid footing and help you navigate along the way.
One Take CEO Interviews: How NetSPI is Growing Despite Covid-19, PLUS 3 Things to Do Now to Protect Your Data
On January 5, 2021, NetSPI President and COO Aaron Shilts was featured on the podcast, One Take CEO Interviews with Dale Kurschner.
Cybersecurity business leader Aaron Shilts discusses how he is leading his employees through the stresses and changes brought on by Covid-19, organic growth and a recent acquisition. He also shares three things your business should do if it hasn’t already done so to avoid a devastating cyberattack.
Shilts is president and COO of Minneapolis-based NetSPI, the industry leader in enterprise security testing and vulnerability management. NetSPI works with eight of the top 10 U.S. banks, three of the world’s five largest health care companies and the largest cloud providers. In December, it acquired Utah-based Silent Break Security to create a complete package for offensive cyber security and attack surface management.
Other points covered in this One Take CEO Interview include:
- How Covid-19 affected NetSPI’s workforce
- Nothing connected to the Internet is safe, so what can you do?
- How much cyber security can affect mergers and acquisitions
- What he anticipates his greatest challenge will be in 2021
- The upsides of working through a pandemic
Listen or watch the full interview on Spotify or YouTube – or visit the MN Perspectives website.
Application Security is a crucial component to all software development today. At least, it should be as cyber security concerns continue to grow at the same furious pace as the number of apps out there. However, here at NetSPI, we talk with a lot of software development teams who haven’t yet adopted a security mindset, thereby placing not only their programs at risk of cyber-attacks, but their entire organizations as well.
If you’re fighting resistance within your organization to incorporate security measures into the software development life cycle (SDLC), this blog post is for you. We’re going to set straight four of the most common myths and misconceptions we hear among those who don’t have robust application security processes in place.
Myth #1 – An application security team is optional
On the contrary – an application security team today is a must. Someone within your organization should own the function. The good news is that you don’t need a big team to manage it. In fact, we’ve seen programs that work really well with small teams – even teams composed of just one person, in some cases.
Another must: enable an application security culture and nurture that culture across the entire organization, paying special attention to key stakeholders who contribute to your application development lifecycle. Some companies foster an application security philosophy with a security champions program, where leaders in the software applications organization are nominated to advocate on behalf of the application security team. The beauty of this approach is that you have team members within your software engineering organization who can accelerate and fix vulnerabilities quickly. In many cases, they can help reduce the number of vulnerabilities your applications have in the first place. The best side-effect of this approach is that you start organically evangelizing a culture of application security within your organization.
Myth #2 – My organization is too small to have an application security team
This belief is especially common among startups. As intimated above, no organization is too small to focus on application security, mainly because it isn’t just about finding vulnerabilities. You can start by creating governance processes that define security measures and that guide implementation of a secure SDLC, such as:
- Introduce technologies at different points during your SDLC to ensure you capture vulnerabilities early, before an attacker can exploit your software.
- Integrate security concepts into your software by building application security-specific requirements that become part of your software before a single line of code is even written.
- Create security use cases (also known as misuse and abuse cases) and build functional requirements that focus on security concepts. Then, make sure that your developers have access to those requirements and implement the software against them.
- Educate developers on defensive programming techniques to be able to build software that is naturally resilient to attacks.
Myth #3 – Because we love DevOps and we’re an Agile organization, we can’t have an application security team
Organizations that feel this way usually believe that security teams slow things down. However, security doesn’t have to slow you down when you use the right tools and processes at the right times; and a relatively new concept known as DevSecOps can help. DevSecOps is a culture in which security is integrated between the development and operations functions to close the gap between the development, security, and operations teams – three historically siloed roles. If these three roles work more collaboratively, a shared responsibility for application security is created, which enables a DevOps and/or Agile organization to introduce security as a frictionless component of all processes. Ultimately, the objective is to make security-driven decisions and execute security actions at the same scale and speed as development and operations decisions and actions – and that requires the whole organization to buy into the DevSecOps culture.
Myth #4 – Application security teams will slow us down
As mentioned above, application security doesn’t have to be a hindrance. If you’re using best practices and building good quality software, security is an inherent part of that. Most software performs better and is more efficient when it’s developed securely in the first place. When you adopt a security mindset, your SDLC will flow smoothly, enable you to build better software and can even save you money in the long run.
Getting started with application security:
Best practice dictates the introduction of appropriate touchpoints throughout each phase of your process.
Education, for example, is a good first step:
- Educate your product managers and business analysts on common security vulnerabilities and real-world scenarios of how those vulnerabilities have had severe impacts on organizations, so they can help guide security requirements for your software and always be security conscious.
- Educate developers on defensive programming to make sure they implement software that is naturally resilient against vulnerabilities.
- Educate the teams involved in testing and deployment to detect vulnerabilities using techniques like manual penetration testing, adversarial simulations, or red teaming activities.
Learn more about secure code review and building application security into your software development lifecycle.
Second, during the planning phase, create security requirements, or benchmark your program, so that you can understand how mature your organization’s SDLC is, from a security perspective, and so that you can take educated steps to evolve and elevate it over time.
Third, in the design phase, construct your software so that it is naturally resilient to attacks. When you’re building use cases, be sure to add misuse and abuse cases. An example of a misuse/abuse case would be an attacker trying to “brute force” all possible usernames and passwords in a login page. You can address such a case by making the software automatically lock an account after multiple wrong attempts. You should also create a velocity or anti-automation check to prevent automated tools and scripts from brute-forcing their way into compromising your application.
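The lockout control described above can be sketched as follows; the five-attempt threshold and the manual-reset policy are illustrative assumptions, not a recommendation for every application:

```python
# Minimal sketch of an account lockout control against brute forcing.
class LoginGuard:
    MAX_ATTEMPTS = 5  # illustrative threshold

    def __init__(self):
        self.failures = {}   # username -> consecutive failed attempts
        self.locked = set()

    def record_failure(self, username: str) -> None:
        self.failures[username] = self.failures.get(username, 0) + 1
        if self.failures[username] >= self.MAX_ATTEMPTS:
            self.locked.add(username)  # locked until a reset policy fires

    def record_success(self, username: str) -> None:
        self.failures[username] = 0  # a good login clears the counter

    def is_locked(self, username: str) -> bool:
        return username in self.locked

guard = LoginGuard()
for _ in range(5):
    guard.record_failure("alice")
print(guard.is_locked("alice"))  # → True
print(guard.is_locked("bob"))    # → False
```

A velocity or anti-automation check works similarly but keys on request rate (often per source IP) rather than per-account failure counts, so the two controls complement each other.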
During the coding phase, you can not only educate your coders on writing secure code, you also can employ techniques like static analysis, manual code review, and composition analysis to identify vulnerabilities early in your SDLC.
In the testing phase, you have the opportunity to leverage manual penetration testing, dynamic scanning, and build risk-based test cases based on the misuse and abuse cases defined earlier.
Lastly, in the deployment phase, test your detection controls, perform adversarial simulations and red teaming activities. Consider manual penetration testing or implement technologies like RASP to offer continuous protection of an application even if a perimeter is breached.
Because in today’s world software is everywhere – from refrigerators and coffeemakers to medical equipment and data farms – application security is becoming ever more complex and increasingly critical. Every software development organization, no matter how large or small, must focus on application security to protect its products, its end users and, ultimately, the organization itself.
For more information, watch my presentation at the recent Cyber Security Summit or contact us to learn how you can get started on your own application security journey.
On January 4, 2021, NetSPI Managing Director Florindo Gallicchio was featured in SC Magazine:
The pandemic transformed the workforce for organizations across all verticals, with employees quickly and unexpectedly transitioned from offices to working from home. The new year brings more complications. Vaccine distribution could mean a return to offices, but most experts expect a new hybrid model to emerge. Pile that on top of the already challenging situation posed by a supposed skills gap and efforts to improve diversity, and 2021 will introduce an array of workforce shifts across the community.
As part of our year in review, which looked at critical events during the last year and how they might influence 2021, SC Media collected predictions across a range of categories from cybersecurity experts. Here, experts offer their perspectives on the 2021 cyber workforce.
There will continue to be more security jobs than people to fill the roles, says Florindo Gallicchio, managing director at NetSPI:
“Security leaders will be challenged by filling roles that require candidates with mid- to senior- level experience – and entry level job openings will continue to be in high demand. Because of this, companies will need to do more with fewer people. This will result in increased adoption of program-level partnerships with third parties or using vendors to fill in-house positions at scale.”
Read the full article here: https://www.scmagazine.com/home/year-in-review/from-diversity-efforts-to-pandemic-recovery-workforce-issues-will-evolve-in-2021/