SecureLink and NetSPI partner to enable enterprises to manage their attack surface with tech-enabled penetration testing services

Dubai, UAE and Minneapolis, Minnesota – SecureLink, the Trusted Risk Advisor and subsidiary of StarLink, has signed a distribution agreement with NetSPI, a leader in Enterprise Security Testing and Attack Surface Management, for the MEA region.

Pioneers in penetration testing, NetSPI is changing the pentesting landscape to make it easier for enterprises to track trends and improve their vulnerability management programs. Its technical assessments include Web Application Penetration Testing, Mobile Application Penetration Testing, Source Code Review, Infrastructure Vulnerability Assessment, Red Teaming, and Breach and Attack Simulation.

Through this partnership, NetSPI can capitalize on SecureLink’s consultancy, sales, and marketing expertise; leverage its direct connections with decision-makers across an extensive customer base to create and convert opportunities for NetSPI’s cybersecurity testing services; and take advantage of the years of trust SecureLink has built in the region.

Manish Pardeshi, Director, SecureLink, commented: “We are privileged to onboard NetSPI into our ecosystem; with NetSPI’s Penetration Testing as a Service (PTaaS), we can offer our customers a more continuous and scalable assessment of their environments and ensure real-time visibility and full control over the testing program.”

“We are proud to announce our partnership with SecureLink, the well-established cybersecurity leader in the MEA region. Together we will transform the cybersecurity testing industry with NetSPI’s technology-enabled services and expertise,” said Aaron Shilts, President and CEO at NetSPI. “In partnership with SecureLink, multinational enterprises in MEA now have access to NetSPI’s penetration testing and adversary simulation services to test their applications, networks, and cloud at scale and better manage their expanding attack surface. The sophistication, methodology, and value provided by SecureLink and NetSPI is unmatched.”

About SecureLink

SecureLink is a risk advisory firm headquartered in Dubai, UAE, and part of the StarLink Group of companies, which has a turnover of USD 500 million, over 375 employees, and a presence in 20 countries across the META region, including the UK and USA. SecureLink is an independent advisory firm assisting customers in identifying, mitigating, and managing their business risks. SecureLink provides comprehensive assessments of risks across People, Process & Technology and helps establish the right governance frameworks to ensure that risks are continuously monitored and acted upon. SecureLink offers these services via its partner community to develop frameworks and implement platforms for automation of governance, risk, and compliance requirements. For more information about SecureLink, please visit

About NetSPI

NetSPI is the leader in enterprise security testing and attack surface management, partnering with nine of the top 10 U.S. banks, three of the world’s five largest healthcare companies, the largest global cloud providers, and many of the Fortune® 500. NetSPI offers Penetration Testing as a Service (PTaaS) through its Resolve™ penetration testing and vulnerability management platform. Its experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces, historically testing over 1 million assets to find 4 million unique vulnerabilities. NetSPI is headquartered in Minneapolis, MN and is a portfolio company of private equity firms Sunstone Partners, KKR, and Ten Eleven Ventures. Follow us on Facebook, Twitter, and LinkedIn.

SecureLink Press Contact:
Raji Joy John
Marketing Director, StarLink
+971 4 2794000

NetSPI Media Contact:
Tori Norris, NetSPI
(630) 258-0277


CSO: 10 essential skills and traits of ethical hackers

On October 26, 2021, NetSPI Director of People Operations Heather Neumeister was featured in an online article by CSO:

What if you could spend your days trying to gain access to other people’s networks and computer systems—and not get in trouble for it? Of course, that’s every spy and cybercriminal’s dream, but only ethical hackers, also known as white hat hackers or penetration testers, can feel sure that they’ll get away with their break-ins. These security pros are hired to probe systems for vulnerabilities, so that their targets can figure out where their security needs beefing up.

Ethics. OK, maybe this seems obvious, since the word “ethical” is right there in the job description. But the truth is that a pentester is given a lot of responsibility and power, and it’s important to feel sure that they won’t abuse it.

Heather Neumeister is director of people operations at NetSPI, which specializes in penetration testing and attack surface management. “Assessing a candidate’s ethics is based on both background and personal assessment,” she explains. “When part of the criteria being considered for a new hire is ethics and morals, there is always going to be an element of gut instinct. But it’s also important to ask questions around why someone chose to get into pentesting, as you can usually quickly identify a person’s intent during initial conversations. To find people with strong ethics and morals, it can be helpful to look at the activities a candidate does in the greater community. Extracurriculars like non-profit work, public research, and open-source contributions can be useful indicators of a higher ethical standard, as it’s often the case that those who choose to positively benefit the security industry without personal gain are those who are truly committed to ethical behavior.”

Read the rest of the CSO article here:


3 Frightening Cybersecurity Threats Lurking this Halloween

It’s no coincidence that Halloween and Cybersecurity Awareness Month are both observed in October. Just as monsters, ghosts, and witches wreak havoc in our favorite Halloween movies, cyber adversaries haunt organizations across the globe with their increasingly sophisticated attack tactics.

There are three cybersecurity threats that, in my opinion, are the most frightening of them all: ransomware, work from home attacks, and software as a service (SaaS). Have no fear: not only will this article reveal the spookiest threats, but I’ll also share tips and best practices for prevention – no spell book required!

Beware of ransomware

Paying a ransom has no guarantees. On average, only 65% of encrypted data was restored after a ransom was paid, according to the Sophos State of Ransomware 2021.

By now, we can all generally define ransomware. It’s making national headlines due to its widespread impact in both the cyber and physical world. One of the more frightening aspects of ransomware is the uncertainty of the attack, specifically the varying attacker motivations.

Killware is an emerging ransomware threat in which the motivation is to impact critical infrastructure with the intent to do harm. In the case of Killware, the attackers are not after money. It’s ransomware with no decryption keys: they want you to go down and stay down. For more, this USA Today article explains possible Killware scenarios and motivations.

It’s also a fluid and uncertain legislative and regulatory space. As it becomes more challenging to recover from a ransomware attack, payment is often the fastest way to get back to business. So, what happens if ransom payments become illegal? 

Ransomware attack outcomes can also vary significantly. For example, just because you pay, doesn’t mean you will get the decryption keys or access to all your data. Often, ransomware families blackmail organizations with stolen data to increase their financial gain. 

Ransom payments also fluctuate. Just this year it was reported that CNA Financial paid $40 million in ransom. And Palo Alto Networks found that the average ransomware payment climbed 82% since 2020 to a record-high $570,000 in the first half of 2021. 

For more on ransomware trends and best practices for ransomware prevention, download our Ultimate Guide to Ransomware Attacks.

Ransomware is a financial loss event and should be treated as such. It’s no longer the sole responsibility of cybersecurity and technology teams; finance and others responsible for managing business and financial risk have a critical role to play.

Ransomware simulation assessments can remove some of the uncertainty surrounding these adversarial attacks. An attack simulation can benchmark how well an organization is positioned to detect, prevent, and defend against ransomware. Are your controls sufficient? Are your response teams effective? If there is a detection or response failure… can you recover? These are questions NetSPI’s Ransomware Attack Simulation service and AttackSim technology platform can help address.

Haunted by work from home attacks

Nearly 80% of IT and security leaders believe their organizations lack sufficient protection against cyberattacks despite increased IT security investments made to deal with distributed IT and work-from-home challenges, according to a survey from IDG Research Services and Insight Enterprises.

The percentage of people in the U.S. working from home doubled between 2019 and 2020, according to the U.S. Bureau of Labor Statistics American Time Use Survey. Now more than ever, organizations are embracing flexible work environments and, with that, comes employees connecting to external WiFi networks.

Consider this: each employee device is an extension of your corporate network. The workstation itself is provisioned and managed by IT, but beyond that, IT has no control over the environment the device operates in. Home networks are a black box, even more so if the employee uses a router supplied by their internet provider. More concerning are the uncontrolled connections (coffee shops, hotels, family members’ homes, etc.) that can serve as another entry point for an attacker to access the device.

Another factor to consider is the management of personal devices. Through the pandemic, we’ve seen a shift away from office phones and often people use their personal cellphones to manage their work. It’s the lack of control organizations have over these devices that is the most frightening.

The shift to work from home ultimately broadens an organization’s attack surface. But that is the reality of our workforce today. Remote work is here to stay in some capacity and infosec teams are tasked with creating security tactics and policies to ensure business continuity and productivity… simultaneously.

To address work from home security challenges, a focus on endpoint security is critical, particularly for devices not inside the ‘walled garden’ of your corporate network. Network penetration testing can help you identify the right level of protection and telemetry for your endpoint controls.

I also anticipate technology innovation in the attack surface management space to help infosec professionals tackle the many challenges that accompany a remote workforce: asset management, shadow IT, bring your own device (BYOD), and more.

Software as a Service (SaaS) in the shadows

1 out of 3 employees at Fortune 1000 companies regularly use SaaS apps that haven’t been explicitly approved by internal IT departments, according to IBM.

Add to that the fact that organizations use an average of 110 SaaS applications, according to the 2021 State of SaaSOps report, and there’s a real issue with SaaS visibility and security. The adoption of SaaS platforms has increased given their ability to enable remote work, create workflow efficiencies, and support collaboration (see: Zoom, Slack, Teams, Wrike).

SaaS adoption requires you to examine the security of your extended attack surface, but a SaaS footprint doesn’t receive the same level of shared-responsibility scrutiny as infrastructure as a service (IaaS) or cloud environments. We put a lot of trust in the security of SaaS providers today; however, these applications present many interesting security challenges.

Most people connect directly from a managed device to the SaaS platform without going through a secure corporate network, which creates authentication and identity and access management (IAM) challenges. For example, are you requiring SSO or multi-factor authentication for SaaS platforms? How do you ensure authentication best practices for SaaS applications outside the corporate network?

SaaS platforms are a critical component of our workflow today and contain troves of sensitive data. With the rapid adoption of SaaS applications today, it is important for security teams to align and communicate SaaS security policies within their organizations and ensure secure configuration of SaaS platforms. To strengthen security, SaaS security posture management is key. 

Defined by Gartner in the Hype Cycle for Cloud Security, SaaS security posture management (SSPM) is “tools and automation that can continuously assess the security risk and manage SaaS application security posture.” This could include continuous monitoring and alerts, configuration review, comparison against industry frameworks, and more.
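To make the SSPM idea concrete, here is a minimal sketch of a configuration-drift check: each SaaS app's reported settings are compared against a baseline security policy. The app names, setting keys, and baseline values are illustrative assumptions, not drawn from any real SSPM product's API.

```python
# Hypothetical SSPM-style drift check. Setting names and baseline values
# are illustrative only; a real tool would pull settings from each SaaS
# vendor's admin API and map them to an industry framework.

BASELINE = {"sso_enforced": True, "mfa_required": True, "public_sharing": False}

def audit_saas_app(name, settings):
    """Return a list of (setting, expected, actual) drift findings."""
    findings = []
    for key, expected in BASELINE.items():
        actual = settings.get(key)
        if actual != expected:
            findings.append((key, expected, actual))
    return findings

apps = {
    "chat-tool": {"sso_enforced": True, "mfa_required": False, "public_sharing": False},
    "file-share": {"sso_enforced": False, "mfa_required": True, "public_sharing": True},
}

for name, settings in apps.items():
    for setting, expected, actual in audit_saas_app(name, settings):
        print(f"{name}: {setting} should be {expected}, found {actual}")
```

Running a check like this continuously, with alerting on new drift, captures the "continuous monitoring and configuration review" portion of Gartner's SSPM definition.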

For a detailed conversation on SaaS posture management, Maor Bin, CEO and Co-Founder at Adaptive Shield, joins us on the Agent of Influence cybersecurity podcast next month. Tune in!


Q&A: David Quisenberry Discusses Cybersecurity Careers, Collaboration, Mentorship, and OWASP Portland

Being a cybersecurity leader is not for the faint of heart. The increasing sophistication of adversaries and number of successful breaches puts significant pressure on security teams today. For advice, I invited Pacific Northwest infosec leader David Quisenberry to join me on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management.

During our conversation, David shared four ways he’s approaching cybersecurity leadership today by:

  1. Tapping into his wealth management experience.
  2. Collaborating across the organization.
  3. Working closely with his local security community as the president of OWASP Portland, Oregon.
  4. Creating a solid network of mentors.

Continue reading for highlights from our conversation around wealth management, collaboration, mentorship, OWASP Portland, and more. You can listen to the full episode on the NetSPI website, or wherever you listen to podcasts.

Can you tell me about your career transition from wealth management to cybersecurity?

David Quisenberry: The decisions you make, the careers you go down, the things you do – everything’s interrelated and connected. There are some differences, but there are also a lot of similarities. In wealth management, you deal a lot with risk tolerance. Like companies, when someone is just starting off as an investor, they don’t have a lot of money and they’re much more willing to take on risk and do things that a more established person, family, or trust might not do with their money because they have a fiduciary standard to make sure that they invested wisely for all the beneficiaries. 

Again, like companies, when they’re building out their foundation of revenue, security may not be as front and center in a lot of the decisions that get made. But as companies grow, a lot of enterprise corporations view security and risk tolerance much differently. They want to understand all the risks that go into each decision they make as a business. Risk tolerance is a similar theme. 

Another thing that’s very similar between the world of wealth management and the world of cybersecurity is you’re always reading, always studying. As a wealth manager, you’re constantly keeping up to speed on all the trends with global investment flows, global economics, and mutual funds. That obviously translates into the security world where you’re reading and learning things all the time. 

Lastly, convincing others to take action. As a wealth manager, there’s this tension, especially if you’re working with families where you want them to save as much as possible so that they can have a lot of money in retirement for their kids, for their charities. They care about their future self, but they also want to live life. There are these tensions, and you have to convince someone to spend their dollars one way versus the other. This is very similar to security work with developers, product owners, product managers, et cetera. It’s a constant game of understanding all the various priorities and working together to identify that sweet spot of paying security debt, staying on top of future security debt, as well as getting other features built to drive your business forward. 

You are taking some unique approaches to better collaborate across different teams within your organization. Can you share with us the approach you’re taking? Are there things that worked well or not?

David Quisenberry: One of the philosophies I have about most things is “relationships first.” As an information security manager, I’ve tried to take the approach of being available and approachable. If somebody sends me a question or an email, most of the time I will drop what I’m doing so that I can answer them in that moment. Even when people get frustrated with me, I take the opportunity to take a step back and think, “We have a tension right now. But they’re thinking about security. What’s not clear? How do I communicate the why?” If I can accurately explain the why, it’s going to help so much. I try to take that relationship first approach, identify those early wins, and set clear expectations of what is a milestone and then celebrate those when we hit them.

It’s important to have regular monthly meetings with the scrum masters and the various leadership teams for the different products, engineering managers, and project managers. I encourage them to ask questions and know that we’re going to have an opportunity to dig into things. To prepare for those meetings, we have a working agenda that both parties can add to, and I also try to give visibility into data and analytics. 

As the president of the OWASP chapter in Portland, how did you get involved with the community? And what are some interesting things that you’re doing that might be different from other chapters? 

David Quisenberry: David Merrick introduced me to the chapter. I started going to the chapter meetings whenever I could. Around late 2018/2019, I started being mentored by the previous chapter President, Ian Melvin, who’s been an amazing mentor and really helped me along in my progress. He got me more involved on the leadership side. 

What I found is most OWASP chapters have leadership that have been laboring hard for a long time to keep the chapter going. If you’re willing to help bring in speakers, engage in membership, promote social media activity, or think of topics to present, they’ll open up. Especially if you prove yourself and that you can deliver on it. What I found with the Portland chapter was that, as I started getting involved, we needed to meet developers where they’re at. 

We did a lot early on when I took over as president of the Portland OWASP chapter. We built out a mentorship program where we had around 24 people with varying skill levels meeting regularly. We really ramped up our social media presence, specifically Twitter and LinkedIn. We used which helped us solidify returning visitors and provided an easy mechanism for people to RSVP to our monthly meetings. By the end of 2019, we were close to 50-60 people per meeting. And we brought in a lot of great speakers. 

And then COVID-19 hit, and suddenly you can’t meet in person anymore. We had to do everything virtually, but we were able to continue our path of monthly, or bi-monthly, meetings. We also have another thing that we do as a local chapter, which is study sessions: shorter, more hands-on sessions or labs, with 40 minutes of hands-on-keyboard time working with Burp Suite or Wireshark. You name it. We started a podcast in late 2019 and that’s been super successful. We had 6,500 listeners or so over this last year and some interesting guests. We’re also exploring some other opportunities for cybersecurity training with other chapters. We’ve been trying to collaborate more with the chapters around us and that’s been going quite well. 

NetSPI’s Portland, OR office is growing. Check out our open cybersecurity careers in PDX!

I wouldn’t be where I am today without my involvement with OWASP. If you’re interested in truly excelling and expanding your horizons in the security space, these community meetings and chapters really pay dividends in the long run. I’d be curious to get your perspective on any guidance you have on how to choose a mentor that’s right for you?

David Quisenberry: The first thing I would say is don’t have one mentor. And I think of it almost as a personal board of directors. For myself what I want is people from across the spectrum. So, business leaders, engineering leadership, security leadership, different types of security leadership. I want 4-10 people that I talk with quarterly, some people more often. I want to be able to have multiple perspectives to bounce ideas off when I’m having a hard time with something at work or a moral decision I need to make or just trying to understand what is normal or what is acceptable. 

One of the big things that I always try to push with my mentors is what are you learning? What are you reading? What are the things that you go back to time and time again? There is a saying that I always think about: “If you dig, you get diamonds.” Where can you dig to get diamonds? With mentors, I try to have a lot of people, I try to be real with them, and make it clear that this is only between us. And I also try to pay it forward. I want to help people and lots of people have helped me. 

There is a book by Keith Ferrazi I read a long time ago, it’s called Never Eat Alone. It’s all about how people like to help people. We’re all hesitant to ask for advice or ask for help. But the most successful people ask for help all the time. As humans, we like to help each other. His whole thing is to find out what you want to do and where you want to get, and then build a relationship action plan to move your way there. He’s also big on building your network before you really need it. If you’re in a job hunt, and you’re trying to build your mentorship, or mentor platform at that point in time, that’s going to be hard to do. But if you’re in career, and you start building that network, and you don’t need to use it for a couple years, by the time you do need to use it those people will know that you’re genuine and know who you really are. They’ll be more than willing to help you. 

For more, listen to episode 23 of Agent of Influence. Or, connect with David on LinkedIn or listen to the OWASP Portland podcast.

Agent of Influence Episode 23 with David Quisenberry


Three Threat Modeling Automation Conjectures

For those engaged in the timely production of high-quality software, threat modeling is an invaluable method to minimize rework. Design defects can include useless code, or “cruft,” and can be costly to fix. But the manual process of threat modeling doesn’t always fit well into ever-tightening iterative development methodologies. 

Fortunately, our industry is making big strides in the direction of automating threat modeling. I’ve explored as many of these tools as I can while looking for something that works best for us here at NetSPI. While I’m quite optimistic about the very near future of threat modeling automation, I’ve got my reservations. These reservations can be generalized as conjectures. 

If you are buying or building threat modeling automation, here are three conjectures to take into consideration.

Conjecture 1: The Only Automation is Semi-Automation

Don’t expect to run a threat modeling process entirely free of human care and attention. No matter your methodology, threat modeling operates on ideas about software – and the outputs of threat modeling are other, better ideas about software. Completely automating the improvement of these ideas requires expressing them in a useful format and yielding the resulting better ideas in a format suitable for implementation by another automated system. 

You see where this is going. 

It also requires producing genuine improvements which would unquestionably result in a better system overall: error free output.

So, let’s consider how semi-automation is a more realistic expectation than full automation by looking at this input-process-output (IPO) model in more detail, from bottom to top.

Automating the Outputs

The results of a threat modeling assessment do not need to be consumable as non-functional requirements (NFRs). They can inform coding standards, product roadmaps, build procedures, test plans, monitoring activities, and more.

For example: let’s say the consumer-grade IoT device your team is building requires customizations to the kernel-level containerization system for OTA updates. Your product manager sees this as an indicator of how cutting-edge this device is compared to the market, while your architect sees this as a necessary annoyance. But what the threat modeler sees is an unmanaged attack surface, accessible from the network, written in C, executing in Ring 0. 

What can your team do with this information, besides draft security requirements? Adjust your static analysis strategy? Amend your vendor management boilerplate to mandate relevant training? Whip up a fuzzing protocol? How many of these represent automatable opportunities? 

Threat modeling is a decision support process. You can automate aspects of it, but you’ll be limited by the amount of decision-making that is automated.

Now, you may have scripts available to automate the creation of backlog items—Jira tickets and the like. Keep in mind that 90 percent of all security tooling outputs are false positives. Threat modeling automation systems make no promises of being any different. So, you can either devote human care and attention to triaging the results, or you can let the implementation team do the triage work themselves. Either way, there’s still work to be done.
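As a sketch of what such backlog automation might look like with a human triage gate in front of it, here is an illustrative Python fragment. The Jira endpoint URL, project key, and field values are placeholders I've invented for the example; a real integration would authenticate against Jira's REST API.

```python
# Illustrative sketch: file threat-model findings as backlog items only
# after a human has triaged them. The URL, project key, and fields below
# are placeholders, not a real Jira configuration.

import json
from urllib import request

JIRA_URL = "https://jira.example.com/rest/api/2/issue"  # placeholder endpoint

def create_backlog_item(finding, triaged_by=None):
    """Untriaged findings go to a review queue instead of the backlog."""
    if triaged_by is None:
        return {"status": "needs-triage", "finding": finding}
    payload = {
        "fields": {
            "project": {"key": "SEC"},
            "summary": f"[Threat model] {finding['title']}",
            "description": finding["detail"],
            "issuetype": {"name": "Task"},
        }
    }
    req = request.Request(JIRA_URL, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    # request.urlopen(req)  # enabled once credentials are configured
    return {"status": "filed", "payload": payload}
```

The `triaged_by` gate is the point of the sketch: the script automates ticket creation, but the decision about whether a finding is a false positive still requires human care and attention.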

Automating the Processing

Threat modeling is a security process, and security is one of many aspects of quality. We used to think of the interaction between security and the software development process as one of trade-offs. Perhaps some still do. 

Many organizations are beginning to approach their ongoing software concerns by finding an optimal balance considering known limitations. It isn’t security versus usability. It’s making sure our products are suitably usable, secure, performant, testable, resilient, scalable, marketable, et cetera.

So, your turnkey end-to-end threat modeling automation has to be able to recognize and accommodate other requirements in terms of the product’s usability, reliability, marketability, scalability, et ceterability. If it doesn’t, it will fall to you to strike the right balance. And if you’re the one striking a balance, you don’t have a fully automated system. 

Automating the Inputs

What tools do your security architects use? The ones I work with mostly use whiteboards. Many use team collaboration / CMS software like Confluence. Some use drawing tools like Visio. Does anyone still use Rational Rose?

If your threat modeling automation can meaningfully parse this information, great. If not, and you have to reproduce the architect’s design, then you won’t achieve full automation. 

Otherwise, what inputs can be automatically fed into your threat modeling tool?

Automatic scanning of Infrastructure-as-Code files can bring to light threats to the infrastructure. They may not have much to say about the actual software, though. And automatic code scanners tend to ignore those values of quality that I enumerated above. 
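As a toy illustration of the kind of infrastructure finding such scanners surface, here is a sketch that flags security-group rules open to the world. The dictionaries stand in for a parsed Terraform resource; the field names are my own simplification, since real scanners parse HCL or JSON plan output.

```python
# Toy infrastructure-as-code check: flag ingress rules open to 0.0.0.0/0.
# The resource dicts mimic parsed Terraform security groups; field names
# are simplified for illustration.

def find_open_ingress(resources):
    """Return (resource_name, port) pairs for world-open ingress rules."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((res["name"], rule.get("port")))
    return findings

resources = [
    {"name": "web_sg", "ingress": [{"port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "db_sg", "ingress": [{"port": 5432, "cidr_blocks": ["10.0.0.0/8"]}]},
]

print(find_open_ingress(resources))  # [('web_sg', 443)]
```

Note what the check cannot tell you: whether port 443 open to the world is a threat or simply the product working as designed. That judgment is exactly the software-level context these scanners lack.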

Finally, threat modeling tools that scan implementation artifacts often lack efficiency. You’ve already built to your design. Any findings produced by a scanner are opportunities for rework, and as I said at the beginning, threat modeling is supposed to minimize rework. 

Conjecture 2: Your Tool’s Diagrams and Your Team’s Diagrams Should Be Compatible

Whether your tool consumes or emits them, diagrams of the subject system must be recognizable by the implementation team as being a genuine, faithful reflection of the values of that system. Tools that invite you to re-invent or re-think the system’s architecture in a new schema tend to miss the mark.

This is not to say that re-diagramming is always problematic. Architecture diagrams must reflect the values of the organization, such as structure, redundancy, symmetry, priority, urgency, or flow. This helps them present the system—especially its attack surfaces—naturally. Automatically generated diagrams tend to disregard these values.

Conjecture 3: Your Tool’s Guidance Should Be Delivered with Humility

As mentioned earlier, threat modeling operates on ideas about software and its outputs include better ideas about software. The best tools and techniques will lead the threat modeler to the best ideas, faster.

But architecture works with abstractions about systems. Lacking a complete architecture description, any threat modeling tool is working on incomplete input. And who has time to produce complete architecture documents?

Have you seen a 300-page architecture document? Probably. But have you ever seen a 300-page architecture document that was up to date?

The problem arises when a threat modeling tool can’t adjust to the subtleties of your software. If a tool mistakes design elements for threats, you’ll be required to spend time adjusting its output.

Sometimes your tool will just be wrong through no fault of its own and it is easier to ignore the tool than to correct it.

Your Threat Modeling System Shouldn’t Be Repudiating Raisins

Some design intricacies are difficult to articulate. Consider the ‘R’ in STRIDE: Repudiation.

The Orange Book lists accountability as a fundamental requirement of computer security:

“Audit information must be selectively kept and protected so that actions affecting security can be traced to the responsible party. A trusted system must be able to record the occurrences of security-relevant events in an audit log. The capability to select the audit events to be recorded is necessary to minimize the expense of auditing and to allow efficient analysis. Audit data must be protected from modification and unauthorized destruction to permit detection and after-the-fact investigations of security violations.”

Clearly, the non-repudiation of audit logs is an important aspect of a system, and conventions around logging should be designed to be of adequate depth and granularity, and resilient against forging and deletion. 

But what’s true for audit logs isn’t true for every single aspect of every single software product. 

Suppose you were threat modeling a smart appliance, like a smart toaster, and wanted to make a simple change with little security impact, perhaps extending its capabilities to allow it to handle raisin bread. What are our repudiation concerns? What does that mean? Someone fakes a raisin? The question is trivial, and pondering it is not a great use of time.

A little time spent deciding what actions really warrant logging is time well spent. Applying a blanket repudiation standard to every system element, on the other hand, is tedious. By extension, tools that alert to every form of threat every time you make an adjustment to your architecture are tedious. A tool should be able to measure the threat at the proper scale. Tool output should be non-punitive.
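One way a tool could measure threats at the proper scale is to weight findings by the criticality of the affected element before alerting. This sketch is only an illustration of that idea; the element names, scores, and threshold are assumptions, not any real tool's scoring model.

```python
# Sketch: scale threat output to element criticality so a raisin-bread
# feature doesn't trigger the same repudiation alert as an audit log.
# Element names and scores are illustrative assumptions.

CRITICALITY = {"audit-log": 3, "payment-api": 3, "toaster-recipe": 0}

def worth_raising(threat_type, element, threshold=2):
    """Suppress low-value findings rather than alert on every STRIDE category."""
    score = CRITICALITY.get(element, 1)  # unknown elements get a modest default
    if threat_type == "repudiation" and score < threshold:
        return False
    return True

assert worth_raising("repudiation", "audit-log")        # alert: high-value target
assert not worth_raising("repudiation", "toaster-recipe")  # suppressed: trivial
```

A filter like this keeps the tool's output non-punitive: the analyst still decides what warrants logging, and the tool stops nagging about elements that were deliberately scored as trivial.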

Threats Can Be Features

Moreover, sometimes repudiation is not an attack but a feature. Consider repudiation in the following system contexts: ballot secrecy, civil-rights-related anonymity, digital cash, drive encryption.

For these systems, the implementation of some non-repudiation controls is antithetical to the business goal of the system.

Similarly, many systems offer user-impersonation features for support purposes: basically, spoofing-as-a-service. Such functionality needs careful, targeted security attention. Uniformly treating all forms of spoofing as threats is incorrect.
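One common way to keep such a feature accountable, sketched here with hypothetical names and structure, is to carry the real operator's identity through every impersonated session and record both identities in the audit trail:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Session:
    user_id: str       # identity the application acts as
    operator_id: str   # real human behind the session

def impersonate(operator_id, target_user_id, audit_log):
    """Start an impersonation session that never loses the operator's identity."""
    session = Session(user_id=target_user_id, operator_id=operator_id)
    # Every impersonated action should be attributable to BOTH identities.
    audit_log.append(f"{operator_id} began impersonating {target_user_id}")
    return session
```

The point of the sketch is the invariant, not the API: intentional "spoofing" stops being a repudiation problem when the system can always answer who really performed an action.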

Should tooling let users treat threats as security features? Maybe. These are edge cases. Perhaps this is a nice-to-have. It would suffice to have a tool treat its recommendations as suggestions for consideration.

Final Thoughts

Threat modeling is a time-consuming process and deserves as much automation as we can throw at it. The teams making the current generation of tooling are right to be proud of their products. But these tools have limitations to keep in mind, whether you are building or buying them.

Prevent and detect architecture flaws with NetSPI’s threat modeling assessments

The Importance of Reviewing Source Code for Security Vulnerabilities: Two Years After the SolarWinds Breach

We’ve made it to the two-year anniversary of the SolarWinds cybersecurity breach. Not the anniversary of its public disclosure: it was in fall 2019 that attackers first gained access to SolarWinds systems. There’s a lot to learn from the breach. The biggest takeaway? Supply chain vulnerabilities are not to be taken lightly.

The breach highlights the significance of a thorough and robust review of security vulnerabilities in the source code of software products, namely hidden backdoors. Analyzing the attack, the threat actors were able to inject malicious code and successfully disguise the payload, behavior that should have been detected during a secure code review or audit focused on finding anomalies or malicious code.

While many software suppliers today do some form of secure code review activity on their software, there are major variations in the depth and breadth of reviews performed across companies that can drastically impact results. In this blog, I’ll dive into the secure code review process and explain the necessary components needed to identify the risk your source code may pose to your organization if it is not reviewed thoroughly.

What is a Secure Code Review? 

For a detailed and digestible breakdown of secure code review, read OWASP’s Code Review Guide. The authors reiterate that “secure code review is the single-most effective technique for identifying security bugs early in the system development lifecycle.” Let’s start with the basics:

As I define it, secure code review is a process used to verify whether the source code being developed contains security flaws and whether it has the necessary security controls to prevent an adversary – external or internal – from exploiting vulnerabilities in the software.

In other words, the purpose of a secure code review is not only to find security vulnerabilities but also to evaluate whether the source code is robust enough to withstand attacks. It can also verify that developers are following your secure development policies and procedures.

While penetration testing can help you look at your application security holistically, supplementing your testing strategy with a secure code review allows for more comprehensive coverage. According to the OWASP Code Review Guide, “a general rule of thumb is that a penetration test should not discover any additional application vulnerabilities relating to the developed code after the application has undergone a proper security code review.” 

When Should I Implement Secure Code Review or Static Analysis? 

Depending on the criticality of the application and AppSec program maturity, security teams may implement secure code review at various touchpoints in the SDLC or bake it into an Agile development process. 

Just as penetration testing adds more value to your security program when it is implemented continuously, secure code review – when done incrementally and across the SDLC – can dramatically improve your secure software development lifecycle. Below, I outline five key timeframes where secure code review or static application security testing (SAST) should be implemented:

  1. During development:
    Integrating security scanning directly into the developer’s integrated development environment (IDE) is a great way to avoid introducing security vulnerabilities in your code as the software is being written. It provides real-time feedback on secure coding guidelines and is also a great tool to help your teams get in the habit of secure development. Finding and fixing vulnerabilities during development ensures they do not get propagated further down the road. According to the National Institute of Standards and Technology (NIST), the cost of fixing a security defect once it’s made it to production can be up to 60 times more expensive than during the development cycle.
  1. Pre-commit checks (pull/merge requests):
    When merging code from multiple developers into a single branch, it is highly likely that security vulnerabilities will be introduced. While peer code reviews on pull requests generally focus on functional bugs, it is recommended to also do a quick check on security issues. This will help eliminate critical and high severity issues early in the chain. Today, all popular code repositories with code collaboration features provide integration with SAST tools. However, automated tools tend to be too noisy and should be carefully configured with limited rulesets at this stage to not significantly impact developer workflow.
  1. Post-commit (during build/testing phase):
    As source code gets ready to be part of the final release of a product, a thorough secure code review is critical to ensure the product is free of security bugs. SAST tools with a broader ruleset can be integrated into continuous integration pipelines to surface existing security vulnerabilities, which can then be reported into bug trackers or team backlogs.

    However, breaking the build every time a security bug is reported may not be viable, since tools are prone to false positives. One can improve static analyzers by refining rulesets using regular feedback from reported issues.
  1. Post deployment:
    While integrating static analysis tools into the DevSecOps pipeline may seem straightforward, the speed at which new code is expected to ship makes it harder to find and fix all security bugs in every sprint or release.

    Running a SAST scan periodically on production code repositories and scheduling secure code reviews into your annual calendar will help give your security posture a boost. These also help meet the requirements of regulations and standards such as PCI DSS and HIPAA.

    To eliminate noise and reduce the workload on developers, these scans should be triaged by your product security experts, or “champions,” who can closely work with the developers to resolve vulnerabilities. However, not all organizations are adequately staffed with security analysts who have secure code review skills, making partnering with a secure code review vendor a viable option.
  1. Suspicion of malicious intent:
    This is a more reactive strategy for secure code review. When you’re suspicious of malicious activity in your organization, or have detected a potential breach, do an on-demand code review scan to validate your suspicions and identify the breach at its core.

    Remember that code review activities should be aligned with your organization’s goals and provide value. For example, in the case of malicious threat vectors, review should be focused on identifying points of interest (POI) in code that can combine to produce malicious constructs. Analyze the interaction between various POIs, looking for mutation, obfuscation, and inversion of control used to conceal malicious activity.
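Several of the points above, in particular gating merges on severity while tolerating noisy tools, can be sketched as a simple CI policy. The finding format and thresholds below are assumptions for illustration, not any particular SAST tool's actual output:

```python
# Severity ordering used to compare findings against a blocking threshold.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_fail_build(findings, min_severity="high", min_confidence=0.8):
    """Fail the build only on findings that are both severe and high-confidence.

    Blocking every reported issue is rarely viable because SAST tools are
    prone to false positives; lower-severity or low-confidence findings are
    instead left for triage rather than breaking the merge.
    """
    threshold = SEVERITY_RANK[min_severity]
    return any(
        SEVERITY_RANK[f["severity"]] >= threshold and f["confidence"] >= min_confidence
        for f in findings
    )
```

The thresholds themselves should be tuned from feedback on reported issues, which is exactly the ruleset-refinement loop described above.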

The Secure Code Review Process

There are many variables that can impact the secure code review process. As mentioned at the beginning of this article, the depth and breadth of a secure code review can vary greatly. To get the most out of your testing, here are four areas that can make the biggest difference:

  1. Define the Scope
    The scope of each secure code review will vary based upon a multitude of factors, including the threat vectors involved, the coding languages, the number of lines of code, and the criticality of the application. Is it a “crown jewel” application? If so, it will be important to increase the frequency of your code review cycles and put emphasis on the remediation of vulnerabilities.
  1. Custom Checklists
    Predefined, custom checklists based on the threat model of your product are essential for secure code review success. Generic checklists aren’t very helpful, considering application security is not one-size-fits-all. Creating custom checklists for each software application can be extremely time-consuming. To help, use frameworks such as the OWASP Application Security Verification Standard as a starting point. Lastly, and perhaps most importantly, ensure your checklists are consistently maintained and updated.
  1. Automated Scanning
    Automated vulnerability scanning enables manual testers to spend more time finding business-critical vulnerabilities in the code. Not all automated scanning tools are created equal – some are better than others, and some meet unique needs. Are your tools meeting your requirements? Are you being strategic with your automated scanning? Audit and recognize whether there are existing gaps in your tooling and have a plan to evolve and enhance your technologies. Look for these three qualities in your automated scanning tools: the ability to be customized, integration into the CI/CD pipeline, and reduction of noise/false positives.
  1. Manual Testing
    To find vulnerabilities that tools miss, human context is necessary. Manual secure code review is especially important for high-risk, sensitive applications – the ones with greater business risk. Humans can apply the needed business logic, write custom scripts, and approach a secure code review from the perspective of a real adversary. Additionally, automation comes with false positives. Having a human triage the vulnerabilities found in your code is a huge benefit – consuming a raw, data-heavy report is not enough.
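The human-triage step described above is often implemented as a baseline: findings a reviewer has already marked as false positives or accepted risk are suppressed on later scans, so each run only surfaces what is new. The fingerprint scheme below is an illustrative assumption, not a specific scanner's format:

```python
def triage(findings, baseline):
    """Split scanner output into new issues vs. previously triaged ones.

    `baseline` is a set of fingerprints (rule id, file, line) that a human
    reviewer has already dispositioned, so they no longer generate noise.
    """
    def fingerprint(f):
        return (f["rule"], f["file"], f["line"])

    new, suppressed = [], []
    for f in findings:
        (suppressed if fingerprint(f) in baseline else new).append(f)
    return new, suppressed
```

A line-number-based fingerprint is fragile across refactors; production tools typically hash a normalized code snippet instead, but the triage flow is the same.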

How to Choose Your Secure Code Review Team

To outsource or not to outsource. There are benefits to building a secure code review team within your organization, but it does require a lot of investment and time to evolve your tools and train new resources. Do you have the capacity to maintain the automated scanners, increase the frequency of your secure code review, and update your checklists? If not, here are four reasons to consider working with a secure code review provider:

  1. Better coverage (more vulnerability findings) because they are focused on one activity vs. internal teams who are typically pulled into many different security activities.
  2. Flexibility to work with your vendor to find the best balance between automated scanning and manual testing.
  3. Contextual remediation recommendations. Plus, the development of code libraries that can be leveraged for remediation.
  4. Dynamic reporting via NetSPI’s Penetration Testing as a Service (PTaaS). NetSPI delivers all secure code review results through its PTaaS delivery platform. The platform enables:
    1. Enhanced, real-time reporting
    2. Accelerated remediation
    3. Reduced administrative time
    4. Continuous testing
    5. Faster vulnerability scanning with Scan Monster™, our proprietary continuous scanning technology

Final words

As discussed above, implementing regular secure code reviews is essential to prevent your organization from falling victim to the next major supply chain attack. While continuous static analysis checks integrate valuably into the development environment, adoption should be incremental, in line with your program goals, and without significantly impacting developer workflow. Collecting feedback from SAST tools, customizing rulesets, and adopting manual code review and verification processes all help enhance your application security posture.

Start your Secure Code Review Journey with NetSPI

Twin Cities Business: Blocking Cybercriminals from Accessing Company Data

On October 4, 2021, NetSPI COO Charles Horton was featured in an article in Twin Cities Business:

In the digital age, a ransomware family can be as destructive as an old school Mafia family. In June, meat processor JBS paid an $11 million ransom to cybercriminals after its plants, including one in Worthington, were shut down by a cyberattack.

It followed a May episode in which Colonial Pipeline Co. paid a $4.4 million ransom to hackers so it could resume the flow of fuel on the East Coast.

“Ransomware is evolving and it’s becoming more sophisticated,” said Charles Horton, COO of NetSPI. “You don’t have just singular threat actors looking for weaknesses.”

Minneapolis-based NetSPI started marketing a new cybersecurity service in June just as businesses large and small were rattled by the scale and brazen nature of those attacks.

October has been Cybersecurity Awareness Month since it was launched in 2004 by the U.S. Department of Homeland Security and the National Cyber Security Alliance. In recent years, cyberattacks have been elevated as a top concern of business executives, because of the damage being done by cyberthieves and the need to constantly identify and combat new threats.

President Joe Biden issued a statement on Friday addressing the topic. “I am committed to strengthening our cybersecurity by hardening our critical infrastructure against cyberattacks, disrupting ransomware networks, working to establish and promote clear rules of the road for all nations in cyberspace, and making clear we will hold accountable those that threaten our security,” Biden said.

Often a web of people works together to attack a business, a nonprofit, or a public agency. First come the malware creators, Horton said. Then other bad actors “go out and find the vulnerabilities, which could be different than the groups that actually execute the ransomware.” He noted these players are “chained together” in an operating model.

Horton said some businesses have a false sense of security about what level of protection their current cybersecurity systems provide.

‘Ransomware attack simulation’

“The gap that we found is with event monitoring tools,” he said. “They only identify a very low percentage of the most common attacks.” NetSPI now offers a “ransomware attack simulation” service.

NetSPI’s product mimics the “tactics, techniques and procedures” used by ransomware attackers, so more threats can be detected, Horton said. “When they do [find them], alerts just start firing off and a customer or a business can execute their response plan.”

“Breach and attack simulation” is a new market segment in cybersecurity, in which companies can test their ability to block ransomware attacks, according to a 2021 report from global advisory firm Gartner.

NetSPI sells its new technology-enabled service directly to customers. NetSPI provides cybersecurity offerings to nine of the 10 top banks in the United States, Horton said. What it will charge for the simulation service depends on the depth and breadth of the attack simulation assessment.

“We are up to more than 200 different attack plays that we can run on a daily basis in a business environment,” Horton said, which are designed to prevent attackers from installing malware, accessing data, and then demanding a hefty ransom.

Read the rest of the Twin Cities Business article here:


NetSPI Software Security Expert Nabil Hannan Featured on BBC World News

NetSPI Managing Director Nabil Hannan sat down with BBC World News anchor Lewis Vaughan Jones on October 4, 2021 to talk about a global social media outage. Nabil discusses the current state of the world’s software ecosystem, what it means for modern businesses, and the potential of a passwordless world.

Watch the clip below:


TechTarget: 5 principles for AppSec program maturity

On October 4, 2021, NetSPI Managing Director Nabil Hannan was featured as a guest contributor for TechTarget:

Software and applications are present in everything from consumer goods to medical devices to submarines. Many organizations are evaluating their application security, or AppSec, to ensure their strategies are mature and not vulnerable to cyber attacks.

According to Forrester Research, applications remain a top cause of external breaches. The prevalence of open source, APIs and containers only adds to the complexity of the problem.

Most organizations struggle to understand how to approach AppSec program maturity. Given many organizations have switched from Waterfall to Agile in their software development lifecycle (SDLC), practitioners are asking, “How do we continue to evolve our AppSec programs?”

Roadmaps can help navigate these issues. Organizations looking to develop mature programs need to be mindful of inherent team biases. For example, if the AppSec team comes from a pen testing background, the program may lean toward pen testing; if the team is experienced in code review, that bias may shine through instead. While both disciplines are important and should be part of an AppSec program, these experiences can introduce bias when a more objective approach is needed.

Many mature AppSec frameworks exist, but a one-size-fits-all approach is not going to work. Every organization has unique needs and objectives around thresholds, risk appetite and budgets. This is largely why prescriptive frameworks, such as Microsoft Security Development Lifecycle, Software Security Touchpoints or Open Software Assurance Maturity Model, are not the answer. It’s best to tailor roadmaps to the specific needs and objectives of a particular organization.

5 principles for implementing an AppSec program

These five tenets can serve as a guide for implementing a mature AppSec program.

  1. Make sure people and culture drive success
  2. Insist on governance in the SDLC
  3. Strive for frictionless processes
  4. Employ risk-based pen testing
  5. Determine when to use automation in vulnerability discovery

Read Nabil’s 5 principles for AppSec program maturity on TechTarget’s SearchSecurity:

Discover why security operations teams choose NetSPI.