
Niloo Razi Howe Joins NetSPI Board of Directors

Niloo will leverage her experience leading corporate and product strategies across the cybersecurity industry to support NetSPI’s future growth.

Minneapolis, MN – September 27, 2023 – NetSPI, the global leader in offensive security, today announced the appointment of Niloo Razi Howe to its Board of Directors. Niloo brings a strong track record of guiding companies through major market disruption and will support NetSPI at a pivotal moment as the company revamps its product strategy.

“The attack surface is expanding as new technologies are implemented at a breakneck pace. If you’re not continuously validating your security posture, you’re leaving your business wide open to evolving threats,” shared Niloo. “Having an offensive, adversarial mindset is critical to figuring out how to secure your business and build resiliency. I’m thrilled to support the NetSPI team as they continue to build solutions to these real, table stakes issues and help organizations get proactive with their security.” 

Niloo has been an investor, executive, and entrepreneur in the technology industry for the past 25 years, with a focus on cybersecurity for the past 15. She currently serves on the Boards of Directors of Pondurance, Tenable, CompoSecure, Recorded Future, and Swimlane, among other notable cybersecurity companies. Prior to these appointments, Niloo was Chief Strategy Officer at global cybersecurity companies RSA and Endgame, where she led corporate strategy, development, and planning. Niloo also serves on several U.S. government advisory boards, including the Cybersecurity and Infrastructure Security Agency’s (CISA) Advisory Council.

“Niloo’s experience advising and leading high-growth, innovative cybersecurity companies is unmatched,” said Aaron Shilts, CEO at NetSPI. “She is exceptional at looking to the future and determining how organizations must adapt and evolve to succeed – and we couldn’t be more excited to have her join NetSPI during this period of evolution and disruption in offensive security.” 

This appointment follows consecutive years of high growth for NetSPI. In 2022, the offensive security company achieved 58 percent organic revenue growth, driven by continued adoption of its Penetration Testing as a Service (PTaaS), Attack Surface Management (ASM), and Breach and Attack Simulation (BAS) platforms. Niloo will be instrumental in advising NetSPI’s product roadmap and vision.

Learn more about NetSPI, its leadership team, and Board of Directors at https://www.netspi.com/about-us/.  

About NetSPI 

NetSPI is the global leader in offensive security, delivering the most comprehensive suite of penetration testing, attack surface management, and breach and attack simulation solutions. Through a combination of technology innovation and human ingenuity, NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. Its global cybersecurity experts are committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers and e-commerce companies, and many of the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with offices across the U.S., Canada, the UK, and India.

Media Contacts:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277

Jessica Bettencourt, Inkhouse for NetSPI
netspi@inkhouse.com
(774) 451-5142


Shaping the Future of AI and Cybersecurity

2023 is sure to be remembered as the year of artificial intelligence (AI). With the rapid adoption of ChatGPT, the rise of new job titles like “Prompt Engineer,” and nearly every workforce researching how the technology could be applied to their industry — AI entered the scene as a force to be reckoned with. 

NetSPI joined in with the launch of AI Penetration Testing to help teams bring their AI/ML implementations to market while staying confident in the security of their creations. We launched this new solution at Black Hat, which also saw a strong theme of AI technologies from attendees and vendors across the conference. Read our show recap here.

These are just a few headlines showing how AI took center stage in 2023. But at a high level, AI is still in its infancy. Teams are researching how they can use this new technology, and the results will continue to play out in coming years.  

With this in mind, we wanted to understand how our cybersecurity partners have approached AI so far and where they see opportunities to utilize it going forward. Read on to hear perspectives on AI in cybersecurity from several security experts.

Meet the Contributors 

This roundup includes contributions from NetSPI Partners and NetSPI’s Managing Director, Phil Morris. Learn more about our Partner Program.

How do you envision AI enabling the cybersecurity industry?  

“Artificial intelligence stands to be a viable force multiplier for organizations that embrace it, unlocking the power of dormant data and drastically speeding time to insight. AI is transforming the industry by automating tasks, increasing efficiency and saving time. AI automation will serve as a framework across industries and will be a key player in tasks and projects. AI algorithms have also become a major contributor to marketing campaigns, as AI predicts consumer behaviors and trends.”

Daniel Alonso, Flagler Technologies Technologist 

“Artificial intelligence is a powerful tool for enhancing security and compliance. Algorithms capable of analyzing massive amounts of data can quickly identify threats, flag vulnerabilities and misconfigurations, and prevent attacks. By continually analyzing systems, controls, and processes against security frameworks and regulatory standards, AI and automation tools can alert security teams to any deviations or potential security risks. This real-time visibility increases overall efficiency and frees security teams to focus on complex, high-priority business initiatives.” 

Shrav Mehta, Secureframe CEO 

“Machine learning and similar predictive analysis are already used extensively to identify patterns. Sometimes those patterns identify a ‘baseline’ so that outliers can be spotted faster, and sometimes those patterns reveal how adversaries explore and attack your network.
Generative AI (now often just referred to as ‘AI’) is really a ‘word predictor’ in that it absorbs a corpus of data, trains itself to predict language patterns in that data, and then uses those patterns to predict how to answer a question based on that corpus (or one similar to it). So, whenever you have lots of data (typically unstructured or overly ‘messy’ data stored in persistent storage or coming through complex data streams), GenAI can help you make some sense of it.
I think that where we’re going to see a lot of experimentation is with the idea of ‘mini-AIs’, where we’ve built or extended a general AI model to be used as a niche platform to help us solve a more specific use case. We’re seeing that now in large language models being used to identify how to ‘hack’ a network or organization, and Microsoft is using that model in the development of its many ‘Copilots’.”

Phil Morris, NetSPI Managing Director 

“At Fellsway Group, we see AI as a transformative force that has the potential to revolutionize industries across the board. The application of AI technologies opens a world of opportunities for businesses to enhance efficiency, optimize processes, and drive innovation. Across industries, we envision AI enabling: 

  1. Process Optimization: AI can analyze vast amounts of data in real-time, identifying patterns, trends, and anomalies that humans might miss. This capability allows for the optimization of complex manufacturing processes, supply chain management, and resource allocation. 
  2. Predictive Maintenance: AI-powered predictive maintenance can help organizations reduce downtime and operational costs by identifying potential equipment failures before they occur. This approach allows for timely maintenance and prevents costly breakdowns. 
  3. Quality Control: AI-driven quality control systems can ensure product consistency and minimize defects by detecting minute variations that might go unnoticed through traditional inspection methods. 
  4. Personalized Marketing: AI can analyze customer data to create highly targeted marketing campaigns, tailoring product recommendations and offers based on individual preferences and behaviors. 
  5. Supply Chain Management: AI can optimize supply chain logistics, predicting demand patterns, optimizing inventory levels, and enhancing delivery routes to reduce costs and improve overall efficiency. 
  6. Safety and Risk Mitigation: AI-enabled sensors and systems can monitor safety conditions in hazardous environments, reducing risks to human workers. Additionally, AI can model and simulate potential risks to identify ways to mitigate them.” 
Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant 

“AI will be a tool that security teams can use for multiple purposes. It will help them process massive amounts of data, allow them to scale with smaller teams and help the security operators get to the information that requires real intelligence to decipher. It will also help them write automation into their processes by providing code snippets and reusable functions to get to their end automation goals more rapidly.” 

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

On the other hand, what risks or drawbacks do you see associated with AI? 

“AI aligns with the ethical position of the user; it can be used for negative purposes as easily as it can be used for the betterment of humanity. AI is so powerful and disruptive that there can be concerns for privacy and potential market volatility. This is a key reason getting ahead of the AI intelligence in your industry is so important.” 

Daniel Alonso, Flagler Technologies Technologist 

“There are a few potential risks organizations must consider when evaluating and implementing AI tools. First and foremost is avoiding over-reliance on AI. For example, generative AI can be used for policy creation, but at best AI algorithms can generate baseline policies that will need human input and expertise to be tailored to the organization.
It’s also important to understand accuracy when querying AI, especially for platform-specific configurations and deep troubleshooting. For example, organizations may be using Github Copilot to generate code, but the tool might not have access to the company’s entire codebase to know best practices. As a result, it might generate code with security flaws or code that does not follow the standards set in the rest of the system.
Finally, it’s essential for companies to consider data security and privacy when using AI tools. As with any vendor, knowing what data is shared and how it’s used is incredibly important for your overall security posture. When evaluating AI vendors, find out if there’s a way to ensure only anonymized data is flowing into the tool, as well as where data is being stored and processed and how long it’s retained.” 

Shrav Mehta, Secureframe CEO

“Despite the hype that we’ve all seen over the first half of 2023, I don’t think that ‘AI’ projects are going to be successful without keeping a human component in the mix — it’s just too unreliable and lacks context in high-risk situations. 
That might change, but if I were exploring business cases in healthcare, life sciences, or financial advising, I’d be hesitant to just ‘take the system’s word for it.’ Remember, experience shows that more than 90 percent of AI/ML projects end up being rated, shall we say, less than successful.
From my research it seems that most of the issues are not technical ones, but center on the problems the business is trying to solve and/or rest on mistaken assumptions about both the results of the project and how the system or platform is built to get to that point. To many teams, this tech is just close enough to what they’ve been working with to seem like an easy reach, but in truth it’s a whole new way to look at projects and outcomes.”

Phil Morris, NetSPI Managing Director 

“For the security operations team, a significant risk with AI is trusting it too much. AI isn’t actually intelligent, it just can often do a good job of appearing to be intelligent. At the end of the day, it’s just a tool and if you don’t have great human intelligence utilizing that tool and corralling what the output is you will end up with garbage output and holes in your defenses. The other risk that AI brings is on the attacker side. Since AI is just a tool, the same technology can be used to bring down defenses in ever more aggressive ways — and attackers are already quite good at automation. Attackers also don’t suffer the downside risk of AI making mistakes because mistakes are unlikely to hurt them at all.”  

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

“While the potential benefits of AI are vast, it’s important to acknowledge and address the potential risks and drawbacks: 

  1. Bias and Fairness: AI systems can inadvertently inherit biases present in the data they are trained on, leading to biased outcomes and unfair decisions. 
  2. Job Displacement: Automation through AI could lead to job displacement in certain industries, potentially impacting the workforce. 
  3. Security Concerns: As AI systems become more interconnected and integrated, they could become targets for cyberattacks if not properly secured. 
  4. Privacy Issues: The use of AI in data analysis can raise concerns about the privacy and security of personal and sensitive information. 
  5. Ethical Considerations: Decisions made by AI systems might not always align with human ethical values, leading to difficult ethical dilemmas.” 
Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant

Can you share any advice for how security teams can approach getting started with AI? 

“Yes—forget about losing your job to an AI platform—that isn’t going to happen, and if your C-suite is planning layoffs due to their new AI project, you should probably be working somewhere more grounded in the first place. 
I came into it via my analytics background, but I quickly discovered that—like many other emerging technologies of the past—things like security, privacy, and auditability haven’t been figured into the equation. If you can poison a training dataset, you’ve corrupted a model, and thousands (or millions) of dollars could be spent before you realize that. Alternatively, training data has huge privacy risks (and now, copyright risks) that need to be considered. 
AI is based on data and AI projects are based on well-known data processing pipelines, so don’t be overwhelmed by asking standard security- and risk-related questions — just don’t expect easy answers. Having a partnership with subject matter experts who understand how these systems are grown, how they evolve, and how they need to be protected is a good first step. Then you can learn from them and specialize wherever your heart takes you.”  

Phil Morris, NetSPI Managing Director 

“Absolutely. AI is typically used to answer a question or correlate data to provide a condition. Make sure your security data, syslog, streams, and all other relevant points are part of your solution’s result! Continuing to improve security policies and identifying new risks will be at the forefront for security teams in tech.”

Daniel Alonso, Flagler Technologies Technologist 

“Security teams must create an AI strategy that’s aligned with the organization’s overarching business and security objectives. AI-powered tools can help with a range of challenges, from complex tasks like continuous security monitoring, intelligent threat detection, and faster incident response, to simple tasks like creating new tabletop exercise prompts. Security teams should start by identifying specific cybersecurity challenges AI can help address.”

Shrav Mehta, Secureframe CEO 

“I would recommend starting with vendor tools on the data processing front. Look for vendors that can show clear ROI in saving people time through their use of AI and machine learning. Everyone is promising this but not everyone is delivering, so be careful to analyze the vendor’s claims for real outcomes and talk to references that have their own ROI models if possible.”

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

“For security teams looking to leverage AI, here are some key steps to consider: 

  1. Education and Training: Ensure that your security team has a solid understanding of AI concepts, algorithms, and potential applications in the security domain. 
  2. Identify Use Cases: Identify specific use cases where AI can enhance security operations, such as threat detection, anomaly detection, and fraud prevention. 
  3. Data Preparation: Data is crucial for training AI models. Gather high-quality, diverse, and relevant data to build effective AI systems. 
  4. Collaborate with Experts: Work with AI experts and data scientists to develop and implement AI solutions tailored to your security needs. 
  5. Test and Validate: Thoroughly test AI models in controlled environments to ensure their accuracy, robustness, and effectiveness before deploying them in critical security operations. 
  6. Monitor and Update: Continuously monitor AI systems for performance and adapt them as new threats and challenges emerge. 
  7. Ethical Considerations: Keep ethical considerations at the forefront. Ensure transparency, fairness, and accountability in AI-driven security decisions. 

By approaching AI implementation with a well-informed and strategic mindset, security teams can harness its power while mitigating potential risks. At Fellsway Group, we believe that responsible and thoughtful integration of AI can lead to significant advancements in industry and security alike.” 

Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant 

These responses convey the research, hard work, and preparation that goes into determining how a company can best apply AI in their day-to-day business. One recurring theme is the need to include human analysis to verify data going into AI models and the results coming out. After all, AI can only be as smart as the person using it.  

Interested in sharing your perspective with us? Tweet us anytime @NetSPI.  

This article was written in collaboration with NetSPI’s Partners. Learn more about becoming a NetSPI partner here.

How to Get Away with Murder Macros

Have you ever felt personally victimized by Burp Suite’s Macros? Well fear not, after watching these three videos and following along with the exercises (including a custom practice app that we made just for you!), you’ll be a Macro Magician in no time! 

  1. Basics of Macros
  2. Gathering Dynamic Values
  3. Macros for Complex Situations

Basics of Macros

In this first video, I cover a few basics of macros: what they are, why we might use them, and two demos of basic usage.

I recommend that while watching the video, you follow along with the demos that use this lab: https://portswigger.net/web-security/csrf/lab-no-defenses

(Psst, a side quest for these three videos is to count the number of Scannys that appear!)

Gathering Dynamic Values

In this second video, I cover the next layer of complexity with macros: gathering dynamic values from responses and using them in following requests. I also touch on some related extensions: 

  • Token Extractor (this one is covered the most)
  • Custom Parameter Handler
  • Add Custom Header
  • Authentication Token Obtain and Replace
  • Stepper

Again, I recommend that you follow along with the demo using this lab: https://portswigger.net/web-security/csrf/bypassing-token-validation/lab-token-not-tied-to-user-session

For additional practice on this same concept, as well as incorporating some elements from the first video, I recommend downloading OWASP’s Juice Shop and creating a login macro. Note that the tricky part there is that the login response doesn’t contain a “Set-Cookie” header.

This one might be a bit complex, but another practice lab could be to make a macro to gather the CSRF token and CSRF key (cookie) to repeatedly change a user’s email via this lab: https://portswigger.net/web-security/csrf/bypassing-token-validation/lab-token-tied-to-non-session-cookie. Assume that you need to use a new CSRF Key cookie and CSRF token in each email change request. Hint: you’ll have two requests in the macro, one to get the initial CSRF token, CSRF key cookie, and unauthenticated session cookie, and the other to get the authenticated session. 
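Outside of Burp, the token-gathering step at the heart of that macro boils down to pulling a value out of one response so it can be replayed in the next request. Here is a minimal, Burp-free sketch of that extraction step in Python; the HTML shape below is an assumption modeled loosely on the PortSwigger labs, not their exact markup:

```python
import re

def extract_csrf_token(html):
    """Pull the value of the hidden 'csrf' input out of a response body,
    mimicking what a Burp macro's parameter-extraction step does for you."""
    match = re.search(r'name="csrf"\s+value="([^"]+)"', html)
    return match.group(1) if match else None

# Example response body shaped like a lab login page (hypothetical markup)
sample = '<input required type="hidden" name="csrf" value="abc123XYZ">'
print(extract_csrf_token(sample))  # abc123XYZ
```

In Burp you would point the macro's custom parameter location at the same spot with a start/end delimiter or a regex, rather than writing the code yourself.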

Macros for Complex Situations

In this last video, I cover and demonstrate macros for very complex situations that require multiple requests and multiple variable updates. 

The key steps that I cover for dealing with complex macros are:

  1. Replicate browser behavior in Repeater
  2. Look for reductions in steps
  3. Write down required URLs
     • Optionally: mark where parameters are set and used
  4. Select Macro steps and test
  5. Alternate between setting tokens and testing your Macro
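To get a feel for what the finished macro does under the hood, the chained flow can be sketched as a short Python loop: each step issues a request, harvests named values from the response, and substitutes them into the next URL. The step format, URLs, and stub transport here are illustrative assumptions, not anything Burp exposes:

```python
import re

def run_macro(steps, send):
    """Run a chain of request steps, carrying extracted values forward.
    Each step is (url_template, {name: regex}); `send` performs the request."""
    values = {}
    response = None
    for url_template, extractors in steps:
        url = url_template.format(**values)       # substitute earlier tokens
        response = send(url)                      # issue the request
        for name, pattern in extractors.items():  # gather dynamic values
            match = re.search(pattern, response)
            if match:
                values[name] = match.group(1)
    return values, response

# Stub transport standing in for real HTTP, for illustration only
def fake_send(url):
    if url == "/login":
        return 'csrf token: "tok42"'
    return f"welcome, you sent {url}"

values, last = run_macro(
    [("/login", {"csrf": r'token: "([^"]+)"'}),   # step 1: grab the token
     ("/change-email?csrf={csrf}", {})],          # step 2: replay it
    fake_send,
)
print(values)  # {'csrf': 'tok42'}
```

Burp's session handling rules automate exactly this substitution loop for you; the payoff of the checklist is knowing which URLs and parameters belong in each step before you configure it.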

Unsurprisingly, I recommend that you follow along with the demo using this lab: https://portswigger.net/web-security/oauth/lab-oauth-authentication-bypass-via-oauth-implicit-flow

Because that last concept is a real doozy, and you may not even feel confident after following along, we’ve built a custom application (RIGHT HERE) for you to run to be able to practice the concepts taught in all 3 of these videos.

What is this app?
A stock trading app with a multi-step process. In order to efficiently test for the stored cross-site scripting (XSS) vulnerability that exists in the application, you’ll need to make a macro! Also, be sure to have your volume up when you use the app…

Is there anything else I should know?
Here are the built-in users:

  Username | Password
  Hugo     | I8StinkySocks!
  Layla    | 2BirdsInHand!
  Silas    | 99Problems&UR!1

Here are the Authorized Tickers:

  • LUV
  • EAT
  • HOG
  • PLAY
  • BOOM
  • BEN
  • CAKE

I’m having a hard time replicating the flow, can I have a hint?
(Click to reveal hint)

    If there’s something you can’t see, remember to check your responses. Yes, we are encouraging you to test in Burp!

I’m still having a hard time, but this time I can’t figure out how to have the whole flow automated. CAN I PLEASE GET ANOTHER HINT?
(Click to reveal hint)

    Remember that you can have both pre and post-request macros…you might need both here 😉

Ok, I’ve got the macro and I’ve found the XSS, is there anything else I can do?

You can try to replicate testing for XSS in Intruder by making sure that you’re following redirects and using Grep – Extract to return the outcome of your payload.

Another thing you can try is to go in blindfolded and practice brute-forcing at each step assuming no prior knowledge. For example, brute-force usernames, passwords, the MFA code, and ticker. You might have a harder time brute-forcing the longer CSRF token or the transaction ID, but you’re welcome to do that too!

Now, with all of that, hopefully you’ve conquered any lingering fears of Macros and can use them to aid your testing process.

Remember that their uses aren’t just limited to the examples that I’ve shown above, so get creative with it! Proactively think about when the introduction of a Macro might either save you time or allow you to introduce automation. In the meantime, check out more of our technical blogs on Web Application Penetration Testing here.


Techopedia: Insider Threat Awareness Month Special: Security Experts Share 8 Ways to Address Insider Threats

During Insider Threat Awareness Month, Techopedia connected with 8 cybersecurity leaders and analysts who shared guidance on how organizations can protect their systems from threat actors. One of them was NetSPI Field CISO Nabil Hannan. Read the preview below or view it online.

+++

Defending your organization from threat actors outside your network is one thing. But it’s another thing entirely when they reside inside your organization.

A single malicious insider has the potential to use their access to resources to leak the high-value data, personally identifiable information (PII), and intellectual property they have access to on a day-to-day basis.

Research conducted by Cyberhaven has found that insider threats are so common that nearly one in 10 employees (9.4%) will exfiltrate data within a six-month period. Most commonly, data leaked includes customer data and source code.

This Insider Threat Awareness Month, Techopedia connected with some of the top security leaders and analysts in the enterprise market to examine how organizations can protect themselves against malicious insiders.

Below are their comments (edited for brevity and style).

8. Security Hygiene

“This National Insider Threat Awareness Month, it’s important to raise awareness around some of the most commonly exploited vulnerabilities within an organization’s internal network. According to NetSPI’s 2023 Offensive Security Vision Report – which is based on more than 300,000 pen testing engagements – we found that excessive internal permissions continue to plague organizations.

We witnessed network shares or SQL servers that unintentionally allowed access to all domain users, which often contain sensitive information, credentials to other services, or customer data (such as credit card numbers or PII).

Unexpected excessive privileges lead to a large number of internal users having access to unintended sensitive data. All it takes is one rogue employee to cause major damage.

Additionally, weak or default passwords continue to be used within organizations, especially when accessing internal networks that contain highly sensitive information.

Unlike interfaces exposed externally, interfaces on the internal network typically don’t require multi-factor authentication, making the likelihood of compromise much greater. Basic security hygiene, as well as an understanding of internal sharing protocols, can provide a solid foundation in bolstering protection against insider threats.”

– Nabil Hannan, Field CISO at NetSPI.

You can read the full article at https://www.techopedia.com/security-experts-share-8-ways-to-address-insider-threats.


Help Net Security: An Inside Look at NetSPI’s Impressive Breach and Attack Simulation Platform

Help Net Security interviews Scott Sutherland, VP of Research at NetSPI. They delve into the intricacies of the Breach and Attack Simulation (BAS) platform and discuss how it offers unique features – from customizable procedures to advanced plays – that help organizations maximize their ROI. Read the preview below or view it online.

+++

Can you provide a high-level overview of NetSPI’s Breach and Attack Simulation platform and what makes it unique?

We deliver a centralized detective control platform that allows organizations to create and execute customized procedures utilizing purpose-built technology and professional human pen-testers. Simulate real-world attack behaviors, not just IOCs, and put your detective controls to the test in a way no other organization can.

Can you speak to how organizations can visualize ROI through the NetSPI platform?

Breach and Attack Simulation solutions should help provide ROI in a variety of ways:

  • BAS solutions should provide data insights into where your detective and preventative control gaps are so you can make intelligent choices about where to invest your security dollars. This should include point-in-time and over-time reporting to justify or validate investments meaningfully. For example, this should include visualizations showing how investments in new data sources can increase alert coverage for common attack behaviors. Another typical example would be visualizing the increase in detection rule coverage that results from adding another detection engineer.
  • Recruiting, training, and educating pentest and SOC teams can take time and money. Most BAS tools should include educational material that your teams can use to understand how to execute and detect common attack behaviors within the application. This can save both time and money in the long run.
  • There are hundreds, if not thousands, of hacker tools. Researching, installing, and running them to simulate the newest malicious behavior can be time-consuming, and risky if the mechanisms aren’t well understood. BAS solutions can take that off your team’s plate so they can focus on doing the job of simulation, detection engineering, and control validation/tuning.
  • Finally, tracking the average ransomware trends can help people estimate the potential cost of the ransomware incidents that BAS solutions are designed to help prevent and detect.

Continue reading at https://www.helpnetsecurity.com/2023/09/19/netspi-breach-and-attack-simulation-platform/.


Network Computing: Industrial IoT Security Skills and Certifications: The Essentials

NetSPI Director of IoT and Hardware Pentesting Larry Trowell was featured in a Network Computing article offering five tips to help network managers get started with industrial network security. Read the preview below or view it online.

+++

What skills do network managers really need to properly secure industrial networks? What new protocols, frameworks, and regulations are important? And what conferences and certifications can help? Here are five tips to get started.

Whether you’re working in a water treatment plant or running the infrastructure for an energy company, network managers need training in the right skill sets to avoid cyber-attacks. Many options exist for technologists to address the cybersecurity skills gap in the Industrial Internet of Things (IIoT).

“There is no one-size-fits-all guideline for the skills and staff required to effectively and (equally important in the real world) efficiently secure an industrial system,” says John Pescatore, director of emerging security trends at the SANS Institute. “The overall maturity of IT operations and governance is a huge driver.”

Pescatore adds that “sloppy IT administration is the biggest driver behind most security incidents.”

Here are five tips on acquiring the skills needed in an IIoT environment:

1) Attend industry conferences

To gain knowledge in IIoT, attend training sessions in industrial control systems at the annual Black Hat conference, recommends Larry Trowell, director at penetration-testing company NetSPI. (Black Hat is owned by the same parent company as Network Computing.)

“It’s a two-day course and the best training I’ve seen for IIoT networks,” Trowell says. “It gives a basic overview and covers how to do passive analysis and wireless and software configurations.”

Become familiar with the operations technology (OT) mindset and architecture, advises Anand Oswal, senior vice president and general manager of network security at Palo Alto Networks. “The OT mindset is all around uptime, safety, and security, and we need to be familiar with that mindset.”

You can read the full article at https://www.networkcomputing.com/data-centers/industrial-iot-security-skills-and-certifications-essentials.


VMblog: National Insider Threat Awareness Month 2023 Expert Roundup

NetSPI Field CISO Nabil Hannan shares advice on key vulnerabilities to be aware of during National Insider Threat Awareness Month. Read a preview below or view it online here.

+++

National Insider Threat Awareness Month (NITAM) is an annual, month-long campaign that takes place in September to educate government and industry about the risks posed by insider threats and the role of insider threat programs. This year’s theme is “Bystander Engagement,” which emphasizes the importance of all employees being aware of and reporting suspicious activity.

Insider threats are one of the most significant security risks facing organizations today. They can come from a variety of sources, including disgruntled employees, malicious insiders, and careless insiders. Insider threats can cause significant damage to an organization, including data breaches, financial losses, and reputational harm.

NITAM is a critical opportunity for organizations to raise awareness of insider threats and to implement effective insider threat programs. By educating employees about the risks and by encouraging them to report suspicious activity, organizations can help to protect themselves from insider threats.

Expert Commentary

In this roundup article, we share commentary from a number of industry experts on the importance of insider threat awareness. We hope this article helps raise awareness of insider threats and encourages organizations to take the steps needed to protect themselves.

Nabil Hannan, Field CISO, NetSPI

“This National Insider Threat Awareness Month, it’s important to raise awareness around some of the most commonly exploited vulnerabilities within an organization’s internal network. According to NetSPI’s 2023 Offensive Security Vision Report – which is based on more than 300,000 pentesting engagements – we found that excessive internal permissions continue to plague organizations. We witnessed network shares or SQL servers that unintentionally allowed access to all domain users, which often contain sensitive information, credentials to other services, or customer data (such as credit card numbers or PII). Unexpected excessive privileges lead to a large number of internal users having access to unintended sensitive data. All it takes is one rogue employee to cause major damage.

Additionally, weak or default passwords continue to be used within organizations, especially when accessing internal networks that contain highly sensitive information. Unlike interfaces exposed externally, interfaces on the internal network typically don’t require multi-factor authentication, making the likelihood of compromise much greater. Basic security hygiene, as well as an understanding of internal sharing protocols, can provide a solid foundation in bolstering protection against insider threats.”

Read the full article at https://vmblog.com/archive/2023/09/12/national-insider-threat-awareness-month-2023-expert-roundup-bystander-engagement.aspx.


CSO: Emerging cyber threats in 2023 from AI to quantum to data poisoning

NetSPI CISO Norman Kromberg was featured in CSO’s latest article on emerging threats in 2023. Read a preview below or view it online here.

+++

In cybersecurity’s never-ending cat-and-mouse game with hackers and grifters, the threats are always evolving. Here are some of the main attacks experts see as the biggest and baddest on the horizon.

Companies using Microsoft Teams got news earlier in the summer of 2023 that a Russian hacker group was using the platform to launch phishing attacks, putting a new spin on a long-known attack strategy. According to Microsoft Threat Intelligence, the hackers, identified as Midnight Blizzard, used Microsoft 365 tenants owned by small businesses compromised in previous attacks to host and launch new social engineering attacks.

Threats evolve constantly as hackers and grifters gain access to new technologies or come up with new ways to exploit old vulnerabilities. “It’s a cat and mouse game,” says Mark Ruchie, CISO of security firm Entrust.

Phishing remains the most common attack, with the 2023 Comcast Business Cybersecurity Threat Report finding that nine out of 10 attempts to breach its customers’ networks started with a phish.

The volume and velocity of attacks have increased, as have the costs incurred by victims, with the 2022 Official Cybercrimes Report from Cybersecurity Ventures estimating that the cost of cybercrime will jump from $3 trillion in 2015 to a projected $10.5 trillion in 2025.

At the same time, security leaders say they see new takes on standard attack methods — such as the attacks launched by Midnight Blizzard (which has also been identified by the names APT29, Cozy Bear and NOBELIUM) — as well as novel attack strategies. Data poisoning, SEO poisoning and AI-enabled threat actors are among the emerging threats facing CISOs today.

“The moment you agree to be a CISO, you agree to get into a race you never win completely, and there are constantly evolving things that you have to have on your screen,” says Andreas Wuchner, field CISO for security company Panaseer and a member of the company’s advisory board.

Preparing for what’s next

A majority of CISOs are anticipating a changing threat landscape: 58% of security leaders expect a different set of cyber risks in the upcoming five years, according to a poll taken by search firm Heidrick & Struggles for its 2023 Global Chief Information Security Officer (CISO) Survey.

CISOs list AI and machine learning as the most significant cyber risk theme, with 46% saying as much. CISOs also list geopolitical threats, cloud, quantum, and supply chain attacks among the other top cyber risk themes.

Authors of the Heidrick & Struggles survey noted that respondents offered some thoughts on the topic. For example, one wrote that there will be “a continued arms race for automation.” Another wrote, “As attackers increase [the] attack cycle, respondents must move faster.” A third shared that “Cyber threats [will be] at machine speed, whereas defenses will be at human speed.”

The authors added, “Others expressed similar concerns, that skills will not scale from old to new. Still others had more existential fears, citing the ‘dramatic erosion in our ability to discern truth from fiction.'”

Security leaders say the best way to prepare for evolving threats, and any new ones that might emerge, is to follow established best practices while layering in new technologies and strategies to strengthen defenses and build proactive elements into enterprise security.

“It’s taking the fundamentals and applying new techniques where you can to advance [your security posture] and create a defense in depth so you can get to that next level, so you can get to a point where you could detect anything novel,” says Norman Kromberg, CISO of security software company NetSPI. “That approach could give you enough capability to identify that unknown thing.”

You can read the full article at https://www.csoonline.com/article/651125/emerging-cyber-threats-in-2023-from-ai-to-quantum-to-data-poisoning.html.


Power Up Your Azure Penetration Testing

As businesses continue to embrace the cloud, the spotlight falls on safeguarding their growing digital environment. At Black Hat, NetSPI VP of Research Karl Fosaaen sat down with the host of the Cloud Security Podcast Ashish Rajan to discuss all things Azure penetration testing. In an era of constantly evolving technology and escalating cyber threats, voices like Karl’s become the bedrock of resilience for today’s cloud security. 

Catch the highlights below and watch the full episode here.

How is Azure pentesting different than AWS pentesting? 

Each cloud provider has its own identity platforms, so working within the platforms will be inherently different. For example, in AWS you might have IAM accounts, policies, roles, and groups, but within Azure, you’ve got a completely separate identity system through Azure Active Directory, soon to be Entra ID. 

“There’s a lot of overlap between the two different cloud providers — or any different cloud provider. When we built up our methodologies for doing cloud pentesting, we tried to make the methodologies vendor agnostic so they’d apply to any cloud vendor we’re working with.” 

Is cloud pentesting just configuration review? 

Configuration review is an important component of cloud pentesting, but from our perspective, it is a component that informs the pentesting. Configuration review focuses on what’s exposed to the internet, or what internal networking looks like across virtual networks. Pentesting takes it to the next level by trying to find application and network vulnerabilities, and abuses of those misconfigurations that could be used to gain access. 

“I think that’s the key component that might be missing for folks who see cloud pentesting as just config review. To actually pentest it, we have to exploit the vulnerabilities and show the potential impact there.” 

How would you compare cloud pentesting to network pentesting?  

There’s a lot of overlap between cloud pentesting and network pentesting. Karl’s background is in external and internal network pentesting, and a lot of the skills he gained early in his career carry over to cloud pentesting. Many organizations bring their on-prem applications and virtual machines up into the cloud, so the core principles of network security apply to the cloud too.  

“Those same pentesting principles that we had from network pentesting of identifying live services, seeing how we can exploit them, trying to identify vulnerabilities, it’s the same kind of ideas just applied to the cloud context.” 

What’s your thought process when you go down the path of an Azure penetration test? What’s your first step?  

Every engagement is unique, so it depends on the resources within the environment. Start by establishing a baseline. For example, when comparing AWS and Azure, the concept of passing a role to an AWS service has a similar counterpart in Azure: managed identities that you can attach to a specific service. Start by looking at what managed identities are out there, what roles those resources hold, where identities are attached, and who has rights to what, then try to formulate a path toward compromising an asset that could allow you to pivot to something else. Escalating this way builds out a mental map that provides a baseline of the environment you’re in. 

“It’s really just getting a rough idea of what’s in the environment, situational awareness, identifying where your attack paths might be, and additionally, where the identities are.”  
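The pivot-chain idea Karl describes can be sketched as a small graph search. This is a minimal, hypothetical illustration: the identity and resource names are invented, and in a real engagement the edges would come from actual Azure role assignments and managed-identity attachments rather than a hard-coded dictionary.

```python
from collections import deque

# Hypothetical, simplified access graph: which identity or resource can
# reach which other asset. In practice these edges would be enumerated
# from the environment (role assignments, attached managed identities).
edges = {
    "user:dev-alice":    ["vm:build-server"],         # Contributor on the VM
    "vm:build-server":   ["mi:build-identity"],       # VM exposes a managed identity
    "mi:build-identity": ["kv:prod-keyvault"],        # identity can read the vault
    "kv:prod-keyvault":  ["secret:sql-admin-creds"],  # vault holds the target secret
}

def attack_paths(start, target):
    """Breadth-first search for pivot chains from a starting identity to a target asset."""
    queue = deque([[start]])
    paths = []
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                queue.append(path + [nxt])
    return paths

for p in attack_paths("user:dev-alice", "secret:sql-admin-creds"):
    print(" -> ".join(p))
```

Building even a rough map like this makes it clear which identities and attachments matter, which mirrors the situational awareness Karl recommends as a first step.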

Hear Karl and Ashish talk in-depth by listening to the full episode on Cloud Security Podcast’s LinkedIn page.


Ignite Innovation with NetSPI’s New AI/ML Penetration Testing 

AI/ML is being rapidly adopted across many aspects of business. It is transforming the way we work by reducing the effort and cost of completing tasks, and we are only at the beginning of this technology’s potential. As adoption and use cases continue to grow, it is critical that organizations understand the unique threats AI/ML introduces, identify weak spots, and build more resilient models.  

NetSPI’s industry-leading AI/ML pentesting solution draws on decades of manual penetration testing expertise across network, application, cloud, and more, and is designed specifically to identify, understand, and mitigate the risks of AI and ML models. This new solution helps you improve overall resilience to attacks and strengthen security with three unique offerings: 

  • The Machine Learning Security Assessment is designed to evaluate ML models, including Large Language Models (LLMs), against adversarial attacks, identify vulnerabilities, and provide actionable recommendations to ensure the overall safety of the model, its components, and their interactions with the surrounding environment. 
  • Our Infrastructure Security Assessment tests the infrastructure surrounding your model. This assessment covers network security, cloud security, API security, and more, ensuring that your organization’s deployment adheres to defense-in-depth security policies and mitigates potential risks. 
  • And finally, the Web Application Penetration Testing offering evaluates the security and reliability of web applications utilizing LLMs and other machine learning integrations. Leveraging sophisticated manual processes and automated tools, we identify vulnerabilities and risks specific to LLM-integrated functionality, providing actionable recommendations to enhance security and safeguard sensitive data. 

If you would like to learn more about our AI/ML Pentesting, check out our data sheet, or contact us for a demo.

This blog post is a part of our offensive security solutions update series. Stay tuned for additional innovations within Resolve (PTaaS), ASM (Attack Surface Management), and BAS (Breach and Attack Simulation). 

Read past solutions update blogs: 

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.
