
Shaping the Future of AI and Cybersecurity

2023 is sure to be remembered as the year of artificial intelligence (AI). With the rapid adoption of ChatGPT, the rise of new job titles like “Prompt Engineer,” and nearly every industry researching how the technology could be applied to its work, AI entered the scene as a force to be reckoned with.

NetSPI joined in with the launch of AI Penetration Testing to help teams bring their AI/ML implementations to market while staying confident in the security of their creations. We launched this new solution at Black Hat, where AI technologies were also a strong theme among attendees and vendors across the conference. Read our show recap here.

These are just a few headlines showing how AI took center stage in 2023. But at a high level, AI is still in its infancy. Teams are researching how they can use this new technology, and the results will continue to play out in coming years.  

With this in mind, we wanted to understand how our cybersecurity partners have approached AI so far and where they see opportunities to utilize it going forward. Read on to hear perspectives on AI in cybersecurity from several security experts.

Meet the Contributors 

This roundup includes contributions from NetSPI Partners and NetSPI’s Managing Director, Phil Morris. Learn more about our Partner Program.

How do you envision AI enabling the cybersecurity industry?  

“Artificial Intelligence stands to be a viable force multiplier for organizations that embrace it, unlocking the power of dormant data and drastically speeding time to insight. AI is transforming the industry by automating tasks, which increases efficiency and saves time. AI automation will serve as a framework across industries and play a key role in tasks and projects. AI algorithms have also become a huge contributor to marketing campaigns, as AI predicts consumer behaviors and trends.”

Daniel Alonso, Flagler Technologies Technologist 

“Artificial intelligence is a powerful tool for enhancing security and compliance. Algorithms capable of analyzing massive amounts of data can quickly identify threats, flag vulnerabilities and misconfigurations, and prevent attacks. By continually analyzing systems, controls, and processes against security frameworks and regulatory standards, AI and automation tools can alert security teams to any deviations or potential security risks. This real-time visibility increases overall efficiency and frees security teams to focus on complex, high-priority business initiatives.” 

Shrav Mehta, Secureframe CEO 

“Machine Learning and similar predictive analysis are already used extensively to identify patterns. Sometimes those patterns identify a ‘baseline’ so that outliers can be identified faster, and sometimes those patterns can be how adversaries explore and attack your network.
Generative AI (now just referred to as ‘AI’) is really a ‘word predictor’ in that it absorbs a corpus of data, trains itself to predict language patterns in that data, and then uses those patterns to predict how to answer a question based on that corpus (or one similar to it). So, whenever you have lots of data (typically unstructured or overly ‘messy’ data stored in persistent storage or coming through complex data streams), GenAI can help you make some sense of it.
I think that where we’re going to see a lot of experimentation is with the idea of ‘mini-AIs’, where we’ve built or extended a general AI model to be used as a niche platform to help us solve a more specific use case. We’re seeing that now in large language models being used to identify how to ‘hack’ a network or organization, and Microsoft is using that model in the development of its many ‘Copilots’.”

Phil Morris, NetSPI Managing Director 
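
Morris’s point about baselines and outliers is easy to see in code. Below is a minimal sketch, not taken from any contributor’s tooling: the hourly login counts and the threshold are invented for illustration, and a simple z-score test stands in for the richer models real platforms use.

```python
import statistics

def zscore(baseline, value):
    """How many standard deviations `value` sits from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return abs(value - mean) / stdev if stdev else 0.0

# Hypothetical hourly login counts observed during normal operation.
baseline = [52, 48, 50, 55, 47, 51, 49, 53, 46, 50]

for observed in [54, 410]:
    status = "ANOMALY" if zscore(baseline, observed) > 3.0 else "ok"
    print(f"{observed} logins/hour -> {status}")
```

Production systems would replace the static list with a rolling window and a far richer feature set, but the shape of the idea is the same: learn what normal looks like, then score new events against it.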

“At Fellsway Group, we see AI as a transformative force that has the potential to revolutionize industries across the board. The application of AI technologies opens a world of opportunities for businesses to enhance efficiency, optimize processes, and drive innovation. Across industries, we envision AI enabling: 

  1. Process Optimization: AI can analyze vast amounts of data in real-time, identifying patterns, trends, and anomalies that humans might miss. This capability allows for the optimization of complex manufacturing processes, supply chain management, and resource allocation. 
  2. Predictive Maintenance: AI-powered predictive maintenance can help organizations reduce downtime and operational costs by identifying potential equipment failures before they occur. This approach allows for timely maintenance and prevents costly breakdowns. 
  3. Quality Control: AI-driven quality control systems can ensure product consistency and minimize defects by detecting minute variations that might go unnoticed through traditional inspection methods. 
  4. Personalized Marketing: AI can analyze customer data to create highly targeted marketing campaigns, tailoring product recommendations and offers based on individual preferences and behaviors. 
  5. Supply Chain Management: AI can optimize supply chain logistics, predicting demand patterns, optimizing inventory levels, and enhancing delivery routes to reduce costs and improve overall efficiency. 
  6. Safety and Risk Mitigation: AI-enabled sensors and systems can monitor safety conditions in hazardous environments, reducing risks to human workers. Additionally, AI can model and simulate potential risks to identify ways to mitigate them.” 

Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant

“AI will be a tool that security teams can use for multiple purposes. It will help them process massive amounts of data, allow them to scale with smaller teams, and help security operators get to the information that requires real intelligence to decipher. It will also help them write automation into their processes by providing code snippets and reusable functions to get to their end automation goals more rapidly.”

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

On the other hand, what risks or drawbacks do you see associated with AI? 

“AI aligns with the ethical position of the user; it can be used for negative purposes as easily as it can be used for the betterment of humanity. AI is so powerful and disruptive that it raises concerns about privacy and potential market volatility. This is a key reason getting ahead of AI in your industry is so important.”

Daniel Alonso, Flagler Technologies Technologist 

“There are a few potential risks organizations must consider when evaluating and implementing AI tools. First and foremost is avoiding over-reliance on AI. For example, generative AI can be used for policy creation, but at best AI algorithms can generate baseline policies that will need human input and expertise to be tailored to the organization.
It’s also important to understand accuracy when querying AI, especially for platform-specific configurations and deep troubleshooting. For example, organizations may be using GitHub Copilot to generate code, but the tool might not have access to the company’s entire codebase to know best practices. As a result, it might generate code with security flaws or code that does not follow the standards set in the rest of the system.
Finally, it’s essential for companies to consider data security and privacy when using AI tools. As with any vendor, knowing what data is shared and how it’s used is incredibly important for your overall security posture. When evaluating AI vendors, find out if there’s a way to ensure only anonymized data is flowing into the tool, as well as where data is being stored and processed and how long it’s retained.” 

Shrav Mehta, Secureframe CEO
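
Mehta’s caveat about assistant-generated code is worth making concrete. The snippet below is a hypothetical illustration (sqlite3 and the users table are stand-ins, not from any contributor) of a classic flaw an assistant can produce when it lacks context: SQL built by string interpolation, which is injectable, next to the parameterized form a reviewer should insist on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The kind of code an assistant may suggest: user input is interpolated
    # directly into the query, so name = "' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns []
```

Code review and security testing remain the backstop here, exactly as Mehta suggests.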

“Despite the hype that we’ve all seen over the first half of 2023, I don’t think that ‘AI’ projects are going to be successful without keeping a human component in the mix — it’s just too unreliable and lacks context in high-risk situations. 
That might change, but if I were exploring business cases in healthcare, life sciences, or financial advising, I’d be hesitant to just “take the system’s word for it.” Remember, experience shows that more than 90 percent of AI/ML projects end up being rated, shall we say, less than successful.
From my research, it seems that most of the issues are not technical ones; they center on the problems the business is trying to solve and/or stem from mistaken assumptions about both the results of the project and how the system or platform is built to get to that point. To many teams, this tech is just close enough to what they’ve been working with to seem like an easy reach, but in truth it’s a whole new way to look at projects and outcomes.”

Phil Morris, NetSPI Managing Director 

“For the security operations team, a significant risk with AI is trusting it too much. AI isn’t actually intelligent; it just often does a good job of appearing to be intelligent. At the end of the day, it’s just a tool, and if you don’t have great human intelligence utilizing that tool and corralling its output, you will end up with garbage output and holes in your defenses. The other risk that AI brings is on the attacker side. Since AI is just a tool, the same technology can be used to bring down defenses in ever more aggressive ways, and attackers are already quite good at automation. Attackers also don’t suffer the downside risk of AI making mistakes, because mistakes are unlikely to hurt them at all.”

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

“While the potential benefits of AI are vast, it’s important to acknowledge and address the potential risks and drawbacks: 

  1. Bias and Fairness: AI systems can inadvertently inherit biases present in the data they are trained on, leading to biased outcomes and unfair decisions. 
  2. Job Displacement: Automation through AI could lead to job displacement in certain industries, potentially impacting the workforce. 
  3. Security Concerns: As AI systems become more interconnected and integrated, they could become targets for cyberattacks if not properly secured. 
  4. Privacy Issues: The use of AI in data analysis can raise concerns about the privacy and security of personal and sensitive information. 
  5. Ethical Considerations: Decisions made by AI systems might not always align with human ethical values, leading to difficult ethical dilemmas.” 

Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant

Can you share any advice for how security teams can approach getting started with AI? 

“Yes—forget about losing your job to an AI platform—that isn’t going to happen, and if your C-suite is planning layoffs due to their new AI project, you should probably be working somewhere more grounded in the first place. 
I came into it via my analytics background, but I quickly discovered that, like many other emerging technologies of the past, things like security, privacy, and auditability haven’t been factored into the equation. If you can poison a training dataset, you’ve corrupted a model, and thousands (or millions) of dollars could be spent before you realize it. Alternatively, training data has huge privacy risks (and now, copyright risks) that need to be considered.
AI is based on data, and AI projects are built on well-known data processing pipelines, so don’t shy away from asking standard security- and risk-related questions; just don’t expect easy answers. Having a partnership with subject matter experts who understand how these systems are grown, how they evolve, and how they need to be protected is a good first step. Then you can learn from them and specialize wherever your heart takes you.”

Phil Morris, NetSPI Managing Director 
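
Morris’s dataset-poisoning warning can also be demonstrated at toy scale. The sketch below is an invented experiment, assuming synthetic data and scikit-learn (a stand-in for, say, a malicious/benign traffic classifier, not anything any contributor ships): it flips a fraction of training labels and shows how a model trained on them degrades against clean test data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Train on labels with `flip_fraction` of them flipped, then
    score on the clean test set."""
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    idx = rng.choice(len(poisoned), int(flip_fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
    return model.score(X_test, y_test)

for frac in [0.0, 0.2, 0.4]:
    print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_after_poisoning(frac):.2f}")
```

The exact numbers don’t matter; the point is that the damage stays invisible until you evaluate against data you trust, which is why provenance and integrity checks on training data belong in the security conversation.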

“Absolutely. AI is typically used to answer a question or correlate data to flag a condition. Make sure your security data, syslog, streams, and all other relevant points are part of your solution’s results! Continuously improving security policies and identifying new risks will be at the forefront for security teams in tech.”

Daniel Alonso, Flagler Technologies Technologist 

“Security teams must create an AI strategy that’s aligned with the organization’s overarching business and security objectives. AI-powered tools can help with a range of challenges, from complex tasks like continuous security monitoring, intelligent threat detection, and faster incident response, to simple tasks like creating new tabletop exercise prompts. Security teams should start by identifying specific cybersecurity challenges AI can help address.”

Shrav Mehta, Secureframe CEO 

“I would recommend starting with vendor tools on the data processing front. Look for vendors that are able to show clear ROI in saving people time through their use of AI and machine learning. Everyone is promising this, but not everyone is delivering, so be careful to analyze the vendor’s claims for real outcomes and, if possible, talk to references that have their own ROI models.”

Tim Ellis, Right! Systems, Inc. Chief Information Security Officer 

“For security teams looking to leverage AI, here are some key steps to consider: 

  1. Education and Training: Ensure that your security team has a solid understanding of AI concepts, algorithms, and potential applications in the security domain. 
  2. Identify Use Cases: Identify specific use cases where AI can enhance security operations, such as threat detection, anomaly detection, and fraud prevention. 
  3. Data Preparation: Data is crucial for training AI models. Gather high-quality, diverse, and relevant data to build effective AI systems. 
  4. Collaborate with Experts: Work with AI experts and data scientists to develop and implement AI solutions tailored to your security needs. 
  5. Test and Validate: Thoroughly test AI models in controlled environments to ensure their accuracy, robustness, and effectiveness before deploying them in critical security operations. 
  6. Monitor and Update: Continuously monitor AI systems for performance and adapt them as new threats and challenges emerge. 
  7. Ethical Considerations: Keep ethical considerations at the forefront. Ensure transparency, fairness, and accountability in AI-driven security decisions. 

By approaching AI implementation with a well-informed and strategic mindset, security teams can harness its power while mitigating potential risks. At Fellsway Group, we believe that responsible and thoughtful integration of AI can lead to significant advancements in industry and security alike.” 

Steve Leventhal, Fellsway Group Managing Partner, and Robert Bussey, Fellsway Group Lead Consultant 

These responses convey the research, hard work, and preparation that go into determining how a company can best apply AI to its day-to-day business. One recurring theme is the need for human analysis to verify the data going into AI models and the results coming out. After all, AI can only be as smart as the person using it.

Interested in sharing your perspective with us? Tweet us anytime @NetSPI.  

This article was written in collaboration with NetSPI’s Partners. Learn more about becoming a NetSPI partner here.

