On August 8, 2023, insideBIGDATA shared NetSPI’s announcement of AI/ML Penetration Testing, which focuses on identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and on providing grounded advice and real-world guidance to ensure security is considered from ideation to implementation.

Read the full story online here

+++ 

NetSPI, the global leader in offensive security, today debuted its ML/AI Pentesting solution to bring a more holistic and proactive approach to safeguarding machine learning model implementations. The first-of-its-kind solution focuses on two core components: identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and providing grounded advice and real-world guidance to ensure security is considered from ideation to implementation.

As adoption of ML and AI accelerates, organizations must understand the unique threats that accompany this technology to better identify areas of weakness and build more secure models. NetSPI’s testing methodology is rooted in adversarial machine learning, the study of attacks on ML systems and the defenses against them. With this foundational research, the company’s offensive security experts have the knowledge to better understand and mitigate vulnerabilities within ML models by putting them to the test against real adversarial attack techniques.
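
For readers unfamiliar with adversarial machine learning, the minimal sketch below shows one classic attack technique, the Fast Gradient Sign Method (FGSM), which perturbs an input just enough to change a model’s prediction. The toy classifier, sample input, and epsilon budget are illustrative assumptions only and do not represent NetSPI’s methodology or tooling.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a model under test (illustrative assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Fast Gradient Sign Method: nudge the input in the direction that
    increases the model's loss, within an epsilon-sized budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()     # step along the gradient sign
    return x_adv.clamp(0.0, 1.0).detach()   # keep values in a valid range

# Compare predictions on a clean input vs. its adversarial counterpart.
x = torch.rand(1, 1, 28, 28)                # stand-in for a real sample
label = torch.tensor([3])                   # stand-in ground-truth label
x_adv = fgsm_attack(x, label)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

In practice, adversarial testing of production systems covers far more than gradient-based perturbations (for LLMs, for example, prompt injection and data extraction), but the sketch illustrates the core idea of probing a model with inputs crafted to make it misbehave.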

Click here to read the full story on insideBIGDATA.