In this live webinar, NetSPI Principal AI Researcher Kurtis Shelton and Defy Security Solutions Architect John Tarn will break down how modern security teams approach LLM security without sacrificing too much functionality.

You’ll hear expert insights from one of the minds behind NetSPI’s Open LLM Security Benchmark, explore real-world trade-offs between technical, regulatory, and compliance-based approaches, and walk away with a tactical framework for benchmarking LLMs in your own company.

Whether you’re adopting GenAI models, building your own internally, or guiding product teams, this session will help you ask the right questions and better secure your business.

Register now to learn:

  • Common vulnerabilities in today’s LLMs (and how they’re being exploited)
  • What “usable” vs. “secure” actually means in practice
  • How to build a repeatable framework for LLM benchmarking
  • Our top three pieces of advice for benchmarking large language models