The AI Balancing Act:
Benchmarking LLMs for Usability vs Security

Security or usability? When it comes to large language models (LLMs), it's not always possible to have both. Watch on demand now to get our experts' tips on tackling this dilemma.


During this on-demand webinar, NetSPI Principal AI Researcher Kurtis Shelton and Defy Security Solutions Architect John Tarn break down how modern security teams approach LLM security without sacrificing too much functionality.


You'll hear expert insights from one of the minds behind NetSPI's Open LLM Security Benchmark, explore real-world trade-offs between technical, regulatory, and compliance-based approaches, and walk away with a tactical framework for benchmarking LLMs in your own company.


Whether you're implementing GenAI models, building your own internally, or guiding product teams, this session will help you ask the right questions and better secure your business.


Watch now to learn:

  • Common vulnerabilities in today's LLMs (and how they're being exploited)

  • What "usable" vs "secure" actually means in practice

  • How to build a repeatable framework for LLM benchmarking

  • Our top three pieces of advice on benchmarking large language models


Speakers


Kurtis Shelton

Principal AI Researcher

NetSPI
A recognized authority in AI/ML security, Kurtis regularly shares insights that push the boundaries of what's possible at the intersection of AI and security, drawing on technical research and collaboration with industry peers.


John Tarn

Solutions Architect

Defy Security
With 20+ years of experience in email and network security, John's focus at Defy Security is helping clients understand and secure their attack surface holistically.

