“Beyond Binary”: Understanding the Gradient of AI Vulnerabilities

Location: Classroom 1A 
Date: Thursday, March 6
Time: 1:30pm – 2:30pm MST

Kurtis Shelton

Kurtis Shelton
Principal AI Researcher

The landscape of AI vulnerabilities is evolving rapidly, demanding a nuanced approach beyond traditional binary assessments. This presentation dives into common vulnerabilities in AI systems, explores why vulnerabilities are gradients, not absolutes, and discusses benchmarking as a critical tool for assessing and improving model robustness.  

Using examples from generative models, this talk will illustrate how benchmarks can encompass both traditional and AI-specific challenges, including bias and usability. Attendees will leave with actionable insights into AI risk management and a call to collaborative innovation in AI security by learning: 

  1. The gradient nature of AI vulnerabilities
  2. The role of benchmarking in assessing AI robustness and security
  3. How to benchmark AI-specific vulnerabilities like bias and prompt injection
  4. Importance of collaboration in enhancing AI security across industries, and quick teaser on other fun tools that are residual of this understanding