Break the Bot: Red-Teaming Large Language Models

Date TBA

Red-teaming has long been a crucial component of a robust security toolkit for software systems. Now, companies developing large language models (LLMs) and other GenAI products are increasingly applying the technique to model outputs, as a means of uncovering harmful content that generative models may produce. Thus, developers can identify and mitigate issues before they occur in production.

In this session, join Numa Dhamani and Maggie Engler, co-authors of Introduction to Generative AI, to learn a complete workflow and arsenal of strategies for red-teaming LLMs.

Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.

photo of Numa Dhamani
photo of Maggie Engler

Maggie Engler

Microsoft

Primary Access
Platinum Badge
Interactive Badge
Secondary Access
Music Badge
Film & TV Badge