Talk Intermediate 17:00 - 17:30 August 08, 2025

Nnenna Ndukwe

It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.

Nnenna Ndukwe

Principal Developer Advocate at Qodo AI

Nnenna Ndukwe is a Principal Developer Advocate and Software Engineer, enthusiastic about AI. With 8+ experience spanning across startups, media tech, cybersecurity, and AI, she's an active global AI/ML community architect championing engineers to build in emerging tech. She studied Computer Science at Boston University and is a proud member of Women Defining AI, Women Applying AI, and Reg.exe. Nnenna believes that AI should augment: enabling creativity, accelerating learning, and preserving the intuition and humanity of its users. She's an international speaker and serves communities through content creation, open-source contributions, and philanthropy.