Kate Holterhoff, analyst with Redmonk, and Simon Willison, founder of Dattasette, co-creator of Django, and expert in AI technologies, speak about the AI prompt injection vulnerability. Simon lays out what prompt injection is and why it is so difficult to mitigate. They also cover major industry players (OpenAI, Meta, Anthropic, Google), and the common mistake of confusing moderation, in the sense of not letting the model say bad things, with security, not letting an attack trigger the model into performing an action that leaks private data or triggers tools in the wrong way. Prompt injection is a security issue, and not one that can be solved through moderation alone.
This RedMonk Conversation was published in video form on December 20, 2023.