How to ensure security when building products with LLMs | Product Odyssey #10
Vazco

Published on Sep 3, 2024

In this episode of Product Odyssey, Marcin Kokott and Michał Weskida explore the critical challenges in AI security, focusing on Large Language Models (LLMs). From prompt injections to data poisoning, they discuss the real-world risks these models pose, with examples from companies like Google, Chevrolet, and Air Canada.

How can you prevent AI hallucinations and safeguard your models? Why is it crucial to keep a human in the loop?

Join us to uncover practical strategies for navigating the complexities of AI security.

Chapters 🎬
00:00 - Introduction and AI security overview
00:58 - Real-world examples of LLM security issues
05:44 - Understanding and mitigating LLM vulnerabilities
06:58 - Hallucinations – where do they come from and why are they an issue?
13:16 - Prompt injection: direct vs. indirect threats
20:09 - Training data poisoning and over-reliance on AI
31:08 - Preventing data disclosure and infrastructure attacks
46:10 - Protecting AI models from theft
50:05 - Key takeaways and final thoughts

Product Odyssey on Spotify: https://open.spotify.com/show/1xDNkRL...

Product Odyssey on Apple Podcasts: https://podcasts.apple.com/fi/podcast...

Get in touch in the comments below, or let’s catch up on 👇

Marcin’s LinkedIn:   / marcinkokott  

Michał’s LinkedIn:   / michal-weskida  

Email: [email protected]

Want to know what we do on a daily basis?

Explore our website 👉 https://vazco.eu

Looking for an AI-powered voice assistant for product leaders?

Try CTO Compass now 👉 https://ctocompass.eu
