INSANELY FAST Talking AI: Powered by Groq & Deepgram
Prompt Engineering

Premiered May 20, 2024

Fastest Voice Chat Inference with Groq and Deepgram

In this video, I show how to achieve the fastest voice chat inference using the Groq and Deepgram APIs. I compare their speeds to OpenAI's Whisper, walk through the setup and code, and cover how to handle rate limits and buffering issues. Stay tuned for future videos on local model implementations.
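The pipeline described above (user speech → transcription → Groq chat completion → Deepgram text-to-speech) can be sketched roughly as below. This is a minimal illustration, not the video's actual code: it assumes Groq's OpenAI-compatible REST endpoint and Deepgram's Aura "speak" endpoint, hypothetical helper names (`build_chat_payload`, `chat_reply`, `speak`), and API keys in the `GROQ_API_KEY` / `DEEPGRAM_API_KEY` environment variables.

```python
# Sketch of a Groq + Deepgram voice-chat loop. Endpoint paths, model
# names, and helper names are assumptions for illustration only.
import json
import os
import urllib.request

GROQ_CHAT_URL = "https://api.groq.com/openai/v1/chat/completions"
DEEPGRAM_TTS_URL = "https://api.deepgram.com/v1/speak?model=aura-asteria-en"


def build_chat_payload(user_text, history=None, model="llama3-8b-8192"):
    """Assemble an OpenAI-style chat payload for Groq."""
    messages = list(history or [])
    messages.append({"role": "user", "content": user_text})
    return {"model": model, "messages": messages}


def post_json(url, payload, headers):
    """POST a JSON body and return the raw response bytes."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **headers},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


def chat_reply(user_text):
    """Get the assistant's text reply from Groq's chat endpoint."""
    raw = post_json(
        GROQ_CHAT_URL,
        build_chat_payload(user_text),
        {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    )
    return json.loads(raw)["choices"][0]["message"]["content"]


def speak(text):
    """Return synthesized audio bytes from Deepgram's TTS endpoint."""
    return post_json(
        DEEPGRAM_TTS_URL,
        {"text": text},
        {"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"},
    )


if __name__ == "__main__":
    # One turn of the loop; transcription of mic input would precede this.
    reply = chat_reply("Hello, Ada!")
    audio = speak(reply)
    print(f"Got {len(audio)} bytes of audio")
```

Speech-to-text would slot in before `chat_reply` (Groq also exposes an OpenAI-compatible `audio/transcriptions` endpoint for Whisper); it is omitted here because it requires a multipart file upload.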

#groq #voicechat #whisper

🦾 Discord:   / discord  
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
|🔴 Patreon:   / promptengineering  
💼Consulting: https://calendly.com/engineerprompt/c...
📧 Business Contact: [email protected]
Become Member: http://tinyurl.com/y5h28s6h

💻 Pre-configured localGPT VM: https://bit.ly/localGPT (use Code: PromptEngineering for 50% off).

Signup for Advanced RAG:
https://tally.so/r/3y9bb0

LINKS:
Updated code is now released:
Project Verbi: https://github.com/PromtEngineer/Verbi
Whisper on Groq vs OpenAI: https://tinyurl.com/5ea42yn4

00:00 Introduction to Advanced Voice Chat Inference
00:10 Meet Ada: Your AI Assistant
01:17 Exploring OpenAI's Implementation
01:45 Switching to Groq and Deepgram for Speed
02:43 Deep Dive into the New Implementation
06:22 Setting Up Your Environment
10:28 Understanding Rate Limits and Service Credits
11:19 Looking Ahead: Local Models and Community Engagement

All Interesting Videos:
Everything LangChain:    • LangChain  

Everything LLM:    • Large Language Models  

Everything Midjourney:    • MidJourney Tutorials  

AI Image Generation:    • AI Image Generation Tutorials  
