Superfast RAG with Llama 3 and Groq
James Briggs

Published Jul 2, 2024

The Groq API provides access to Language Processing Units (LPUs), Groq's custom inference hardware that enables extremely fast LLM inference. The service hosts several LLMs, including Meta's Llama 3. In this video, we'll implement a RAG pipeline using Llama 3 70B via Groq, the open-source e5 encoder for embeddings, and the Pinecone vector database.
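
For reference, here's a minimal sketch of the pipeline shown in the video. Names not confirmed by this description are assumptions: the "intfloat/e5-base-v2" encoder, a Pinecone index called "llama-3-rag", and chunks stored under a "text" metadata key. The full notebook is linked below.

# Minimal sketch of the RAG pipeline — assumed names are marked in comments.
# pip install sentence-transformers pinecone groq
import os

from groq import Groq
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("intfloat/e5-base-v2")  # open-source e5 encoder (assumed variant)
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("llama-3-rag")  # hypothetical index name
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])

query = "What makes Llama 3 special?"
# e5 models expect a "query: " prefix on search queries
xq = encoder.encode(f"query: {query}").tolist()
res = index.query(vector=xq, top_k=5, include_metadata=True)
# "text" metadata field is an assumption about how chunks were stored
context = "\n---\n".join(m["metadata"]["text"] for m in res["matches"])

# Llama 3 70B on Groq; "llama3-70b-8192" is Groq's model ID for it
chat = groq_client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[
        {"role": "system", "content": "Answer the question using only the context below.\n\n" + context},
        {"role": "user", "content": query},
    ],
)
print(chat.choices[0].message.content)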

📌 Code:
https://github.com/pinecone-io/exampl...

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-si...

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
/ discord

Twitter: / jamescalam
LinkedIn: / jamescalam

#artificialintelligence #llama3 #groq

00:00 Groq and Llama 3 for RAG
00:37 Llama 3 in Python
04:25 Initializing e5 for Embeddings
05:56 Using Pinecone for RAG
07:24 Why We Concatenate Title and Content (see sketch below)
10:15 Testing RAG Retrieval Performance
11:28 Initializing Connection to Groq API
12:24 Generating RAG Answers with Llama 3 70B
14:37 Final Points on Why Groq Matters
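
On the 07:24 chapter: before indexing, each record's title and body are concatenated into a single string so the encoder sees the title's context in every chunk. A rough sketch of that indexing step, reusing the encoder and index from the snippet above (field names are assumptions):

# Embed title + content together; e5 expects a "passage: " prefix on documents
doc = {"title": "Llama 3", "content": "Meta's latest open-weights LLM ..."}  # hypothetical record
text = f"{doc['title']}\n{doc['content']}"
vec = encoder.encode(f"passage: {text}").tolist()
index.upsert(vectors=[{"id": "doc-0", "values": vec, "metadata": {"text": text}}])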
