
Ep67: Why RAG Fails LLMs – And How to Finally Fix It
About this audio
AI is lying to you—here’s why. Retrieval-Augmented Generation (RAG) was supposed to fix AI hallucinations, but it’s failing. In this episode, we break down the limitations of naïve RAG, the rise of dense retrieval, and how new approaches like Agentic RAG, RePlug, and RAG Fusion are revolutionizing AI search accuracy.
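To make the critique concrete, here is a minimal sketch of the naïve RAG retrieval step discussed in the episode: embed the query and each document, rank documents by similarity, and pass the top-k hits to the LLM. The bag-of-words "embedding" and the sample documents are toy illustrations, not a real encoder or corpus.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real systems use a trained dense encoder.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Naïve RAG: score every document against the raw query, keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "RAG augments a language model with retrieved passages.",
    "Dense retrieval learns embeddings with contrastive training.",
    "Bananas are rich in potassium.",
]
print(retrieve("how is a language model augmented with retrieved passages", docs))
```

The failure mode is visible even here: lexical overlap misses paraphrases entirely, which is exactly what motivates learned dense retrievers such as Contriever.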
🔍 Key Insights:
- Why naïve RAG fails and leads to bad retrieval
- How Contriever & Dense Retrieval improve accuracy
- RePlug’s approach to refining AI queries
- Why RAG Fusion is a game-changer for AI search
- The future of AI retrieval beyond vector databases
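The RAG Fusion idea from the list above can be sketched in a few lines: generate several reformulations of the user's query, retrieve a ranking for each, then merge the rankings with reciprocal rank fusion, scoring each document as the sum of 1/(k + rank) across rankings. The constant k=60 is the value commonly used for RRF; the document IDs and rankings below are made-up examples, not real retrieval output.

```python
def rrf(rankings, k=60):
    # rankings: one ranked list of document IDs per query reformulation.
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Three query reformulations produced three (partially disagreeing) rankings.
rankings = [
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
]
print(rrf(rankings))  # → ['d2', 'd1', 'd3', 'd4']
```

Documents that rank well across many reformulations win, which makes the final context less sensitive to any single badly phrased query.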
If you’ve ever wondered why LLMs still struggle with real knowledge retrieval, this is the episode you need!
🎧 Listen now and stay ahead in AI!
References:
[2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
[2112.09118] Unsupervised Dense Information Retrieval with Contrastive Learning
[2301.12652] REPLUG: Retrieval-Augmented Black-Box Language Models
[2402.03367] RAG-Fusion: a New Take on Retrieval-Augmented Generation
[2312.10997] Retrieval-Augmented Generation for Large Language Models: A Survey