Fine-tuning LLMs encourages hallucinations
Vivek Haldar

Published on May 23, 2024

Paper: Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? (https://arxiv.org/abs/2405.05904)


0:00 Introduction and recap of previous paper
0:29 Fine-tuning LLMs can lead to hallucination
1:18 Constructing an experiment to test the conjecture
1:51 Categorizing knowledge into four categories
2:59 Fine-tuning with different percentages of unknown examples
3:31 Impact of unknown items on fine-tuning accuracy
4:02 Fine-tuning improves utilization of pre-existing knowledge
4:19 Conclusion and wrap-up
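
The four categories at 1:51 come from the paper's sampling-based scheme: HighlyKnown, MaybeKnown, WeaklyKnown, and Unknown, assigned by checking how often the model answers a question correctly under greedy decoding versus temperature sampling. Below is a minimal Python sketch of that classification rule, assuming the model's answers have already been collected; the function and variable names are illustrative, not from the paper's code.

def categorize(greedy_answers, sampled_answers, gold):
    """Assign one of the paper's four knowledge categories to a QA example.

    greedy_answers:  answers from greedy decoding (T=0) across several
                     few-shot prompts (assumed already collected).
    sampled_answers: answers from temperature sampling (T>0).
    gold:            the reference answer.
    """
    greedy_correct = [a == gold for a in greedy_answers]
    sampled_correct = [a == gold for a in sampled_answers]

    if all(greedy_correct):
        return "HighlyKnown"   # greedy decoding is always correct
    if any(greedy_correct):
        return "MaybeKnown"    # greedy decoding is sometimes correct
    if any(sampled_correct):
        return "WeaklyKnown"   # only sampling ever recovers the answer
    return "Unknown"           # the model never produces the answer

The "unknown" fine-tuning examples discussed at 2:59 onward are the ones this rule labels "Unknown": questions whose answers the pre-trained model never produces under any decoding strategy.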


Video on RAG vs fine-tuning: Fine-tuning or RAG?
