Fine-Tune Large LLMs with QLoRA (Free Colab Tutorial)
1littlecoder

Published on May 27, 2023

Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
https://huggingface.co/blog/4bit-tran...
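
The blog post above covers loading a model in 4-bit with bitsandbytes before attaching LoRA adapters. A minimal sketch of that step, assuming transformers, accelerate, and bitsandbytes are installed; the model name and quantization settings here are illustrative, and the Colab linked below may use a different checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EleutherAI/gpt-neox-20b"  # example model, not necessarily the one in the video

# NF4 4-bit quantization with double quantization and bfloat16 compute,
# as described in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized weights on the available GPU
)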

Tim Dettmers Hugging Face Profile - https://huggingface.co/timdettmers

QLoRA: Efficient Finetuning of Quantized LLMs Paper - https://arxiv.org/pdf/2305.14314.pdf

LoRA config parameters - https://huggingface.co/docs/peft/conc...
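
As a companion to the LoRA config docs above, here is a minimal sketch of attaching LoRA adapters to the 4-bit model from the previous snippet using PEFT. The r, lora_alpha, lora_dropout, and target_modules values are illustrative assumptions, not necessarily the ones chosen in the video:

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the quantized model for training (casts layer norms, enables input gradients)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=32,                       # scaling factor applied to the LoRA updates
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # attention projection name in GPT-NeoX-style models
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the small adapter weights are trainable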

PEFT Supported Models - https://huggingface.co/docs/peft/inde...

Google Colab - https://colab.research.google.com/dri...


❤️ If you want to support the channel ❤️
Support here:
Patreon - /1littlecoder
Ko-Fi - https://ko-fi.com/1littlecoder
