jackaduma/Vicuna-LoRA-RLHF-PyTorch
A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture. Essentially ChatGPT, but with Vicuna.
This project offers a complete workflow for adapting the Vicuna large language model to specific tasks using LoRA (Low-Rank Adaptation), then refining its responses through Reinforcement Learning from Human Feedback (RLHF). You supply the raw Vicuna weights plus your own dataset of examples and preference pairs; the output is a customized Vicuna model that behaves more like ChatGPT, tailored to your data. It is aimed at machine learning practitioners and researchers who want to fine-tune open-source LLMs.
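The two ideas behind the pipeline can be sketched numerically: LoRA replaces a full weight update with a trainable low-rank product `B @ A` scaled by `alpha / r`, and the RLHF reward model is typically trained with a pairwise (Bradley-Terry) preference loss over chosen vs. rejected responses. A minimal NumPy sketch with toy dimensions (illustrative only, not the repository's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- LoRA: low-rank adaptation of a frozen weight matrix ---
d_out, d_in, r, alpha = 8, 8, 2, 16   # toy sizes; real models use r around 8-64
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in))        # trainable rank-r factor
B = np.zeros((d_out, r))              # zero-initialized, so the delta starts at 0

def lora_forward(x):
    # W stays frozen; only A and B would receive gradients during fine-tuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0 the adapted model matches the base model exactly
assert np.allclose(lora_forward(x), W @ x)

# Merging for inference folds the low-rank delta into W once
W_merged = W + (alpha / r) * (B @ A)

# --- Reward model: pairwise (Bradley-Terry) preference loss ---
def preference_loss(r_chosen, r_rejected):
    # -log(sigmoid(r_chosen - r_rejected)): small when chosen outranks rejected
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# Ranking the preferred response higher yields a lower loss
assert preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0)
```

In the actual pipeline these roles are played by the PEFT LoRA adapters and a trained reward model that scores generations during PPO; the sketch only shows the math each stage optimizes.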
221 stars. No commits in the last 6 months.
Use this if you need to create a specialized, instruction-following large language model from Vicuna, trained with your own data and preferences, on accessible consumer-grade GPUs.
Not ideal if you lack a background in machine learning, prefer using pre-trained models without custom fine-tuning, or don't have access to a GPU of at least 2080 Ti class.
Stars: 221
Forks: 18
Language: Python
License: MIT
Category:
Last pushed: May 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jackaduma/Vicuna-LoRA-RLHF-PyTorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.