opendilab/LightRFT

LightRFT: Light, Efficient, Omni-modal & Reward-model Driven Reinforcement Fine-Tuning Framework

Quality score: 51 / 100 (Established)

This framework helps AI practitioners improve the performance and behavior of Large Language Models (LLMs) and Vision-Language Models (VLMs). You feed in a pre-trained language or vision-language model along with human feedback or a reward model, and it outputs a fine-tuned model that better aligns with desired outcomes, like generating more accurate text or understanding multimodal data. It's designed for machine learning engineers and researchers working with advanced AI models.

208 stars. Available on PyPI.

Use this if you need an efficient and scalable way to fine-tune your LLMs or VLMs using reinforcement learning from human feedback, especially for multimodal tasks.

Not ideal if you are looking for a simple, out-of-the-box solution for basic model training without deep customization or advanced optimization.

large-language-models vision-language-models reinforcement-learning model-fine-tuning multimodal-ai
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 22 / 25
Community: 9 / 25

Stars: 208
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Mar 05, 2026
Commits (30d): 0
Dependencies: 21

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/opendilab/LightRFT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
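If you want to consume this endpoint programmatically rather than via curl, the sketch below builds the same URL and extracts a one-line summary from a JSON response. This is a minimal example, not an official client: the response field names (`score`, `stars`, `license`) are assumptions illustrated with the figures shown on this page, so check the actual payload shape before relying on them.

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL, matching the curl example above."""
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def summarize(payload: str) -> str:
    """Condense a JSON response into one line.

    The keys 'score', 'stars', and 'license' are hypothetical; adjust
    them to whatever the live API actually returns.
    """
    data = json.loads(payload)
    return (f"{data.get('score', '?')}/100, "
            f"{data.get('stars', '?')} stars, "
            f"{data.get('license', '?')}")

# Demo using the numbers displayed on this page instead of a live request:
sample = json.dumps({"score": 51, "stars": 208, "license": "Apache-2.0"})
print(quality_url("transformers", "opendilab", "LightRFT"))
print(summarize(sample))  # 51/100, 208 stars, Apache-2.0
```

Keeping the fetch step separate from the parsing step makes the summary logic easy to test offline, which matters given the 100-requests/day limit on the keyless tier.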