NiuTrans/Vision-LLM-Alignment

This repository contains code for SFT, RLHF, and DPO training of vision-based LLMs, including the LLaVA and LLaMA-3.2-Vision model families.

Overall score: 31 / 100 (Emerging)

This project helps machine learning researchers and engineers refine vision-based large language models (Vision-LLMs). It takes a pre-trained Vision-LLM (like LLaVA or LLaMA-3.2-Vision) and human preference data as input to improve the model's ability to follow instructions and generate helpful, trustworthy responses based on images and text. The output is an 'aligned' Vision-LLM that performs better on real-world visual-language tasks.
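
For a sense of what the preference-alignment step optimizes, here is a minimal sketch of the standard DPO objective in PyTorch. This is a generic illustration, not the repository's own implementation; the function and argument names are hypothetical.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Generic DPO sketch with hypothetical names, not this repository's code.
    # Implicit rewards: log-probability ratios of the policy vs. a frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between the preferred and dispreferred responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()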

118 stars. No commits in the last 6 months.

Use this if you need to fine-tune existing Vision-LLMs to better align with human preferences for safety, helpfulness, and instruction-following, especially for multi-image prompts.

Not ideal if you are looking for a pre-trained, ready-to-use Vision-LLM or if your primary focus is on training a Vision-LLM from scratch without alignment.

Tags: AI model alignment, Multimodal AI, Generative AI fine-tuning, Visual language models, Machine learning research
Status: No License, Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 11 / 25

The overall score is the sum of the four sub-scores: 2 + 10 + 8 + 11 = 31 / 100.

Stars: 118
Forks: 10
Language: Python
License: None
Last pushed: Jun 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NiuTrans/Vision-LLM-Alignment"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
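
If you would rather call the API from Python than curl, a minimal sketch follows. It assumes only that the endpoint returns JSON; the response schema is not documented here, so the code prints the raw payload rather than guessing field names.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/NiuTrans/Vision-LLM-Alignment")

resp = requests.get(URL, timeout=10)  # no API key needed at the free tier
resp.raise_for_status()
data = resp.json()
# The schema is undocumented here; inspect the payload to find the fields you need.
print(data)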