TideDra/VL-RLHF

An RLHF Infrastructure for Vision-Language Models

Score: 35 / 100 (Emerging)

This project provides a robust framework for refining how Vision-Language Models (VLMs) understand and generate responses based on human preferences. Researchers and AI developers supply a base VLM and a preference dataset, and the framework produces a fine-tuned model that aligns more closely with desired human-like interactions and evaluations. The primary users are AI researchers and machine learning engineers focused on improving VLM performance and alignment.

198 stars. No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer looking to fine-tune existing Vision-Language Models (VLMs) like LLaVA or Qwen-VL using methods like DPO to better align them with human preferences or specific task requirements.
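For context on the DPO method mentioned above: DPO (Direct Preference Optimization) trains directly on preference pairs by maximizing the margin between the implicit rewards of the chosen and rejected responses. A minimal pure-Python sketch of the per-pair loss, for illustration only and not the repository's implementation:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid of the implicit reward margin.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or frozen reference model.
    """
    # Implicit rewards are log-probability ratios, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)); shrinks as the policy prefers the chosen
    # response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger preference margin yields a smaller loss.
small_margin = dpo_loss(-1.0, -2.0, -1.5, -1.5)
large_margin = dpo_loss(-0.5, -3.0, -1.5, -1.5)
print(small_margin, large_margin)
```

In practice a framework like this batches these log-probabilities over a preference dataset and backpropagates through the policy model; the snippet only shows the loss arithmetic.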

Not ideal if you are an end-user without a background in AI development, as this is an infrastructure tool for model training, not a ready-to-use application.

Vision-Language Model Training · AI Model Alignment · Machine Learning Research · Deep Learning Engineering · Multimodal AI
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 9 / 25

How are scores calculated?

Stars: 198
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Nov 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TideDra/VL-RLHF"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
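The same endpoint can be queried from a script. A short sketch using only the Python standard library; the response field names below are assumptions for illustration, not the documented schema:

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL, escaping each path segment."""
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "TideDra", "VL-RLHF")

# Fetch with urllib.request.urlopen(url) and json.load the body.
# A response might look like this hypothetical shape (field names
# are assumptions based on the stats shown on this page):
sample = json.loads('{"score": 35, "stars": 198, "forks": 8, "license": "Apache-2.0"}')
print(url, sample["score"])
```

Without an API key this stays within the 100-requests/day limit; with a free key, the same code works at 1,000/day.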