ys-zong/VLGuard

[ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models.

Overall score: 25 / 100 (Experimental)

VLGuard helps AI researchers and developers make Vision Large Language Models (VLMs) safer and more helpful. It provides a dataset and pre-trained model weights for fine-tuning existing VLMs, yielding models that are less likely to generate harmful content while retaining their ability to assist users.

No commits in the last 6 months.

Use this if you are building or deploying Vision Large Language Models and need to improve their safety performance efficiently without sacrificing helpfulness.

Not ideal if you are looking for a ready-to-use, end-user application rather than tools for VLM development and fine-tuning.

AI safety · VLM development · Model fine-tuning · Responsible AI · AI research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 8 / 25


Stars: 85
Forks: 5
Language: Python
License: none
Last pushed: Jan 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ys-zong/VLGuard"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
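The curl call above can also be made from Python with the standard library. A minimal sketch, assuming only that the endpoint follows the `quality/<ecosystem>/<owner/repo>` path pattern shown in the curl example and returns JSON; the response schema is not documented here and is an assumption:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    `ecosystem` and `repo` mirror the path segments in the curl
    example above (e.g. "transformers" and "ys-zong/VLGuard").
    """
    return f"{BASE_URL}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (network required).

    The returned structure is an assumption; inspect it before
    relying on specific fields.
    """
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "ys-zong/VLGuard"))
```

Within the free tier this can be called up to 100 times per day without a key.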