PKU-Alignment/align-anything

Align Anything: Training All-modality Model with Feedback

Score: 53 / 100 (Established)

This project helps researchers and developers fine-tune large language models to better align with human intentions across multiple data types: text, images, audio, and video. It takes a base multimodal model plus human feedback data and outputs a refined model whose behavior better matches human expectations. It is aimed primarily at AI researchers and machine learning engineers building or improving large multimodal models.


Use this if you need to customize or improve the behavior of an existing large language model that processes multiple types of data (text, image, audio, video) to better match human preferences or specific task requirements.

Not ideal if you are an end-user looking for a ready-to-use application, as this is a framework for developing and aligning models, not a finished product.

Tags: AI-model-alignment, multimodal-AI, human-feedback-learning, large-language-models, AI-model-customization
No package · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 4,635
Forks: 510
Language: Python
License: Apache-2.0
Last pushed: Nov 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/align-anything"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
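The same endpoint can be queried from Python instead of curl. The sketch below only assumes the URL shown above; the `X-API-Key` header name used for the optional key is a guess, not documented here, so check the service's docs before relying on it.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality report as a dict.

    The header name "X-API-Key" is an assumption; the keyless path
    works within the 100 requests/day limit.
    """
    headers = {"X-API-Key": api_key} if api_key else {}
    req = Request(quality_url(owner, repo), headers=headers)
    with urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("PKU-Alignment", "align-anything")` requests the report shown on this page.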