PKU-Alignment/align-anything
Align Anything: Training All-modality Model with Feedback
This project helps researchers and developers fine-tune large multimodal models to better align with human intentions across various data types such as images, audio, and video, in addition to text. It takes a base multimodal model and human feedback data, then outputs a refined model that behaves more like humans expect. It is primarily aimed at AI researchers and machine learning engineers who are building or improving large multimodal models.
Use this if you need to customize or improve the behavior of an existing large language model that processes multiple types of data (text, image, audio, video) to better match human preferences or specific task requirements.
Not ideal if you are an end-user looking for a ready-to-use application, as this is a framework for developing and aligning models, not a finished product.
Stars: 4,635
Forks: 510
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/align-anything"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
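The same request can be made programmatically. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns a JSON body (the curl example above suggests this, but the response schema is not documented here, so treat the parsing as an assumption):

```python
import json
from urllib.request import urlopen

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch repository quality data; assumes a JSON response body."""
    # Stays within the free tier (100 requests/day, no key needed).
    with urlopen(build_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("transformers", "PKU-Alignment", "align-anything")
    print(json.dumps(data, indent=2))
```

The `fetch_quality` helper is hypothetical naming, not part of any published client; with an API key, the request would presumably also carry an auth header, but since the header name is not shown here, the sketch sticks to the keyless tier.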
Related models
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.