SteveKGYang/MetaAligner

Models, data, and codes for the paper: MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models

Score: 29 / 100 (Experimental)

This project helps developers fine-tune large language models (LLMs) to better align with specific goals such as harmlessness, helpfulness, or professionalism. It takes an existing LLM and a dataset of preferences, then outputs a refined LLM that adheres to multiple objectives simultaneously. It is aimed at AI researchers and developers building conversational AI or specialized language applications.

No commits in the last 6 months.

Use this if you need to quickly and efficiently adjust a large language model's behavior to meet several desired objectives without extensive retraining.

Not ideal if you are a non-developer seeking a ready-to-use application, as this project provides models and code for technical implementation.

large-language-models model-alignment natural-language-processing conversational-ai machine-learning-engineering
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 24
Forks: 2
Language: Python
License: MIT
Last pushed: Sep 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SteveKGYang/MetaAligner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
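The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl example above and that the endpoint returns JSON; the response schema is not documented here, so inspect the returned dict before relying on specific fields:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, name: str) -> str:
    # Build the endpoint URL for a project,
    # e.g. ("transformers", "SteveKGYang/MetaAligner")
    return f"{API_BASE}/{registry}/{name}"

def fetch_quality(registry: str, name: str) -> dict:
    # Fetch and decode the JSON quality report.
    # Field names in the response are not documented in this listing,
    # so treat the result as an opaque dict until inspected.
    with urllib.request.urlopen(quality_url(registry, name)) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("transformers", "SteveKGYang/MetaAligner")` issues the same request as the curl command above; within the free tier this counts against the 100 requests/day limit.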