uclaml/SPPO

The official implementation of Self-Play Preference Optimization (SPPO)

Quality score: 42 / 100 (Emerging)

This project helps large language model (LLM) developers enhance their models' performance without relying on external, costly feedback like GPT-4 evaluations. It takes an existing LLM and improves its ability to generate high-quality, aligned responses. The main users are researchers and engineers who develop and fine-tune LLMs.

583 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to significantly improve the alignment and response quality of your large language models using an efficient self-play framework.

Not ideal if you are an end-user simply looking to apply an already optimized LLM without needing to perform the alignment process yourself.

large-language-model-development natural-language-processing model-alignment LLM-fine-tuning AI-research
Status: Stale (no pushes in 6 months) · No package published · No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25

How are scores calculated?

Stars: 583
Forks: 47
Language: Python
License: Apache-2.0
Last pushed: Jan 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/uclaml/SPPO"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
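The curl call above can also be scripted. Below is a minimal Python sketch that builds the same endpoint URL; the `registry/owner/repo` path layout is inferred from the single example shown here, and the response schema is not documented on this page, so treat both as assumptions:

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a project.

    The registry/owner/repo segment order is inferred from the example
    curl command; percent-encode each segment to stay URL-safe.
    """
    return f"{BASE}/{quote(registry)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "uclaml", "SPPO")
print(url)
# To fetch the data, pass `url` to e.g. urllib.request.urlopen and
# parse the body as JSON (the exact fields returned are not shown here).
```

This reproduces the URL from the curl example exactly, so the same anonymous rate limits apply.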