dannylee1020/openpo

Building synthetic data for preference tuning

Score: 32 / 100 (Emerging)

This project helps AI developers and researchers create high-quality synthetic datasets for fine-tuning large language models (LLMs). It takes prompts as input and generates diverse responses from more than 200 LLMs. The output is a dataset of these responses, often paired with evaluations that indicate preference, which is essential for training more helpful and accurate models.

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or AI researcher who needs to generate and evaluate a large volume of synthetic text data from various LLMs for model training or research.

Not ideal if you're looking for a user-facing application to directly interact with or fine-tune LLMs without writing code, or if you only need to use a single LLM for basic text generation.

AI-model-training LLM-fine-tuning synthetic-data-generation AI-evaluation NLP-research
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 27
Forks:
Language: Python
License: Apache-2.0
Last pushed: Dec 26, 2024
Commits (30d): 0
Dependencies: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/dannylee1020/openpo"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.