SqueezeAILab/LLM2LLM

[ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement

Quality score: 38 / 100 (Emerging)

This project helps AI researchers and machine learning engineers improve the performance of large language models (LLMs) on specific tasks. You provide an existing LLM (such as LLaMA-2-7B) and a dataset for a target task, such as grade-school math problems; the project iteratively refines the training data and fine-tunes the model, producing a version of the LLM that is more capable on that task.
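The iterative loop can be sketched roughly as follows. This is an illustrative toy, not the repository's actual API: the function names are hypothetical, and the "student" and "teacher" are trivial stand-ins for fine-tuning a model and prompting a teacher LLM to generate new examples similar to the ones the student got wrong.

```python
# Illustrative LLM2LLM-style loop: train, find hard examples,
# have a "teacher" generate targeted data, and retrain.
# All names here are hypothetical stand-ins, not the repo's API.

def finetune(dataset):
    # Stand-in for fine-tuning: the "model" answers correctly
    # exactly when the example appeared in its training data.
    seen = set(dataset)
    return lambda example: example in seen

def teacher_augment(hard_examples):
    # Stand-in for the teacher LLM: in reality it would write new
    # problems similar to each hard example; here we echo them back.
    return list(hard_examples)

def llm2llm(seed_data, eval_data, iterations=3):
    data = list(seed_data)
    for _ in range(iterations):
        student = finetune(data)
        # Collect the examples the current student still gets wrong...
        hard = [ex for ex in eval_data if not student(ex)]
        if not hard:
            break
        # ...and extend the training set with teacher-generated data.
        data.extend(teacher_augment(hard))
    return data
```

With the toy stand-ins, the training set grows until the student covers the evaluation examples; the real project replaces both stand-ins with actual fine-tuning and LLM-generated data.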

194 stars. No commits in the last 6 months.

Use this if you are a researcher or engineer looking to improve the accuracy and capabilities of your large language models on specific datasets through iterative data enhancement.

Not ideal if you are an end-user simply looking to apply an LLM for general tasks without modifying its underlying training or architecture.

Topics: AI research, machine learning engineering, large language model training, model fine-tuning, data augmentation
Status: stale (6 months), no package published, no dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 194
Forks: 15
Language: Python
License: MIT
Last pushed: Mar 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SqueezeAILab/LLM2LLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
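The same endpoint can be called from Python with the standard library. A minimal sketch: the URL pattern is taken from the curl example above, but the response schema is not documented here, so the code simply returns the parsed JSON as a dict.

```python
# Sketch of calling the quality API from Python (stdlib only).
# Only the endpoint URL comes from this page; the response
# structure is an assumption, so we return the raw parsed JSON.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_quality("SqueezeAILab", "LLM2LLM")
```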