SqueezeAILab/LLM2LLM
[ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement
This project helps AI researchers and machine learning engineers enhance the performance of large language models (LLMs) on specific tasks. You provide an existing LLM (like LLaMA-2-7B) and a dataset for a task such as grade-school math problems; the project iteratively refines the training data and fine-tunes the model, producing a version of the LLM that is more capable on that task. This is for people who build, train, and fine-tune large language models.
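The iterative loop described above can be sketched at a high level: fine-tune on the current data, find examples the model still gets wrong, have a teacher model generate similar examples for those hard cases, and repeat. The functions below are toy stand-ins for illustration only, not this repository's actual API; a real run would fine-tune an LLM and call a teacher LLM where the toy `fine_tune` and `teacher_augment` appear.

```python
from collections import Counter

def fine_tune(data):
    # Toy "training": a concept (answer) counts as learned once it has
    # at least two supporting examples in the data.
    seen = Counter(answer for _, answer in data)
    return lambda answer: seen[answer] >= 2

def teacher_augment(question, answer):
    # Toy stand-in for the teacher LLM: produce one new, similar
    # question for the same answer.
    return ("rephrased: " + question, answer)

def llm2llm_loop(seed, max_rounds=5):
    """Iteratively grow the training data around hard examples."""
    data = list(seed)
    for _ in range(max_rounds):
        model = fine_tune(data)
        hard = [(q, a) for q, a in data if not model(a)]
        if not hard:  # nothing left that the model gets wrong
            break
        data.extend(teacher_augment(q, a) for q, a in hard)
    return data

seed = [("2+2?", "4"), ("3+3?", "6")]
augmented = llm2llm_loop(seed)  # both seed examples gain one variant each
```

Here both seed examples are "hard" in round one, so the teacher adds one variant apiece; in round two every answer has two supporting examples and the loop stops.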
194 stars. No commits in the last 6 months.
Use this if you are a researcher or engineer looking to improve the accuracy and capabilities of your large language models on specific datasets through iterative data enhancement.
Not ideal if you are an end-user simply looking to apply an LLM for general tasks without modifying its underlying training or architecture.
Stars: 194
Forks: 15
Language: Python
License: MIT
Category:
Last pushed: Mar 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SqueezeAILab/LLM2LLM"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
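For scripted access, the same endpoint can be called from Python with only the standard library. This is a minimal sketch built from the curl example above; the `X-API-Key` header name is a guess, since the page does not show how a key is actually passed, so check the service's documentation before relying on it.

```python
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo):
    """Build the endpoint URL shown in the curl example above."""
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry, owner, repo, api_key=None):
    # Without a key you get up to 100 requests/day. The "X-API-Key"
    # header name is an assumption, not documented on this page.
    req = Request(quality_url(registry, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("transformers", "SqueezeAILab", "LLM2LLM")` requests the same JSON as the curl command above.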
Higher-rated alternatives
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
intel/auto-round
🎯An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
bodaay/HuggingFaceModelDownloader
Simple go utility to download HuggingFace Models and Datasets
NVIDIA/kvpress
LLM KV cache compression made easy