easydistill and LLM-Distillery

easydistill: score 50 (Established)
Maintenance 10/25, Adoption 10/25, Maturity 15/25, Community 15/25
Stars: 292, Forks: 31, Downloads: n/a, Commits (30d): 0
Language: Python, License: Apache-2.0
No package published, no dependents

LLM-Distillery: score 39 (Emerging)
Maintenance 0/25, Adoption 9/25, Maturity 16/25, Community 14/25
Stars: 112, Forks: 14, Downloads: n/a, Commits (30d): 0
Language: Python, License: Apache-2.0
Stale for 6 months, no package published, no dependents

About easydistill

modelscope/easydistill

A toolkit for knowledge distillation of large language models

This project helps AI researchers and industry practitioners make large language models (LLMs) more efficient. It takes an existing, powerful LLM and a smaller target LLM, then trains the smaller model to mimic the larger one's behavior using a range of distillation techniques. The output is a smaller, faster LLM that performs nearly as well as its much larger counterpart, making it well suited to deployment where computational resources are limited.

AI-efficiency NLP-deployment model-optimization resource-constrained-AI LLM-fine-tuning
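
To make the teacher-student idea concrete, here is a minimal, generic sketch of white-box knowledge distillation, where the student is trained to match the teacher's token-level output distribution with a KL-divergence loss. It uses plain PyTorch and Hugging Face transformers; the model names, temperature, and learning rate are illustrative assumptions, and this is not easydistill's actual API.

```python
# Minimal sketch of white-box knowledge distillation: the student is trained to
# match the teacher's token-level output distribution via a KL-divergence loss.
# Model names, temperature, and learning rate are illustrative assumptions,
# not easydistill defaults; teacher and student must share a tokenizer/vocab.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "EleutherAI/pythia-1.4b"   # placeholder "teacher" model
student_name = "EleutherAI/pythia-160m"   # placeholder "student" model

tokenizer = AutoTokenizer.from_pretrained(student_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
student = AutoModelForCausalLM.from_pretrained(student_name)
student.train()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
temperature = 2.0  # softens both distributions before comparing them

def distill_step(batch_texts):
    inputs = tokenizer(batch_texts, return_tensors="pt",
                       padding=True, truncation=True, max_length=512)
    with torch.no_grad():  # the teacher is frozen; only the student is updated
        teacher_logits = teacher(**inputs).logits
    student_logits = student(**inputs).logits
    # KL divergence between softened teacher and student token distributions,
    # rescaled by T^2 as in standard distillation.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

A full toolkit layers data preparation, black-box variants (learning only from teacher-generated outputs), and evaluation on top of this core objective; the sketch shows only the loss.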

About LLM-Distillery

golololologol/LLM-Distillery

A pipeline for LLM knowledge distillation

This tool helps developers make large language models (LLMs) smaller and more efficient without losing their core knowledge. You provide one or more larger, more capable 'teacher' LLMs and a dataset of instructions or text, and the pipeline produces a smaller 'student' LLM that has learned from the teachers, which is well suited to deployment in resource-constrained environments. It is aimed at machine learning engineers and AI solution architects who want to optimize LLM performance and cost.

LLM-optimization model-compression AI-deployment machine-learning-engineering resource-efficiency
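
The description above amounts to a two-stage pipeline: run the teacher over the provided dataset, cache what it produces, then train the student on that cache. The sketch below illustrates that workflow in generic Python with Hugging Face transformers; it is not LLM-Distillery's actual interface, storage format, or distillation method, and the model names and file paths are placeholder assumptions.

```python
# Generic sketch of a two-stage distillation pipeline: (1) run the teacher over an
# instruction dataset and cache its responses, (2) fine-tune the student on the
# cached responses (sequence-level distillation). Illustration only; not
# LLM-Distillery's actual interface. Model names and paths are placeholders.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "Qwen/Qwen2.5-7B-Instruct"    # placeholder teacher
student_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder student

def collect_teacher_responses(prompts, out_path="distill_data.jsonl"):
    """Stage 1: cache a teacher completion for each prompt."""
    tok = AutoTokenizer.from_pretrained(teacher_name)
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
    with open(out_path, "w") as f:
        for prompt in prompts:
            ids = tok(prompt, return_tensors="pt")
            with torch.no_grad():
                out = teacher.generate(**ids, max_new_tokens=256)
            # Keep only the newly generated continuation, not the prompt tokens.
            reply = tok.decode(out[0][ids["input_ids"].shape[1]:],
                               skip_special_tokens=True)
            f.write(json.dumps({"prompt": prompt, "response": reply}) + "\n")

def train_student(data_path="distill_data.jsonl", epochs=1):
    """Stage 2: fine-tune the student on the cached teacher responses."""
    tok = AutoTokenizer.from_pretrained(student_name)
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token
    student = AutoModelForCausalLM.from_pretrained(student_name)
    student.train()
    optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
    records = [json.loads(line) for line in open(data_path)]
    for _ in range(epochs):
        for rec in records:
            text = rec["prompt"] + "\n" + rec["response"] + tok.eos_token
            batch = tok(text, return_tensors="pt", truncation=True, max_length=1024)
            # Standard causal-LM fine-tuning on the teacher-written target text.
            loss = student(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    student.save_pretrained("student-distilled")
    tok.save_pretrained("student-distilled")
```

Distillation pipelines differ mainly in what they cache from the teachers (generated text, token probabilities, or full logits) and in how many teachers contribute; the two-stage shape shown here stays the same.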

Scores are updated daily from GitHub, PyPI, and npm data.