easydistill and LLM-Distillery
About easydistill
modelscope/easydistill
A toolkit for knowledge distillation of large language models
This project helps AI researchers and industry practitioners make large language models (LLMs) more efficient. It takes an existing, powerful LLM and a smaller target LLM, then trains the smaller model to mimic the larger one's performance using various distillation techniques. The output is a smaller, faster LLM that performs nearly as well as its much larger counterpart, ideal for deployment where computational resources are limited.
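The core idea behind this kind of distillation can be illustrated with the classic soft-label loss: the student is trained to match the teacher's temperature-softened output distribution. The sketch below is a minimal, generic illustration of that loss, not easydistill's actual API; the function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative probabilities of non-top classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable as T varies.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# A student whose logits exactly match the teacher's incurs zero loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))               # 0.0
print(distillation_loss(teacher, [0.1, 0.2, 0.3]) > 0)   # True
```

In practice this soft-label term is usually combined with the ordinary cross-entropy loss on ground-truth labels, weighted by a mixing coefficient.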
About LLM-Distillery
golololologol/LLM-Distillery
A pipeline for LLM knowledge distillation
This tool helps developers make large language models (LLMs) smaller and more efficient without losing their core knowledge. You provide one or more larger, more capable 'teacher' LLMs and a dataset of instructions or text; the pipeline then produces a smaller 'student' LLM that has learned from the teachers, ideal for deployment in resource-constrained environments. It targets machine learning engineers and AI solution architects looking to optimize LLM performance and cost.
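One common way such a pipeline uses the instruction dataset is sequence-level distillation: collect the teacher's responses to each instruction and fine-tune the student on the resulting pairs. The sketch below illustrates that data-generation step only; `teacher_generate` is a hypothetical stand-in for a real teacher model's inference call, not part of LLM-Distillery's actual interface.

```python
# Hypothetical sketch of sequence-level distillation data generation.
def teacher_generate(prompt):
    # Placeholder: a real pipeline would run teacher LLM inference here.
    return f"answer to: {prompt}"

def build_distillation_set(prompts):
    # Pair each instruction with the teacher's response; the student is
    # then fine-tuned on these (prompt, response) pairs.
    return [{"prompt": p, "response": teacher_generate(p)} for p in prompts]

dataset = build_distillation_set(["What is KD?", "Define LLM."])
print(dataset[0]["response"])  # answer to: What is KD?
```

With multiple teachers, the same loop can be run per teacher and the responses pooled or filtered before student training.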