YJiangcm/Lion

[EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models

Score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers create smaller, more efficient large language models (LLMs) that closely mimic the performance of powerful, proprietary LLMs like ChatGPT. It takes an existing instruction-following dataset and a proprietary teacher model, then produces a distilled, compact LLM. This process is ideal for those who need to deploy performant LLMs with fewer computational resources or under stricter privacy constraints.
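The distillation loop is easiest to see in code. The sketch below is a minimal, illustrative outline of the three-stage adversarial cycle the paper describes (imitation, discrimination, generation); every function name, prompt, and the teacher model are assumptions for illustration, not this repository's actual API. Consult the repo's scripts for the real implementation.

# Illustrative sketch only: function names, the referee prompt, and the
# teacher model are assumptions, not this repository's actual code.
from typing import Callable, List, Tuple

from openai import OpenAI  # proprietary teacher behind an API

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def query_teacher(prompt: str) -> str:
    """Ask the proprietary teacher (e.g. ChatGPT) to respond to a prompt."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def referee_score(instruction: str, teacher_ans: str, student_ans: str) -> float:
    """Have the teacher referee the student's answer on a 1-10 scale."""
    verdict = query_teacher(
        f"Instruction: {instruction}\nReference answer: {teacher_ans}\n"
        f"Student answer: {student_ans}\n"
        "Rate the student answer from 1 to 10. Reply with the number only."
    )
    try:
        return float(verdict.strip().split()[0])
    except ValueError:
        return 10.0  # unparsable verdict: treat the instruction as easy

def distill(
    instructions: List[str],
    student_generate: Callable[[str], str],                      # student inference
    student_fine_tune: Callable[[List[Tuple[str, str]]], None],  # one SFT pass
    rounds: int = 3,
    threshold: float = 7.0,
) -> None:
    for _ in range(rounds):
        # 1. Imitation: collect teacher responses, fine-tune the student on them.
        pairs = [(ins, query_teacher(ins)) for ins in instructions]
        student_fine_tune(pairs)

        # 2. Discrimination: the teacher referees the student's answers,
        #    exposing the "hard" instructions it still fails on.
        hard = [
            ins for ins, ref in pairs
            if referee_score(ins, ref, student_generate(ins)) < threshold
        ]

        # 3. Generation: ask the teacher for fresh instructions resembling
        #    the hard ones, growing the pool adversarially for the next round.
        instructions += [
            query_teacher(f"Write one new instruction of similar difficulty to: {h}")
            for h in hard
        ]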

212 stars. No commits in the last 6 months.

Use this if you need to build a smaller, faster language model that can perform complex instruction-following tasks almost as well as a large proprietary model, but with reduced operational costs.

Not ideal if you're looking for a ready-to-use application or a no-code solution, as this project requires significant technical expertise in machine learning and GPU infrastructure to implement.

Topics: large-language-models · model-distillation · natural-language-processing · machine-learning-ops · ai-efficiency
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25

The four category scores sum to the overall 39 / 100.


Stars: 212
Forks: 19
Language: Python
License: MIT
Last pushed: Feb 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YJiangcm/Lion"

Open to everyone: 100 requests/day with no API key. Register for a free key to raise the limit to 1,000/day.
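For programmatic use from Python, a minimal equivalent of the curl call is sketched below. The endpoint's JSON schema is not documented on this page, so the snippet prints whatever comes back rather than assuming field names.

import requests  # pip install requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/YJiangcm/Lion"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
data = resp.json()

# The schema is undocumented here, so just inspect what the endpoint returns;
# expect values like the score, stars, and forks shown above (assumes a
# top-level JSON object).
for key, value in data.items():
    print(f"{key}: {value}")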