Adam-Mazur/Lazy-Llama

An implementation of LazyLLM token pruning for the LLaMa 2 model family.

21 / 100 (Experimental)

This project helps large language model practitioners speed up text generation, especially for very long inputs. It wraps a LLaMa 2 model and accelerates generation by pruning the input down to its most important tokens during inference, rather than attending to every token of a long prompt. It is aimed at AI/ML engineers and researchers working with LLaMa 2 models who need more efficient inference.
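The core idea behind LazyLLM-style token pruning is to score each prompt token by how much attention it receives and keep only the top fraction at a given layer. A minimal NumPy sketch of that step (the function name, shapes, and the "attention from the last position" heuristic are illustrative assumptions, not this repository's actual API):

```python
import numpy as np

def prune_tokens(hidden_states, attention_probs, keep_ratio):
    """Keep only the tokens with the highest aggregated attention.

    hidden_states:   (seq_len, hidden_dim) token representations
    attention_probs: (num_heads, seq_len, seq_len) softmax attention weights
    keep_ratio:      fraction of tokens to retain at this layer
    """
    # Score each token by the attention it receives from the final
    # position, averaged over heads (a common pruning heuristic).
    importance = attention_probs[:, -1, :].mean(axis=0)  # (seq_len,)
    k = max(1, int(len(importance) * keep_ratio))
    # Keep the top-k tokens, preserving their original order.
    keep = np.sort(np.argsort(importance)[-k:])
    return hidden_states[keep], keep
```

Applied layer by layer, this shrinks the sequence the model must process for long prompts while retaining the tokens that matter most for the next prediction.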

No commits in the last 6 months.

Use this if you are working with LLaMa 2 models and want to accelerate text generation for long prompts without significant loss in output quality.

Not ideal if you are not using LLaMa 2 models, or if your primary concern is model accuracy or reducing training time rather than inference speed for long prompts.

LLM inference optimization · large language models · AI/ML engineering · text generation · computational efficiency
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 13

Forks:

Language: Python

License: MIT

Last pushed: Jan 06, 2025

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Adam-Mazur/Lazy-Llama"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
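The same endpoint can also be queried from Python. A minimal sketch using only the standard library; the URL path layout is taken from the curl example above, and since the response schema is not documented on this page, the actual request line is left commented:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, repo):
    # Path layout taken from the curl example: /{ecosystem}/{owner}/{repo}.
    return f"{BASE}/{ecosystem}/{repo}"

url = quality_url("transformers", "Adam-Mazur/Lazy-Llama")
# The response schema is undocumented here, so parsing is left as an exercise:
# data = json.load(urllib.request.urlopen(url))
```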