Adam-Mazur/Lazy-Llama
An implementation of LazyLLM token pruning for the LLaMa 2 model family.
This project helps large language model practitioners speed up text generation, especially for very long inputs. Given a LLaMa 2 model and an input prompt, it produces output faster by computing only the input tokens that matter most for the next prediction and deferring the rest. It is aimed at AI/ML engineers and researchers working with LLaMa 2 models who need more efficient inference.
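To make "focusing on the most important parts of your input" concrete, here is a minimal sketch of attention-based token pruning, the general idea behind LazyLLM-style methods: score each input token by how much attention it receives, keep only the top fraction, and pass the reduced sequence to later layers. This is an illustrative simplification, not this repository's actual code; the names `prune_tokens` and `keep_ratio` are hypothetical.

```python
import numpy as np

def prune_tokens(hidden_states, attn_scores, keep_ratio=0.5):
    """Keep the tokens that receive the most attention.

    hidden_states: (seq_len, dim) activations for the current layer.
    attn_scores:   (seq_len, seq_len) attention weights; row i is how
                   much position i attends to every other position.
    keep_ratio:    fraction of tokens to retain (illustrative default).
    """
    seq_len = hidden_states.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    # Importance = attention paid to each token by the final position,
    # i.e. the position that predicts the next token.
    importance = attn_scores[-1]
    # Take the k highest-scoring tokens, then restore original order.
    keep_idx = np.sort(np.argsort(importance)[-k:])
    return hidden_states[keep_idx], keep_idx
```

In the full LazyLLM scheme, pruning is dynamic: deferred tokens can be revived in later layers or decoding steps if they become relevant, rather than being discarded outright as in this sketch.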
No commits in the last 6 months.
Use this if you are working with LLaMa 2 models and want to accelerate text generation for long prompts without significant loss in output quality.
Not ideal if you are working with other model families, or if your priority is model accuracy or training efficiency rather than inference speed on long prompts.
Stars: 13
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Jan 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Adam-Mazur/Lazy-Llama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
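The curl command above can also be issued from Python. A small sketch using only the standard library is below; the endpoint path comes from the curl example, but the shape of the JSON response is not documented here, so `fetch_quality` simply returns whatever the API sends back. The helper names are illustrative.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL for a repo, e.g. 'Adam-Mazur/Lazy-Llama'."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str, timeout: float = 10.0):
    """Fetch and parse the quality record (response schema unspecified)."""
    with urllib.request.urlopen(quality_url(category, repo),
                                timeout=timeout) as resp:
        return json.load(resp)
```

Keyless use is limited to 100 requests/day, so cache responses locally if you are checking many repositories.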
Higher-rated alternatives
peremartra/optipfair
Structured pruning and bias visualization for Large Language Models. Tools for LLM optimization...
VainF/Torch-Pruning
[CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc.
horseee/LLM-Pruner
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support...
CASIA-LMC-Lab/FLAP
[AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models
princeton-nlp/LLM-Shearing
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning