hsj576/GRIFFIN
Official implementation of "GRIFFIN: Effective Token Alignment for Faster Speculative Decoding" [NeurIPS 2025]
This project accelerates text generation in large language models (LLMs) via speculative decoding: rather than generating one token at a time, a lightweight draft model proposes several tokens that the target model (e.g. LLaMA or Qwen) verifies in parallel, producing the same output quality at significantly higher speed. AI developers and machine learning engineers who build and deploy LLM-powered applications will find it valuable.
No commits in the last 6 months.
Use this if you are a machine learning engineer or developer looking to accelerate LLM inference, especially for applications that require fast, real-time text generation.
Not ideal if you are an end-user without technical expertise in LLM deployment or model optimization, as this is a developer-focused tool.
Stars: 18
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: May 12, 2025
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hsj576/GRIFFIN"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vitali87/speculant-graph
Graph drafts, LLM verifies: a novel speculative decoding framework
Hambaobao/HCP-Coder
Hierarchical Context Pruning (HCP): A strategy to optimize real-world code completion with...
Geralt-Targaryen/Awesome-Speculative-Decoding
Reading notes on Speculative Decoding papers