hao-ai-lab/JacobiForcing

Jacobi Forcing: Fast and Accurate Diffusion-style Decoding

Score: 43 / 100 (Emerging)

This project is for anyone working with Large Language Models (LLMs) who needs faster text generation. It takes an existing LLM, trains it with a technique called Jacobi Forcing, and produces a significantly faster model. The typical end user is a developer or researcher deploying or fine-tuning LLMs for applications where speed is critical, such as chatbots or coding assistants.
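To make the idea concrete, here is a toy sketch of Jacobi (parallel fixed-point) decoding, the decoding style this project builds on. This is not the repository's implementation; the `next_token` function below is a hypothetical stand-in for an LLM's greedy next-token step, used only to show how a whole block of guessed tokens is refined in parallel until it converges to the same result as one-at-a-time decoding.

```python
def next_token(prefix):
    # Hypothetical deterministic "model": next token = sum of prefix mod 10.
    # In the real setting this would be a greedy forward pass of an LLM.
    return sum(prefix) % 10

def autoregressive_decode(prompt, block_len):
    # Standard decoding: one token per model call, strictly sequential.
    out = []
    for _ in range(block_len):
        out.append(next_token(prompt + out))
    return out

def jacobi_decode(prompt, block_len, max_iters=50):
    # Guess an entire block, then refine every position in parallel from the
    # *previous* iterate (a Jacobi update). Each refinement round costs one
    # batched model call instead of block_len sequential calls.
    block = [0] * block_len  # arbitrary initial guess
    for _ in range(max_iters):
        new_block = [next_token(prompt + block[:i]) for i in range(block_len)]
        if new_block == block:
            break  # fixed point: output matches autoregressive decoding
        block = new_block
    return block

print(jacobi_decode([1, 2], 4))
print(jacobi_decode([1, 2], 4) == autoregressive_decode([1, 2], 4))
```

Because position i only depends on positions before it, the prefix of the block locks in first and the whole block converges to exactly the autoregressive output, often in fewer rounds than its length.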

Use this if you want to accelerate the text generation speed of your causal LLMs, especially for tasks like coding or mathematics, without sacrificing output quality.

Not ideal if you are not working with LLMs or if you prioritize maximum generation quality over speed improvements.

LLM-deployment text-generation-speed model-acceleration natural-language-processing AI-research
No package · No dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 8 / 25

Stars: 143
Forks: 6
Language: Python
License: Apache-2.0
Last pushed: Feb 20, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hao-ai-lab/JacobiForcing"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
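The same endpoint can be called from Python. Only the URL comes from the page above; the helper names here are illustrative, and the shape of the returned JSON is not documented on this page, so the sketch just builds the URL and fetches the raw response.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the quality-score endpoint URL for a GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch and parse the JSON response (field names are not specified here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL used in the curl example above; call fetch_quality()
    # to actually retrieve the data (subject to the 100 requests/day limit).
    print(quality_url("hao-ai-lab", "JacobiForcing"))
```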