czg1225/dParallel

[ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs

Quality score: 39 / 100 (Emerging)

This project speeds up text generation in diffusion large language models (dLLMs), especially for complex tasks like solving math problems or writing code. It takes an existing dLLM, applies a training technique that enables parallel decoding, and outputs a faster version of the same model. It is aimed at AI developers, researchers, and MLOps engineers who need to deploy performant LLMs in their applications.

Use this if you need to speed up the text generation (decoding) process of existing large language models without sacrificing accuracy, especially for tasks requiring detailed reasoning.

Not ideal if you just need a pre-trained LLM for general use and have no need to optimize decoding speed or perform further model training.

large-language-models model-optimization inference-acceleration deep-learning generative-ai
No Package · No Dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 15 / 25
Community 6 / 25


Stars: 62
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/czg1225/dParallel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
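If you prefer to consume the endpoint programmatically, here is a minimal Python sketch of the same request. It assumes the response body is a JSON object (the exact schema isn't documented on this page), so it simply prints whatever top-level fields come back rather than guessing field names.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/czg1225/dParallel"

# Assumption: the API returns a JSON object; no API key is needed
# within the free 100 requests/day tier mentioned above.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print each top-level field, since the schema isn't shown here.
for key, value in data.items():
    print(f"{key}: {value}")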