czg1225/dParallel
[ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs
This project speeds up text generation (decoding) in diffusion large language models (dLLMs), especially on complex tasks such as math reasoning and code generation. It takes an existing dLLM, applies a learnable parallel-decoding training technique, and outputs a faster version of that same model. It is aimed at AI developers, researchers, and MLOps engineers who need to deploy performant LLMs in their applications.
Use this if you need to speed up the decoding of an existing dLLM without sacrificing accuracy, especially for tasks that require detailed reasoning; a generic sketch of the decoding idea appears below.
Not ideal if you are looking for a pre-trained LLM for general use, with no need to optimize decoding speed or run further model training.
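To make the idea concrete, here is a minimal, self-contained sketch of confidence-based parallel decoding, the generic mechanism behind parallel dLLM decoding: at each step the model predicts all masked positions at once and commits every prediction above a confidence threshold. Everything in the snippet (the toy vocabulary, `toy_model`, the 0.5 threshold) is invented for illustration; it is not dParallel's code, whose contribution is presumably training the model so that more positions can be committed per step.

```python
import random

# Toy illustration of confidence-based parallel decoding for a
# diffusion-style LM. NOT dParallel's actual code: the model,
# vocabulary, and threshold below are invented for the demo.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
MASK = "<mask>"

def toy_model(tokens):
    """Stand-in for a dLLM forward pass: for every masked position,
    return a (token, confidence) guess. A real model returns logits."""
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def parallel_decode(length=8, threshold=0.5, max_steps=20):
    tokens = [MASK] * length
    for step in range(max_steps):
        preds = toy_model(tokens)
        if not preds:
            break  # nothing left to unmask
        # Commit every position whose confidence clears the threshold;
        # the more positions qualify, the fewer decoding steps are needed.
        committed = [(i, tok) for i, (tok, conf) in preds.items()
                     if conf >= threshold]
        if not committed:
            # Always make progress: commit the single most confident guess.
            i, (tok, _) = max(preds.items(), key=lambda kv: kv[1][1])
            committed = [(i, tok)]
        for i, tok in committed:
            tokens[i] = tok
        print(f"step {step}: {' '.join(tokens)}")
    return tokens

if __name__ == "__main__":
    random.seed(0)
    parallel_decode()
```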
Stars: 62
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Feb 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/czg1225/dParallel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
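For use from a script, here is a Python equivalent of the curl call above. The endpoint URL comes from this listing, but the JSON response schema is not documented here, so the sketch simply pretty-prints whatever the API returns.

```python
import json
import urllib.request

# Same endpoint as the curl example above (no API key needed at the
# free tier). The response schema is undocumented on this page, so we
# just pretty-print the returned JSON.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/czg1225/dParallel"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```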
Higher-rated alternatives
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation,...
EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM...
THUDM/LongWriter
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
AIoT-MLSys-Lab/SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2