SJTU-DENG-Lab/LightningRL
LightningRL: Breaking the Accuracy–Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning
This project helps AI engineers and researchers fine-tune block-wise diffusion large language models (LLMs) to generate more accurate responses faster. It refines an existing block-wise diffusion LLM with reinforcement learning, yielding a model that produces high-quality text for tasks such as math and code generation at significantly higher speed. The typical end user is an AI developer deploying or improving LLMs for specific applications.
Use this if you need to optimize block-wise diffusion LLMs to achieve a better balance between the accuracy of generated content and the speed of generation, especially for math and coding tasks.
Not ideal if you are working with traditional autoregressive LLMs or do not have a pre-trained block-wise diffusion LLM.
Stars
23
Forks
—
Language
Python
License
MIT
Category
Last pushed
Mar 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SJTU-DENG-Lab/LightningRL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
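The curl call above can also be made from Python with only the standard library. This is a minimal sketch: the endpoint path mirrors the curl example, but the response schema is not documented here, so the JSON is returned as-is; the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of any official client.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above; the response
# schema is undocumented here, so we decode and return raw JSON.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record for one repository.

    No API key is attached, matching the free 100-requests/day tier.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
# record = fetch_quality("SJTU-DENG-Lab", "LightningRL")
# print(json.dumps(record, indent=2))
```

For the higher 1,000-requests/day tier, a key would presumably be passed as a header or query parameter; check the service's own documentation for the exact mechanism, since it is not shown on this page.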
Higher-rated alternatives
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation,...
EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM...
THUDM/LongWriter
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
AIoT-MLSys-Lab/SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2