SJTU-DENG-Lab/LightningRL

LightningRL: Breaking the Accuracy–Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning

Quality score: 30 / 100 (Emerging)

This project helps AI engineers and researchers fine-tune large language models (LLMs) to generate more accurate responses faster. It takes an existing block-wise diffusion LLM and refines its behavior with reinforcement learning, yielding a model that produces high-quality text for tasks like math and code generation at significantly higher speed. The end user is typically an AI developer deploying or improving LLMs for specific applications.

Use this if you need to optimize block-wise diffusion LLMs to achieve a better balance between the accuracy of generated content and the speed of generation, especially for math and coding tasks.

Not ideal if you are working with traditional autoregressive LLMs or do not have a pre-trained block-wise diffusion LLM.

LLM-fine-tuning AI-model-optimization Generative-AI-deployment Computational-linguistics AI-research
No Package No Dependents
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 0 / 25


Stars: 23
Forks:
Language: Python
License: MIT
Last pushed: Mar 19, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SJTU-DENG-Lab/LightningRL"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
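The same endpoint can be called from code. Below is a minimal Python sketch that builds the quality-endpoint URL shown in the curl example and fetches the JSON report; the response schema is not documented here, so no field names are assumed, and the `registry`/`owner`/`repo` path segments simply mirror the URL above.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as a dict (requires network access)."""
    with urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


# Matches the curl example above; call fetch_quality(...) to retrieve the report.
url = quality_url("transformers", "SJTU-DENG-Lab", "LightningRL")
print(url)
```

This keeps URL construction separate from the network call, so the request can be swapped for a cached or mocked response when working offline or under the daily rate limit.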