Gen-Verse/dLLM-RL

[ICLR 2026] Official code for TraceRL: Revolutionizing post-training for Diffusion LLMs, powering the SOTA TraDo series.

Overall score: 50 / 100 (Established)

This project provides a comprehensive framework for post-training Diffusion Large Language Models (dLLMs) and multimodal dLLMs. It takes existing dLLM models and datasets as input, then applies specialized reinforcement learning and supervised fine-tuning techniques to produce more capable dLLMs, particularly for complex reasoning tasks like math and coding. It's designed for machine learning researchers and engineers who develop and fine-tune advanced language models.


Use this if you are a machine learning researcher or engineer looking to enhance the performance of existing Diffusion Large Language Models (dLLMs) on specific, challenging tasks through advanced post-training methods.

Not ideal if you are looking for an out-of-the-box LLM for general use without requiring deep customization or fine-tuning expertise.

Tags: Large Language Models, Reinforcement Learning, Model Fine-tuning, AI Research, Generative AI
No package published; no dependents.
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 15 / 25

How are scores calculated?
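The overall 50 / 100 matches the sum of the four subscores above. A minimal sketch of that calculation, assuming the total is a plain sum of equally weighted categories (the category names and values come from this page; the summing rule itself is an assumption, not documented behavior):

```python
# Per-category scores from the page, each out of 25.
SUBSCORES = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 15,
    "Community": 15,
}

def overall(subscores: dict) -> int:
    """Sum the per-category scores (each 0-25) into a 0-100 total."""
    return sum(subscores.values())

print(overall(SUBSCORES))  # 50, matching the 50 / 100 shown above
```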

Stars: 459
Forks: 37
Language: Python
License: Apache-2.0
Last pushed: Jan 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Gen-Verse/dLLM-RL"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
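The same endpoint can be called from Python. A minimal sketch using only the standard library; the URL comes from the curl example above, but the shape of the JSON response is not documented here, so the decoded payload is returned as-is rather than assuming specific field names:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the endpoint URL shown in the curl example above."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (fits within the
    keyless 100 requests/day tier)."""
    with urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Build (but don't send) the request URL for this repo:
print(quality_url("transformers", "Gen-Verse", "dLLM-RL"))
```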