Li-Jinsong/DAEDAL
[ICLR 2026] Official repository of "Beyond Fixed: Training-Free Variable-Length Denoising for Diffusion Large Language Models"
This project helps developers working with Diffusion Large Language Models (DLLMs) overcome a significant limitation: the need to pre-define the output length for every text generation task. Instead of manually guessing or setting a fixed length, this tool allows DLLMs to dynamically determine the appropriate response length on the fly. It takes a text prompt and generates a response that is precisely as long as needed, eliminating truncation for complex tasks and reducing unnecessary computation for simple ones.
Use this if you are a machine learning engineer or researcher developing with Diffusion Large Language Models and need to generate variable-length text outputs without performance trade-offs or manual length tuning.
Not ideal if you are working with Autoregressive Large Language Models or if your text generation tasks consistently require fixed-length outputs.
Stars: 162
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 16, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Li-Jinsong/DAEDAL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
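The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the endpoint URL and rate limits come from this page, but the shape of the JSON response is an assumption, so inspect the payload before relying on specific fields.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    No API key is needed for up to 100 requests/day; the response
    schema is not documented here, so treat the dict keys as unknown.
    """
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("Li-Jinsong", "DAEDAL")
    print(json.dumps(data, indent=2))
```

With a free key (1,000 requests/day), you would presumably attach it as a header or query parameter; the exact mechanism is not specified on this page.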
Higher-rated alternatives
ljleb/sd-mecha - Executable State Dict Recipes
SJTU-DENG-Lab/Discrete-Diffusion-Forcing - Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference
declare-lab/tango - A family of diffusion models for text-to-audio generation.
SalesforceAIResearch/CoDA - Salesforce AI Research's open diffusion language model
ZhanqiuHu/flash-dlm-experimental - Implementation of Flash-DLM (paper: FlashDLM: Accelerating Diffusion Language Models via...