haoyangzheng-ai/didi-instruct

[ICLR 2026] Discrete Diffusion Divergence Instruct (DiDi-Instruct)

Score: 45 / 100 (Emerging)

This project helps AI developers and researchers significantly speed up large language model (LLM) text generation while maintaining high quality. You provide a pre-trained diffusion LLM, and DiDi-Instruct distills it into a smaller, faster model. The output is a highly optimized student model capable of generating text much more quickly than its teacher or other standard LLMs. This is ideal for those building or deploying AI applications where real-time text generation speed is critical.


Use this if you are a machine learning engineer or researcher who needs to generate high-quality text from large language models with extreme speed, reducing latency in applications.

Not ideal if you are a non-technical user simply looking to use an off-the-shelf LLM for content creation, as this project requires technical expertise to set up and run.

AI development language model optimization real-time text generation machine learning research LLM deployment
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 10 / 25

How are scores calculated?
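The page does not spell out the formula, but the four subscores shown above sum exactly to the overall score. A minimal sketch of that observation (this is an inference from the displayed numbers, not the site's documented scoring method):

```python
# Subscores as displayed on this page (each out of 25).
subscores = {"Maintenance": 10, "Adoption": 10, "Maturity": 15, "Community": 10}

# Summing the four 25-point subscores yields the 100-point overall score.
total = sum(subscores.values())
print(total)  # 45 — matches the 45 / 100 shown above
```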

Stars: 153
Forks: 10
Language: Python
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/haoyangzheng-ai/didi-instruct"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
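For scripting, the same endpoint can be called from Python with only the standard library. This is a minimal sketch: the URL structure (including the `diffusion` ecosystem segment) is taken verbatim from the curl command above, the `quality_url` helper name is my own, and the assumption that the endpoint returns JSON is unverified here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL shown in the curl example above."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("diffusion", "haoyangzheng-ai", "didi-instruct")

# Uncomment to fetch (no key needed, up to 100 requests/day);
# the JSON response shape is an assumption, so inspect `data` first.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```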