haoyangzheng-ai/didi-instruct
[ICLR 2026] Discrete Diffusion Divergence Instruct (DiDi-Instruct)
This project helps AI developers and researchers significantly speed up large language model (LLM) text generation while maintaining high quality. You provide a pre-trained diffusion LLM, and DiDi-Instruct distills it into a smaller, faster model. The output is a highly optimized student model capable of generating text much more quickly than its teacher or other standard LLMs. This is ideal for those building or deploying AI applications where real-time text generation speed is critical.
Use this if you are a machine learning engineer or researcher who needs to generate high-quality text from large language models with extreme speed, reducing latency in applications.
Not ideal if you are a non-technical user simply looking to use an off-the-shelf LLM for content creation, as this project requires technical expertise to set up and run.
Stars: 153
Forks: 10
Language: Python
License: MIT
Category:
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/haoyangzheng-ai/didi-instruct"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...