OpenGVLab/SDLM

Sequential Diffusion Language Model (SDLM) enhances pre-trained autoregressive language models by adaptively determining generation length and maintaining KV-cache compatibility, achieving high efficiency and throughput.

Quality score: 36 / 100 (Emerging)

This project offers an enhanced way to generate text using large language models (LLMs), allowing you to get answers faster without sacrificing quality. It takes your prompt (like a question or a request for code) and quickly produces a high-quality, relevant text response. Data scientists, machine learning engineers, and researchers working with LLMs will find this useful for speeding up text generation tasks.

Use this if you need to generate text with pre-trained autoregressive language models and want significantly faster decoding speeds while maintaining high accuracy.

Not ideal if you are primarily focused on training new base language models from scratch or if you require extreme fine-grained control over token-by-token generation for highly specialized research.

text-generation large-language-models natural-language-processing machine-learning-engineering AI-research
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 6 / 25


Stars: 92
Forks: 3
Language: Python
License: MIT
Last pushed: Dec 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/OpenGVLab/SDLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
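For scripted access, the curl endpoint above can be wrapped in a few lines of Python. This is a minimal sketch: the endpoint URL comes from the listing, but the shape of the JSON response (field names, nesting) is an assumption, since the API schema is not documented here.

```python
# Sketch: querying the quality endpoint shown above.
# The URL pattern is taken from the curl example; the JSON response
# schema is NOT documented here, so treat the decoded dict as opaque.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("OpenGVLab", "SDLM"))
```

Requests beyond the free daily quota would presumably need the API key mentioned above attached to each call; how the key is passed (header vs. query parameter) is not specified in this listing.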