keshik6/grafting

[NeurIPS 2025 Oral] Official Code for Exploring Diffusion Transformer Designs via Grafting

Quality score: 34 / 100 (Emerging)

This project provides 'grafting', a method for efficiently exploring new designs for diffusion transformer models. It takes existing, pretrained diffusion transformers and lets you modify their internal components, such as attention mechanisms or MLPs, without the computational cost of training from scratch. It is aimed at machine learning researchers and engineers who want to quickly experiment with and evaluate novel generative architectures.

Use this if you are a researcher or engineer looking to rapidly prototype and test new Diffusion Transformer architectures and evaluate their impact on image generation quality and speed.

Not ideal if you want a plug-and-play image-generation tool and don't want to delve into model architecture modifications.

generative-AI diffusion-models architecture-search model-optimization image-synthesis
No package published · No dependents

Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 4 / 25


Stars: 72
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jan 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/keshik6/grafting"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
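For programmatic access, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch: only the endpoint URL pattern is taken from the source, and the shape of the JSON response is an assumption, so inspect the returned dictionary before relying on any particular keys.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example above
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (pattern inferred from the curl example)."""
    return f"{BASE}/{quote(topic)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(topic: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report. Response fields are not documented here,
    so treat the returned dict's keys as unknown until inspected."""
    with urlopen(quality_url(topic, owner, repo), timeout=10) as resp:
        return json.load(resp)

# Reconstructs the exact URL from the curl example
print(quality_url("diffusion", "keshik6", "grafting"))
```

Note that the free tier allows 100 requests/day without a key, so cache responses locally if you are querying many repositories.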