huggingface/instruction-tuned-sd
Code for instruction-tuning Stable Diffusion.
This project helps artists, designers, and marketers customize image generation models to follow precise editing instructions. You input an image and a specific text instruction (like "apply a cartoon filter"), and the model generates an edited version of the image. It's for anyone who needs fine-grained control over how AI models transform images based on natural language commands.
249 stars. No commits in the last 6 months.
Use this if you need to train a Stable Diffusion model to perform specific image transformations like cartoonization or low-level adjustments based on text instructions, rather than just generating new images from scratch.
Not ideal if you are looking for a ready-to-use application for everyday image editing without any model training or technical setup.
Stars: 249
Forks: 18
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huggingface/instruction-tuned-sd"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
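The curl call above can also be issued from Python with the standard library. This is a minimal sketch: the URL shape (`/quality/<category>/<owner>/<repo>`) is taken directly from the curl example, but the structure of the JSON response is an assumption, so the code only fetches and decodes it without relying on specific field names.

```python
# Sketch of calling the quality API from Python, assuming the endpoint
# from the curl example above returns a JSON body.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository, mirroring the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON response. No API key is
    needed within the anonymous 100 requests/day limit."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("diffusion", "huggingface", "instruction-tuned-sd"))
```

With an API key, you would presumably pass it as a header or query parameter; since the card does not document that mechanism, it is omitted here.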
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model