gordicaleksa/stable_diffusion_playground
Playing around with stable diffusion. Generated images are reproducible because the metadata and latent information are saved alongside them. You can generate images and later interpolate between any pair you choose.
This tool helps artists and creatives experiment with AI-generated imagery. You provide text descriptions (prompts) and get back unique, high-resolution images. It also lets you create smooth visual transitions between two images or precisely recreate any image you've made before. Anyone looking to rapidly generate and explore visual concepts will find this useful.
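The reproducibility described above comes from storing each image's generation inputs next to the image itself. A minimal sketch of that idea, assuming the prompt, seed, and initial latent are enough to regenerate a result (function and file names here are illustrative, not this repo's actual layout):

```python
import json
import numpy as np

def save_metadata(path, prompt, seed, latent):
    """Persist everything needed to regenerate an image deterministically."""
    np.save(path + ".latent.npy", latent)          # initial noise latent
    with open(path + ".json", "w") as f:
        json.dump({"prompt": prompt, "seed": seed}, f)

def load_metadata(path):
    """Load the saved prompt, seed, and latent for exact regeneration."""
    latent = np.load(path + ".latent.npy")
    with open(path + ".json") as f:
        meta = json.load(f)
    return meta["prompt"], meta["seed"], latent
```

Feeding the loaded latent and seed back into the same pipeline (with the same model weights and sampler settings) reproduces the image exactly.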
204 stars. No commits in the last 6 months.
Use this if you want to generate a diverse range of images from text descriptions, blend concepts between existing images, or guarantee you can reproduce a specific image result later.
Not ideal if you don't have access to a powerful GPU with at least 8GB of VRAM.
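The "blend concepts" use case above is usually done by interpolating between the saved latents, and the common technique is spherical linear interpolation (slerp) rather than a plain linear blend, because diffusion latents are roughly Gaussian and linear blending shrinks their norm. A minimal sketch of slerp (illustrative, not necessarily this repo's exact implementation):

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two latent tensors, t in [0, 1]."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.sum(v0n * v1n), -1.0, 1.0)
    theta = np.arccos(dot)                      # angle between the latents
    if np.isclose(theta, 0.0):
        return (1.0 - t) * v0 + t * v1          # nearly parallel: lerp is fine
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

Sampling `slerp(t, latent_a, latent_b)` for a sequence of `t` values and decoding each result yields a smooth visual transition between the two images.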
Stars
204
Forks
23
Language
Python
License
MIT
Category
diffusion
Last pushed
Sep 14, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/gordicaleksa/stable_diffusion_playground"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
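For scripted access, the same endpoint can be called from Python with only the standard library. The helper names below are illustrative, and the response schema is an assumption (the API's fields are not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the endpoint URL for a repository in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """Fetch and decode the quality data for one repository (schema assumed)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("diffusion", "gordicaleksa", "stable_diffusion_playground")` hits the same URL as the curl command above.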
Higher-rated alternatives
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python