vicgalle/stable-diffusion-aesthetic-gradients
Personalization for Stable Diffusion via Aesthetic Gradients 🎨
This project helps artists, designers, and marketers who use Stable Diffusion to generate images. It lets you guide the generation process toward a specific visual style, or 'aesthetic', that you define with a collection of example images. You provide a text prompt and your desired aesthetic (as an embedding file), and it produces images that match your prompt while also embodying the chosen style, without the need for complex text modifiers.
741 stars. No commits in the last 6 months.
Use this if you want Stable Diffusion to consistently generate images in a personalized artistic style or with specific visual characteristics defined by example images you provide, rather than relying solely on descriptive text prompts.
Not ideal if you prefer to control image generation purely through detailed text prompts or if you're not already comfortable with a command-line interface for Stable Diffusion.
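The core idea behind aesthetic gradients is that a collection of example images can be distilled into a single embedding vector (typically by averaging their CLIP image embeddings and normalizing), which is then used to steer generation. The sketch below illustrates only that embedding-building step; `build_aesthetic_embedding` is a hypothetical helper, the embeddings are random stand-ins for real CLIP outputs, and the repository itself ships its own scripts and ready-made embeddings.

```python
import numpy as np

def build_aesthetic_embedding(image_embeddings: np.ndarray) -> np.ndarray:
    """Average a set of CLIP image embeddings and L2-normalize the result.

    image_embeddings: array of shape (n_images, dim), one embedding per
    example image of the target aesthetic.
    """
    mean = image_embeddings.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Stand-ins for real CLIP image embeddings of your example images
# (ViT-L/14, the encoder used by Stable Diffusion, outputs 768-dim vectors).
rng = np.random.default_rng(0)
fake_clip_embeddings = rng.normal(size=(8, 768))

embedding = build_aesthetic_embedding(fake_clip_embeddings)
# In practice you would save this vector (e.g. with np.save) and point the
# generation script at the resulting embedding file.
```

The single normalized vector is what the method nudges the prompt conditioning toward during generation, which is why a handful of well-chosen example images is enough to define a style.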
Stars
741
Forks
62
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Oct 21, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/vicgalle/stable-diffusion-aesthetic-gradients"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python