KurisuMakise004/MMD2depth
MMD2depth uses MikuMikuDance models with Stable Diffusion 2.0 depth2img
This tool helps MikuMikuDance (MMD) animators and 3D artists convert their MMD models and motion files into sequences of depth images. You provide an MMD model (.pmx) and motion data (.vmd), and it generates corresponding depth images. These depth images can then be used to guide image generation in Stable Diffusion 2.0's depth2img feature, allowing artists to create new visual styles based on their MMD animations.
No commits in the last 6 months.
Use this if you want to leverage your existing MikuMikuDance animations to create unique image sequences or artistic renders with AI tools like Stable Diffusion.
Not ideal if you are looking for a general-purpose 3D rendering tool or if you need to generate images from scratch without an MMD model as a starting point.
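The model-to-depth-image step described above can be sketched in Python. This is a minimal illustration, not the repository's actual notebook code: the `depth_to_image` helper and the near/far plane values are assumptions, and it starts from an already-rendered float depth buffer rather than parsing .pmx/.vmd files.

```python
import numpy as np
from PIL import Image

def depth_to_image(depth: np.ndarray, near: float = 0.1, far: float = 100.0) -> Image.Image:
    """Normalize a raw float depth buffer to an 8-bit grayscale image
    suitable as a depth map for Stable Diffusion 2.0 depth2img.
    Closer surfaces map to brighter pixels (MiDaS-style convention)."""
    depth = np.clip(depth, near, far)
    # Invert so near = white (255) and far = black (0), then scale.
    norm = (far - depth) / (far - near)
    return Image.fromarray((norm * 255).astype(np.uint8), mode="L")

# Example: a synthetic 4x4 depth buffer ramping from near to far,
# standing in for one frame rendered from an MMD model + motion pose.
buf = np.linspace(0.1, 100.0, 16, dtype=np.float32).reshape(4, 4)
img = depth_to_image(buf)
img.save("frame_0001.png")  # one frame of the output depth sequence
```

A sequence of such frames (one per animation step) can then be fed to a depth-conditioned pipeline such as `StableDiffusionDepth2ImgPipeline` in Hugging Face diffusers, which accepts an explicit `depth_map` argument.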
Stars: 29
Forks: 3
Language: Jupyter Notebook
License: Unlicense
Category:
Last pushed: Nov 27, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/KurisuMakise004/MMD2depth"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.