zengxianyu/PPD-examples
Code and models for the paper "NeuralRemaster with Phase-Preserving Diffusion".
This project offers a way to re-render images and videos while keeping their original geometric structure intact. By taking an existing image or video and a text prompt, it generates new visuals that match your description but maintain the core shapes and object placements from the original. This is ideal for artists, game developers, or anyone creating visual content who needs consistent scene layouts across different generated styles or themes.
Use this if you need to create new visual content (images or videos) from existing ones while keeping the underlying structure and geometry consistent.
Not ideal if you want to generate completely new images or videos from scratch without any structural dependency on an input, or if you need to drastically alter the scene's geometry.
Stars: 66
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/zengxianyu/PPD-examples"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.