HKUST-LongGroup/Coarse-guided-Gen
[arXiv 2026] Official PyTorch Repository for "Coarse-Guided Visual Generation via Weighted h-Transform Sampling"
This tool helps creative professionals and researchers enhance visual media: given a rough sketch or a low-quality image or video as a 'coarse' input, it generates a refined, high-quality 'fine' output. It suits tasks such as image restoration and editing, and is aimed at designers, video editors, and scientists working with visual data.
Use this if you need to create high-quality images or videos from an initial, less-detailed visual input, for tasks like upscaling or guided content generation.
Not ideal if you need to generate visuals from scratch without any initial guiding image or video.
Stars: 35
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/HKUST-LongGroup/Coarse-guided-Gen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
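For programmatic access, the curl command above can be reproduced in Python. This is a minimal sketch using only the standard library; the endpoint URL is the one shown on this page, but the shape of the JSON response is an assumption, so inspect the returned dict before relying on specific fields.

```python
import json
import urllib.request

# Endpoint shown on this page (no API key needed, 100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/HKUST-LongGroup/Coarse-guided-Gen"


def fetch_repo_stats(url: str = URL) -> dict:
    """Fetch the listing data for this repository as a dict.

    The response is assumed to be JSON; field names are not documented
    here, so callers should inspect the keys of the returned dict.
    """
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

With a free API key (1,000 requests/day), you would presumably pass it in a header; the exact header name is not stated on this page.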
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...