MICV-yonsei/DragText
[WACV 2025 Oral] Official PyTorch code for DragText: Rethinking Text Embedding in Point-based Image Editing
This tool helps graphic designers, digital artists, and creative professionals precisely manipulate objects within existing images. You provide an image and a text prompt, then 'drag' specific points on the image to new locations. The output is a new image where the object has been moved or reshaped according to your drags, while maintaining visual quality.
No commits in the last 6 months.
Use this if you need to precisely adjust the position or shape of elements in an image using intuitive 'drag-and-drop' controls, rather than complex masking or manual redrawing.
Not ideal if you primarily work with generating entirely new images from text prompts or require advanced compositing and layering capabilities.
Stars: 14
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/MICV-yonsei/DragText"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
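The curl command above maps to a short standard-library call in Python. A minimal sketch, assuming the endpoint returns JSON (the response field names are not documented here and are only guesses in the comments):

```python
# Sketch: fetch this repo's quality data from the pt-edge API.
# The endpoint URL comes from the curl example above; the JSON field
# names mentioned in comments are assumptions, not a documented schema.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL, e.g. quality_url('diffusion', 'MICV-yonsei/DragText')."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Usage (performs a network request):
# data = fetch_quality("diffusion", "MICV-yonsei/DragText")
# print(data)  # expect stats such as stars/forks/last-pushed (assumed names)
```

The fetch is left as a commented usage line so the snippet can be read without triggering a network call; only the URL construction runs unconditionally.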
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.