YangLing0818/ContextDiff
[ICLR 2024] Contextualized Diffusion Models for Text-Guided Image and Video Generation
ContextDiff helps creative professionals such as digital artists, marketers, and content creators generate new images from text descriptions or edit existing videos by typing the changes they want. You provide a text prompt, or an existing video plus a description of the desired edit, and it returns a high-quality, semantically aligned image or an edited video. It suits anyone who wants to quickly generate or modify visual content with natural language.
No commits in the last 6 months.
Use this if you need to generate high-quality images from text or make specific, text-guided edits to videos with strong semantic accuracy.
Not ideal if you need fine-grained, pixel-level control over image and video editing that goes beyond semantic changes from text prompts.
Stars: 73
Forks: 4
Language: Python
License: —
Category: diffusion
Last pushed: May 24, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/YangLing0818/ContextDiff"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
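To call the same endpoint from code, here is a minimal Python sketch using requests. The response schema and the "X-API-Key" header name are assumptions; this page documents only the URL and the rate limits.

import requests

# Endpoint documented in the listing above. The structure of the JSON
# response and the "X-API-Key" header name are assumptions; only the URL
# and the rate limits (100/day anonymous, 1,000/day with a free key) are
# stated on this page.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/YangLing0818/ContextDiff"

def fetch_quality(api_key=None):
    """Fetch the quality record for ContextDiff, optionally with a key."""
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # raise on 4xx/5xx, e.g. when rate-limited
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())  # anonymous call: capped at 100 requests/day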
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...