rabiulcste/vismin
[NeurIPS24] VisMin: Visual Minimal-Change Understanding
This project helps researchers and developers build image-text datasets in which an image and its caption are minimally altered to test how well AI models detect nuanced changes. You provide an original image and caption, along with an instruction for a minimal change (such as adding an object or altering a detail); the system then generates a new image and caption reflecting that small edit. The resulting pairs are useful for benchmarking and evaluating vision-language models on fine-grained image understanding.
No commits in the last 6 months.
Use this if you need to generate high-quality datasets of 'minimal-change' image-text pairs to rigorously test and train AI models on their ability to detect subtle visual or textual differences.
Not ideal if you're looking for a general-purpose image editing tool or a solution for large-scale, complex image generation without a focus on minimal changes for AI evaluation.
Stars: 19
Forks: 6
Language: Python
License: —
Category:
Last pushed: Mar 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/rabiulcste/vismin"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...