AlaaLab/InstructCV
[ ICLR 2024 ] Official Codebase for "InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists"
This project lets computer vision researchers and practitioners perform a range of image analysis tasks simply by providing natural-language instructions. You supply an image and a text description of the task (e.g., "segment the cat"), and the model outputs an image that visually encodes the result, such as a segmentation mask or a depth map. It suits anyone who wants a single, flexible model for multiple vision problems instead of designing a task-specific model for each.
461 stars. No commits in the last 6 months.
Use this if you want one model that handles multiple vision tasks — segmentation, object detection, depth estimation — driven by simple text instructions, rather than a specialized model per task.
Not ideal if you need state-of-the-art accuracy or heavy domain-specific adaptation on a single vision task, since the model prioritizes generality over per-task specialization.
Stars: 461
Forks: 40
Language: Python
License: —
Category:
Last pushed: Apr 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/AlaaLab/InstructCV"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
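The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the endpoint returns a JSON body (the response schema is not documented here, and `quality_url`/`fetch_quality` are illustrative helper names, not part of any official client):

```python
# Minimal sketch of querying the quality API from Python.
# Assumes the endpoint returns JSON; no API key is needed for the
# 100-requests/day tier described above.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Usage: data = fetch_quality("AlaaLab", "InstructCV")
```

With a free key (1,000 requests/day) you would additionally attach your credential to the request; how the key is passed (header vs. query parameter) is not specified on this page, so check the API docs before wiring it in.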
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...