ShareGPT4Omni/ShareGPT4V
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
ShareGPT4V generates rich, highly detailed natural-language captions for images, rivaling those written by advanced models such as GPT-4 Vision. It is designed for researchers, data scientists, and content managers who need to label or understand large visual datasets precisely.
251 stars. No commits in the last 6 months.
Use this if you need to automatically generate highly accurate and descriptive captions for a large collection of images or want to improve the performance of your existing large multi-modal models.
Not ideal if you only need simple, short labels for images or are not working with image-text data in a research or data-intensive context.
Stars
251
Forks
8
Language
Python
License
—
Category
Last pushed
Jul 01, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ShareGPT4Omni/ShareGPT4V"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
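The curl call above can also be wrapped in a short script. A minimal Python sketch, assuming the endpoint returns a JSON body (the response schema is not documented on this page, so the `fetch_quality` helper and its return shape are illustrative):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL shown in the curl example."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record; performs a network call and assumes
    the response body is JSON (an assumption, not documented here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example: the ShareGPT4V entry described on this page.
url = quality_url("ShareGPT4Omni", "ShareGPT4V")
```

With a free key, the per-day limit rises to 1,000 requests; how the key is passed (header vs. query parameter) is not specified here, so check the API docs before adding authentication.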
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision-language model (VLM) from scratch in just 1 hour! 🌏
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model