OpenGVLab/Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena lets you benchmark vision-language models side by side with images as input. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
This platform helps researchers and practitioners evaluate how well different vision-language models understand and answer questions about images. You supply an image and a question, and the arena shows how various models respond, enabling side-by-side comparison of their answers. It is aimed at AI researchers, machine learning engineers, and data scientists who need to benchmark and select models for visual question answering.
557 stars. No commits in the last 6 months.
Use this if you need to systematically compare multiple vision-language models on image-based question answering and understand their relative strengths and weaknesses.
Not ideal if you want to train new vision-language models or fine-tune existing ones; the platform is focused solely on evaluation.
Stars: 557
Forks: 39
Language: Python
License: —
Category: —
Last pushed: Apr 21, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenGVLab/Multi-Modality-Arena"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
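For programmatic use, here is a minimal Python sketch of the same call using only the standard library. The endpoint URL is the one given above; the response schema and the authentication header for keyed access are assumptions, not documented here.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenGVLab/Multi-Modality-Arena"

# Anonymous access (limited to 100 requests/day per the note above).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print the raw payload; the exact field names are not documented here.
print(json.dumps(data, indent=2))

# With a free API key (1,000 requests/day), the header name is an
# assumption; a bearer token is a common convention:
# req = urllib.request.Request(URL, headers={"Authorization": "Bearer <YOUR_KEY>"})
# with urllib.request.urlopen(req, timeout=10) as resp:
#     data = json.load(resp)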
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems