AIFEG/BenchLMM
[ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
BenchLMM helps researchers and developers evaluate how well large multimodal models (LMMs) understand and answer questions about images in widely varying styles, such as medical scans, cartoons, and infrared photos. You provide the LMM's text responses to visual questions across these styles, and the benchmark outputs a performance score that quantifies the model's cross-style visual understanding. It is aimed at AI researchers and machine learning engineers who develop or compare LMMs.
No commits in the last 6 months.
Use this if you need to rigorously benchmark and compare the visual comprehension of different large multimodal models across a wide range of image styles and domains.
Not ideal if you are looking for a tool to train or fine-tune an LMM, or if you only work with a single, consistent image style.
Stars: 86
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Aug 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AIFEG/BenchLMM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
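If you would rather query the endpoint from code than from curl, here is a minimal Python sketch using only the standard library. The response schema is an assumption: the endpoint presumably returns JSON, so the example decodes and pretty-prints whatever comes back instead of relying on specific field names.

    import json
    import urllib.request

    # Same endpoint as the curl example above; no API key is needed
    # for up to 100 requests/day.
    URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/AIFEG/BenchLMM"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)  # assumes the endpoint returns JSON

    # Field names are not documented here, so just pretty-print the payload.
    print(json.dumps(data, indent=2))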
Higher-rated alternatives
stanfordnlp/axbench: Stanford NLP's Python library for benchmarking the utility of LLM interpretability methods
aidatatools/ollama-benchmark: LLM throughput benchmark via Ollama (local LLMs)
LarHope/ollama-benchmark: Ollama-based benchmark reporting detailed I/O tokens per second; written in Python, with a DeepSeek R1 example
qcri/LLMeBench: Benchmarking large language models
THUDM/LongBench: LongBench v2 and LongBench (ACL '25 and '24)