lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
This project offers a comprehensive benchmark for evaluating how well AI models solve math problems presented in visual contexts such as charts, graphs, and diagrams. Each task pairs a mathematical question with an image and assesses whether the model produces the correct answer. Researchers and engineers building or evaluating advanced AI systems use it to rigorously test and compare the mathematical reasoning capabilities of different models.
355 stars. No commits in the last 6 months.
Use this if you need to objectively measure and compare the performance of different large multimodal AI models on complex mathematical reasoning tasks involving visual data.
Not ideal if you are looking for an AI tool to solve your personal math problems, or if you don't work on developing or evaluating AI models.
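For a concrete sense of the task format, here is a minimal Python sketch that loads one benchmark example via the Hugging Face datasets library. The dataset ID "AI4Math/MathVista", the "testmini" split, and the field names are assumptions based on the project's public release, not details confirmed on this page.

from datasets import load_dataset

# Assumption: MathVista is published on the Hugging Face Hub as
# "AI4Math/MathVista" with a small "testmini" evaluation split.
ds = load_dataset("AI4Math/MathVista", split="testmini")

sample = ds[0]
print(sample["question"])       # natural-language math question (field name assumed)
print(sample["decoded_image"])  # associated chart/diagram as a PIL image (field name assumed)
print(sample["answer"])         # ground-truth answer string (field name assumed)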
Stars: 355
Forks: 50
Language: Jupyter Notebook
License: CC-BY-SA-4.0
Category:
Last pushed: Sep 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lupantech/MathVista"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
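The same call in Python, as a minimal sketch assuming the endpoint returns JSON. The page does not document how an API key is passed, so the X-Api-Key header shown in the comment is a hypothetical convention, not a confirmed interface.

import requests

# Public endpoint from the curl example above (100 requests/day without a key).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lupantech/MathVista"

# Hypothetical: how a key is sent is not documented here; a header such as
# {"X-Api-Key": "<your-key>"} is a common pattern for the 1,000/day tier.
headers = {}

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumed JSON payload mirroring the stats shown on this page
print(data)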
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct
sherryzyh/physical_reasoning_toolkit
A Python toolkit for physical reasoning in LLMs and VLMs. This toolkit streamlines access to...