DAMO-NLP-SG/M3Exam
Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models"
This project offers a specialized collection of real-world exam questions designed to rigorously test the capabilities of Large Language Models (LLMs). Given an LLM, it produces performance scores across subjects, languages, and difficulty levels, including questions that require image understanding. AI researchers and practitioners who develop or deploy LLMs can use it to understand their models' strengths and weaknesses.
103 stars. No commits in the last 6 months.
Use this if you need a comprehensive and diverse benchmark to evaluate how well your Large Language Models can answer complex, real-world exam questions across different languages, modalities (text and image), and difficulty levels.
Not ideal if you are looking for a dataset to train a new LLM from scratch or if your focus is on simple, single-modality tasks rather than comprehensive, human-level question answering.
Stars: 103
Forks: 13
Language: Python
License: —
Category: —
Last pushed: Jun 15, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/DAMO-NLP-SG/M3Exam"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
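For scripted access, here is a minimal Python sketch of the same request. It mirrors the no-key curl command above and assumes the endpoint returns JSON; how a registered API key is attached (header or query parameter) is not documented here, so that step is left as a comment.

import requests

# Same GET request as the curl command above; no key is needed
# for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/DAMO-NLP-SG/M3Exam"

response = requests.get(url, timeout=30)
response.raise_for_status()  # fail loudly on rate-limit or server errors

# Assumption: the endpoint returns JSON (the response format is not shown above).
data = response.json()
print(data)

# If you register a free key for the 1,000/day tier, check the API's own
# documentation for how to pass it; that detail is not specified on this page.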
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct