DAMO-NLP-SG/M3Exam

Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models"

Overall score: 31 / 100 (Emerging)

This project provides a curated collection of real-world exam questions designed to rigorously test the capabilities of Large Language Models (LLMs). Given an LLM, it produces performance scores across subjects, languages, and difficulty levels, including questions that require image understanding. AI researchers and practitioners who develop or deploy LLMs can use it to map their models' strengths and weaknesses.
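To make that workflow concrete, here is a minimal evaluation sketch in Python. The JSON layout, the field names (question, options, answer_text), the example file path, and the query_model stub are all assumptions for illustration; the repository's actual data format and evaluation scripts may differ.

import json

def query_model(prompt):
    # Hypothetical stand-in for the LLM under test; replace with a real
    # model or API call that returns the model's answer as a string.
    raise NotImplementedError

def evaluate(question_file):
    # Assumed layout: a JSON list of multiple-choice questions, each with
    # "question", "options" (a list of answer strings), and "answer_text"
    # (the correct option number). The real field names may differ.
    with open(question_file, encoding="utf-8") as f:
        questions = json.load(f)

    correct = 0
    for q in questions:
        options = "\n".join(f"({i + 1}) {opt}" for i, opt in enumerate(q["options"]))
        prompt = f"{q['question']}\n{options}\nAnswer with the option number only."
        prediction = query_model(prompt).strip()
        if prediction == str(q["answer_text"]):
            correct += 1
    return correct / len(questions)

# Hypothetical path; consult the repository for the actual data layout.
# print(evaluate("data/text-question/english-questions.json"))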

103 stars. No commits in the last 6 months.

Use this if you need a comprehensive and diverse benchmark to evaluate how well your Large Language Models can answer complex, real-world exam questions across different languages, modalities (text and image), and difficulty levels.

Not ideal if you are looking for a dataset to train a new LLM from scratch or if your focus is on simple, single-modality tasks rather than comprehensive, human-level question answering.

AI model evaluation · natural language processing research · multimodal AI · linguistic model assessment · machine learning benchmarking
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 14 / 25


Stars: 103
Forks: 13
Language: Python
License: None
Last pushed: Jun 15, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/DAMO-NLP-SG/M3Exam"

Open to everyone: no key is needed for up to 100 requests/day, and a free key raises the limit to 1,000 requests/day.
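For programmatic access, here is a minimal Python sketch built on the curl command above, using the requests library. The X-API-Key header name and the response schema are assumptions, not documented behavior; adjust them to whatever the service actually expects and returns.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/DAMO-NLP-SG/M3Exam"

def fetch_quality(api_key=None):
    # Anonymous access is rate-limited to 100 requests/day. The header
    # name "X-API-Key" is an assumption; check the service for the real one.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality()
    # The response schema is not documented on this page; inspect the
    # payload before relying on specific fields such as per-category scores.
    print(data)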