AceCHQ/MMIQ

This repo contains evaluation code for the MM-IQ benchmark.

Score: 30/100 (Emerging)

This evaluation code helps researchers assess how well multimodal AI models understand complex visual and textual information and perform abstract reasoning. It takes an AI model's responses to a set of curated test items and produces a performance score, letting researchers quantify the model's cognitive capabilities. It is aimed at AI researchers and cognitive scientists developing or evaluating advanced multimodal AI.

No commits in the last 6 months.

Use this if you are developing new multimodal AI models and need a standardized way to measure their core reasoning abilities, beyond simple recognition or classification tasks.

Not ideal if you are looking for a tool to apply existing AI models to specific business problems or for general-purpose AI model development without a focus on fundamental cognitive evaluation.

AI-evaluation multimodal-AI cognitive-AI reasoning-benchmarking AI-research
Stale (6 months) · No package · No dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25

How are scores calculated?
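From the breakdown above, the four subscores, each out of 25, sum to the overall score: 2 + 5 + 16 + 7 = 30, matching the 30/100 shown at the top.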

Stars: 10
Forks: 1
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: May 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AceCHQ/MMIQ"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
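
For programmatic use, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the X-API-Key header used for the optional key is a hypothetical placeholder, since the page does not document how a key should be passed:

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AceCHQ/MMIQ"

def fetch_quality(api_key=None):
    """Fetch the quality report for AceCHQ/MMIQ.

    The api_key handling is an assumption: the header name below is
    a guess, as the page does not specify the authentication scheme.
    """
    req = urllib.request.Request(URL)
    if api_key:
        # Hypothetical header name; check the API docs before relying on it.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        # Assumes the endpoint responds with a JSON body.
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    report = fetch_quality()  # keyless call, within the 100 requests/day limit
    print(json.dumps(report, indent=2))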