Cloud-CV/EvalAI

:cloud: :rocket: :bar_chart: :chart_with_upwards_trend: Evaluating state of the art in AI

70 / 100 (Verified)

This platform helps researchers and challenge organizers effectively compare different machine learning and AI algorithms. You submit your algorithm's results or code, and it provides standardized, reproducible evaluations and leaderboards. It's designed for AI researchers, academic institutions, and challenge hosts who need to benchmark and share progress in various AI tasks.

2,013 stars. Available on PyPI.

Use this if you are an AI researcher or challenge organizer looking to host or participate in a competition that requires standardized, scalable, and reproducible evaluation of machine learning models.

Not ideal if you need a simple tool for personal, one-off model performance checks without the need for public leaderboards or large-scale comparative analysis.

AI research · machine learning challenges · algorithm benchmarking · model evaluation · reproducible science
Maintenance: 10/25
Adoption: 10/25
Maturity: 25/25
Community: 25/25


Stars: 2,013
Forks: 989
Language: Python
License:
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Cloud-CV/EvalAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
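A minimal Python sketch of working with this endpoint, using only the standard library. The URL path comes from the curl command above; the JSON field names in the response ("name", "score", "stars") are assumptions for illustration and may differ from the real payload.

```python
# Sketch of calling the quality API and summarizing a response record.
# Assumption: the response decodes to a JSON object with "name", "score",
# and "stars" fields; adjust to the actual payload shape.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE_URL}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode one quality record (requires network access)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


def summarize(record: dict) -> str:
    """Render a one-line summary from a decoded record (field names assumed)."""
    return f"{record['name']}: score {record['score']}/100, {record['stars']} stars"


# Offline usage example with a hand-built record shaped like the card above:
sample = {"name": "Cloud-CV/EvalAI", "score": 70, "stars": 2013}
print(quality_url("ml-frameworks", "Cloud-CV", "EvalAI"))
print(summarize(sample))
```

At the anonymous tier, staying under 100 requests per day is the caller's responsibility; a simple approach is to cache responses locally and refresh at most once per repository per day.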