Cloud-CV/EvalAI
:cloud: :rocket: :bar_chart: :chart_with_upwards_trend: Evaluating state of the art in AI
EvalAI helps researchers and challenge organizers compare machine learning and AI algorithms: participants submit their algorithm's results or code, and the platform produces standardized, reproducible evaluations and leaderboards. It is aimed at AI researchers, academic institutions, and challenge hosts who need to benchmark and share progress across AI tasks.
2,013 stars. Available on PyPI.
Use this if you are an AI researcher or challenge organizer looking to host or participate in a competition that requires standardized, scalable, and reproducible evaluation of machine learning models.
Not ideal if you need a simple tool for personal, one-off model performance checks without the need for public leaderboards or large-scale comparative analysis.
Stars: 2,013
Forks: 989
Language: Python
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Cloud-CV/EvalAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
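The same endpoint can also be queried from Python using only the standard library. A minimal sketch, assuming the endpoint returns a JSON object (the response field names are not documented on this page, so the example just prints whatever top-level keys come back):

```python
import json
import urllib.request

# Endpoint taken from the curl command above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Cloud-CV/EvalAI"

def fetch_quality(url: str = API_URL, timeout: float = 10.0) -> dict:
    """Fetch the repo's quality record and decode it as JSON.

    Assumes a JSON object response; the actual schema is undocumented here.
    """
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print whatever top-level fields the API returns.
    for key, value in fetch_quality().items():
        print(f"{key}: {value}")
```

Anonymous calls count against the 100-requests/day limit mentioned above; with a free key you would add it to the request (the header name is not documented on this page).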
Related frameworks
fireindark707/Python-Schema-Matching
A python tool using XGboost and sentence-transformers to perform schema matching task on tables.
graphbookai/graphbook
Visual AI development framework for training and inference of ML models, scaling pipelines, and...
visual-layer/fastdup
fastdup is a powerful, free tool designed to rapidly generate valuable insights from image and...
github/CodeSearchNet
Datasets, tools, and benchmarks for representation learning of code.
tthtlc/awesome-source-analysis
Source code understanding via Machine Learning techniques