aai-institute/nnbench

A small framework for benchmarking machine learning models.

Score: 44 / 100 (Emerging)

This framework helps machine learning engineers systematically evaluate their models. You provide your trained models and a set of custom evaluation functions. It then runs these benchmarks, collecting performance metrics and other relevant data, which can be sent to various experiment tracking tools. This allows ML engineers to compare models, track performance over time, and organize their experimental results more effectively.
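To make that workflow concrete, here is a minimal, library-agnostic sketch of the pattern: user-written evaluation functions are run against a trained model, and their results are collected into a single record that could then be forwarded to an experiment tracker. The names used here (run_benchmarks, MajorityClassModel) are illustrative assumptions, not nnbench's actual API; consult the project documentation for the real interface.

# Illustrative sketch of the workflow described above: custom evaluation
# functions run against a trained model, results collected into one record.
# Names here are hypothetical, not nnbench's actual API.
from typing import Any, Callable


def run_benchmarks(model: Any, data: Any,
                   benchmarks: list[Callable[[Any, Any], float]]) -> dict[str, float]:
    """Run each user-supplied evaluation function and collect its metric."""
    return {bench.__name__: bench(model, data) for bench in benchmarks}


def accuracy(model: Any, data: Any) -> float:
    """Example custom evaluation function: fraction of correct predictions."""
    return sum(model.predict(x) == y for x, y in data) / len(data)


class MajorityClassModel:
    """Stand-in for a trained model; always predicts class 1."""
    def predict(self, x: Any) -> int:
        return 1


toy_data = [([0.1], 1), ([0.3], 0), ([0.9], 1)]
metrics = run_benchmarks(MajorityClassModel(), toy_data, [accuracy])
print(metrics)  # {'accuracy': 0.666...} -> forward to an experiment tracker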

No commits in the last 6 months. Available on PyPI.

Use this if you need a structured way to compare different versions of your machine learning models or track their performance against specific benchmarks.

Not ideal if you are looking for a general-purpose testing framework for non-ML code or a solution for hyperparameter tuning.

machine-learning-engineering model-evaluation ML-experiment-tracking performance-benchmarking
Stale: 6 months
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 11 / 25

Stars: 21
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Jun 06, 2025
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aai-institute/nnbench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
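For scripted access, a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON (as the curl example suggests) and simply pretty-prints the response without relying on any particular field names.

# Fetch the same endpoint from Python instead of curl.
# Assumes the endpoint returns JSON; the exact response schema is not
# documented here, so the payload is only printed, not parsed further.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aai-institute/nnbench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    payload = json.load(resp)

print(json.dumps(payload, indent=2))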