aai-institute/nnbench
A small framework for benchmarking machine learning models.
This framework helps machine learning engineers evaluate their models systematically. You provide trained models and a set of custom evaluation functions; nnbench then runs these benchmarks, collecting performance metrics and other relevant data, which can be forwarded to various experiment tracking tools. This makes it easier to compare models, track performance over time, and keep experimental results organized (a minimal usage sketch appears below).
No commits in the last 6 months. Available on PyPI.
Use this if you need a structured way to compare different versions of your machine learning models or track their performance against specific benchmarks.
Not ideal if you are looking for a general-purpose testing framework for non-ML code or a solution for hyperparameter tuning.
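To make that workflow concrete, here is a minimal sketch modeled on the project's README examples. The accuracy benchmark, threshold model, and data are illustrative stand-ins (not part of nnbench), and the BenchmarkRunner API has varied across nnbench releases, so check the version you install.

import nnbench

# A benchmark is an ordinary function marked with a decorator; its
# parameters are supplied at run time, and its return value becomes
# the recorded metric.
@nnbench.benchmark
def accuracy(model, data) -> float:
    correct = sum(model(x) == y for x, y in data)
    return correct / len(data)

if __name__ == "__main__":
    # Toy stand-ins for a trained model and labeled data; nnbench is
    # model-agnostic and only sees the benchmark function's parameters.
    model = lambda x: int(x > 0.5)
    data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

    # Collect and run all benchmarks defined in this module. The
    # BenchmarkRunner name follows older nnbench examples and may
    # differ in the release you have installed.
    runner = nnbench.BenchmarkRunner()
    record = runner.run("__main__", params={"model": model, "data": data})
    print(record)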
Stars
21
Forks
3
Language
Python
License
Apache-2.0
Category
ML frameworks
Last pushed
Jun 06, 2025
Commits (30d)
0
Dependencies
3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aai-institute/nnbench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
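The same query from Python, for scripting. This assumes the endpoint returns JSON; the page does not document the response format.

import requests

# Anonymous access, as in the curl example above (100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aai-institute/nnbench"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumes a JSON payload; adjust if the API returns something else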
Higher-rated alternatives
opentensor/bittensor
Internet-scale Neural Networks
trailofbits/fickling
A Python pickling decompiler and static analyzer
benchopt/benchopt
A framework for reproducible, comparable benchmarks
BiomedSciAI/fuse-med-ml
A python framework accelerating ML based discovery in the medical field by encouraging code...
mosaicml/streaming
A Data Streaming Library for Efficient Neural Network Training