KevinMusgrave/powerful-benchmarker
A library for benchmarking machine learning algorithms, with a focus on unsupervised domain adaptation and metric learning.
This tool helps machine learning engineers and researchers systematically evaluate the performance of unsupervised domain adaptation and metric learning algorithms. It takes experimental data from your model training runs and produces benchmarks and performance rankings, letting you compare models and validation methods to see which approaches work best for your specific problem.
439 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to rigorously benchmark and compare multiple machine learning models or validation techniques, especially in unsupervised domain adaptation or metric learning scenarios.
Not ideal if you are looking for a simple, out-of-the-box solution for general model evaluation without needing deep customizability or large-scale comparative analysis.
Stars: 439
Forks: 42
Language: Jupyter Notebook
License: —
Category:
Last pushed: Jan 10, 2024
Commits (30d): 0
Dependencies: 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/KevinMusgrave/powerful-benchmarker"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
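If you prefer to query the endpoint from Python rather than curl, a minimal sketch looks like the following. The URL structure is taken from the curl example above; the response schema and any field names are not documented here, so the JSON is treated as opaque.

```python
# Sketch: fetching repo quality data from the pt-edge API.
# Assumes the response is JSON; its field names are not specified here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("KevinMusgrave", "powerful-benchmarker")

# Actual fetch (counts against the 100 requests/day anonymous limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

The fetch itself is left commented out so the snippet can be read without spending a request against the anonymous rate limit.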
Higher-rated alternatives
opentensor/bittensor: Internet-scale Neural Networks
trailofbits/fickling: A Python pickling decompiler and static analyzer
benchopt/benchopt: A framework for reproducible, comparable benchmarks
BiomedSciAI/fuse-med-ml: A Python framework accelerating ML-based discovery in the medical field by encouraging code...
mosaicml/streaming: A data streaming library for efficient neural network training