vgarciasc/hundred-hammers

Quickly try out several ML models on a given dataset

Score: 48 / 100 (Emerging)

This tool helps data scientists and ML practitioners quickly compare how different models perform on a given dataset. You supply your dataset; it automatically trains and evaluates a battery of models and outputs a clear report with performance metrics and visualizations. It's designed for anyone who needs to select the best model for a predictive task.

Available on PyPI.
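
To install from PyPI (assuming the distribution name matches the repository name, which this page doesn't confirm):

pip install hundred-hammers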

Use this if you need to rapidly benchmark multiple classification or regression models to find the most suitable one for your data without writing extensive boilerplate code.
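
To make "boilerplate" concrete, here is roughly the comparison loop this tool automates, written by hand with scikit-learn. This is an illustrative sketch only: the model list and metric are arbitrary choices, not hundred-hammers' actual defaults or API.

# Hand-rolled model comparison of the kind this tool automates.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
    "kNN": KNeighborsClassifier(),
}

# 5-fold cross-validated accuracy for each candidate model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")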

Not ideal if you need to build highly customized machine learning pipelines with advanced feature engineering or custom model architectures not readily available in popular libraries.

machine-learning-model-selection predictive-analytics data-science-workflow model-benchmarking algorithm-comparison
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 29, 2026
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vgarciasc/hundred-hammers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
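
To consume the endpoint from Python instead of curl, a minimal sketch using only the standard library; the response schema isn't documented here, so no field names are assumed.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/vgarciasc/hundred-hammers")

# Fetch the quality report; no API key is needed for up to 100 requests/day.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Pretty-print the returned JSON without assuming specific field names.
print(json.dumps(data, indent=2))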