benjarison/eval-metrics

Evaluation metrics for machine learning

39 / 100 (Emerging)

When building machine learning models, you need to understand how well they perform. This project helps you assess a model's quality by taking its predicted scores and the actual outcomes and calculating standard performance measures. It's for data scientists, machine learning engineers, and researchers who need to rigorously evaluate their predictive models.

Use this if you need to calculate standard metrics like accuracy, precision, recall, F1, AUC, or RMSE for your classification or regression machine learning models.
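To make the metric names concrete, here is a minimal Rust sketch of two of them, accuracy (classification) and RMSE (regression), hand-rolled from their definitions. This is illustrative math only, not the eval-metrics crate's actual API; the function names and signatures here are our own.

```rust
// Illustrative implementations of two standard metrics.
// These are NOT the eval-metrics crate's API, just the underlying math.

/// Fraction of predictions that match the true labels.
fn accuracy(preds: &[bool], labels: &[bool]) -> f64 {
    let correct = preds.iter().zip(labels).filter(|(p, l)| p == l).count();
    correct as f64 / labels.len() as f64
}

/// Root mean squared error between predicted scores and true targets.
fn rmse(scores: &[f64], targets: &[f64]) -> f64 {
    let mse = scores
        .iter()
        .zip(targets)
        .map(|(s, t)| (s - t).powi(2))
        .sum::<f64>()
        / targets.len() as f64;
    mse.sqrt()
}

fn main() {
    let preds = [true, true, false, true];
    let labels = [true, false, false, true];
    println!("accuracy = {}", accuracy(&preds, &labels)); // 3 of 4 correct: 0.75

    let scores = [2.0, 3.0];
    let targets = [1.0, 5.0];
    println!("rmse = {}", rmse(&scores, &targets)); // sqrt((1 + 4) / 2) ≈ 1.581
}
```

The crate itself exposes equivalents of these (plus precision, recall, F1, and AUC); consult its docs for the real function names and error handling.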

Not ideal if you are looking for advanced model interpretability tools or methods to train your machine learning models.

machine-learning model-evaluation data-science predictive-analytics statistical-analysis
No package · No dependents
Maintenance 10 / 25
Adoption 13 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 15
Forks:
Language: Rust
License:
Last pushed: Jan 14, 2026
Monthly downloads: 989
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/benjarison/eval-metrics"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.