benjarison/eval-metrics
Evaluation metrics for machine learning
When building machine learning models, you need to understand how well they perform. This project helps you assess a model's quality by taking its predicted scores and the actual outcomes, then calculating standard performance measures. It's for data scientists, machine learning engineers, and researchers who need to rigorously evaluate their predictive models.
Use this if you need to calculate standard metrics like accuracy, precision, recall, F1, AUC, or RMSE for your classification or regression machine learning models.
Not ideal if you are looking for advanced model interpretability tools or methods to train your machine learning models.
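To make the listed metrics concrete, here is a minimal, self-contained Rust sketch that hand-rolls accuracy, precision, recall, F1, and RMSE from predictions and labels. This illustrates the quantities the crate reports; it does not use the crate's own API, whose types and function names may differ.

```rust
/// Accuracy, precision, recall, and F1 for binary predictions.
/// Illustrative only; eval-metrics' actual API may differ.
fn classification_metrics(preds: &[bool], labels: &[bool]) -> (f64, f64, f64, f64) {
    let (mut tp, mut fp, mut tn, mut fal_n) = (0.0, 0.0, 0.0, 0.0);
    for (&p, &y) in preds.iter().zip(labels) {
        match (p, y) {
            (true, true) => tp += 1.0,   // true positive
            (true, false) => fp += 1.0,  // false positive
            (false, false) => tn += 1.0, // true negative
            (false, true) => fal_n += 1.0, // false negative
        }
    }
    let accuracy = (tp + tn) / (tp + tn + fp + fal_n);
    let precision = tp / (tp + fp);
    let recall = tp / (tp + fal_n);
    let f1 = 2.0 * precision * recall / (precision + recall);
    (accuracy, precision, recall, f1)
}

/// Root mean squared error for regression predictions.
fn rmse(preds: &[f64], targets: &[f64]) -> f64 {
    let mse: f64 = preds
        .iter()
        .zip(targets)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>()
        / preds.len() as f64;
    mse.sqrt()
}

fn main() {
    let preds = [true, true, false, false, true];
    let labels = [true, false, false, true, true];
    let (acc, prec, rec, f1) = classification_metrics(&preds, &labels);
    println!("accuracy={acc:.2} precision={prec:.2} recall={rec:.2} f1={f1:.2}");

    let err = rmse(&[2.0, 3.0, 5.0], &[2.5, 3.0, 4.5]);
    println!("rmse={err:.3}");
}
```

For the sample data above this prints accuracy 0.60 and precision, recall, and F1 of 0.67 each (2 true positives, 1 each of false positives, true negatives, and false negatives). AUC is omitted here since it requires ranking continuous scores rather than comparing hard predictions.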
Stars
15
Forks
—
Language
Rust
License
—
Category
—
Last pushed
Jan 14, 2026
Monthly downloads
989
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/benjarison/eval-metrics"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
SomeB1oody/RustyML
A high-performance machine learning library in pure Rust, offering statistical utilities, ML...
smartcorelib/smartcore
A comprehensive library for machine learning and numerical computing. Apply Machine Learning...
open-spaced-repetition/fsrs-rs
FSRS for Rust, including Optimizer and Scheduler
open-spaced-repetition/fsrs-optimizer
FSRS Optimizer Package
rust-ml/linfa
A Rust machine learning framework.