hendersontrent/correctR

R package for computing corrected test statistics for comparing machine learning models on correlated samples

Score: 33 / 100 (Emerging)

When comparing different machine learning models, you often generate multiple performance metrics (like accuracy) using methods like k-fold cross-validation. This tool helps you statistically compare two models based on these metrics, accounting for the fact that these performance measurements are related, not independent. It's for data scientists or researchers who need to rigorously determine if one model truly outperforms another.

No commits in the last 6 months.

Use this if you need to statistically compare the performance of two machine learning models after evaluating them using resampling or cross-validation techniques.

Not ideal if you need to compare more than two machine learning models or if your performance metrics are truly independent.
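A minimal sketch of the typical workflow, assuming the package's documented resampled_ttest() interface (function name and arguments are taken from the package documentation and should be checked against your installed version; the accuracy values below are simulated for illustration only):

# Compare two models' accuracy scores from repeated train/test resampling
# using a corrected resampled t-test. The scores come from resamples of
# the same data, so they are correlated rather than independent.
# NOTE: resampled_ttest(x, y, n, n1, n2) is assumed from the package docs;
# see ?resampled_ttest for the exact signature in your version.

# install.packages("correctR")  # if not already installed
library(correctR)

set.seed(123)

# Simulated accuracy scores for two models across 30 resamples
model_a <- rnorm(30, mean = 0.82, sd = 0.03)
model_b <- rnorm(30, mean = 0.80, sd = 0.03)

# n = number of resamples, n1 = training-set size, n2 = test-set size
result <- resampled_ttest(x = model_a, y = model_b, n = 30, n1 = 160, n2 = 40)
result  # corrected t-statistic and p-value

The package also documents variants for k-fold and repeated k-fold cross-validation designs; the correction used depends on how the resamples were generated.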

Tags: machine-learning-evaluation, model-comparison, statistical-testing, data-science-research, predictive-modeling
Badges: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 11 / 25

Stars: 23
Forks: 3
Language: R
License:
Last pushed: Feb 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hendersontrent/correctR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
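The same endpoint can be read directly from R rather than curl. A short sketch, assuming the endpoint returns JSON (the response fields are not documented on this page, so the example only fetches and inspects whatever comes back):

# Fetch the quality report for this package from the public API
library(jsonlite)

url <- "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hendersontrent/correctR"
report <- fromJSON(url)
str(report)  # inspect the available fields (scores, stars, last push, etc.)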