hendersontrent/correctR
R package for computing corrected test statistics for comparing machine learning models on correlated samples
When comparing machine learning models, you typically generate multiple performance metrics (such as accuracy) via methods like k-fold cross-validation. This tool lets you statistically compare two models on those metrics while accounting for the fact that the measurements are correlated, not independent, because resamples share training data. It's for data scientists and researchers who need to rigorously determine whether one model truly outperforms another.
No commits in the last 6 months.
Use this if you need to statistically compare the performance of two machine learning models after evaluating them using resampling or cross-validation techniques.
Not ideal if you need to compare more than two machine learning models or if your performance metrics are truly independent.
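The correction the package applies is the Nadeau–Bengio corrected resampled t-test: because cross-validation folds share training data, the naive variance of the score differences is too small, so the denominator is inflated by the ratio of test-set to training-set size. A minimal sketch in base R, with illustrative function and variable names (not the package's actual API):

```r
# Sketch of a Nadeau-Bengio corrected resampled t-test.
# corrected_ttest() is a hypothetical name for illustration only.
corrected_ttest <- function(x, y, n_train, n_test) {
  d <- x - y                 # per-resample performance differences
  n <- length(d)
  # Naive variance term 1/n is inflated by n_test/n_train to account
  # for the overlap between training sets across resamples.
  denom <- sqrt((1 / n + n_test / n_train) * var(d))
  t_stat <- mean(d) / denom
  p_val <- 2 * pt(abs(t_stat), df = n - 1, lower.tail = FALSE)
  c(statistic = t_stat, p.value = p_val)
}

# Example: accuracies of two models over 10-fold CV on 100 samples,
# so each fold trains on 90 observations and tests on 10.
set.seed(123)
acc_a <- rnorm(10, mean = 0.82, sd = 0.02)
acc_b <- rnorm(10, mean = 0.80, sd = 0.02)
corrected_ttest(acc_a, acc_b, n_train = 90, n_test = 10)
```

Setting `n_test / n_train = 0` recovers the ordinary paired t-test, which is anti-conservative on cross-validated scores.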
Stars
23
Forks
3
Language
R
License
—
Category
ML Frameworks
Last pushed
Feb 05, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hendersontrent/correctR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
laresbernardo/lares
Analytics & Machine Learning R Sidekick
lucasmaystre/choix
Inference algorithms for models based on Luce's choice axiom
TheAlgorithms/R
Collection of various algorithms implemented in R.
easystats/performance
:muscle: Models' quality and performance metrics (R2, ICC, LOO, AIC, BF, ...)
mlr-org/mlr
Machine Learning in R