canagnos/mcp

Tools for Measuring Classification Performance for R, Python and Spark

Score: 36 / 100 (Emerging)

These tools measure how well a classification model performs: you provide the model's predictions and the actual outcomes, and they calculate key performance metrics. They are aimed at data scientists, analysts, and anyone who builds and assesses classification models.
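As a generic illustration of the prediction-vs-actual workflow described above (this is not this package's API, just a plain-Python sketch of the same idea):

```python
# Generic sketch: deriving standard classification metrics from a list of
# actual labels and a list of predicted labels (binary, encoded as 0/1).
# Function name and inputs are illustrative, not taken from the package.
def classification_metrics(actual, predicted):
    """Return accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    accuracy = (tp + tn) / len(actual)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_metrics(actual, predicted))
# → {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```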

No commits in the last 6 months.

Use this if you need to thoroughly understand and quantify the accuracy and reliability of your classification models.

Not ideal if you are looking for tools to build or train classification models rather than just evaluate them.

model-evaluation data-science machine-learning-performance classification-metrics
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25


Stars

13

Forks

4

Language

License

GPL-3.0

Category

mlr3-ecosystem

Last pushed

Jun 05, 2018

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/canagnos/mcp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
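The same endpoint can be called from Python instead of curl. A minimal sketch using the standard library, assuming the endpoint returns JSON; the actual request is left commented out so the snippet runs offline:

```python
# Sketch of querying the quality API shown above with urllib.
# URL is copied from this page; the JSON-parsing step assumes a JSON
# response, as is typical for such APIs.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/canagnos/mcp"
req = urllib.request.Request(url, headers={"Accept": "application/json"})

# Uncomment to actually fetch the data:
# with urllib.request.urlopen(req) as resp:
#     data = json.loads(resp.read().decode())
#     print(data)

print(req.full_url)
```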