AmirhosseinHonardoust/KPI-Trap-Lab

A hands-on lab showing how “improving” a single metric (AUC/accuracy/F1) can worsen real-world outcomes. Includes metric audits, slice checks, cost-sensitive evaluation, threshold tuning, and decision policies you can defend, so dashboards don’t quietly ship bad decisions.

Score: 26 / 100 (Experimental)

This lab helps data science and analytics professionals avoid the 'KPI trap': the situation where a model's performance on a single metric (such as AUC or accuracy) improves while real-world business outcomes worsen. It provides practical methods for auditing metrics, checking performance on specific data slices, and incorporating business costs into the evaluation. The output is a robust evaluation framework and defensible decision policies for your models, preventing unexpected issues in production.
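The repo's own code isn't reproduced here, but the idea behind cost-sensitive evaluation and threshold tuning can be sketched in a few lines. Everything below is illustrative: the cost ratio, labels, and scores are assumptions, not taken from the lab.

```python
# Hedged sketch: pick a decision threshold by expected business cost,
# not by accuracy or AUC. Assumed costs: a false negative is 5x a false positive.
COST_FP, COST_FN = 1.0, 5.0

def expected_cost(y_true, scores, threshold):
    """Total cost of deciding 'positive' whenever score >= threshold."""
    fp = sum(1 for y, s in zip(y_true, scores) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(y_true, scores) if s < threshold and y == 1)
    return COST_FP * fp + COST_FN * fn

def best_threshold(y_true, scores):
    """Evaluate each observed score as a candidate cutoff; keep the cheapest."""
    candidates = sorted(set(scores))
    return min(candidates, key=lambda t: expected_cost(y_true, scores, t))

# Toy data (illustrative only).
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.6]
t = best_threshold(y_true, scores)
```

With an asymmetric cost like this, the cost-minimizing cutoff often sits well below the default 0.5, which is exactly the kind of gap between "the metric" and "the decision" the lab is about.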

Use this if you are a data scientist, analyst, or machine learning engineer responsible for deploying models and need to ensure that metric improvements translate to positive real-world impact and prevent hidden failures.

Not ideal if you are solely focused on early-stage model development and research metrics without considering the model's operational performance and business implications.
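The slice checks mentioned above can also be sketched briefly. This is not the lab's implementation; the groups "A" and "B" and all rows are made up to show how a healthy aggregate can hide a failing segment.

```python
# Hedged sketch of a slice check: overall accuracy vs. per-slice accuracy.
def accuracy(pairs):
    """Fraction of (y_true, y_pred) pairs that match."""
    return sum(1 for y, p in pairs if y == p) / len(pairs)

# (group, y_true, y_pred) rows; groups are illustrative, not from the repo.
rows = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]

overall = accuracy([(y, p) for _, y, p in rows])
by_slice = {
    g: accuracy([(y, p) for g2, y, p in rows if g2 == g])
    for g in {g for g, _, _ in rows}
}
# Slice "A" is perfect while slice "B" fails badly; the dashboard
# number (overall) averages the failure away.
```

This is why a single headline metric can improve while a critical segment quietly degrades.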

model-evaluation data-science-governance ml-operations risk-management decision-intelligence
No package · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 0 / 25


Stars: 10
Forks:
Language:
License: MIT
Last pushed: Feb 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/AmirhosseinHonardoust/KPI-Trap-Lab"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.