carloscinelli/dml.sensemakr
Sensitivity analysis tools for causal ML
When analyzing the impact of a program or policy with Debiased Machine Learning (DML), it is crucial to know whether your findings are robust to unobserved confounders. This tool takes your DML results and computes 'robustness values': measures of how strong an unmeasured confounder would need to be to overturn your conclusions. These values help economists, social scientists, and policy analysts interpret causal effects with confidence.
Use this if you need to assess how sensitive your causal effect estimates from Debiased Machine Learning are to potential unobserved variables.
Not ideal if your causal analysis does not involve Debiased Machine Learning or if you are looking for an initial causal inference method rather than a sensitivity analysis tool.
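To make the idea of a robustness value concrete, here is a minimal Python sketch of the formula from Cinelli and Hazlett's sensitivity-analysis framework, which tools in this family build on. The function name and interface are illustrative only, not the package's actual API (the repository itself is in R).

```python
import math

def robustness_value(t_stat, dof, q=1.0):
    """Robustness value RV_q (Cinelli & Hazlett, 2020).

    RV_q is the share of residual variance (of both treatment and
    outcome) that an unobserved confounder would need to explain to
    reduce the estimate by a fraction q (q=1 means explain it away
    entirely). Inputs are the t-statistic of the estimate and its
    degrees of freedom.
    """
    # Partial Cohen's f of the treatment, scaled by q.
    f = q * abs(t_stat) / math.sqrt(dof)
    # Closed-form robustness value.
    return 0.5 * (math.sqrt(f**4 + 4 * f**2) - f**2)

# Example: t = 4 with 100 degrees of freedom gives RV ≈ 0.33,
# i.e. a confounder explaining about 33% of the residual variance
# of both treatment and outcome would bring the estimate to zero.
rv = robustness_value(4.0, 100)
print(round(rv, 4))
```

A small RV (say, below 0.05) means even a weak confounder could overturn the result; an RV near 1 means only an implausibly strong confounder could.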
Stars
20
Forks
5
Language
R
License
—
Category
—
Last pushed
Feb 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/carloscinelli/dml.sensemakr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
py-why/dowhy
DoWhy is a Python library for causal inference that supports explicit modeling and testing of...
py-why/EconML
ALICE (Automated Learning and Intelligence for Causation and Economics) is a Microsoft Research...
uber/causalml
Uplift modeling and causal inference with machine learning algorithms
cdt15/lingam
Python package for causal discovery based on LiNGAM.
andrewtavis/causeinfer
Machine learning based causal inference/uplift in Python