nikivanstein/GSAreport
Global Sensitivity reporting for Explainable AI
This tool helps scientists, engineers, and researchers understand how different input parameters affect the outcomes of their simulations, models, or real-world processes. You provide data on your model's inputs and outputs, and it generates a visual report showing which inputs matter most and how they interact. It is aimed at anyone who works with complex models and needs to explain how those models behave.
No commits in the last 6 months.
Use this if you need to understand which variables or features are most influential in your scientific models, engineering simulations, or machine learning predictions, and want a clear, visual report without extensive coding.
Not ideal if you only need a simple correlation analysis or already have highly specialized tools for sensitivity analysis within your specific domain.
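For a sense of what a global sensitivity analysis computes, here is a minimal, self-contained sketch using Sobol indices from SALib, a common Python library for this kind of analysis. The problem definition and toy model below are illustrative assumptions, not GSAreport's own API; GSAreport wraps analyses like this into a single visual report.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy problem: 3 inputs, each sampled uniformly on [-1, 1].
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-1.0, 1.0]] * 3,
}

# Saltelli sampling generates N * (2 * num_vars + 2) input rows.
X = saltelli.sample(problem, 1024)

# Stand-in for your simulation or model: x1 dominates, x3 barely matters.
Y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * X[:, 2]

# First-order (S1) and total (ST) Sobol indices per input.
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.3f}  ST={st:.3f}")
```

The printed indices quantify each input's share of the output variance; a report-style tool turns numbers like these into plots and rankings.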
Stars: 14
Forks: —
Language: Python
License: MIT
Last pushed: Nov 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nikivanstein/GSAreport"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
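The same endpoint can be called from Python. A small sketch, assuming the response body is JSON; the API-key header name below is a guess for illustration, since it is not documented here:

```python
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nikivanstein/GSAreport"

headers = {}
# headers["X-API-Key"] = "your-key-here"  # hypothetical header name; check the API docs

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumes a JSON body; field names are not documented here
print(data)
```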
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...