interpretml/gam-changer
Editing machine learning models to reflect human knowledge and values
This tool helps data scientists and domain experts collaboratively refine machine learning models to align with real-world knowledge and ethical considerations. You input an existing Generalized Additive Model (GAM) and sample data, then interactively adjust the model's behavior. The output is an edited GAM that better reflects human values and expertise, ready for deployment or further analysis.
128 stars. No commits in the last 6 months.
Use this if you need to fine-tune a machine learning model to incorporate expert knowledge or specific ethical rules, rather than relying solely on patterns found in data.
Not ideal if you are working with complex, non-interpretable models like deep neural networks or if your primary goal is to build a model from scratch without human intervention.
Stars: 128
Forks: 11
Language: JavaScript
License: MIT
Last pushed: Oct 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/interpretml/gam-changer"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
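The curl one-liner above can also be wrapped in a few lines of Python. This is a minimal sketch: the endpoint path is taken from the curl example, but the JSON response fields (e.g. "stars") are an assumption based on the stats listed on this page, not a documented schema.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def repo_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_repo_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body.

    No API key is sent, so this uses the anonymous 100 requests/day tier.
    """
    url = repo_quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example (field names are assumptions, check the actual response):
# stats = fetch_repo_quality("ml-frameworks", "interpretml", "gam-changer")
# print(stats.get("stars"))
```

Using `repo_quality_url` separately makes the URL construction testable without a network call.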
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...