viadee/javaAnchorExplainer
Explains machine learning models fast using the Anchors algorithm, originally proposed by marcotcr (Marco Tulio Ribeiro) in 2018
This project helps data scientists and machine learning engineers understand why a 'black box' machine learning model made a specific prediction. Given a data instance and the model's prediction, it outputs a simple, human-readable rule (an 'anchor') explaining why the model predicted as it did for that instance. This is especially useful for models deployed in Java environments or those that need to integrate with Java-based systems.
Use this if you need to explain individual predictions from any machine learning model, regardless of its internal complexity, especially when working within a Java ecosystem.
Not ideal if you are looking for a Python-native solution or need to explain the model's overall behavior rather than specific predictions.
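To make the anchor idea concrete, here is a minimal, self-contained Java sketch (it does not use javaAnchorExplainer's actual API, whose class names are not shown on this page): an anchor is a rule that, whenever its conditions hold on perturbed samples, (almost) always yields the same model prediction. The toy model, feature ranges, and candidate rule below are all illustrative assumptions.

```java
import java.util.Random;
import java.util.function.Predicate;

public class AnchorSketch {
    // Hypothetical "black box" model: approve if income > 50 or age > 40.
    static boolean model(int age, int income) {
        return income > 50 || age > 40;
    }

    // Precision of a candidate rule: among random instances that satisfy
    // the rule, the fraction for which the model output matches the target.
    static double precision(Predicate<int[]> rule, boolean target, long seed) {
        Random rnd = new Random(seed);
        int hits = 0, total = 0;
        for (int i = 0; i < 10_000; i++) {
            int age = rnd.nextInt(80);      // sample a perturbed instance
            int income = rnd.nextInt(120);
            if (rule.test(new int[]{age, income})) {
                total++;
                if (model(age, income) == target) hits++;
            }
        }
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        // Candidate anchor for a positive prediction: "income > 50".
        // For this toy model the rule alone guarantees approval,
        // so its precision is 1.0.
        double p = precision(x -> x[1] > 50, true, 42);
        System.out.println("precision = " + p);
    }
}
```

An anchor search (as in the original algorithm) would grow such rules predicate by predicate until the precision estimate exceeds a chosen threshold.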
Stars: 15
Forks: 3
Language: Java
License: BSD-3-Clause
Last pushed: Dec 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/viadee/javaAnchorExplainer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...