salesforce/OmniXAI
OmniXAI: A Library for eXplainable AI
OmniXAI helps data scientists, machine learning researchers, and practitioners understand why their AI models make specific predictions. It accepts various data types, such as customer transaction records, images, or text, along with a trained machine learning model, and produces explanations of how the model arrived at its decisions. It is aimed at anyone who needs to trust and validate the output of their AI models.
963 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to explain the decisions of your machine learning models, whether they process tabular data, images, text, or time-series information, to satisfy regulatory requirements, build trust, or debug model behavior.
Not ideal if you only need a simple API for basic model predictions and don't require insight into how those predictions are made.
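OmniXAI bundles model-agnostic explainers (LIME- and SHAP-style methods, among others). The perturbation idea behind such explainers can be illustrated with a toy, library-free sketch; the linear "model", feature names, and baseline here are hypothetical, and this is not OmniXAI's API:

```python
# Toy illustration of perturbation-based feature attribution, the idea
# behind model-agnostic explainers such as those OmniXAI wraps.
# The linear "model" and feature names are made up for illustration.

def model(features):
    # Hypothetical credit-scoring model: a fixed linear score.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def attribute(features, baseline=0.0):
    """Score change when each feature is replaced by a baseline value."""
    full_score = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        attributions[name] = full_score - model(perturbed)
    return attributions

instance = {"income": 4.0, "debt": 2.0, "age": 3.0}
scores = attribute(instance)
# "debt" gets a negative attribution: it pushed the score down.
```

Real explainers sample many perturbations and fit a local surrogate rather than zeroing one feature at a time, but the output has the same shape: a per-feature contribution to a single prediction.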
Stars: 963
Forks: 106
Language: Jupyter Notebook
License: BSD-3-Clause
Last pushed: Jul 23, 2024
Commits (30d): 0
Dependencies: 19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/salesforce/OmniXAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...