filipnaudot/llmSHAP
llmSHAP: a multi-threaded explainability framework using Shapley values for LLM-based outputs.
This tool helps AI practitioners understand why a model generated a specific output, whether the input is text or multimodal (text and images). You supply the model's prompt and output, and llmSHAP shows which parts of the input contributed most to that particular output. This is useful for data scientists, machine learning engineers, and AI product managers who need to debug, validate, or explain model behavior.
Available on PyPI.
Use this if you need to explain the reasoning behind a large language model's output by identifying the most influential words, sentences, or even images in the input.
Not ideal if you are looking for a tool to train or fine-tune models, or if you only need basic performance metrics.
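To make the attribution idea concrete: a Shapley value assigns each input part its average marginal contribution to the output across all subsets of the other parts. Below is a minimal, self-contained Python sketch of exact Shapley computation over prompt segments with a toy scoring function. It illustrates the underlying math only; the function and parameter names are illustrative and do not reflect llmSHAP's actual API, and llmSHAP itself may use approximation strategies rather than this exponential-time enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(parts, value_fn):
    """Exact Shapley values for a list of input parts.

    value_fn(subset_of_indices) -> float scores how well the model's
    output is reproduced when only those parts of the input are kept.
    Exact enumeration is exponential in len(parts), so real tools
    typically sample or approximate instead.
    """
    n = len(parts)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of part i to this coalition.
                marginal = value_fn(set(subset) | {i}) - value_fn(set(subset))
                phi[i] += weight * marginal
    return phi

# Toy value function: only parts 0 and 2 matter (stand-ins for the
# prompt segments that actually drive the model's answer).
parts = ["Explain", "the", "Shapley value", "please"]
toy_value = lambda s: (0 in s) + (2 in s)
print(shapley_values(parts, toy_value))  # parts 0 and 2 each get ~1.0
```

By the efficiency property of Shapley values, the attributions sum to the score of the full input, which makes them a principled way to split "credit" for an LLM's output across words, sentences, or images.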
Stars: 16
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/filipnaudot/llmSHAP"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related tools
microsoft/automated-brain-explanations
Generating and validating natural-language explanations for the brain.
CAS-SIAT-XinHai/CPsyCoun
[ACL 2024] CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework...
wesg52/universal-neurons
Universal Neurons in GPT2 Language Models
ICTMCG/LLM-for-misinformation-research
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.
marcusm117/IdentityChain
[ICLR 2024] Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain