filipnaudot/llmSHAP

llmSHAP: a multi-threaded explainability framework using Shapley values for LLM-based outputs.

Quality score: 49 / 100 (Emerging)

This tool helps AI practitioners understand why an AI model generated a specific output, whether the input is text or multimodal (text and images). You provide your model's prompt and output, and it shows which parts of the input contributed most to that particular output. This is useful for data scientists, machine learning engineers, and AI product managers who need to debug, validate, or explain model behavior.

Available on PyPI.

Use this if you need to explain the reasoning behind a large language model's output by identifying the most influential words, sentences, or even images in the input.

Not ideal if you are looking for a tool to train or fine-tune AI models, or if you only need basic performance metrics for your AI.

Tags: AI-explainability, large-language-models, model-debugging, AI-auditing, multimodal-AI
Maintenance 10 / 25
Adoption 6 / 25
Maturity 24 / 25
Community 9 / 25


Stars: 16
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 1

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/filipnaudot/llmSHAP"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
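The same endpoint can also be called from Python. A minimal sketch, assuming the endpoint returns a JSON body (the helper names here are illustrative, and the response fields are not documented above):

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given GitHub repo."""
    return f"{BASE_URL}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality data (assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_quality("filipnaudot", "llmSHAP")
```

Building the URL and fetching are split so the URL helper can be reused with any HTTP client, and so callers can add their own timeout or API-key handling around the request.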