Wang-ML-Lab/interpretable-foundation-models
[ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
This project helps AI researchers and practitioners understand why a vision model makes certain predictions. It takes a trained vision transformer and a dataset of images, then outputs automatically discovered visual concepts (such as 'petals' or 'sky') that explain the model's behavior at the dataset, image, and even individual image-patch level. It is aimed at AI developers, machine learning engineers, and researchers who need to debug or validate their vision models.
No commits in the last 6 months.
Use this if you need to gain trustworthy insights into how your vision transformer models 'think' by identifying the underlying visual concepts driving their decisions.
Not ideal if you are looking to interpret traditional machine learning models or want explanations for language-based AI models, as this is specifically for vision transformers.
Stars: 18
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Sep 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Wang-ML-Lab/interpretable-foundation-models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
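The same data can also be fetched from Python. Below is a minimal sketch using the requests library; it assumes the endpoint returns JSON, since the response schema is not documented on this page.

import requests

# Endpoint shown in the curl example above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
    "Wang-ML-Lab/interpretable-foundation-models"
)

response = requests.get(url, timeout=10)
response.raise_for_status()  # surface 4xx/5xx errors, e.g. if the daily rate limit is exceeded

# Assumption: the response body is JSON; its fields are not documented here, so just print it.
data = response.json()
print(data)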
Higher-rated alternatives
filipnaudot/llmSHAP
llmSHAP: a multi-threaded explainability framework using Shapley values for LLM-based outputs.
microsoft/automated-brain-explanations
Generating and validating natural-language explanations for the brain.
CAS-SIAT-XinHai/CPsyCoun
[ACL 2024] CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework...
wesg52/universal-neurons
Universal Neurons in GPT2 Language Models
ICTMCG/LLM-for-misinformation-research
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.