AI4LIFE-GROUP/LLM_Explainer

Code for paper: Are Large Language Models Post Hoc Explainers?

Score: 35 / 100 (Emerging)

This project helps machine learning practitioners understand why their classification models make certain decisions. It takes a trained model and a dataset, then uses large language models (LLMs) to generate human-readable explanations for individual predictions. The goal is to evaluate whether LLMs can effectively explain complex model behavior, providing insights to data scientists and domain experts.

No commits in the last 6 months.

Use this if you are a machine learning researcher or data scientist investigating the interpretability of your classification models, especially when exploring how large language models can generate post-hoc explanations.

Not ideal if you need a user-friendly, out-of-the-box explainability tool for immediate deployment in a business application, as this project is research-focused and requires a technical understanding of ML pipelines and LLM prompting.

Tags: AI-explainability, model-interpretability, machine-learning-research, classification-auditing
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 34
Forks: 5
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AI4LIFE-GROUP/LLM_Explainer"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
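For scripted access, the curl call above can be reproduced in Python. This is a minimal sketch using only the standard library; the endpoint URL is taken from the curl example, but the shape of the JSON payload is an assumption (field names are not documented here), so the sketch simply returns the parsed response as a dict.

```python
# Hypothetical sketch: fetching the quality data from the public API.
# The URL comes from the curl example above; the response is assumed to be JSON.
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/AI4LIFE-GROUP/LLM_Explainer"

def fetch_quality(url: str = API_URL, timeout: float = 10.0) -> dict:
    """Return the parsed JSON payload, or an empty dict on network failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except OSError:
        # Covers DNS errors, timeouts, and HTTP errors (URLError/HTTPError).
        return {}

if __name__ == "__main__":
    data = fetch_quality()
    print(data if data else "request failed")
```

Note the free tier's 100 requests/day limit: cache the response locally rather than calling the endpoint in a loop.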