poloclub/LLM-Attributor
LLM Attributor: Attribute LLM's Generated Text to Training Data
When your large language model generates text, this tool helps you understand which parts of its training data were most influential for specific phrases. You input your LLM's generated text and its training dataset, and it outputs an interactive visualization showing the links between generated phrases and their source data. This is for AI researchers and practitioners who fine-tune and evaluate LLMs.
No commits in the last 6 months.
Use this if you need to debug or explain why your LLM produced a particular output by tracing it back to its training data.
Not ideal if you are looking for a tool to attribute generated text to external sources or evaluate factual accuracy against real-world knowledge.
Stars: 76
Forks: 10
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/poloclub/LLM-Attributor"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
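The same endpoint can be queried from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the response schema and the authorization header name for keyed access are assumptions, not documented here.

```python
# Sketch: fetch a repository's quality data from the pt-edge API.
# Only the URL pattern is taken from the listing; the API-key header
# name ("Authorization: Bearer ...") is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch quality metrics; pass an API key for the higher rate limit."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Header name is an assumption; check the service docs.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


print(quality_url("poloclub", "LLM-Attributor"))
```

Without a key, keep usage under the 100 requests/day limit shown above.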
Higher-rated alternatives
MadryLab/context-cite
Attribute (or cite) statements generated by LLMs back to in-context information.
microsoft/augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
Trustworthy-ML-Lab/CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with...
THUDM/LongCite
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
UKPLab/5pils
Code associated with the EMNLP 2024 Main paper: "Image, tell me your story!" Predicting the...