MadryLab/context-cite
Attribute (or cite) statements generated by LLMs back to in-context information.
This tool shows where an LLM's response comes from when it is grounded in provided documents: you supply one or more documents and a question, and it identifies which sentences or phrases in those documents support each part of the model's answer. It is useful for researchers, analysts, or anyone who relies on AI-generated answers and needs to verify their claims.
325 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to ensure the factual accuracy of AI-generated text by tracing every statement back to its original source in your reference materials.
Not ideal if you are looking for a tool to generate creative content without needing explicit source verification, or if your AI is not grounded in specific documents.
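The core idea can be illustrated with attribution-by-ablation: remove each context sentence in turn and measure how much the answer's score drops. The actual package fits a linear surrogate over many random context ablations; the scoring function below is a hypothetical stand-in for a model's log-probability, so this is a toy sketch of the idea, not the library's implementation.

```python
# Toy leave-one-out attribution: how much does removing each context
# sentence reduce a (stand-in) "model score" for the answer?
def overlap_score(context_sentences, answer):
    """Hypothetical stand-in for a model's log-probability of the answer:
    fraction of answer words that appear in the remaining context."""
    words = set(answer.lower().split())
    ctx = set(" ".join(context_sentences).lower().split())
    return len(words & ctx) / max(len(words), 1)

def leave_one_out_attributions(sentences, answer):
    base = overlap_score(sentences, answer)
    # A sentence's attribution is the score drop when it is ablated.
    return {
        s: base - overlap_score([t for t in sentences if t != s], answer)
        for s in sentences
    }

sents = ["The Eiffel Tower is in Paris.", "It was completed in 1889."]
attrib = leave_one_out_attributions(sents, "The tower was completed in 1889.")
# The second sentence, which supports the answer's claim, scores highest.
```

Real attribution tools replace the word-overlap heuristic with the generating model's own probabilities, but the ablate-and-compare structure is the same.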
Stars
325
Forks
25
Language
Jupyter Notebook
License
MIT
Last pushed
Oct 08, 2024
Commits (30d)
0
Dependencies
9
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MadryLab/context-cite"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
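URLs for other repositories can be assembled the same way. A minimal sketch, assuming the path scheme generalizes from the single curl example above (the `transformers` segment is taken to be a fixed prefix, and the response format is not documented here):

```python
# Build the quality-API URL for a given GitHub owner/repo pair.
# Path scheme inferred from the curl example above; this is an
# assumption, not documented API behavior.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for one repository, percent-encoding each segment."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("MadryLab", "context-cite"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/MadryLab/context-cite
```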
Related models
microsoft/augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
Trustworthy-ML-Lab/CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with...
poloclub/LLM-Attributor
LLM Attributor: Attribute LLM's Generated Text to Training Data
THUDM/LongCite
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
UKPLab/5pils
Code associated with the EMNLP 2024 Main paper: "Image, tell me your story!" Predicting the...