Human-Centric-Machine-Learning/counterfactual-llms
Code for "Counterfactual Token Generation in Large Language Models", arXiv, 2024.
This project helps you understand how a large language model's output would change if it had made a different choice earlier in its generation process. Given a story or other text generated by an LLM, it produces alternative versions showing what would have happened if specific early tokens had been different. This is useful for researchers and analysts who want to examine the underlying reasoning and potential biases in LLM-generated content.
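The core idea can be illustrated with the Gumbel-Max trick: if each token is sampled as the argmax of logits plus fixed Gumbel noise, generation becomes a deterministic function of the noise, so you can intervene on an early token, reuse the same noise, and read off a counterfactual continuation. Below is a minimal toy sketch of that mechanism; the toy "model", vocabulary size, and seeds are illustrative assumptions, not code from this repository:

```python
import numpy as np

def logits(prev_token, vocab=5):
    # Toy "language model": logits are a deterministic function of the
    # previous token (stand-in for a real LLM's next-token distribution).
    rng = np.random.default_rng(prev_token)
    return rng.normal(size=vocab)

def generate(first_token, noise):
    # Gumbel-Max sampling: next token = argmax(logits + Gumbel noise).
    # With the noise held fixed, the whole sequence is a deterministic
    # function of the intervention on the first token.
    seq = [first_token]
    for g in noise:
        seq.append(int(np.argmax(logits(seq[-1]) + g)))
    return seq

rng = np.random.default_rng(0)
noise = rng.gumbel(size=(8, 5))      # shared noise for both worlds
factual = generate(0, noise)         # what the model actually "said"
counterfactual = generate(3, noise)  # what if the first token had been 3?
```

Because the noise is shared, any divergence between `factual` and `counterfactual` is attributable solely to the intervened token, which is what makes this useful for probing causal dependencies and biases.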
No commits in the last 6 months.
Use this if you need to explore 'what-if' scenarios in text generation to understand an LLM's causal dependencies or to identify biases by seeing how outputs shift with small initial changes.
Not ideal if you simply want to improve the quality or factual accuracy of LLM outputs; its primary purpose is analytical exploration.
Stars
32
Forks
3
Language
Jupyter Notebook
License
—
Category
Last pushed
Nov 07, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Human-Centric-Machine-Learning/counterfactual-llms"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
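The same data can be fetched from Python. The endpoint pattern below is taken from the curl example above; the response's JSON schema is not documented on this page, so it is returned as-is rather than parsed into fields:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Path segments follow the curl example shown above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    # Unauthenticated access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("transformers", "Human-Centric-Machine-Learning",
                  "counterfactual-llms")
```

Calling `fetch_quality(...)` performs a live HTTP request; inspect the returned dictionary to see which metrics the API exposes.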
Higher-rated alternatives
MadryLab/context-cite
Attribute (or cite) statements generated by LLMs back to in-context information.
microsoft/augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
Trustworthy-ML-Lab/CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with...
poloclub/LLM-Attributor
LLM Attributor: Attribute LLM's Generated Text to Training Data
THUDM/LongCite
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA