parameterlab/apricot
Source code for "Calibrating Large Language Models Using Their Generations Only" (ACL 2024)
APRICOT helps AI researchers and developers assess how confident a large language model (LLM) is in its answers, even when only the generated text is available. You provide the model's input and its generated output, and APRICOT estimates how likely the answer is to be incorrect. This is useful for anyone integrating LLMs into applications where accuracy and trust are critical.
No commits in the last 6 months.
Use this if you need to evaluate the trustworthiness of an LLM's responses without access to its internal workings and without modifying its generation process.
Not ideal if you are a general LLM user simply looking to chat with an AI, as this is a technical tool for evaluating model performance.
Stars: 22
Forks: 3
Language: Jupyter Notebook
License: MIT
Category
Last pushed: Nov 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/parameterlab/apricot"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
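The same request can be made from Python. This is a minimal sketch using only the standard library; the JSON schema of the response is not documented on this page, so the helper returns the raw body and leaves parsing to the caller:

```python
import urllib.request

# Public endpoint from the curl example above; the free tier
# allows 100 requests/day without an API key.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/parameterlab/apricot"

def fetch_quality_data(url: str = URL) -> str:
    """Fetch the repo quality data and return the raw response body.

    The response schema is undocumented here, so no parsing is done.
    """
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```

With an API key (1,000 requests/day), you would presumably pass it in a header or query parameter; the page does not specify which, so check the service's documentation before relying on either.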
Higher-rated alternatives
MadryLab/context-cite
Attribute (or cite) statements generated by LLMs back to in-context information.
microsoft/augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
Trustworthy-ML-Lab/CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with...
poloclub/LLM-Attributor
LLM Attributor: Attribute LLM's Generated Text to Training Data
THUDM/LongCite
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA