parameterlab/apricot

Source code for "Calibrating Large Language Models Using Their Generations Only" (ACL 2024)

Quality score: 33 / 100 (Emerging)

APRICOT helps AI researchers and developers assess how confident a large language model (LLM) is in its answers, even when they only have access to the text it generates. You provide the model's input and output text, and APRICOT estimates whether the answer is likely to be incorrect. This is useful for anyone integrating LLMs into applications where accuracy and trust are critical.

No commits in the last 6 months.

Use this if you need to evaluate the trustworthiness of an LLM's responses without needing to access its internal workings or modify its generation process.

Not ideal if you are a general LLM user simply looking to chat with an AI, as this is a technical tool for evaluating model performance.

Tags: AI evaluation, LLM reliability, Natural Language Processing, model calibration, AI research
Flags: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 22
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/parameterlab/apricot"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
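The same request can be made from Python instead of curl. This is a minimal sketch using only the standard library; the field names in `summarize` (`score`, `stars`, `last_pushed`) are assumptions about the response shape, not a documented schema, so check the actual JSON the endpoint returns before relying on them.

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/parameterlab/apricot"

def fetch_quality(url=API_URL, timeout=10):
    """Fetch the quality report for a repository (requires network access)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def summarize(report):
    """Pick out a few fields of interest.

    NOTE: these keys are hypothetical; the real API response may
    use different names.
    """
    return {
        "score": report.get("score"),
        "stars": report.get("stars"),
        "last_pushed": report.get("last_pushed"),
    }

# Offline illustration with a made-up payload mirroring the numbers on this page:
sample = {"score": 33, "stars": 22, "last_pushed": "2024-11-20"}
print(summarize(sample))
```

Unauthenticated calls are limited to 100 requests/day, so cache the JSON locally if you poll multiple repositories.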