wschella/llm-reliability

Code for the paper "Larger and more instructable language models become less reliable"

Score: 29 / 100 (Experimental)

This project offers tools to evaluate how consistently Large Language Models (LLMs) respond to instructions, especially as they grow larger and more instructable. It takes benchmark datasets and LLM outputs as input and produces graded results showing how reliable the models are on tasks such as addition, anagrams, and locality (geography) questions. LLM developers, researchers, and product managers can use this to assess and improve the dependability of their models.
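As a rough illustration of that grading flow, here is a minimal sketch in Python. The three-way grading (correct / avoidant / incorrect) mirrors the paper's scheme, but the function, matching rules, and file layout are hypothetical stand-ins, not this repo's actual API:

import json

def grade(response: str, gold: str) -> str:
    """Grade one model response against the gold answer.

    Simplified stand-in rules: empty or hedging responses count as
    avoidant; otherwise check whether the gold answer appears.
    """
    text = response.strip().lower()
    if not text or "i don't know" in text or "cannot answer" in text:
        return "avoidant"
    return "correct" if gold.strip().lower() in text else "incorrect"

# Hypothetical input file: one JSON object per line with
# "output" (the model's response) and "gold" (the reference answer).
with open("model_outputs.jsonl") as f:
    records = [json.loads(line) for line in f]

grades = [grade(r["output"], r["gold"]) for r in records]
for label in ("correct", "avoidant", "incorrect"):
    print(label, grades.count(label))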

No commits in the last 6 months.

Use this if you need to rigorously test and understand the reliability of large language models across different tasks and identify persistent issues like prompt sensitivity.

Not ideal if you are looking for an off-the-shelf solution for fine-tuning or deploying LLMs, as this is primarily an evaluation and research toolkit.

Tags: LLM evaluation, AI reliability, model benchmarking, natural language processing, research, AI quality assurance
Status: Stale (6 months), no published package, no dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 31
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/wschella/llm-reliability"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
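If you prefer Python over curl, the sketch below makes the same request. The response field names are assumptions based on the card shown above, not a documented schema; inspect the returned data to see the actual keys:

import requests  # third-party: pip install requests

# Same endpoint as the curl example above; no API key needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/wschella/llm-reliability"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# "score", "stars", and "last_pushed" are assumed field names.
print(data.get("score"), data.get("stars"), data.get("last_pushed"))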