ai8hyf/llm_split_recall_test

Split and Recall: A simple and efficient benchmark to evaluate in-context recall performance of Large Language Models (LLMs)

Overall score: 35 / 100 (Emerging)

This project helps evaluate how well large language models (LLMs) can find and recall specific sentences from a given text, especially within longer documents. It takes a document (like a research paper) and outputs a performance score indicating the model's accuracy in identifying and reproducing sentences. This is useful for AI developers, researchers, or data scientists working on or comparing different LLMs.
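For intuition, here is a minimal Python sketch of the split-and-recall idea. The sentence splitter, prompt wording, and exact-match scoring below are illustrative assumptions, not the repository's exact implementation.

import re

def split_sentences(text):
    # Naive splitter on sentence-ending punctuation; the repo may use a different one.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def recall_score(document, indices, ask_llm):
    # ask_llm(prompt) -> str is any chat-completion wrapper you supply (hypothetical).
    sentences = split_sentences(document)
    hits = 0
    for i in indices:
        prompt = (
            "Here is a document:\n" + document + "\n\n"
            "Reproduce sentence number " + str(i + 1) + " exactly, and nothing else."
        )
        if ask_llm(prompt).strip() == sentences[i]:
            hits += 1
    return hits / len(indices)

Under this sketch, a score of 1.0 means the model reproduced every requested sentence verbatim.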

No commits in the last 6 months.

Use this if you need to benchmark and compare the 'in-context recall' ability of various large language models, particularly their precision in extracting specific sentences from a paragraph or a longer document.

Not ideal if you need a benchmark for aspects of LLM performance other than sentence-level recall, or if your evaluation data is significantly different from academic paper abstracts.

Tags: LLM evaluation, NLP benchmarking, AI model comparison, Text extraction, Language model development
Status: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 14 / 25

The overall score is the sum of the four categories: 0 + 5 + 16 + 14 = 35.


Stars: 9
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ai8hyf/llm_split_recall_test"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
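A hypothetical Python equivalent of the curl call above, assuming the endpoint returns JSON:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/ai8hyf/llm_split_recall_test")
resp = requests.get(url, timeout=30)  # unauthenticated: 100 requests/day
resp.raise_for_status()
print(resp.json())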