bigai-nlco/LooGLE
ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models
This benchmark assesses how well large language models (LLMs) understand extremely long documents, some over 100,000 words. It evaluates an LLM's responses to questions about these documents and produces detailed evaluations. It is useful for anyone who builds, integrates, or uses LLMs and needs to verify performance on complex, lengthy texts: researchers, AI engineers, and product managers.
195 stars. No commits in the last 6 months.
Use this if you need a comprehensive, systematic way to evaluate the long-context comprehension and reasoning abilities of various large language models using realistic, extensive documents and diverse question types.
Not ideal if you are looking to evaluate LLMs on short, simple texts or if you primarily need a tool for fine-tuning models rather than evaluating their inherent long-context understanding.
Stars: 195
Forks: 6
Language: Python
License: MIT
Category: LLM tools
Last pushed: Oct 08, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bigai-nlco/LooGLE"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
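For scripted access, here is a minimal Python sketch of the same request. It assumes only that the endpoint returns a JSON body; the field names in the response are not documented here, so it simply prints whatever comes back.

import json
import urllib.request

# Endpoint from the curl example above; anonymous access allows 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bigai-nlco/LooGLE"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes the response is JSON

# Pretty-print the returned fields (specific names such as "stars" are not guaranteed).
print(json.dumps(data, indent=2))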
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct