snap-stanford/stark

(NeurIPS D&B 2024) STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases

Score: 56 / 100 (Established)

This project helps developers and researchers evaluate how well Large Language Models (LLMs) can retrieve information from knowledge bases that combine free text with structured, relational data. Given an LLM and a dataset of queries (such as product searches or scientific-paper lookups), it produces scores measuring the model's retrieval accuracy. It is aimed at AI researchers and developers who are building or improving LLM-powered information retrieval systems.


Use this if you are developing or benchmarking an LLM-based retrieval system and need a comprehensive way to test its ability to find relevant information from complex, semi-structured data sources.

Not ideal if you are an end-user simply looking to use an existing LLM for general information retrieval, rather than evaluating or developing its underlying capabilities.
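As a rough illustration of the benchmarking workflow, here is a minimal sketch assuming the stark_qa Python package and its load_qa / load_skb entry points described in the upstream README; the exact names, arguments, and return values are taken from that README and may differ in the version you install.

# Minimal sketch of loading a STaRK benchmark subset, assuming the
# stark_qa package from the upstream README (pip install stark-qa).
# Function names and the tuple layout below follow that README and
# should be treated as assumptions, not a guaranteed API.
from stark_qa import load_qa, load_skb

dataset = "amazon"             # other documented subsets: "mag", "prime"
qa_dataset = load_qa(dataset)  # retrieval queries with ground-truth answers
skb = load_skb(dataset)        # the semi-structured knowledge base to search

query, query_id, answer_ids, _ = qa_dataset[0]
print(query)
print(answer_ids)              # knowledge-base node IDs that answer the query

A retrieval system under evaluation would take each query, search the knowledge base, and be scored on how well its returned nodes match the ground-truth answer IDs.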

Tags: LLM development, information retrieval, knowledge base search, model benchmarking, AI research
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 330
Forks: 51
Language: Python
License: MIT
Last pushed: Feb 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/snap-stanford/stark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
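For scripted access, a minimal Python sketch is shown below. It calls the same endpoint as the curl command above on the unauthenticated tier; the JSON schema of the response is not documented here, so the payload is printed as-is rather than parsed into specific fields.

# Minimal sketch: fetch this project's quality data from the public API.
# Uses the endpoint listed above; the response field names are not
# documented here, so the raw JSON is printed instead of being parsed.
import json
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/snap-stanford/stark"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))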