snap-stanford/stark
(NeurIPS D&B 2024) STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases
This project helps developers and researchers evaluate how well Large Language Models (LLMs) retrieve information from knowledge bases that combine free text with structured, relational data. Given an LLM and a dataset of queries (such as product searches or scientific paper inquiries), it outputs a score measuring the LLM's retrieval accuracy. It is designed for AI researchers and developers building or improving LLM-powered information retrieval systems.
Use this if you are developing or benchmarking an LLM-based retrieval system and need a comprehensive way to test its ability to find relevant information from complex, semi-structured data sources.
Not ideal if you are an end-user simply looking to use an existing LLM for general information retrieval, rather than evaluating or developing its underlying capabilities.
Stars: 330
Forks: 51
Language: Python
License: MIT
Category:
Last pushed: Feb 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/snap-stanford/stark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
cefriel/competence-kg
A tutorial on Knowledge Graphs showing how to model employee competences within a company
TIGER-AI-Lab/KB-BINDER
"Few-shot In-context Learning for Knowledge Base Question Answering" [ACL2023]
HKUST-KnowComp/FolkScope
Codes and Datasets for the ACL2023 Findings Paper: FolkScope: Intention Knowledge Graph...
pat-jj/KARE
[ICLR'25] Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval
rsinghlab/K-Paths
Official Implementation of K-Paths: Reasoning over Graph Paths for Drug Repurposing and Drug...