target-benchmark/target

TARGET is a benchmark for evaluating Table Retrieval for Generative Tasks such as Fact Verification and Text-to-SQL

Overall score: 42 / 100 (Emerging)

This project helps evaluate how well different table retrieval systems perform when asked to find relevant tables for answering questions or generating code. You input a question and a collection of tables, and it outputs scores indicating how accurately the system retrieved the correct tables and generated the right answer. This is useful for researchers and data scientists working on advanced AI systems that interact with tabular data.

No commits in the last 6 months.

Use this if you are developing or comparing different methods for finding specific tables within large datasets to answer natural language questions or convert text to SQL.

Not ideal if you are looking for a tool to directly perform table retrieval or answer generation in a production environment, as this is a benchmark for evaluating such systems.
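As a rough illustration of what a table-retrieval benchmark measures, the sketch below computes recall@k for a toy retriever over a small table corpus. The function, data, and identifier names are hypothetical and are not taken from the target-benchmark codebase; this is only a minimal example of the kind of retrieval-accuracy scoring described above.

```python
# Illustrative sketch only: hypothetical names, not the target-benchmark API.
# Measures how often the correct table appears in a retriever's top-k results.
from typing import Callable, Dict, List


def recall_at_k(
    queries: Dict[str, str],            # query_id -> natural language question
    gold_tables: Dict[str, str],        # query_id -> id of the table that answers it
    retrieve: Callable[[str, int], List[str]],  # returns top-k table ids for a question
    k: int = 5,
) -> float:
    """Fraction of queries whose gold table appears in the top-k retrieved tables."""
    hits = 0
    for qid, question in queries.items():
        top_k = retrieve(question, k)
        if gold_tables[qid] in top_k:
            hits += 1
    return hits / len(queries) if queries else 0.0


if __name__ == "__main__":
    # Toy data: two questions, each answered by one known table.
    queries = {
        "q1": "Which country won the 2018 World Cup?",
        "q2": "What was the average GDP by region in 2020?",
    }
    gold = {"q1": "world_cup_results", "q2": "gdp_by_region"}

    def dummy_retriever(question: str, k: int) -> List[str]:
        # Stand-in retriever that always returns the same ranked list of tables.
        return ["world_cup_results", "gdp_by_region"][:k]

    print(f"recall@5 = {recall_at_k(queries, gold, dummy_retriever, k=5):.2f}")
```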

table-retrieval natural-language-processing text-to-sql fact-verification generative-ai
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 28
Forks: 11
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jul 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/target-benchmark/target"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
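If you prefer Python over curl, a minimal equivalent using the requests library is sketched below. The endpoint URL is the one shown above; the response schema is not documented here, so the example simply pretty-prints whatever JSON the API returns.

```python
# Minimal Python equivalent of the curl command above.
# The response field names are not documented here, so we just pretty-print the payload.
import json

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/target-benchmark/target"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```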