soco-ai/SF-QA

Evaluation framework for open-domain question answering.

Score: 37 / 100 (Emerging)

Evaluating open-domain question answering (QA) systems typically involves significant setup for data indexing and pipeline construction. SF-QA simplifies that process so researchers can assess QA models quickly: you supply a question answering model, and the framework returns a benchmarked evaluation of how well it retrieves and answers questions from a large knowledge base such as Wikipedia. It is aimed at AI/NLP researchers developing new question answering systems.
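For reference, open-domain QA evaluations of this kind typically report SQuAD-style exact-match and token-level F1 over a model's predicted answers. The Python sketch below illustrates only that scoring idea; it is not SF-QA's own API, which is not documented on this page.

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop punctuation and English articles, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, answer):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(answer))

def token_f1(prediction, answer):
    # Harmonic mean of token-level precision and recall.
    pred, gold = normalize(prediction).split(), normalize(answer).split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))              # 1.0 after normalization
print(round(token_f1("Eiffel Tower in Paris", "Eiffel Tower"), 2))  # 0.67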

No commits in the last 6 months.

Use this if you are developing or comparing open-domain question answering models and need a quick, standardized way to evaluate their accuracy and efficiency without building a complex evaluation pipeline from scratch.

Not ideal if you are a business user looking for a pre-built Q&A system for customer support or internal knowledge management, as this is an evaluation tool for model development, not an end-user application.

natural-language-processing question-answering-evaluation AI-model-benchmarking information-retrieval NLP-research
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25

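The four pillar scores sum to the overall score: 0 + 6 + 16 + 15 = 37 / 100.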

Stars: 20
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: May 16, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/soco-ai/SF-QA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
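The same request from Python, as a sketch: it assumes the endpoint returns JSON (the response schema is not documented on this page) and simply pretty-prints whatever comes back.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/soco-ai/SF-QA"

# Fetch the quality record and pretty-print the JSON body.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))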