gkiril/benchie

Comprehensive evaluation framework for Open Information Extraction.

Score: 40 / 100 (Emerging)

This tool helps researchers and developers evaluate the performance of Open Information Extraction (OIE) systems. It takes as input manually annotated 'gold standard' extractions and the extractions produced by the OIE systems under test, and outputs precision, recall, and F1 scores so users can compare how well different systems extract factual information from text. It is designed for natural language processing (NLP) researchers and engineers developing or using OIE technologies.
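
The precision/recall/F1 comparison described above can be illustrated with a minimal sketch. This is a simplified exact-match comparison of extraction sets, not the tool's actual matching strategy, which may be considerably more nuanced:

```python
def precision_recall_f1(gold: set, predicted: set):
    # Treat each extraction as a comparable item, e.g. a normalized
    # (subject, relation, object) triple. Exact match only.
    tp = len(gold & predicted)  # extractions the system got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example data, not from the benchmark itself:
gold = {("Bell", "makes", "products")}
pred = {("Bell", "makes", "products"), ("Bell", "is", "a company")}
# precision 0.5, recall 1.0, f1 ≈ 0.667
```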

No commits in the last 6 months.

Use this if you need to objectively measure and compare the accuracy of different Open Information Extraction (OIE) models on your textual data.

Not ideal if you are looking for an OIE system itself rather than a tool to evaluate existing ones, or if your evaluation needs go beyond precision, recall, and F1 scores.

natural-language-processing information-extraction model-evaluation text-analytics computational-linguistics
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 40
Forks: 10
Language: Python
License:
Last pushed: Jun 21, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gkiril/benchie"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
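
The same endpoint can be called from Python using only the standard library. The response schema is not documented here, so this sketch simply returns the parsed JSON:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    # Build the endpoint URL, e.g. quality_url("nlp", "gkiril/benchie").
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    # One GET request; the free tier allows 100 requests/day without a key.
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Usage (performs a network request):
#   data = fetch_quality("nlp", "gkiril/benchie")
#   print(json.dumps(data, indent=2))
```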