jantrienes/nereval

Evaluation script for named entity recognition (NER) systems based on entity-level F1 score.

Overall score: 46 / 100 (Emerging)

This tool helps you assess how well an automated system identifies and categorizes specific entities within text, such as product names or dates. You supply the list of entities your system found and compare it against a ground-truth list of what should have been found; the output is an entity-level F1 score summarizing the precision and recall of your system's entity recognition. It's aimed at data scientists, NLP researchers, and anyone building systems that extract structured information from unstructured text.
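As a rough sketch of what usage looks like (the Entity fields and the evaluate() entry point below follow the project's README as best remembered; the names are assumptions, so verify against the current docs):

import nereval
from nereval import Entity

# Entity(text, type, start) -- field names assumed from the README.
# y_true / y_pred are lists of documents; each document is a list of entities.
y_true = [[Entity('Acme Corp', 'ORG', 0), Entity('Jan 5', 'DATE', 21)]]
y_pred = [[Entity('Acme Corp', 'ORG', 0)]]  # the system missed the date

print('F1: %.2f' % nereval.evaluate(y_true, y_pred))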

No commits in the last 6 months. Available on PyPI.

Use this if you need to objectively measure the performance of a Named Entity Recognition (NER) model, evaluating both the correctness of the entity type and its exact boundaries in text.

Not ideal if you are evaluating a general text classification system or a simple keyword extraction tool, as it's specifically designed for granular entity-level assessment.
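To make the matching criterion concrete, here is an illustrative entity-level F1 computed by hand; it mirrors the exact-match rule (type and boundaries must both agree) but is not nereval's own code:

from typing import List, Tuple

# An entity is (type, start, end) over the source text; all three fields
# must match exactly for a prediction to count as a true positive.
Entity = Tuple[str, int, int]

def entity_f1(gold: List[Entity], pred: List[Entity]) -> float:
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # exact matches only
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# "Acme Corp shipped on Jan 5": the date prediction has a boundary error,
# so it earns no credit despite the correct type.
gold = [('ORG', 0, 9), ('DATE', 21, 26)]
pred = [('ORG', 0, 9), ('DATE', 21, 24)]
print(entity_f1(gold, pred))  # 0.5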

Tags: natural-language-processing, information-extraction, text-analytics, machine-learning-evaluation
Flags: Stale (6 months), No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 13 / 25

The overall score is the sum of the four category scores: 0 + 8 + 25 + 13 = 46 / 100.

Stars: 69
Forks: 8
Language: Python
License: MIT
Last pushed: Apr 20, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jantrienes/nereval"

Open to everyone: 100 requests/day with no API key required. A free key raises the limit to 1,000 requests/day.
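The same data can be fetched from Python (a minimal sketch; that the endpoint returns JSON is an assumption, not confirmed by this page):

import requests

# Same endpoint as the curl example above; a JSON response is assumed.
url = 'https://pt-edge.onrender.com/api/v1/quality/nlp/jantrienes/nereval'
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())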