davidsbatista/NER-Evaluation

An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: evaluation is not at the tag/token level but considers all the tokens that are part of a named entity.

Score: 48 / 100 (Emerging)

This helps evaluate how accurately a Named Entity Recognition (NER) system identifies and categorizes specific entities like names, locations, or organizations in text. It takes a list of correct entities (the 'gold standard') and the entities identified by your system, then outputs a detailed breakdown of correct, incorrect, and missing identifications. Anyone working with text analysis or information extraction, such as data scientists, computational linguists, or NLP engineers, would use this to assess their NER models.

222 stars. No commits in the last 6 months.

Use this if you need to thoroughly evaluate the performance of your Named Entity Recognition model beyond simple token-level matching, considering partial matches and incorrect entity types.

Not ideal if you only need a basic, quick evaluation of individual token tagging accuracy rather than a comprehensive entity-level assessment.
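To make the entity-level idea concrete, here is a minimal, hypothetical sketch of SemEval'13-style scoring buckets (correct, incorrect type, partial boundary overlap, missed, spurious). This is an illustration of the technique, not the library's actual API; the `Entity` type and `evaluate` function are assumptions for this example.

```python
from typing import NamedTuple

class Entity(NamedTuple):
    # A predicted or gold entity: type label plus token span [start, end).
    e_type: str
    start: int
    end: int

def overlaps(a: Entity, b: Entity) -> bool:
    # True if the two spans share at least one token.
    return a.start < b.end and b.start < a.end

def evaluate(gold: list, pred: list) -> dict:
    """Classify each prediction against the gold standard using
    SemEval'13-style buckets (hypothetical helper, not the library API)."""
    counts = {"correct": 0, "incorrect": 0, "partial": 0,
              "spurious": 0, "missed": 0}
    matched_gold = set()
    for p in pred:
        hit = False
        for i, g in enumerate(gold):
            if i in matched_gold:
                continue
            if (g.start, g.end) == (p.start, p.end):
                # Exact boundaries: correct if the type also matches.
                counts["correct" if g.e_type == p.e_type else "incorrect"] += 1
                matched_gold.add(i)
                hit = True
                break
            if overlaps(g, p):
                # Boundaries overlap but are not identical.
                counts["partial"] += 1
                matched_gold.add(i)
                hit = True
                break
        if not hit:
            # Prediction with no corresponding gold entity.
            counts["spurious"] += 1
    # Gold entities never matched by any prediction.
    counts["missed"] = len(gold) - len(matched_gold)
    return counts
```

For example, a prediction with the right span but the wrong type counts as `incorrect`, while a prediction over a span no gold entity touches counts as `spurious`; precision and recall can then be derived per bucket.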

Tags: Named Entity Recognition · NLP evaluation · Information Extraction · Text Analytics · Computational Linguistics
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25


Stars: 222
Forks: 48
Language: Python
License: MIT
Last pushed: Jul 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/davidsbatista/NER-Evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
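The same endpoint can be queried from Python with the standard library. The URL comes from the curl example above, but the response field names in `summarize` are assumptions for illustration, since the response schema is not documented here.

```python
import json
import urllib.request

# Endpoint shown on this page; no API key required for basic use.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/davidsbatista/NER-Evaluation"

def fetch_quality(url: str = URL, timeout: float = 10.0) -> dict:
    """Fetch the quality data as a parsed JSON object."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def summarize(payload: dict) -> dict:
    """Pick out a few fields of interest.
    NOTE: these key names are guesses, not a documented schema."""
    return {k: payload.get(k) for k in ("score", "maintenance", "adoption")}
```

A call such as `summarize(fetch_quality())` would then return just the assumed score fields, with `None` for any key the real response does not contain.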