nervaluate and NER-Evaluation

These are **competitors**: both implement the same SemEval'13-based full named-entity evaluation metrics, scoring whole entities rather than individual tags/tokens. nervaluate is the actively maintained choice, showing substantial monthly download activity where the alternative shows none.

|                | nervaluate        | NER-Evaluation                       |
| -------------- | ----------------- | ------------------------------------ |
| Score          | 62 (Established)  | 48 (Emerging)                        |
| Maintenance    | 10/25             | 0/25                                 |
| Adoption       | 10/25             | 10/25                                |
| Maturity       | 25/25             | 16/25                                |
| Community      | 17/25             | 22/25                                |
| Stars          | 206               | 222                                  |
| Forks          | 27                | 48                                   |
| Downloads      |                   |                                      |
| Commits (30d)  | 0                 | 0                                    |
| Language       | Python            | Python                               |
| License        | MIT               | MIT                                  |
| Flags          | No dependents     | Stale 6m, no package, no dependents  |

About nervaluate

MantisAI/nervaluate

Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13

When you're building systems that identify specific entities like people, organizations, or locations in text, it's crucial to accurately measure how well your system performs. This tool helps you evaluate your named entity recognition (NER) models by comparing your system's output against a set of known correct labels. It goes beyond simple word-by-word checks to tell you if the system got the whole entity right, partially right, or made a specific type of mistake. This is for anyone who needs to assess the quality of their text analysis systems, such as a computational linguist, data scientist, or researcher working with natural language processing.
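To make the entity-level idea concrete, here is a minimal illustrative sketch (not nervaluate's actual API) of classifying one predicted entity against a gold list in the spirit of SemEval'13 scoring. Entities are assumed to be `(label, start, end)` tuples with an exclusive end offset; the names `classify`, `pred`, and `gold` are invented for this example.

```python
# Illustrative sketch of SemEval'13-style entity-level matching
# (not the nervaluate API). Entities are (label, start, end) tuples,
# with `end` exclusive.

def classify(pred, gold):
    """Return a match category for one predicted entity vs. the gold list."""
    p_label, p_start, p_end = pred
    for g_label, g_start, g_end in gold:
        if (p_start, p_end) == (g_start, g_end):
            # Exact boundaries: 'correct' if the type also matches,
            # otherwise the entity was found but mistyped.
            return "correct" if p_label == g_label else "incorrect"
        if p_start < g_end and g_start < p_end:
            # Boundaries overlap but are not exact.
            return "partial"
    return "spurious"  # The prediction overlaps no gold entity.

gold = [("PER", 0, 2), ("LOC", 5, 7)]
print(classify(("PER", 0, 2), gold))    # exact boundary + type -> correct
print(classify(("ORG", 5, 7), gold))    # exact boundary, wrong type -> incorrect
print(classify(("LOC", 5, 8), gold))    # overlapping boundary -> partial
print(classify(("PER", 10, 12), gold))  # no overlap -> spurious
```

Gold entities matched by no prediction would be counted as "missing"; nervaluate computes these categories (and the derived scores) for you across a whole corpus.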

natural-language-processing text-analysis information-extraction computational-linguistics ai-model-evaluation

About NER-Evaluation

davidsbatista/NER-Evaluation

An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: scoring not at the tag/token level, but considering all the tokens that are part of the named entity

This helps evaluate how accurately a Named Entity Recognition (NER) system identifies and categorizes specific entities like names, locations, or organizations in text. It takes a list of correct entities (the 'gold standard') and the entities identified by your system, then outputs a detailed breakdown of correct, incorrect, and missing identifications. Anyone working with text analysis or information extraction, such as data scientists, computational linguists, or NLP engineers, would use this to assess their NER models.
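The breakdown described above feeds into precision and recall. As a hedged sketch (not either library's exact API), the SemEval'13 scheme derives scores from five category counts: correct (cor), incorrect (inc), partial (par), missing (mis), and spurious (spu). The function name and `partial_credit` flag here are invented for illustration.

```python
# Illustrative sketch of SemEval'13-style scoring from category counts
# (not the exact API of nervaluate or NER-Evaluation).

def scores(cor, inc, par, mis, spu, partial_credit=False):
    """Precision/recall over entity-level match categories."""
    possible = cor + inc + par + mis  # gold entities (recall denominator)
    actual = cor + inc + par + spu    # predicted entities (precision denominator)
    hit = cor + (0.5 * par if partial_credit else 0.0)
    precision = hit / actual if actual else 0.0
    recall = hit / possible if possible else 0.0
    return precision, recall

# Strict mode: partial boundary matches earn nothing -> 3/7 each.
print(scores(cor=3, inc=1, par=2, mis=1, spu=1))
# Partial mode: half credit for boundary overlaps -> 4/7 each.
print(scores(cor=3, inc=1, par=2, mis=1, spu=1, partial_credit=True))
```

This shows why the two evaluation modes can rank the same system differently: a model that finds entities with sloppy boundaries is rewarded under partial scoring but not under strict scoring.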

Named Entity Recognition, NLP evaluation, Information Extraction, Text Analytics, Computational Linguistics


Scores updated daily from GitHub, PyPI, and npm data.