MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
When you build systems that identify specific entities like people, organizations, or locations in text, you need to measure accurately how well they perform. This tool evaluates named entity recognition (NER) models by comparing a system's output against a set of known correct labels. It goes beyond simple token-by-token checks: following the SemEval-2013 scheme, it reports whether the system got the whole entity right, got the boundaries or the entity type right but not both, or only partially overlapped the correct span. It is aimed at anyone who needs to assess the quality of text analysis systems, such as computational linguists, data scientists, or researchers working with natural language processing.
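The SemEval-2013 scheme that nervaluate is based on scores each gold/predicted pair under four modes: strict (exact boundaries and correct type), exact (exact boundaries, type ignored), partial (any span overlap), and entity type (correct type with some overlap). A minimal sketch of those four checks, purely illustrative and not the library's internal code (the real scheme also assigns partial credit and aggregates counts into precision/recall):

```python
# Illustrative sketch of the four SemEval-2013 match modes nervaluate reports.
# Entities are (start, end, label) spans; this is NOT nervaluate's actual code.

def overlaps(gold, pred):
    """True if the two spans share at least one token position."""
    return pred[0] <= gold[1] and gold[0] <= pred[1]

def match_modes(gold, pred):
    """Score one gold/predicted entity pair under each evaluation mode."""
    same_span = (gold[0], gold[1]) == (pred[0], pred[1])
    same_type = gold[2] == pred[2]
    return {
        "strict": same_span and same_type,               # boundaries and type both right
        "exact": same_span,                              # boundaries right, type ignored
        "partial": overlaps(gold, pred),                 # any overlap counts
        "ent_type": same_type and overlaps(gold, pred),  # right type, some overlap
    }

gold = (2, 4, "PER")
pred = (2, 3, "PER")  # truncated span, correct type
print(match_modes(gold, pred))
# → {'strict': False, 'exact': False, 'partial': True, 'ent_type': True}
```

This is why an entity-level report is more informative than token accuracy: the same prediction can be wrong under one mode and right under another, and each error mode points at a different kind of model mistake.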
206 stars. Available on PyPI.
Use this if you need a detailed and nuanced understanding of how accurately your named entity recognition (NER) model identifies specific entities in text, beyond just individual words.
Not ideal if you only need a basic, token-level accuracy report for your sequence labeling tasks.
Stars
206
Forks
27
Language
Python
License
MIT
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/MantisAI/nervaluate"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
dice-group/gerbil
GERBIL - General Entity annotatoR Benchmark
bltlab/seqscore
SeqScore: Scoring for named entity recognition and other sequence labeling tasks
syuoni/eznlp
Easy Natural Language Processing
LHNCBC/metamaplite
A near real-time named-entity recognizer
OpenJarbas/simple_NER
simple rule based named entity recognition