anthonywchen/AmbER-Sets
The official repository for "Evaluating Entity Disambiguation and the Role of Popularity in Retrieval-Based NLP", published at ACL-IJCNLP 2021.
This project provides specialized datasets called AmbER Sets to help researchers and practitioners evaluate how well their information retrieval systems can distinguish between entities that share the same name. You input your retrieval system's predictions for ambiguous queries, and it outputs performance metrics related to entity disambiguation. This is designed for natural language processing researchers, machine learning engineers, and data scientists working on search, recommendation, or knowledge graph systems.
No commits in the last 6 months.
Use this if you need to rigorously test how accurately your retrieval model identifies the correct entity when multiple entities have identical names.
Not ideal if you are looking for a general-purpose dataset for training or evaluating retrieval systems without a specific focus on entity disambiguation.
Stars: 20
Forks: 1
Language: Python
License: —
Category:
Last pushed: Apr 22, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/anthonywchen/AmbER-Sets"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
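For programmatic use, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch: the URL pattern is taken from the example above, but the shape of the JSON response (and any API-key header name) is an assumption, not documented here.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    The response is assumed to be a JSON object; the exact fields
    it contains are not specified in this listing.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Reconstructs the endpoint used in the curl example above.
print(quality_url("anthonywchen", "AmbER-Sets"))
```

Within the free tier (100 requests/day without a key), `fetch_quality("anthonywchen", "AmbER-Sets")` should return the same payload as the curl command.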
Higher-rated alternatives
chakki-works/seqeval
A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, etc.)
Hironsan/anago
Bidirectional LSTM-CRF and ELMo for Named-Entity Recognition, Part-of-Speech Tagging and so on.
jbesomi/texthero
Text preprocessing, representation and visualization from zero to hero.
hamelsmu/ktext
Utilities for preprocessing text for deep learning with Keras
asahi417/tner
Language model fine-tuning on NER with an easy interface and cross-domain evaluation. "T-NER: An...