anthonywchen/AmbER-Sets

The official repository for "Evaluating Entity Disambiguation and the Role of Popularity in Retrieval-Based NLP", published at ACL-IJCNLP 2021.

Score: 19 / 100 (Experimental)

This project provides specialized datasets called AmbER Sets to help researchers and practitioners evaluate how well their information retrieval systems can distinguish between entities that share the same name. You input your retrieval system's predictions for ambiguous queries, and it outputs performance metrics related to entity disambiguation. This is designed for natural language processing researchers, machine learning engineers, and data scientists working on search, recommendation, or knowledge graph systems.
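As a rough illustration of the evaluation idea, the sketch below scores a set of predictions against gold entity labels. This is a hypothetical minimal example; the repository's own evaluation scripts define the actual input format and metrics, which this does not reproduce.

```python
# Hypothetical sketch of disambiguation accuracy: the fraction of
# ambiguous queries for which the top retrieved entity matches gold.
# The real AmbER evaluation uses its own prediction format and metrics.

def disambiguation_accuracy(predictions: dict, gold: dict) -> float:
    """predictions/gold map each query ID to an entity ID."""
    if not gold:
        return 0.0
    correct = sum(1 for q, entity in predictions.items() if gold.get(q) == entity)
    return correct / len(gold)

# Two queries over entities sharing the name "Q42": only one is resolved correctly.
preds = {"q1": "Q42_writer", "q2": "Q42_writer"}  # query -> predicted entity
gold = {"q1": "Q42_writer", "q2": "Q42_actor"}    # query -> correct entity
print(disambiguation_accuracy(preds, gold))  # → 0.5
```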

No commits in the last 6 months.

Use this if you need to rigorously test how accurately your retrieval model identifies the correct entity when multiple entities have identical names.

Not ideal if you are looking for a general-purpose dataset for training or evaluating retrieval systems without a specific focus on entity disambiguation.

information-retrieval natural-language-processing entity-disambiguation search-evaluation machine-learning-research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 5 / 25


Stars: 20
Forks: 1
Language: Python
License: none
Last pushed: Apr 22, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/anthonywchen/AmbER-Sets"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
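The same endpoint can be queried from Python. This is a minimal sketch using only the standard library; it assumes the endpoint returns JSON, since the response schema is not documented on this page.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-record URL for a repository (e.g. 'owner/name')."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode one quality record (assumes a JSON response body)."""
    with urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Matches the curl example above; call fetch_quality(...) to retrieve it.
print(quality_url("nlp", "anthonywchen/AmbER-Sets"))
```

With no API key this stays within the 100-requests/day anonymous limit; a free key raises it to 1,000/day.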