zhengyima/Anchors
Source code of the CIKM 2021 paper 'Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need'
This project helps information retrieval researchers improve the quality of ad-hoc search results. It takes a large hyperlinked corpus such as Wikipedia, uses the anchor texts and link structure to build pre-training objectives, and outputs a pre-trained language model. Researchers can then fine-tune that model to improve relevance ranking in their retrieval systems (see the sketch after the usage notes below).
No commits in the last 6 months.
Use this if you are an information retrieval researcher working on ad-hoc retrieval and want to leverage hyperlink structures for pre-training language models to improve search relevance.
Not ideal if you are looking for an out-of-the-box search engine or a solution for general text classification, as this requires significant technical setup and understanding of pre-training.
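For context, here is a minimal, hypothetical sketch of how the released checkpoint could be plugged into a reranking setup once downloaded. The local path "./anchors_pretrained", the example query, and the documents are placeholders, and the sketch assumes the weights load as a BERT-style encoder via the Hugging Face transformers library; the repository's own training and evaluation scripts may differ.

# Hypothetical sketch: score query-document pairs with the pre-trained encoder.
# "./anchors_pretrained" is a placeholder path for the downloaded checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("./anchors_pretrained")
model = AutoModelForSequenceClassification.from_pretrained("./anchors_pretrained", num_labels=1)
model.eval()

query = "what is ad-hoc retrieval"
docs = [
    "Ad-hoc retrieval ranks a document collection against a one-off user query.",
    "Hyperlinks and anchor texts connect related pages in Wikipedia.",
]

# Encode each (query, document) pair and treat the single logit as a relevance score.
inputs = tokenizer([query] * len(docs), docs, padding=True, truncation=True,
                   max_length=256, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Print documents from most to least relevant under this (assumed) scoring head.
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc}")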
Stars: 16
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Aug 30, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zhengyima/Anchors"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
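The same data can also be fetched programmatically; below is a small sketch using the Python requests library without an API key (the 100 requests/day tier). The response schema is not documented on this page, so the JSON is simply printed as returned.

# Fetch the quality data for this repository from the public endpoint (no key, 100 requests/day).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/zhengyima/Anchors"
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # fail loudly on rate-limit or endpoint errors
data = resp.json()        # schema not documented here; inspect the returned keys
print(data)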
Higher-rated alternatives
ymcui/cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
princeton-nlp/DensePhrases
[ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval...
thunlp/MultiRD
Code and data of the AAAI-20 paper "Multi-channel Reverse Dictionary Model"
IndexFziQ/KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
danqi/rc-cnn-dailymail
CNN/Daily Mail Reading Comprehension Task