ict-bigdatalab/CorpusBrain
CIKM 2022: CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks
This project helps developers build natural language processing systems that need to quickly locate specific information within a large corpus, such as Wikipedia. Instead of a complex, multi-step search pipeline, it offers a single generative model that takes a query and directly generates identifiers of relevant knowledge, such as Wikipedia page titles. It is aimed at machine learning engineers and researchers working on knowledge-intensive language tasks.
No commits in the last 6 months.
Use this if you are developing AI applications that require retrieving specific facts or entities from a vast corpus efficiently, without needing to maintain external search indexes.
Not ideal if your retrieval needs are simple keyword searches or if you are not comfortable working with pre-trained generative models and fine-tuning.
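The key idea in generative retrieval is that the model decodes document identifiers (e.g. Wikipedia titles) directly, with beam search constrained by a prefix trie so it can only emit valid titles — no inverted index is needed. Below is a minimal, illustrative sketch of that constrained-decoding mechanism in plain Python; the class and method names are hypothetical and not part of CorpusBrain's actual API, which builds on a pre-trained seq2seq model.

```python
# Illustrative sketch of trie-constrained decoding for generative retrieval.
# (Hypothetical names; CorpusBrain itself uses a pre-trained BART-style model
# whose beam search is restricted by a structure like this.)

class TitleTrie:
    """Prefix trie over tokenized titles; restricts decoding to valid titles."""

    def __init__(self, tokenized_titles):
        self.root = {}
        for tokens in tokenized_titles:
            node = self.root
            # Terminate each title with an end-of-sequence marker.
            for tok in tokens + ["<eos>"]:
                node = node.setdefault(tok, {})

    def allowed_next_tokens(self, prefix):
        """Return the tokens that may legally follow `prefix` during decoding."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []  # prefix is not part of any valid title
            node = node[tok]
        return list(node)


# Toy corpus of Wikipedia-style titles, tokenized at the word level.
trie = TitleTrie([
    ["Albert", "Einstein"],
    ["Albert", "Camus"],
    ["Alan", "Turing"],
])

print(trie.allowed_next_tokens([]))          # e.g. ['Albert', 'Alan']
print(trie.allowed_next_tokens(["Albert"]))  # e.g. ['Einstein', 'Camus']
```

At each decoding step, the model's token distribution would be masked to the set returned by `allowed_next_tokens`, guaranteeing that every completed beam is an exact title from the corpus.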
Stars
34
Forks
3
Language
Python
License
Apache-2.0
Last pushed
Aug 31, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ict-bigdatalab/CorpusBrain"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ymcui/cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
thunlp/MultiRD
Code and data of the AAAI-20 paper "Multi-channel Reverse Dictionary Model"
princeton-nlp/DensePhrases
[ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval...
IndexFziQ/KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
danqi/rc-cnn-dailymail
CNN/Daily Mail Reading Comprehension Task