GeekDream-x/IDOL
Repository for the paper "IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning", accepted to Findings of ACL 2023.
This project helps natural language processing (NLP) researchers and engineers build language models that are better at logical reasoning. It further pre-trains a base language model (such as BERT or RoBERTa) on a logic-oriented dataset organized around logical indicator words, producing an enhanced model. That model can then be fine-tuned for tasks requiring deeper logical understanding, such as multiple-choice reading comprehension benchmarks like ReClor and LogiQA (a minimal usage sketch follows the notes below).
No commits in the last 6 months.
Use this if you need to build language models that excel at complex logical reasoning tasks and can understand nuanced relationships in text.
Not ideal if you are looking for a ready-to-use application or a solution for general text understanding tasks that don't heavily rely on logical deduction.
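
Once you have an IDOL-enhanced checkpoint, using it looks like standard Hugging Face multiple-choice inference. The sketch below is a minimal, hypothetical example: the checkpoint path ./idol-roberta-large and the toy question are assumptions for illustration, not artifacts shipped by this repo.

import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

checkpoint = "./idol-roberta-large"  # hypothetical path to an IDOL-enhanced model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint)

context = "All metals conduct electricity. Some metals are liquid at room temperature."
question = "Which option must be true?"
options = [
    "Some liquids conduct electricity.",
    "All liquids are metals.",
    "No metals are solid.",
    "Electricity is a metal.",
]

# Pair every option with the same (context + question) prompt; the model
# scores the pairs jointly and picks the most plausible option.
prompts = [f"{context} {question}"] * len(options)
enc = tokenizer(prompts, options, return_tensors="pt", padding=True, truncation=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print("predicted option:", options[logits.argmax(-1).item()])

Fine-tuning proceeds the same way: feed batches in this (batch, num_choices, seq_len) layout together with integer labels and backpropagate the loss the model returns.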
Stars: 22
Forks: 5
Language: Python
License: Apache-2.0
Category: NLP
Last pushed: Nov 07, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/GeekDream-x/IDOL"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
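
If you prefer Python to curl, the same endpoint can be queried with requests. This is a minimal sketch assuming the endpoint returns JSON; the response fields are not documented here, so inspect the parsed payload to see what you actually get.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/GeekDream-x/IDOL"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
data = resp.json()  # assumption: the endpoint returns a JSON document
print(data)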
Higher-rated alternatives
ymcui/cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
princeton-nlp/DensePhrases
[ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval...
thunlp/MultiRD
Code and data of the AAAI-20 paper "Multi-channel Reverse Dictionary Model"
IndexFziQ/KMRC-Papers
A list of recent papers regarding knowledge-based machine reading comprehension.
danqi/rc-cnn-dailymail
CNN/Daily Mail Reading Comprehension Task