uma-pi1/OPIEC
Reading the data from OPIEC - an Open Information Extraction corpus
This project provides access to OPIEC, a large dataset derived from English Wikipedia, designed for researchers and analysts working with text. It takes raw Wikipedia content and outputs two datasets: one with linguistic annotations such as part-of-speech tags and named entities, and another with automatically extracted factual triples, i.e., subject-relation-object statements such as ("Barack Obama"; "was born in"; "Hawaii"). It is aimed at computational linguists, natural language processing researchers, and data scientists studying information extraction or knowledge graphs.
No commits in the last 6 months.
Use this if you need a pre-processed, large-scale dataset of English Wikipedia with extensive NLP annotations or automatically extracted facts for your research or application.
Not ideal if you need to perform real-time information extraction on new text, as this is a static corpus rather than a processing tool.
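For orientation, the OPIEC releases ship as Avro files, so a generic Avro reader is enough to iterate over the extracted triples without the project's own classes. The sketch below is a minimal example under that assumption; the file name and field names are placeholders, since the authoritative schema is embedded in each Avro file (inspect reader.getSchema() to see it).

    import java.io.File;
    import java.io.IOException;

    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;

    public class ReadOpiecTriples {
        public static void main(String[] args) throws IOException {
            // Placeholder file name; point this at a downloaded OPIEC Avro part.
            File avroFile = new File("OPIEC-Clean-part-00.avro");

            // The writer schema is stored inside the Avro file itself,
            // so no schema needs to be supplied here.
            GenericDatumReader<GenericRecord> datumReader = new GenericDatumReader<>();
            try (DataFileReader<GenericRecord> reader =
                         new DataFileReader<>(avroFile, datumReader)) {
                while (reader.hasNext()) {
                    GenericRecord triple = reader.next();
                    // Field names are assumptions for illustration only;
                    // check the embedded schema for the real ones.
                    System.out.printf("(%s; %s; %s)%n",
                            triple.get("subject"),
                            triple.get("relation"),
                            triple.get("object"));
                }
            }
        }
    }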
Stars: 38
Forks: 6
Language: Java
License: GPL-3.0
Category: nlp
Last pushed: Jun 12, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/uma-pi1/OPIEC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
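If you prefer calling the endpoint from code rather than curl, a plain HTTP GET is all that is needed. Below is a minimal Java 11+ sketch; the response body is assumed to be JSON, and its exact schema is not documented here.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class QualityApiClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(
                        "https://pt-edge.onrender.com/api/v1/quality/nlp/uma-pi1/OPIEC"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            // Anonymous access is limited to 100 requests/day; how an API key
            // is attached (header vs. query parameter) is not specified above.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }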
Higher-rated alternatives
acl-org/acl-anthology
Data and software for building the ACL Anthology.
anoopkunchukuttan/indic_nlp_library
Resources and tools for Indian language Natural Language Processing
CLUEbenchmark/CLUECorpus2020
Large-scale pre-training corpus for Chinese (100 GB)
KennethEnevoldsen/scandinavian-embedding-benchmark
A Scandinavian benchmark for sentence embeddings
Separius/awesome-sentence-embedding
A curated list of pretrained sentence and word embedding models