uma-pi1/OPIEC

Reading the data from OPIEC - an Open Information Extraction corpus

Quality score: 37 / 100 (Emerging)

This project provides access to a massive dataset derived from Wikipedia, designed for researchers and analysts working with text. It takes raw Wikipedia content and produces two datasets: one with linguistic annotations such as part-of-speech tags and named entities, and another with extracted factual triples (subject-relation-object statements). It is aimed at computational linguists, natural language processing researchers, and data scientists studying information extraction or knowledge graphs.
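To make the second output concrete, an extracted triple pairs a subject, a relation, and an object taken from a sentence. The sketch below is a hypothetical illustration of that shape; the field names are assumptions for readability, not OPIEC's actual on-disk schema.

```python
# Hypothetical illustration of a subject-relation-object triple.
# Field names are illustrative assumptions, not OPIEC's real schema.
sentence = "Albert Einstein was born in Ulm."
triple = {
    "subject": "Albert Einstein",
    "relation": "was born in",
    "object": "Ulm",
}

# Render the triple in a compact form for inspection.
print(triple["subject"], "|", triple["relation"], "|", triple["object"])
```

A corpus of such triples, linked back to their source sentences and entity annotations, is the raw material for building or enriching knowledge graphs.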

No commits in the last 6 months.

Use this if you need a pre-processed, large-scale dataset of English Wikipedia with extensive NLP annotations or automatically extracted facts for your research or application.

Not ideal if you need to perform real-time information extraction on new text: this is a static corpus, not a processing tool.

Tags: natural-language-processing, information-extraction, computational-linguistics, knowledge-graphs, text-analytics
Status: Stale (6 months), no package, no dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 38
Forks: 6
Language: Java
License: GPL-3.0
Last pushed: Jun 12, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/uma-pi1/OPIEC"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
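The same endpoint can be called from a script. This is a minimal sketch assuming the URL pattern shown in the curl command above; the JSON field names in the response are not documented here, so the example only builds the URL and decodes the payload without relying on any particular keys.

```python
# Sketch: fetch the quality card for a repository from the API above.
# The endpoint and rate limits come from the page; the response schema
# is an assumption and is not inspected beyond JSON decoding.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality card."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL used by the curl example on this page.
    print(quality_url("nlp", "uma-pi1", "OPIEC"))
```

Keeping the URL construction in its own function makes it easy to query other repositories without touching the request logic.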