Hironsan/wiki-article-dataset
Wikipedia article dataset
This project provides a large collection of Japanese Wikipedia articles, pre-processed into individual sentences. Each sentence is already segmented into words (tokenized), making it ready for analysis. It's designed for researchers, data scientists, and NLP practitioners working with Japanese text.
No commits in the last 6 months.
Use this if you need a readily available, tokenized dataset of Japanese text to train natural language processing models, especially for understanding sentence relationships.
Not ideal if you need raw, untokenized Japanese text or a dataset focused on specific domains outside of general encyclopedic knowledge.
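Since the sentences ship pre-tokenized, loading them is typically just a matter of splitting each line on whitespace. The sketch below assumes one sentence per line with space-separated tokens; the file name `wiki_sentences.txt` is hypothetical, so check the repo's notebooks for the actual file layout.

```python
def load_tokenized_sentences(path: str) -> list[list[str]]:
    """Read a file of pre-tokenized sentences, one per line,
    with tokens separated by spaces (assumed layout)."""
    sentences = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.rstrip("\n").split(" ")
            # Skip blank lines so they don't become empty sentences.
            if tokens and tokens != [""]:
                sentences.append(tokens)
    return sentences
```

A token list per sentence is a convenient input shape for most Japanese NLP pipelines, e.g. building vocabularies or training embedding models.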
Stars
12
Forks
1
Language
Jupyter Notebook
License
MIT
Category
Last pushed
May 10, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Hironsan/wiki-article-dataset"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
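The same endpoint can be queried from Python with the standard library. This is a minimal sketch built from the URL shown above; the response fields are whatever the API returns (not documented here), so the code only parses the JSON body.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint path; for this repo the category is "nlp".
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("nlp", "Hironsan", "wiki-article-dataset")` retrieves this repo's record; pass a free API key (per the note above) if you need the higher daily limit.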
Higher-rated alternatives
acl-org/acl-anthology
Data and software for building the ACL Anthology.
anoopkunchukuttan/indic_nlp_library
Resources and tools for Indian language Natural Language Processing
CLUEbenchmark/CLUECorpus2020
Large-scale pre-training corpus for Chinese (100 GB)
KennethEnevoldsen/scandinavian-embedding-benchmark
A Scandinavian Benchmark for sentence embeddings
Separius/awesome-sentence-embedding
A curated list of pretrained sentence and word embedding models