t-systems-on-site-services-gmbh/german-wikipedia-text-corpus

This is a German text corpus from Wikipedia. It is cleaned, preprocessed, and sentence-split. Its purpose is to train NLP embeddings such as fastText or ELMo (deep contextualized word representations).
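As an illustration of that intended use (a hedged sketch, not code from this repository), a one-sentence-per-line corpus like this could be fed to gensim's fastText implementation; the file name corpus.txt below is a hypothetical placeholder, not a file shipped by the repo.

# Sketch: train fastText embeddings on a sentence-split German corpus.
# "corpus.txt" is a placeholder; substitute an actual corpus file.
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")  # streams whitespace-tokenized sentences, one per line
model = FastText(sentences=sentences, vector_size=300, window=5, min_count=5, epochs=5)
model.save("german_fasttext.model")
print(model.wv.most_similar("Haus", topn=5))  # nearest neighbours of a German word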

Score: 36 / 100 (Emerging)

This is a German text corpus derived from Wikipedia, including both article content and discussion comments. It takes raw German Wikipedia data and provides a cleaned, preprocessed, and sentence-split text corpus. Data scientists, NLP researchers, or machine learning engineers working with German language models would use this to improve the quality of their downstream tasks.
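To give a rough idea of what "sentence-split" means here (a simplified sketch, not the repository's actual preprocessing pipeline), German prose can be segmented into sentences with NLTK's Punkt model:

# Sketch: sentence-splitting German text with NLTK's Punkt tokenizer.
import nltk
nltk.download("punkt")  # one-time download of the Punkt tokenizer data

raw = "Das ist ein Beispiel. Es zeigt die Satztrennung! Funktioniert sie auch bei Fragen?"
for sentence in nltk.sent_tokenize(raw, language="german"):
    print(sentence)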

No commits in the last 6 months.

Use this if you need a large, diverse German text dataset to train natural language processing (NLP) models, especially if your application involves processing conversational or less formal German text.

Not ideal if you need a very clean, strictly formal German text corpus without any discussion or comment content, or if you require a different language.

Tags: German-language-processing, NLP-model-training, text-corpus-creation, machine-learning-data, computational-linguistics
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 23
Forks: 4
Language:
License:
Last pushed: Feb 22, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/t-systems-on-site-services-gmbh/german-wikipedia-text-corpus"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
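The same endpoint can also be queried from Python with the requests library; a minimal sketch, assuming the response body is JSON:

# Sketch: fetch the quality data for this repository from the API.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "t-systems-on-site-services-gmbh/german-wikipedia-text-corpus")
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())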