shjwudp/c4-dataset-script

Inspired by Google's C4, this is a collection of data-cleaning scripts for building colossal, clean corpora from Common Crawl, including the Chinese data processing and cleaning methods from MassiveText.

Score: 41/100 (Emerging)

This project helps data scientists and machine learning engineers create massive, clean text datasets from raw Common Crawl web archives. It takes raw web-extracted text (WET) files or Common Crawl indices as input and produces high-quality, deduplicated, and filtered text corpora suitable for training large language models. This is ideal for those building large-scale natural language processing applications.
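To make the cleaning-and-deduplication pipeline concrete, here is a minimal, illustrative sketch of two C4-style heuristics: keeping only lines that end in terminal punctuation (including Chinese punctuation) and dropping exact-duplicate lines across documents via hashing. The function name, thresholds, and punctuation set are assumptions for illustration, not the repository's actual API.

```python
import hashlib

# Terminal punctuation marks a line must end with to be kept
# (includes Chinese full stop, exclamation, and question marks).
TERMINALS = (".", "!", "?", '"', "\u3002", "\uff01", "\uff1f")

def clean_document(text, seen_hashes):
    """Return cleaned text, or None if too little survives the filters.

    `seen_hashes` is a shared set used to deduplicate exact lines
    across all documents processed so far.
    """
    kept = []
    for line in text.splitlines():
        line = line.strip()
        if not line.endswith(TERMINALS):  # terminal-punctuation rule
            continue
        h = hashlib.sha1(line.encode("utf-8")).hexdigest()
        if h in seen_hashes:              # exact-line deduplication
            continue
        seen_hashes.add(h)
        kept.append(line)
    # Minimum-lines rule; the threshold of 2 is an assumed placeholder.
    return "\n".join(kept) if len(kept) >= 2 else None

if __name__ == "__main__":
    seen = set()
    doc = ("Click here\n"
           "This is a real sentence.\n"
           "This is a real sentence.\n"
           "Another good one!\n")
    print(clean_document(doc, seen))
```

The real pipeline adds many more stages (boilerplate removal, language identification, fuzzy deduplication at scale), but the shape is the same: per-line filters feeding a corpus-wide dedup structure.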

135 stars. No commits in the last 6 months.

Use this if you need to build a large, clean text corpus from web data for training language models, especially if you need to process Chinese web content with advanced cleaning techniques.

Not ideal if you're looking for a simple, desktop-based text cleaning tool or if your data sources are not Common Crawl web archives.

natural-language-processing large-language-models web-data-cleaning text-corpus-creation big-data-processing
Status: stale for 6 months. No package published; no dependents.
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 135
Forks: 17
Language: Python
License: MIT
Last pushed: Jun 07, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/shjwudp/c4-dataset-script"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
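The same request can be made from Python with the standard library. This is a sketch assuming the endpoint returns JSON; the `fetch_quality` helper and the response schema are assumptions, only the URL comes from the listing above.

```python
import json
import urllib.request

# Base URL taken from the curl command in the listing above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-score endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """Fetch the quality record; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the URL only; call fetch_quality(...) to hit the API.
    print(quality_url("nlp", "shjwudp", "c4-dataset-script"))
```

Within the free tier (100 requests/day without a key), no authentication header is needed.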