ilinguistics/common_crawl_corpus

Scripts for building a geo-located web corpus using Common Crawl data

Score: 31 / 100 (Emerging)

This project helps natural language processing (NLP) researchers and computational linguists build large, clean, and geographically tagged text datasets. It takes raw web data from the Common Crawl project and processes it to create a structured corpus of text, optionally identifying the language of each segment. This is for professionals who need to develop or test language models, conduct linguistic research, or create language-specific applications.
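At its core, the kind of processing described above starts from Common Crawl's WET files, which package the plain-text extraction of each crawled page as WARC records. The sketch below parses such records from an in-memory string; it is illustrative only and assumes the standard WET/WARC record layout — `parse_wet_records` is a hypothetical helper, not this repository's API.

```python
def parse_wet_records(wet_text):
    """Split a WET file's contents into (headers, body) pairs.

    WET files are WARC files whose records hold extracted plain text:
    each record starts with a 'WARC/1.0' header block, then a blank
    line, then the text body.
    """
    records = []
    for chunk in wet_text.split("WARC/1.0\r\n"):
        if not chunk.strip():
            continue
        head, _, body = chunk.partition("\r\n\r\n")
        headers = {}
        for line in head.splitlines():
            key, sep, value = line.partition(": ")
            if sep:
                headers[key] = value
        records.append((headers, body.strip()))
    return records

# A minimal, hand-written WET record for demonstration.
sample = (
    "WARC/1.0\r\n"
    "WARC-Type: conversion\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Type: text/plain\r\n"
    "\r\n"
    "Hello from an example page.\r\n"
)

headers, body = parse_wet_records(sample)[0]
print(headers["WARC-Target-URI"])  # http://example.com/
print(body)                        # Hello from an example page.
```

A real pipeline would stream gzipped WET files from S3 and add per-segment language identification and geo-tagging on top of this parsing step.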

Use this if you need to create a large-scale, clean, and geo-located text corpus from web data for your NLP research or applications.

Not ideal if you need a small, highly curated dataset or don't have access to AWS S3 for storage and processing.

natural-language-processing computational-linguistics text-corpus-creation language-model-development linguistic-data-analysis
No package, no dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 11
Forks:
Language: Python
License: GPL-3.0
Category: scraper
Last pushed: Jan 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/ilinguistics/common_crawl_corpus"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
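The curl command above can be generalized to any owner/repo pair. A small sketch using only the standard library; the `endpoint` helper is illustrative, and the fetch line is shown as a comment so the example stays offline:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def endpoint(owner, repo):
    # Percent-encode path segments so unusual repo names stay valid URLs.
    return f"{BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = endpoint("ilinguistics", "common_crawl_corpus")
print(url)
# To fetch the JSON: urllib.request.urlopen(url).read()
```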