ZenRows/scaling-to-distributed-crawling

Repository for the Mastering Web Scraping in Python: Scaling to Distributed Crawling blog post, containing the final code.

Quality score: 41 / 100 (Emerging)

This project helps developers and data engineers efficiently collect large amounts of data from websites. It takes a list of web pages or URLs as input and, using a distributed system, systematically scrapes the content from these pages. The output is the extracted web data, allowing for high-volume data collection without being blocked or slowed down.
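The core pattern described above (take a frontier of URLs, fetch each page, extract new links, and feed them back into the queue) can be sketched as follows. This is a minimal single-process illustration, not the repository's actual implementation; in a real distributed setup the queue and the seen-set would live in a shared store such as Redis so that many workers can pull from them concurrently. The `fetch` and `extract_links` callables are hypothetical stand-ins for real HTTP and HTML-parsing code.

```python
from collections import deque


def crawl(seed_urls, fetch, extract_links, max_pages=100):
    """Breadth-first crawl sketch.

    fetch(url) -> page content (stand-in for an HTTP GET)
    extract_links(url, content) -> iterable of discovered URLs
    In a distributed crawler, `queue` and `seen` would be shared
    structures (e.g. a Redis list and set) instead of in-process ones.
    """
    queue = deque(seed_urls)      # frontier of URLs still to visit
    seen = set(seed_urls)         # dedup: never enqueue a URL twice
    results = {}                  # url -> extracted content

    while queue and len(results) < max_pages:
        url = queue.popleft()
        content = fetch(url)
        results[url] = content
        for link in extract_links(url, content):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return results
```

Keeping the seen-set separate from the queue is what lets the crawl terminate on cyclic link graphs; the `max_pages` cap bounds the run regardless of graph size.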

No commits in the last 6 months.

Use this if you need to reliably scrape thousands or millions of web pages at scale.

Not ideal if you only need to scrape a small number of pages or prefer a simple, single-machine scraping solution.

Topics: web-scraping, data-collection, distributed-systems, data-engineering, web-crawling
Status: Stale (6 months) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 17 / 25

How are scores calculated?

Stars: 46
Forks: 9
Language: HTML
License: MIT
Category: scraper
Last pushed: Oct 29, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/ZenRows/scaling-to-distributed-crawling"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.