Insutanto/scrapy-distributed

A set of distributed components for Scrapy, including RabbitMQ-based, Kafka-based, and RedisBloom-based components.

Score: 43 / 100 (Emerging)

This project helps Python developers build powerful web crawlers that can collect data from many websites at the same time. It takes a standard Scrapy web scraping project and uses message queues like RabbitMQ or Kafka to coordinate multiple crawlers. The output is a highly scalable system for gathering web data efficiently.
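In practice, the coordination described above is wired in through Scrapy's settings: the distributed scheduler, queue backend, and duplicate filter replace Scrapy's defaults. A minimal sketch, assuming setting names along the lines of the project's README; the exact class paths and keys are placeholders here and should be checked against the repository before use:

```python
# settings.py — illustrative only; verify the class paths and keys
# against the scrapy-distributed README, as they may differ.

# Swap Scrapy's default scheduler for the distributed one (assumed path).
SCHEDULER = "scrapy_distributed.schedulers.DistributedScheduler"

# Back the request queue with RabbitMQ so multiple crawler processes
# share one pool of pending requests (assumed class path).
SCHEDULER_QUEUE_CLASS = "scrapy_distributed.queues.amqp.RabbitQueue"
RABBITMQ_CONNECTION_PARAMETERS = "amqp://guest:guest@localhost:5672/"

# Deduplicate URLs across all workers with a RedisBloom filter (assumed).
DUPEFILTER_CLASS = "scrapy_distributed.dupefilters.redis_bloom.RedisBloomDupeFilter"
REDIS_BLOOM_PARAMS = {"redis_url": "redis://localhost:6379/0"}
```

With a shared queue and a shared dupefilter, you can start the same spider on several machines and each process pulls different requests from the broker.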

No commits in the last 6 months. Available on PyPI.

Use this if you are a Python developer needing to scale up your Scrapy web scraping projects to handle large volumes of data collection across many sources.

Not ideal if you are looking for a pre-built data extraction tool, or if you are a non-technical user without Python and Scrapy development experience.

Tags: web-scraping, data-collection, distributed-systems, python-development, crawler-architecture
No license · Stale (6 months)
Maintenance 2 / 25
Adoption 8 / 25
Maturity 17 / 25
Community 16 / 25


Stars: 60
Forks: 11
Language: Python
License: None
Category: scraper
Last pushed: Aug 24, 2025
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/Insutanto/scrapy-distributed"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
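The same endpoint from the curl example can be called from Python. A small sketch using only the standard library; the URL is the one shown above, and the network call is left commented out so the snippet stays self-contained:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint used in the curl example above."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("Insutanto", "scrapy-distributed")

# Uncomment to fetch the JSON payload (requires network access):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```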