Insutanto/scrapy-distributed
A set of distributed components for Scrapy, including RabbitMQ-based, Kafka-based, and RedisBloom-based components.
This project helps Python developers build powerful web crawlers that can collect data from many websites at the same time. It takes a standard Scrapy web scraping project and uses message queues like RabbitMQ or Kafka to coordinate multiple crawlers. The output is a highly scalable system for gathering web data efficiently.
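The coordination pattern described above can be sketched without Scrapy or a broker at hand: a shared queue feeds URLs to multiple workers, and a seen-set filters duplicates before scheduling. This is a minimal illustration only; the in-process `queue.Queue` stands in for RabbitMQ/Kafka, the `seen` set stands in for RedisBloom-style dedup, and the `worker` function is a hypothetical stand-in for a Scrapy spider (none of these names come from the library's actual API).

```python
import queue
import threading

# Shared queue stands in for RabbitMQ/Kafka: any worker may pull the next URL.
url_queue: "queue.Queue[str]" = queue.Queue()
seen: set[str] = set()            # stands in for a RedisBloom dedup filter
seen_lock = threading.Lock()
results: list[str] = []
results_lock = threading.Lock()

def schedule(url: str) -> None:
    """Enqueue a URL unless it was already seen (dedup before scheduling)."""
    with seen_lock:
        if url in seen:
            return
        seen.add(url)
    url_queue.put(url)

def worker() -> None:
    """Hypothetical crawler worker: pulls URLs until the queue drains."""
    while True:
        try:
            url = url_queue.get(timeout=0.1)
        except queue.Empty:
            return
        # A real spider would fetch and parse here; we just record the URL.
        with results_lock:
            results.append(url)
        url_queue.task_done()

for u in ["https://example.com/a", "https://example.com/b", "https://example.com/a"]:
    schedule(u)  # the duplicate /a is filtered out before it reaches the queue

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

With a real broker, the queue outlives any single process, so crawlers on different machines can share the same frontier; that durability is what the RabbitMQ/Kafka components provide over an in-process queue.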
No commits in the last 6 months. Available on PyPI.
Use this if you are a Python developer needing to scale up your Scrapy web scraping projects to handle large volumes of data collection across many sources.
Not ideal if you are looking for a pre-built data extraction tool, or if you are a non-technical user without Python and Scrapy development experience.
Stars
60
Forks
11
Language
Python
License
—
Category
—
Last pushed
Aug 24, 2025
Commits (30d)
0
Dependencies
5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/Insutanto/scrapy-distributed"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.