crawlab-team/crawlab
Distributed web crawler admin platform for managing spiders in any language or framework.
Crawlab is a platform for teams that need to collect data from websites using automated web crawlers. It allows you to upload and manage your crawling scripts, schedule when they run, and view the collected data and logs through a web interface. It's designed for data analysts, researchers, or business intelligence professionals who rely on large-scale web data.
Use this if you need to run, monitor, and scale multiple web scraping projects across several machines, and want a centralized system to manage them.
Not ideal if you only need to run a single, simple web scraping script on your local machine occasionally.
Stars
12,177
Forks
1,882
Language
Go
License
BSD-3-Clause
Category
Last pushed
Feb 10, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/crawlab-team/crawlab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.