crawlab-team/crawlab

Distributed web crawler admin platform for spider management, regardless of language or framework. (Chinese tagline: distributed crawler management platform, supporting any language and framework.)

Score: 59 / 100 (Established)

Crawlab is a platform for teams that need to collect data from websites using automated web crawlers. It allows you to upload and manage your crawling scripts, schedule when they run, and view the collected data and logs through a web interface. It's designed for data analysts, researchers, or business intelligence professionals who rely on large-scale web data.

12,177 stars.

Use this if you need to run, monitor, and scale multiple web scraping projects across several machines, and want a centralized system to manage them.

Not ideal if you only need to run a single, simple web scraping script on your local machine occasionally.

web-scraping data-collection business-intelligence market-research data-operations
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25

How are scores calculated?
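The overall 59 / 100 figure is consistent with simply summing the four subscores, each out of 25. A minimal sketch of that arithmetic, assuming (the page does not say) that the overall score is an unweighted sum:

```python
# Hypothetical reconstruction: the four subscores shown above,
# each out of 25, appear to add up to the overall score out of 100.
subscores = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 23,
}

overall = sum(subscores.values())
print(overall)  # 59, matching the displayed 59 / 100
```

Whether the real scoring applies weights or caps is not documented here; the numbers shown just happen to sum exactly.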

Stars: 12,177
Forks: 1,882
Language: Go
License: BSD-3-Clause
Category: scraper
Last pushed: Feb 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/crawlab-team/crawlab"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
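The curl call above can also be scripted. A minimal Python sketch using only the standard library; the endpoint base is taken from the command above, while the shape of the JSON response is not documented here and is left unparsed:

```python
import json
import urllib.request

# Endpoint base copied from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def perception_url(owner: str, repo: str) -> str:
    """Build the API URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_scores(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(perception_url(owner, repo)) as resp:
        return json.load(resp)

print(perception_url("crawlab-team", "crawlab"))
# https://pt-edge.onrender.com/api/v1/quality/perception/crawlab-team/crawlab
```

Without an API key this falls under the 100 requests/day limit mentioned above, so cache responses rather than polling.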