apache/stormcrawler

A scalable, mature and versatile web crawler based on Apache Storm

Score: 74 / 100 (Verified)

This project helps build web crawlers that collect information from websites efficiently and at large scale. Given a list of seed URLs and configuration details, it systematically visits pages, extracts content, and follows links to produce a structured dataset of web content. It is aimed at developers and data engineers who need powerful, customizable web crawling solutions.
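As a rough illustration of that pipeline, the Java sketch below wires seed URLs through partitioning, fetching, parsing, and indexing bolts with Apache Storm's TopologyBuilder. The package layout (org.apache.stormcrawler.*), the bolt class names, and the example.com seed are assumptions based on the project's documented example topology; older releases used com.digitalpebble.stormcrawler.*, and the Maven archetype generates the authoritative version.

import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;
import org.apache.stormcrawler.ConfigurableTopology;
import org.apache.stormcrawler.Constants;
import org.apache.stormcrawler.bolt.FetcherBolt;
import org.apache.stormcrawler.bolt.JSoupParserBolt;
import org.apache.stormcrawler.bolt.URLPartitionerBolt;
import org.apache.stormcrawler.indexing.StdOutIndexer;
import org.apache.stormcrawler.persistence.StdOutStatusUpdater;
import org.apache.stormcrawler.spout.MemorySpout;

// Minimal crawl topology sketch: seed URLs -> partition -> fetch -> parse -> index.
// Class and package names are assumptions taken from the project's example topology.
public class CrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new CrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        // In-memory spout holding the starting URLs (for demos; real crawls use a persistent spout)
        builder.setSpout("spout", new MemorySpout(new String[] { "https://example.com/" }));

        // Group URLs by host so politeness settings apply per site
        builder.setBolt("partitioner", new URLPartitionerBolt())
               .shuffleGrouping("spout");

        // Fetch pages, honouring robots.txt and crawl-delay configuration
        builder.setBolt("fetch", new FetcherBolt())
               .fieldsGrouping("partitioner", new Fields("key"));

        // Parse HTML, extract text, metadata and outlinks
        builder.setBolt("parse", new JSoupParserBolt())
               .localOrShuffleGrouping("fetch");

        // Dummy indexer that prints extracted documents to stdout
        builder.setBolt("index", new StdOutIndexer())
               .localOrShuffleGrouping("parse");

        // Track the status of discovered and fetched URLs (stdout for demos)
        Fields furl = new Fields("url");
        builder.setBolt("status", new StdOutStatusUpdater())
               .fieldsGrouping("fetch", Constants.StatusStreamName, furl)
               .fieldsGrouping("parse", Constants.StatusStreamName, furl)
               .fieldsGrouping("index", Constants.StatusStreamName, furl);

        return submit("crawl", conf, builder);
    }
}

In a real deployment the stdout components are typically swapped for the storage-backed indexer and status updater modules the project ships (e.g. for OpenSearch or Elasticsearch).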

972 stars. Actively maintained with 42 commits in the last 30 days.

Use this if you need to build a bespoke web crawler for large-scale data collection, such as for competitive intelligence, market research, or content aggregation, and require fine-grained control over the crawling process.

Not ideal if you only need to scrape a few pages, or if you prefer a simple, off-the-shelf scraping tool that does not require deep technical configuration.

web-scraping data-acquisition information-extraction large-scale-data internet-monitoring
No Package · No Dependents
Maintenance 23 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 972
Forks: 274
Language: Java
License: Apache-2.0
Category: scraper
Last pushed: Mar 28, 2026
Commits (30d): 42

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/apache/stormcrawler"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
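For callers who prefer Java to curl, below is a minimal equivalent of the anonymous request above using the JDK's built-in HttpClient (Java 11+); the response body is assumed to be JSON and is simply printed here.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetches the same quality data shown above via the public API (no key, 100 requests/day).
public class PerceptionClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://pt-edge.onrender.com/api/v1/quality/perception/apache/stormcrawler"))
                .GET()
                .build();
        // The body is assumed to be JSON; parse it with your preferred JSON library.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}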