janhq/OpenCrawl
🌐 OpenCrawl: an ethical, high-performance web crawler built for scale.
A powerful web crawling library that respects robots.txt and rate limits while leveraging Kafka for high-throughput data processing. Built with ethics and efficiency in mind.
This project helps data professionals, researchers, and analysts gather information from websites ethically and efficiently. You provide a list of URLs, and it returns structured data like titles, topics, and summaries extracted from the web pages, adhering strictly to website rules like `robots.txt` and rate limits. It's designed for anyone needing to collect and analyze large volumes of web content for research, market intelligence, or data pipelines.
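The crawling discipline described here (checking `robots.txt` before fetching and spacing out requests) can be illustrated with Python's standard library. The sketch below is a general illustration of that pattern, not OpenCrawl's actual API; the user-agent string, delay value, and example URL are placeholder assumptions.

```python
import time
import urllib.robotparser
from urllib.parse import urlparse

import requests  # third-party: pip install requests

USER_AGENT = "MyResearchCrawler/0.1"  # assumed identifier, not OpenCrawl's
CRAWL_DELAY_SECONDS = 2.0             # assumed conservative delay between requests


def allowed_by_robots(url: str) -> bool:
    """Check the site's robots.txt before fetching a URL."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(USER_AGENT, url)


def polite_fetch(urls):
    """Fetch each URL only if robots.txt permits it, with a fixed delay between requests."""
    for url in urls:
        if not allowed_by_robots(url):
            print(f"skipping (disallowed by robots.txt): {url}")
            continue
        response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        print(url, response.status_code, len(response.text))
        time.sleep(CRAWL_DELAY_SECONDS)  # simple rate limiting


if __name__ == "__main__":
    polite_fetch(["https://example.com/"])  # placeholder URL list
```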
No commits in the last 6 months.
Use this if you need to systematically collect and analyze web content from multiple websites and want structured data extraction that adheres to ethical crawling practices.
Not ideal if you need to interact with websites that block automated access or require complex human-like browsing behavior for data collection.
- Stars: 20
- Forks: 1
- Language: Python
- License: Apache-2.0
- Category:
- Last pushed: Apr 03, 2025
- Commits (30d): 0
Get this data via API
`curl "https://pt-edge.onrender.com/api/v1/quality/perception/janhq/OpenCrawl"`
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
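The same data can be fetched programmatically. A minimal Python sketch using only the endpoint shown above; the shape of the JSON response is not documented here, so the example simply prints whatever comes back.

```python
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/perception/janhq/OpenCrawl"

response = requests.get(API_URL, timeout=10)
response.raise_for_status()
print(response.json())  # response schema not documented here; inspect the output
```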
Higher-rated alternatives
- scrapy/scrapy: Scrapy, a fast high-level web crawling & scraping framework for Python.
- Altimis/Scweet: A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, ...
- lexiforest/curl_cffi: Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
- plabayo/rama: Modular service framework to move and transform network packets.
- scrapinghub/spidermon: Scrapy extension for monitoring spider execution.