Boomslet/Web_Crawler
Open-source web crawler
This tool automatically visits websites and extracts their content, saving it to a file as it goes. It starts from one or more website addresses you provide and then follows the links it finds to discover new pages. It is aimed at researchers, marketers, or data analysts who need to collect large amounts of text or other information from public websites.
No commits in the last 6 months.
Use this if you need to gather data, like product descriptions, news articles, or public contact information, from many pages across one or more websites.
Not ideal if you only need to download a few specific files or if the websites you're interested in require logins or have strict anti-bot measures.
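The description above outlines the standard breadth-first crawl loop: seed URLs go into a queue, each page is fetched and written to disk, and newly discovered links are queued for later visits. The sketch below illustrates that pattern in Python; it is not the repository's actual code, and the function names, output file, and page limit are assumptions.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=50, out_path="crawl_output.txt"):
    queue = deque(seed_urls)   # URLs waiting to be visited
    seen = set(seed_urls)      # URLs already queued, to avoid revisits
    pages = 0
    with open(out_path, "w", encoding="utf-8") as out:
        while queue and pages < max_pages:
            url = queue.popleft()
            try:
                with urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue       # skip pages that fail to load
            out.write("=== " + url + " ===\n" + html + "\n")  # save content as we go
            pages += 1
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)

if __name__ == "__main__":
    crawl(["https://example.com"])

A production crawler would also respect robots.txt and rate-limit its requests, which this sketch omits.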
Stars: 9
Forks: 6
Language: Python
License: MIT
Category:
Last pushed: Jul 21, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/Boomslet/Web_Crawler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
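If you prefer to call the endpoint from Python rather than curl, a minimal sketch follows. Only the URL comes from the curl example above; the response is assumed to be JSON, and API-key handling is omitted because the header or parameter name is not documented here.

import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/perception/Boomslet/Web_Crawler"

# Fetch the quality/perception record and pretty-print it.
with urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes the endpoint returns JSON
print(json.dumps(data, indent=2))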
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, ...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.