Cirice/Krawler
A complete multi-threaded web crawler in Python 3
Quickly gather information from a specific website by systematically exploring all of its pages. You provide a website address and the number of simultaneous connections to use, and it writes out a file containing the URLs and content it finds. The tool suits researchers, data analysts, and marketers who need to extract data from a single domain.
No commits in the last 6 months.
Use this if you need to thoroughly explore all public pages within a specific website and collect their content or links.
Not ideal if you need to crawl a large number of diverse websites across the internet or require advanced data extraction and parsing capabilities beyond simple page content.
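A crawler of this kind boils down to a thread pool draining a frontier of same-domain URLs. The sketch below illustrates the general technique only, not Krawler's actual code: the `fetch` callable (URL in, HTML out) is injected so the example stays self-contained, and the names `crawl`, `workers`, and `limit` are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.parse import urljoin, urlparse
import re

# Naive link extraction; a real crawler would use an HTML parser.
LINK_RE = re.compile(r'href="([^"]+)"')

def crawl(start_url, fetch, workers=4, limit=100):
    """Breadth-first crawl confined to start_url's domain.

    `fetch` is any callable mapping a URL to an HTML string (injected
    here so the sketch needs no network). Returns {url: html} for the
    pages visited; `limit` is checked per wave, so it is approximate.
    """
    domain = urlparse(start_url).netloc
    seen, pages = {start_url}, {}
    frontier = [start_url]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier and len(pages) < limit:
            # Fetch the whole wave concurrently.
            futures = {pool.submit(fetch, u): u for u in frontier}
            frontier = []
            for fut in as_completed(futures):
                url = futures[fut]
                try:
                    html = fut.result()
                except Exception:
                    continue  # skip pages that fail to fetch
                pages[url] = html
                for href in LINK_RE.findall(html):
                    nxt = urljoin(url, href)
                    # Stay on the original domain; dedupe as we go.
                    if urlparse(nxt).netloc == domain and nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return pages
```

Injecting `fetch` also makes the crawler easy to test against a dictionary of canned pages before pointing it at a live site.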
Stars
12
Forks
4
Language
Python
License
MIT
Category
Last pushed
May 24, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/Cirice/Krawler"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
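The same lookup can be made from Python. The endpoint path below is taken from the curl command above; the JSON response body is an assumption, since the payload schema isn't documented here, and `quality_url`/`fetch_quality` are illustrative names.

```python
import json
from urllib.request import urlopen

# Base path copied from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def quality_url(owner, repo):
    """Build the lookup URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch and decode one record (assumes a JSON response body)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository.
# quality_url("Cirice", "Krawler")
# → "https://pt-edge.onrender.com/api/v1/quality/perception/Cirice/Krawler"
```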
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. A http client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.