lein3000zzz/project-arachne
A web crawler project with Kafka queues, Redis cache support, a Neo4j map builder, strict robots.txt compliance, a one-of-a-kind static JS parser built on goja, some cool features from the rod library, and more! No updates for a while, but work will resume soon.
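The static JS parser is the most unusual piece, so here is a minimal sketch of the idea, assuming goja (github.com/dop251/goja) is used to evaluate inline scripts so URLs assembled in JavaScript can be recovered without launching a browser. The extractURLs helper, the regex, and the sample script are illustrative assumptions, not the project's actual code.

```go
package main

import (
	"fmt"
	"regexp"

	"github.com/dop251/goja"
)

// extractURLs is a hypothetical helper: it runs an inline <script> body in a
// goja VM, then scans every string value the script left in global scope for
// anything that looks like a URL. A real crawler would do much more.
func extractURLs(script string) ([]string, error) {
	vm := goja.New()
	if _, err := vm.RunString(script); err != nil {
		return nil, err // script failed to evaluate; caller can fall back to plain HTML parsing
	}

	urlRe := regexp.MustCompile(`https?://[^\s"']+`)
	var urls []string
	for _, name := range vm.GlobalObject().Keys() {
		if s, ok := vm.GlobalObject().Get(name).Export().(string); ok {
			urls = append(urls, urlRe.FindAllString(s, -1)...)
		}
	}
	return urls, nil
}

func main() {
	script := `var next = "/page/" + (1 + 1); var api = "https://example.com" + next;`
	urls, err := extractURLs(script)
	if err != nil {
		panic(err)
	}
	fmt.Println(urls) // [https://example.com/page/2]
}
```

The point of evaluating scripts this way is that links computed at runtime (pagination counters, concatenated API paths) never appear in the raw HTML, so a purely static HTML parser would miss them.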
This tool helps gather information from websites, especially those that display content dynamically using JavaScript. You provide a list of websites to explore, and it systematically visits pages, extracts links and content, and stores the interconnected data in a graph database for easy analysis of relationships. It's ideal for market researchers, data scientists, or competitive intelligence analysts who need to collect and map large amounts of web data.
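To make "stores the interconnected data in a graph database" concrete, here is a minimal sketch using the official Neo4j Go driver (github.com/neo4j/neo4j-go-driver/v5). The Page label, the LINKS_TO relationship, and the connection details are assumptions chosen for illustration; project-arachne's actual schema may differ.

```go
package main

import (
	"context"
	"log"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

// storeLink records that one crawled page links to another as a
// (:Page)-[:LINKS_TO]->(:Page) relationship. MERGE keeps nodes and
// edges unique no matter how many times a URL is re-crawled.
func storeLink(ctx context.Context, session neo4j.SessionWithContext, from, to string) error {
	_, err := session.Run(ctx,
		`MERGE (a:Page {url: $from})
		 MERGE (b:Page {url: $to})
		 MERGE (a)-[:LINKS_TO]->(b)`,
		map[string]any{"from": from, "to": to})
	return err
}

func main() {
	ctx := context.Background()

	// Connection details are placeholders; point these at your own Neo4j instance.
	driver, err := neo4j.NewDriverWithContext("bolt://localhost:7687",
		neo4j.BasicAuth("neo4j", "password", ""))
	if err != nil {
		log.Fatal(err)
	}
	defer driver.Close(ctx)

	session := driver.NewSession(ctx, neo4j.SessionConfig{})
	defer session.Close(ctx)

	if err := storeLink(ctx, session, "https://example.com/", "https://example.com/about"); err != nil {
		log.Fatal(err)
	}
}
```

Once links are stored this way, questions like "which pages does everything eventually point to" become single Cypher queries instead of joins over a flat URL table.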
Use this if you need to extract data from modern, JavaScript-heavy websites and want to understand how pages link together, storing this information for complex querying.
Not ideal if you only need to extract data from static HTML pages or if you don't require advanced relational analysis of the collected web data.
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy extension for monitoring spider execution.