internetarchive/Zeno
State-of-the-art web crawler 🔱
This tool helps preserve web content by creating archives of websites, from a single page to a broad collection. You provide a web address or a list of addresses, and it generates Web Archive (WARC) files that store the web pages and their assets. Digital archivists, researchers, and anyone needing to save specific web content for future reference would find this useful.
Use this if you need to reliably capture and store web content exactly as it appeared at a given time, whether for historical preservation, research, or compliance.
Not ideal if you're looking for a general-purpose tool to extract data from websites for analytics or competitive intelligence, as its primary focus is archival.
Stars
393
Forks
55
Language
Go
License
AGPL-3.0
Category
Last pushed
Mar 24, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/internetarchive/Zeno"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
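For scripting, the same endpoint can be called from Python. This is a minimal sketch: the endpoint path comes from the curl example above, but the response is assumed to be a JSON body, and the helper names (`build_url`, `fetch_quality`) are hypothetical, not part of any official client.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def build_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch a repository's quality data (assumes a JSON response body)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Uses the keyless tier (100 requests/day per the note above).
    print(fetch_quality("internetarchive", "Zeno"))
```

If the daily limit matters, the free-key tier presumably requires passing the key with each request; how it is passed (header or query parameter) is not documented here, so check the API's own docs.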
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.