hamzaskhan/HTTracker-Alternate
A Python script that does what HTTrack does. Since it is in script form, you can also build Docker images from it or improve it. Please help make it adaptable to macOS; I don't have a Mac, so I can't really test it.
This tool creates a complete, browsable offline copy of a website. You provide a starting URL and a crawl depth (how many link hops to follow), and it downloads the pages, images, and other assets it finds. The output is a local folder containing the site, ready to browse without an internet connection, along with a map of the site's structure.
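The crawl-to-a-depth idea described above can be sketched as a breadth-first traversal: fetch a page, extract its links, and enqueue same-host links with an incremented depth. This is a minimal stdlib-only sketch, not the repository's actual implementation; the `crawl` and `LinkExtractor` names are illustrative.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href/src attribute values from an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)


def crawl(start_url, max_depth=2):
    """Breadth-first crawl of same-host pages, up to max_depth link hops.

    Returns {url: html} for every page successfully fetched.
    """
    host = urlparse(start_url).netloc
    seen, pages = {start_url}, {}
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        try:
            with urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable resources
        pages[url] = html
        if depth >= max_depth:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            # stay on the same host and avoid revisiting pages
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, depth + 1))
    return pages
```

A real mirroring tool would additionally rewrite links to relative paths and save each response to disk so the copy is browsable offline.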
No commits in the last 6 months.
Use this if you need to access a website's content reliably even when you don't have internet access, or if you want to archive a website's state at a specific point in time.
Not ideal if you only need to download a few specific files or if the website requires user login or complex interactions, as it's designed for static site capture.
Stars: 9
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Aug 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/hamzaskhan/HTTracker-Alternate"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
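The curl call above can also be made from Python. A small stdlib sketch follows; the response schema is not documented here, so `fetch_quality` simply returns the decoded JSON for the caller to inspect, and the helper names are illustrative.

```python
import json
from urllib.request import urlopen

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"


def quality_url(owner, repo):
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner, repo):
    """Fetch and decode the JSON payload for a repository.

    The payload's shape is an assumption left to the caller; this only
    performs the request and JSON decoding.
    """
    with urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

Example: `fetch_quality("hamzaskhan", "HTTracker-Alternate")` hits the same URL as the curl command; keep within the 100 requests/day limit if you have no key.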
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.