tednaleid/ganda
Fast command-line app that makes requests to millions of URLs and can save or echo the results
Need to fetch data from many web addresses quickly? This tool makes concurrent HTTP/HTTPS requests to hundreds or millions of URLs. You provide a list of URLs, and it fetches each page or API response, either printing the results to stdout or saving them to a directory for later analysis. It suits anyone who needs to gather web data at scale, such as data analysts, researchers, or SEO specialists.
Use this if you need to rapidly make many HTTP requests to unique URLs and process the responses, such as scraping data, checking API endpoints, or bulk downloading specific content.
Not ideal for load testing: it is designed for making one request each to many unique URLs, not for stress-testing a single web service with repeated requests.
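The workflow described above, a URL list piped in and responses echoed or saved, can be sketched as a small shell session. The `ganda` invocations are shown as comments because the exact flag names (such as `-o` for an output directory) are assumptions; check `ganda --help` for the tool's actual options.

```shell
# Build a list of URLs, one per line -- the input format ganda expects on stdin.
# example.com URLs are placeholders, not real endpoints.
printf '%s\n' \
  "https://example.com/api/item/1" \
  "https://example.com/api/item/2" > urls.txt

# Hypothetical invocations (flag names are assumptions, verify with `ganda --help`):
#
#   cat urls.txt | ganda            # echo each response body to stdout
#   cat urls.txt | ganda -o ./out   # save each response as a file under ./out

# Confirm the URL list is ready to pipe in:
wc -l < urls.txt
```

Keeping one URL per line makes the list easy to generate from other tools (a database export, `jq`, a spreadsheet dump) before piping it in.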
Stars: 64
Forks: 9
Language: Go
License: Apache-2.0
Category:
Last pushed: Apr 02, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/tednaleid/ganda"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.