naiba/proxy-in-a-box
Automatic proxy pool for web scraping - crawls, validates, and rotates proxies with rate limiting and MITM support
This project helps web scrapers and data extraction specialists reliably gather information from websites without being blocked. It takes lists of potential proxy servers from various online sources, automatically checks if they work, and then uses them to route your web requests. The result is that your data collection tools can operate more smoothly and consistently, appearing to originate from different locations and reducing the likelihood of detection.
Use this if you need to perform web scraping at scale and frequently encounter issues with IP blocking, rate limits, or sophisticated anti-bot measures that detect traditional proxy usage.
Not ideal if your web scraping needs are minimal, or if you only require a single, static proxy that doesn't need to be rotated or automatically managed.
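The crawl-validate-rotate loop the description refers to can be sketched roughly as below. This is a minimal illustration, not the project's actual API: the class name, the stubbed `validate` check, and the proxy addresses are all hypothetical, and a real pool would health-check each proxy with a timed test request instead of a string check.

```python
import itertools
import threading

class ProxyPool:
    """Minimal sketch of a crawl-validate-rotate proxy pool.
    All names here are illustrative, not proxy-in-a-box's API."""

    def __init__(self, candidates):
        self._lock = threading.Lock()
        self._alive = []
        self._cycle = None
        self.refresh(candidates)

    def validate(self, proxy):
        # Placeholder health check. A real pool would issue a test
        # request through the proxy to a known endpoint, with a timeout,
        # and drop proxies that fail or respond too slowly.
        return proxy.startswith(("http://", "socks5://"))

    def refresh(self, candidates):
        # Re-run validation over the candidate list (e.g. freshly
        # crawled from public proxy sources) and rebuild the rotation.
        with self._lock:
            self._alive = [p for p in candidates if self.validate(p)]
            self._cycle = itertools.cycle(self._alive)

    def next_proxy(self):
        # Round-robin over the proxies that passed validation.
        with self._lock:
            if not self._alive:
                raise RuntimeError("no working proxies in pool")
            return next(self._cycle)

pool = ProxyPool([
    "http://10.0.0.1:8080",   # hypothetical addresses
    "ftp://bad-proxy",        # fails validation, silently dropped
    "socks5://10.0.0.2:1080",
])
print(pool.next_proxy())  # each call yields the next validated proxy
```

Each scraping request would then fetch `pool.next_proxy()` and route through it, so successive requests appear to originate from different addresses.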
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/naiba/proxy-in-a-box"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.