dchrostowski/autoproxy
Public proxy farm that automatically discovers, tests, and queues suitable proxy servers for web crawling
This project helps web crawlers, data miners, and market researchers reliably gather data from websites without being blocked. It collects publicly available proxy servers, tests their performance and reliability, and then provides a curated list of the most effective proxies for your crawling tasks. It's designed for anyone doing web scraping who needs to maintain anonymity or bypass anti-bot measures.
No commits in the last 6 months.
Use this if you are regularly scraping websites and need a dependable, automatically managed supply of effective proxy servers to avoid being detected or blocked.
Not ideal if you only occasionally scrape a few pages or primarily rely on private, paid proxy services rather than public ones.
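The workflow described above, probe each public proxy for reachability and latency, then keep only the fastest working ones, can be sketched in a few lines of standard-library Python. This is an illustrative sketch, not this project's actual code: the `check_proxy` helper, the test URL, and the latency cap are all assumptions.

```python
import time
import urllib.request

TEST_URL = "http://httpbin.org/ip"  # any stable endpoint works for probing

def check_proxy(proxy: str, timeout: float = 5.0):
    """Return round-trip latency in seconds via this proxy, or None on failure."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    )
    start = time.monotonic()
    try:
        with opener.open(TEST_URL, timeout=timeout) as resp:
            if resp.status != 200:
                return None
        return time.monotonic() - start
    except OSError:  # covers URLError, timeouts, refused connections
        return None

def rank_proxies(results: dict, max_latency: float = 3.0):
    """Keep working proxies under the latency cap, fastest first."""
    usable = {p: t for p, t in results.items() if t is not None and t <= max_latency}
    return sorted(usable, key=usable.get)
```

A real curation loop would run `check_proxy` over a scraped candidate list on a schedule and feed the `rank_proxies` output to the crawler.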
Stars
17
Forks
5
Language
Python
License
MIT
Category
Last pushed
Nov 04, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/dchrostowski/autoproxy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
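The same endpoint can be queried from Python with the standard library. The response schema is not documented here, so this sketch simply decodes whatever JSON the API returns; the helper names are illustrative.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def perception_url(owner: str, repo: str) -> str:
    """Build the perception API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_perception(owner: str, repo: str) -> dict:
    """Fetch the quality/perception record and parse it as JSON."""
    with urllib.request.urlopen(perception_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_perception("dchrostowski", "autoproxy"))
```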
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, ...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. A http client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.