68publishers/crawler
:spider_web: Awesome scenario based crawler
Crawler gathers information from websites automatically by following scenarios you define: you specify what data you want and where to find it on each page, and the tool collects it systematically. It is aimed at developers and technical users who need to automate large-scale web data extraction.
No commits in the last 6 months.
Use this if you need a robust, self-hosted solution to programmatically scrape data from many web pages based on custom, repeatable scenarios.
Not ideal if you prefer a no-code or low-code solution for web scraping, or if you only need to extract data from a few pages manually.
Stars: 10
Forks: 2
Language: JavaScript
License: MIT
Category:
Last pushed: Mar 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/68publishers/crawler"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
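The same endpoint can be called from code. A minimal Python sketch, assuming only the URL shape shown in the curl example above (the structure of the returned JSON is not documented here, so it is treated as an opaque dict):

```python
# Sketch of a client for the quality/perception API.
# The endpoint path is taken from the curl example; JSON field names
# in the response are not documented and are not assumed here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("68publishers", "crawler"))
```

With a free API key (for the 1,000/day tier), you would presumably attach it as a header or query parameter; the exact mechanism is not specified on this page.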
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. A http client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.