jonathanhefner/grubby
Fail-fast web scraping
This tool helps Ruby developers reliably extract data from web pages and JSON APIs, turning raw web content into structured data. Given a URL or local file and a set of scraping rules, it produces organized information such as headlines, links, or product details. It is aimed at Ruby developers building data-collection tools who need immediate feedback when the structure of a data source changes.
No commits in the last 6 months.
Use this if you are a Ruby developer who needs web scrapers that fail immediately when the structure of the target website or API changes, preventing corrupted data from being processed.
Not ideal if you need a no-code solution for web scraping or are working in a programming language other than Ruby.
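The fail-fast idea described above can be sketched in plain Ruby without any gems: an extraction helper that raises as soon as an expected element is missing, rather than silently returning nil and letting bad data flow downstream. Note that `scrape!`, `ScrapeError`, and the hash-based "page" below are hypothetical illustrations of the pattern, not grubby's actual API.

```ruby
# Fail-fast extraction pattern: raise immediately on missing structure
# instead of returning nil. (`scrape!` and `ScrapeError` are hypothetical
# names used for illustration; they are not part of grubby.)

class ScrapeError < StandardError; end

# `page` stands in for a parsed document: selector-like keys mapped to
# extracted text. A real scraper would query parsed HTML instead.
def scrape!(page, selector)
  page.fetch(selector) do
    raise ScrapeError, "no element matches #{selector.inspect}"
  end
end

page = { "h1.headline" => "Grubby 1.0 released" }

# Structure matches: the value comes back as expected.
puts scrape!(page, "h1.headline")

# Structure changed (e.g. the site redesigned): the scraper stops
# immediately instead of quietly emitting nil.
begin
  scrape!(page, "span.price")
rescue ScrapeError => e
  puts "Scrape failed fast: #{e.message}"
end
```

The design choice here is the same one the repo's tagline names: surfacing a structural mismatch at the point of extraction makes breakage obvious in logs and monitoring, whereas nil-returning lookups tend to produce partially corrupted datasets that are discovered much later.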
Stars
13
Forks
—
Language
Ruby
License
MIT
Category
Last pushed
May 14, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/jonathanhefner/grubby"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, ...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. A http client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.