AlexMathew/scrapple
A framework for creating semi-automatic web content extractors
This tool helps non-technical users extract specific information and data tables from websites without writing any code. You provide the web page addresses and describe the content you want, and the tool outputs the structured data you need; a short illustrative sketch follows the use-case notes below. It is ideal for researchers, marketers, or anyone who regularly needs to gather data from many web pages.
Use this if you need to regularly collect structured data like product details, news articles, or competitor pricing from multiple web pages or entire websites.
Not ideal if you only need to copy-paste data occasionally or if the websites you're targeting have highly complex, frequently changing structures that are hard to define with simple rules.
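To make the idea concrete, here is a minimal Python sketch of the selector-driven extraction this kind of tool automates. This is not scrapple's own API: the URL and XPath selectors are placeholders, and in a real scrapple project the same field-to-selector mapping would live in a configuration file rather than in code.

# Illustration only: selector-driven extraction in the spirit of what
# scrapple automates from a configuration file. The URL and XPath
# selectors below are placeholders, not part of scrapple itself.
import requests
from lxml import html

PAGE_URL = "https://example.com/products"  # placeholder target page

# Each field is a name plus a selector, the same "field -> selector"
# mapping a scrapple-style configuration captures.
FIELDS = {
    "title": "//h2[@class='product-title']/text()",
    "price": "//span[@class='price']/text()",
}

def extract(url, fields):
    """Fetch a page and pull out every configured field."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    tree = html.fromstring(response.content)
    return {name: tree.xpath(selector) for name, selector in fields.items()}

if __name__ == "__main__":
    print(extract(PAGE_URL, FIELDS))

Tools like scrapple keep this fetch-and-select loop fixed and let you supply only the URLs and selectors, which is why they suit users who do not want to maintain scraper code.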
Stars: 501
Forks: 41
Language: Python
License: MIT
Category:
Last pushed: Jan 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/AlexMathew/scrapple"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
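If you prefer Python over curl, the same request can be made with the requests library. This is only a sketch: the response is treated as opaque JSON because its schema is not documented here, and the header used for the optional API key is an assumption rather than a documented parameter.

# Sketch of calling the endpoint shown in the curl command above.
# The response schema and the API-key header name are assumptions.
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/perception/AlexMathew/scrapple"

def fetch_repo_data(api_key=None):
    headers = {}
    if api_key:
        # Header name is a guess; check the API documentation for the real one.
        headers["Authorization"] = f"Bearer {api_key}"
    response = requests.get(API_URL, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Anonymous access, subject to the 100 requests/day limit noted above.
    print(fetch_repo_data())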
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, ...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.