AstraBert/scpr
Web scraper CLI and MCP server built for humans and coding agents
This tool helps you quickly gather information from websites by converting entire web pages or linked content into easy-to-read Markdown files. It takes a web page URL as input and outputs structured text files, which is useful for content analysis, research, or building knowledge bases. Anyone needing to extract and save web content for later review or processing would find this helpful.
Use this if you need to extract human-readable content from one or multiple related web pages and save it in a common, easily parsable format like Markdown.
Not ideal if you need to extract highly specific data from complex tables or forms, or interact with JavaScript-heavy sites that require advanced browser automation.
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/AstraBert/scpr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
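The curl call above can also be reproduced from Python. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so the decoded payload is returned as-is):

```python
# Sketch of calling the quality/perception API from Python.
# Assumes the endpoint returns a JSON body; only the URL shape is
# taken from the curl example above.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/perception"


def perception_url(owner: str, repo: str) -> str:
    """Build the API URL for an owner/repo pair, mirroring the curl example."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_perception(owner: str, repo: str) -> dict:
    """Fetch and decode the quality/perception data for a repository."""
    with urllib.request.urlopen(perception_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


print(perception_url("AstraBert", "scpr"))
```

Within the free tier this works without any key; for the 1,000/day tier you would attach your key per the API's instructions (the exact header or query parameter is not shown here).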
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.