resulumit/scrp_workshop
Slides for a workshop on automated web scraping with R
This workshop helps researchers and analysts gather publicly available data from websites efficiently. It teaches you how to programmatically extract information such as text, links, or tables from web pages, turning them into a structured dataset for your analysis. Anyone who needs to collect specific data points from many web pages, such as a market researcher tracking competitor pricing or an academic collecting public records, would find this useful.
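To give a flavour of what "programmatically extracting links" means (the workshop itself teaches this in R; the snippet below is a minimal standard-library Python sketch of the same idea, and the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


# Invented sample HTML, standing in for a page fetched from the web.
sample = (
    '<p>See <a href="https://example.org/a">A</a> '
    'and <a href="https://example.org/b">B</a>.</p>'
)

parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # -> ['https://example.org/a', 'https://example.org/b']
```

Scraping at scale is the same loop, repeated: fetch a page, parse out the pieces you need, append them to a dataset.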
No commits in the last 6 months.
Use this if you need to systematically collect data from numerous web pages for research, analysis, or monitoring purposes.
Not ideal if you only need to collect data from a few web pages manually, or if the data you need is available through an official API.
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/resulumit/scrp_workshop"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
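The same endpoint can be called without curl. A sketch using Python's standard library, where the URL scheme is taken from the curl example above; the actual fetch is left commented out because the response format is not documented here:

```python
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"


def build_request(owner: str, repo: str) -> urllib.request.Request:
    """Assemble a GET request for a repository's quality/perception record."""
    url = f"{BASE}/{owner}/{repo}"
    return urllib.request.Request(url, headers={"Accept": "application/json"})


req = build_request("resulumit", "scrp_workshop")
print(req.full_url)
# To actually fetch the data (requires network access):
# with urllib.request.urlopen(req) as resp:
#     body = resp.read()
```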
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.