resulumit/scrp_workshop

Slides for a workshop on automated web scraping with R

Score: 36 / 100 (Emerging)

This workshop helps researchers and analysts gather publicly available data from websites efficiently. It teaches you how to programmatically extract information like text, links, or tables from web pages, providing a structured dataset for your analysis. Anyone who needs to collect specific data points from many web pages, like a market researcher tracking competitor pricing or an academic collecting public records, would find this useful.
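The workshop itself teaches these techniques in R; purely as a language-agnostic illustration of the core idea, programmatically pulling structured data (here, link targets) out of an HTML page, below is a minimal Python sketch using only the standard library. The HTML snippet and class name are invented for this example.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A tiny invented page standing in for a real scraped document.
html = '<ul><li><a href="/a">A</a></li><li><a href="/b">B</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # -> ['/a', '/b']
```

Real scraping adds an HTTP fetch in front of the parse step, but the extract-from-markup part is the same shape in any language.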

No commits in the last 6 months.

Use this if you need to systematically collect data from numerous web pages for research, analysis, or monitoring purposes.

Not ideal if you only need to collect data from a few web pages manually, or if the data you need is available through an official API.

Tags: data-collection, market-research, competitive-intelligence, academic-research, public-data-gathering

Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 11
Forks: 5
Language: HTML
License: (not listed)
Category: scraper
Last pushed: Jun 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/resulumit/scrp_workshop"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
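The endpoint above appears to follow a simple owner/repo pattern, so URLs for other repositories can presumably be built the same way. A small Python helper sketches this; the function name is invented here, and only the one URL shown in the curl example is confirmed by the page:

```python
# Base path taken verbatim from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score API URL for an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

print(quality_url("resulumit", "scrp_workshop"))
# -> https://pt-edge.onrender.com/api/v1/quality/perception/resulumit/scrp_workshop
```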