gonzalezcortes/scraping_news_articles
Python Scripts for Academic Web Scraping of WSJ Articles: Database Setup, Crawl, and Scrape.
This tool helps academic researchers gather specific news articles from the Wall Street Journal for their studies. It takes a target year and WSJ website pages as input, then extracts article links, headlines, publication times, and full text. The output is a structured SQLite database containing the scraped content and metadata, ready for research analysis.
No commits in the last 6 months.
Use this if you are an academic researcher who needs to systematically collect and organize Wall Street Journal articles for a specific research project.
Not ideal if you need to scrape data from websites other than the Wall Street Journal or require a solution for non-academic, commercial purposes.
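The workflow described above (collect article links, scrape headline, publication time, and full text, then store everything in SQLite) can be sketched with the standard library. This is a hypothetical sketch, not the repo's actual code: the table and column names are assumptions.

```python
import sqlite3

def init_db(path: str) -> sqlite3.Connection:
    # Database-setup step: one "articles" table holding the fields the
    # description mentions. Column names are assumptions, not the
    # repo's real schema.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS articles (
               url        TEXT PRIMARY KEY,
               headline   TEXT,
               published  TEXT,   -- publication time as text
               body       TEXT    -- full article text
           )"""
    )
    return conn

def save_article(conn, url, headline, published, body):
    # INSERT OR REPLACE keeps repeated crawls idempotent per URL.
    conn.execute(
        "INSERT OR REPLACE INTO articles VALUES (?, ?, ?, ?)",
        (url, headline, published, body),
    )
    conn.commit()

conn = init_db(":memory:")
save_article(conn, "https://www.wsj.com/articles/example",
             "Example headline", "2023-10-12T09:00:00Z", "Full text...")
print(conn.execute("SELECT COUNT(*) FROM articles").fetchone()[0])  # prints 1
```

The `url` primary key means re-scraping the same pages updates rows rather than duplicating them, which suits periodic crawls over a fixed target year.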
- Stars: 9
- Forks: 2
- Language: Python
- License: MIT
- Category:
- Last pushed: Oct 12, 2023
- Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/gonzalezcortes/scraping_news_articles"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
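The curl example above can also be called from Python with the standard library. This is a minimal sketch: the response schema is not documented here, so the result is returned as a plain dict, and passing the key as a bearer token is an assumption rather than the API's documented auth mechanism.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def build_url(owner: str, repo: str) -> str:
    # Same endpoint shape as the curl example: /<owner>/<repo>.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    # Fetch the JSON payload for a repository. The Authorization
    # header is an assumption; check the API docs for the real
    # authentication mechanism.
    req = urllib.request.Request(build_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(build_url("gonzalezcortes", "scraping_news_articles"))
```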
Higher-rated alternatives
- scrapy/scrapy: Scrapy, a fast high-level web crawling & scraping framework for Python.
- Altimis/Scweet: A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
- lexiforest/curl_cffi: Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
- plabayo/rama: Modular service framework to move and transform network packets.
- scrapinghub/spidermon: Scrapy extension for monitoring spider execution.