louisguitton/disqus-crawler
Crawl DISQUS comments from a blog into a local MongoDB database
This tool helps researchers, analysts, and social media managers collect comments from blog posts that use the DISQUS commenting system. Given the URL of a blog page, it scrapes all associated DISQUS comments and saves them into a local MongoDB database, making it useful for gathering public feedback or running sentiment analysis on specific online discussions.
No commits in the last 6 months.
Use this if you need to systematically collect and store DISQUS comments from a specific blog for analysis, especially when the content relies on JavaScript rendering.
Not ideal if you need to collect comments from platforms other than DISQUS or require a ready-to-use solution without setting up a database and web-rendering service.
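Because DISQUS injects its widget with JavaScript, a crawler needs the rendered page before it can tell which DISQUS forum a blog uses. As a minimal sketch (generic DISQUS handling, not code from this repository): the standard DISQUS embed loads a script from `https://<shortname>.disqus.com/embed.js`, so the forum shortname can be recovered from the rendered HTML.

```python
import re

# The standard DISQUS embed code loads a script from
# https://<shortname>.disqus.com/embed.js, so the forum
# shortname can be extracted from rendered page HTML.
# Generic sketch; not taken from the disqus-crawler source.
EMBED_RE = re.compile(r"//([\w-]+)\.disqus\.com/embed\.js")

def extract_shortname(html: str):
    """Return the DISQUS shortname found in `html`, or None."""
    match = EMBED_RE.search(html)
    return match.group(1) if match else None

# Hypothetical rendered-page snippet:
html = '<script src="https://example-blog.disqus.com/embed.js"></script>'
```

Here `extract_shortname(html)` yields `"example-blog"`; a crawler would then use the shortname and thread identifier to fetch the comments themselves.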
Stars: 13
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Oct 19, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/louisguitton/disqus-crawler"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
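The same endpoint can be called from Python. A minimal sketch using only the standard library; the URL pattern comes from the curl example above, and no assumptions are made about the shape of the JSON response:

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def build_url(owner: str, repo: str) -> str:
    # Mirrors the curl example: BASE/<owner>/<repo>
    return f"{BASE}/{owner}/{repo}"

def fetch_perception(owner: str, repo: str) -> dict:
    # Network call; subject to the 100 requests/day anonymous limit.
    with urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_perception("louisguitton", "disqus-crawler")` returns the parsed JSON for this repository.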
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
modular service framework to move and transform network packets
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.