SakuraPuare/ZhiHu_Spider
知乎内容爬虫 | Web scraper for Zhihu content extraction
This tool helps you gather information from Zhihu by extracting content like topics, questions, answers, and comments. You input the specific Zhihu pages or profiles you're interested in, and it outputs organized data ready for your analysis. It's designed for researchers, marketers, or anyone needing to collect large amounts of public content from Zhihu for insights or trend tracking.
No commits in the last 6 months.
Use this if you need to systematically collect a high volume of public data from Zhihu, such as all answers to a specific question or comments on a particular topic.
Not ideal if you only need to extract a few pieces of information manually, or if you require data from private user accounts.
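To illustrate the kind of "organized data ready for your analysis" described above, here is a minimal sketch of flattening scraped answer records into CSV. This is not the project's actual code, and the field names (question, author, votes, content) are illustrative assumptions, not the spider's real output schema:

```python
import csv
import io

def answers_to_csv(answers):
    """Flatten raw answer dicts into a CSV string.

    Field names are assumptions for illustration, not the
    spider's documented output format.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["question", "author", "votes", "content"])
    writer.writeheader()
    for a in answers:
        writer.writerow({
            "question": a.get("question", ""),
            "author": a.get("author", ""),
            "votes": a.get("votes", 0),
            "content": a.get("content", "").strip(),
        })
    return buf.getvalue()

sample = [{"question": "Q1", "author": "alice", "votes": 12, "content": " hello "}]
print(answers_to_csv(sample))
```

A flat tabular form like this is convenient for the trend-tracking and analysis use cases mentioned above, since it loads directly into pandas or a spreadsheet.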
Stars: 34
Forks: 7
Language: Python
License: —
Category: —
Last pushed: Apr 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SakuraPuare/ZhiHu_Spider"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
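The same endpoint can be called programmatically. A minimal Python sketch using only the standard library; note that the shape of the JSON response body is an assumption, only the URL comes from the curl example above:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint shown in the curl example.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: int = 10):
    # Fetch and decode the JSON payload. The response schema is
    # not documented here, so callers should inspect it first.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("SakuraPuare", "ZhiHu_Spider"))
```

Keeping the URL construction in its own function makes the no-key rate limit easy to respect: callers can log or throttle per-URL requests before issuing them.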
Higher-rated alternatives
flairNLP/fundus
A very simple news crawler with a funny name
fhamborg/news-please
An integrated web crawler and information extractor for news that just works
affjljoo3581/canrevan
A library for collecting large volumes of Naver news articles.
FreeDiscovery/FreeDiscovery
Web Service for E-Discovery Analytics
tirthajyoti/Web-Database-Analytics
Web scraping and related analytics using Python tools