let4be/crusty-core
A small library for building fast and highly customizable web crawlers
This library helps Rust developers build highly customized and efficient web crawlers. It takes a starting URL or list of URLs and, based on user-defined rules, navigates websites, extracts specific data like page titles or links, and outputs the collected information. It's designed for engineers who need fine-grained control over their crawling process and aim for speed and scalability.
No commits in the last 6 months.
Use this if you are a Rust developer needing to programmatically collect specific information from a large number of websites, where performance and custom logic for filtering and data extraction are critical.
Not ideal if you're looking for a ready-to-use web scraping application without writing code, or if you need to build a crawler in a language other than Rust.
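To give a feel for the kind of user-defined rule such a crawler applies when deciding which links to follow, here is a minimal stdlib-only Rust sketch. It does not use crusty-core's actual API; the `should_follow` function and its parameters are illustrative assumptions, standing in for the custom filtering logic you would plug into the library.

```rust
// Illustrative sketch only: crusty-core's real API differs. This shows the
// shape of a user-defined crawl rule (which discovered links to follow).

/// Hypothetical filtering rule: follow a link only if it stays on the
/// seed host and is not an obvious binary asset.
fn should_follow(seed_host: &str, url: &str) -> bool {
    // Extract the host portion of an http(s) URL (naive, stdlib-only).
    let host = url
        .strip_prefix("https://")
        .or_else(|| url.strip_prefix("http://"))
        .and_then(|rest| rest.split('/').next());

    match host {
        Some(h) if h == seed_host => {
            // Skip common non-HTML assets.
            !url.ends_with(".png") && !url.ends_with(".jpg") && !url.ends_with(".pdf")
        }
        _ => false,
    }
}

fn main() {
    let seed = "example.com";
    let links = [
        "https://example.com/about",
        "https://example.com/logo.png",
        "https://other.org/page",
    ];
    for link in links {
        println!("{link} -> follow: {}", should_follow(seed, link));
    }
}
```

In crusty-core itself, rules like this are expressed through the library's own filter and task-expansion hooks rather than a bare function, but the decision being made per link is the same.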
Stars: 16
Forks: 1
Language: Rust
License: GPL-3.0
Category:
Last pushed: Jan 04, 2023
Monthly downloads: 195
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/let4be/crusty-core"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.