let4be/crusty-core

A small library for building fast and highly customizable web crawlers

Score: 32 / 100 (Emerging)

This library helps Rust developers build highly customized and efficient web crawlers. It takes a starting URL or list of URLs and, based on user-defined rules, navigates websites, extracts specific data like page titles or links, and outputs the collected information. It's designed for engineers who need fine-grained control over their crawling process and aim for speed and scalability.

No commits in the last 6 months.

Use this if you are a Rust developer needing to programmatically collect specific information from a large number of websites, where performance and custom logic for filtering and data extraction are critical.

Not ideal if you're looking for a ready-to-use web scraping application without writing code, or if you need to build a crawler in a language other than Rust.

Tags: Web Scraping, Data Collection, Web Automation, API Development, Information Retrieval
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 16 / 25
Community: 5 / 25


Stars: 16
Forks: 1
Language: Rust
License: GPL-3.0
Category: scraper
Last pushed: Jan 04, 2023
Monthly downloads: 195
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/let4be/crusty-core"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
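The endpoint path embeds the `owner/repo` slug, so the URL for any listed repository can be built by string substitution. A minimal helper (the function name is hypothetical; the base URL is the one shown above):

```python
def quality_api_url(owner: str, repo: str) -> str:
    # Build the per-repo quality endpoint URL from the documented base path.
    return f"https://pt-edge.onrender.com/api/v1/quality/perception/{owner}/{repo}"

print(quality_api_url("let4be", "crusty-core"))
```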