spider-rs/spider-example-archiver

Crawl and archive websites in a VS Code-like code viewer. Powered by Spider Cloud.

Score: 39 / 100 (Emerging)

This tool helps you gather and save website content by 'crawling' a site and collecting all its pages. You input a website URL, and it provides a browsable list of all discovered pages, letting you inspect their underlying HTML code. It's ideal for anyone who needs to capture and review the exact content of a website at a specific point in time, like content strategists, SEO specialists, or compliance officers.

Use this if you need to comprehensively archive a website's content, examine its HTML structure, or download all pages for offline analysis.

Not ideal if you only need to check for broken links, assess accessibility, or monitor content changes over time.
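The crawl-and-inspect workflow described above can be sketched in TypeScript, the repository's language. This is a minimal sketch assuming the Spider Cloud crawl API at `api.spider.cloud`; the endpoint shape, `return_format` parameter, response schema, and `SPIDER_API_KEY` variable are assumptions for illustration, not taken from this repository's code.

```typescript
// Hypothetical sketch of the archiver's core step: ask Spider Cloud to
// crawl a site and collect the raw HTML of every discovered page.
// Endpoint, parameter names, and response shape are assumptions.

interface CrawlRequest {
  url: string;           // site to crawl
  limit: number;         // max pages to collect
  return_format: string; // ask for raw HTML so pages can be archived
}

// Build the JSON body for a crawl call; kept pure so it is easy to test.
export function buildCrawlRequest(url: string, limit = 25): CrawlRequest {
  return { url, limit, return_format: "raw" };
}

// Fire the crawl and return one entry per discovered page.
// SPIDER_API_KEY is an assumed environment variable name.
export async function archiveSite(
  url: string
): Promise<{ url: string; html: string }[]> {
  const res = await fetch("https://api.spider.cloud/crawl", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SPIDER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildCrawlRequest(url)),
  });
  if (!res.ok) throw new Error(`crawl failed: ${res.status}`);
  // Assumed response shape: an array of { url, content } objects.
  const pages: { url: string; content: string }[] = await res.json();
  return pages.map((p) => ({ url: p.url, html: p.content }));
}
```

Each returned `{ url, html }` pair could then be written to disk or rendered in the viewer for inspection.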

Tags: website-archiving, content-capture, web-crawling, SEO-analysis, digital-preservation
No package · No dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 7
Forks: 1
Language: TypeScript
License: MIT
Category: scraper
Last pushed: Mar 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/spider-rs/spider-example-archiver"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
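The same endpoint can be called programmatically. A minimal TypeScript sketch, assuming only the URL shown above; the `x-api-key` header name and the response schema are assumptions, since the listing does not document how a key is passed.

```typescript
// Sketch of calling the perception-score API for a given owner/repo.
// Only the base URL comes from the listing; the key header is assumed.

const BASE = "https://pt-edge.onrender.com/api/v1/quality/perception";

// Build the request URL; kept pure so it is easy to test.
export function scoreUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

export async function fetchScore(
  owner: string,
  repo: string,
  apiKey?: string
): Promise<unknown> {
  // Hypothetical header name for the optional free key.
  const headers: Record<string, string> = apiKey ? { "x-api-key": apiKey } : {};
  const res = await fetch(scoreUrl(owner, repo), { headers });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json(); // response schema is not documented here
}
```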