webrecorder/browsertrix-crawler

Run a high-fidelity browser-based web archiving crawler in a single Docker container

Score: 68 / 100 (Established)

This tool helps preserve websites exactly as they appear and function, capturing everything from text and images to interactive elements. You provide a list of website addresses, and it produces a complete, browseable archive of those sites for long-term storage or offline access. Digital archivists, memory institutions, and researchers who need to preserve online content will find this useful.

1,007 stars. Actively maintained with 12 commits in the last 30 days.

Use this if you need to create high-fidelity, interactive archives of websites for historical preservation, legal evidence, or research, ensuring all dynamic content is captured.

Not ideal if you only need to download static web pages or simple content for quick analysis without preserving full interactivity.
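To illustrate the "list of website addresses in, browseable archive out" workflow described above, here is a sketch of a typical invocation based on the project's documented Docker quickstart. The URL and collection name are placeholders; verify the exact flags against the current README for your version.

```shell
# Crawl a single site and package the result as a WACZ archive.
# Output lands in ./crawls on the host via the volume mount.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection my-crawl
```

The resulting `.wacz` file can then be replayed offline or loaded into WACZ-compatible viewers for long-term access.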

Tags: web archiving, digital preservation, internet history, digital forensics, content capture
No package · No dependents
Maintenance: 20 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25


Stars: 1,007
Forks: 132
Language: TypeScript
License: AGPL-3.0
Category: scraper
Last pushed: Mar 27, 2026
Commits (30d): 12

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/webrecorder/browsertrix-crawler"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
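If you are querying the API for several repositories, building the endpoint URL programmatically is safer than string concatenation. The sketch below assumes only the URL pattern shown in the curl command above; the `perception_url` helper name is illustrative, and the response schema is not documented here.

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def perception_url(owner: str, repo: str) -> str:
    # quote() guards against characters that would break the URL path
    return f"{API_BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

print(perception_url("webrecorder", "browsertrix-crawler"))
# → https://pt-edge.onrender.com/api/v1/quality/perception/webrecorder/browsertrix-crawler
```

The resulting URL can be passed to curl, or fetched directly (e.g. with `urllib.request.urlopen`), subject to the rate limits noted above.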