Boomslet/Web_Crawler

Open-source web crawler

Score: 37 / 100 (Emerging)

This tool automatically visits websites and extracts their content, streaming the results to a file as it goes. It starts from one or more seed URLs you provide and systematically follows the new links it discovers. It is aimed at researchers, marketers, and data analysts who need to collect large amounts of text or structured information from public websites.

No commits in the last 6 months.

Use this if you need to gather data, like product descriptions, news articles, or public contact information, from many pages across one or more websites.

Not ideal if you only need to download a few specific files, or if the target sites require logins or enforce strict anti-bot measures.
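The crawl loop described above (start from seed URLs, follow discovered links, collect page content) can be sketched as a breadth-first traversal. This is a minimal illustration of the general technique, not the repository's actual implementation; the fetch function is injected so the loop works with any HTTP client and can be exercised without network access.

```python
# Minimal breadth-first crawl sketch (illustrative, not the repo's code).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, fetch, max_pages=100):
    """Visit pages breadth-first starting from seed_urls.

    fetch(url) must return the page's HTML as a string (or raise).
    Returns a dict mapping each visited URL to its HTML, stopping
    after max_pages pages.
    """
    queue = deque(seed_urls)
    seen = set(seed_urls)
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = fetch(url)
        except Exception:
            continue  # skip unreachable or non-HTML pages
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages
```

Injecting `fetch` also makes it easy to add politeness features (rate limiting, robots.txt checks) as wrappers around the real HTTP call.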

Tags: data-collection, market-research, content-scraping, competitive-analysis, research-data-gathering
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 9
Forks: 6
Language: Python
License: MIT
Category: scraper
Last pushed: Jul 21, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/Boomslet/Web_Crawler"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
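The curl command above can also be issued from Python with only the standard library. The endpoint path is taken verbatim from this page; the shape of the JSON response is not documented here, so the sketch simply decodes and returns whatever the server sends.

```python
# Stdlib equivalent of the curl command above (a sketch, assuming the
# endpoint returns JSON; the response schema is not documented here).
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def api_url(owner, repo):
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_scores(owner, repo):
    """Fetch and decode the JSON payload for owner/repo."""
    with urlopen(api_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_scores("Boomslet", "Web_Crawler"))
```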