Norconex/crawlers
Norconex Crawlers (or spiders) are flexible web and filesystem crawlers that collect, parse, and manipulate data from websites or filesystems and send it to data repositories such as search engines.
This tool helps you collect information from websites or local files automatically. You tell it which websites or folders to look at, and it gathers that data, processes it, and sends it to a storage system such as a search engine or database. It is aimed at content managers, researchers, and data analysts who need to collect large amounts of structured and unstructured data.
Use this if you need to systematically collect and process data from the web or your file systems and feed it into a search engine or another data repository.
Not ideal if you only need to retrieve a few specific pieces of information or if you are not comfortable with command-line tools or integrating Java applications.
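For example, a crawl is typically driven by an XML configuration file passed to the crawler's launcher. The snippet below is a hypothetical minimal sketch in the spirit of the project's v3 XML format; the element names, the stayOnDomain flag, and the committer class are assumptions to verify against the documentation for your version:

    <httpcollector id="minimal-collector">
      <crawlers>
        <crawler id="example-crawler">
          <!-- Seed URL(s) to start from; stayOnDomain keeps the crawl scoped -->
          <startURLs stayOnDomain="true">
            <url>https://example.com/</url>
          </startURLs>
          <!-- Follow links at most two hops from the seed -->
          <maxDepth>2</maxDepth>
          <!-- Where processed documents are sent; the class name here is an
               illustrative assumption (real deployments often commit to
               Solr, Elasticsearch, or another repository) -->
          <committers>
            <committer class="XMLFileCommitter"/>
          </committers>
        </crawler>
      </crawlers>
    </httpcollector>

From there, the same configuration can grow to include filters, importers, and committers for whichever data repository you feed.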
Stars: 200
Forks: 70
Language: Java
License: Apache-2.0
Category:
Last pushed: Apr 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/Norconex/crawlers"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
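For programmatic access, the same endpoint can be called from any HTTP client. A minimal sketch using Java's built-in java.net.http client (Java 11+); the response is assumed to be a JSON document:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PerceptionApiExample {
        public static void main(String[] args) throws Exception {
            // Build a GET request against the public endpoint
            // (no key needed up to 100 requests/day)
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://pt-edge.onrender.com/api/v1/quality/perception/Norconex/crawlers"))
                    .build();
            // Send synchronously and print the raw body
            // (the response schema is an assumption, not documented here)
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }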
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy extension for monitoring spiders' execution.