freelawproject/juriscraper
An API to scrape American court websites for metadata.
This tool helps legal professionals, researchers, and journalists gather judicial opinions, oral arguments, and PACER data from American court websites. It takes the URLs of various federal and state court sites as input and provides structured metadata and document links as output. It's designed for anyone needing to systematically collect public court records.
557 stars. Actively maintained with 86 commits in the last 30 days.
Use this if you need to programmatically collect and analyze legal documents and metadata from a wide range of US federal and state court websites.
Not ideal if you're looking for a simple point-and-click interface for occasional document retrieval rather than a programmatic data collection solution.
Stars
557
Forks
147
Language
HTML
License
BSD-2-Clause
Category
Last pushed
Mar 27, 2026
Commits (30d)
86
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/perception/freelawproject/juriscraper"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
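If you prefer to call the endpoint from code rather than curl, the URL above can be built and fetched with the standard library. This is a minimal sketch: the endpoint path is taken from the example above, but the assumption that it returns JSON (and the shape of that JSON) is not documented here, so treat `fetch_quality` as illustrative.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/perception"

def quality_endpoint(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL shown above."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the payload for a repo (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_endpoint(owner, repo)) as resp:
        return json.load(resp)

# Example: the same request as the curl command above.
print(quality_endpoint("freelawproject", "juriscraper"))
```

Keyless access is rate-limited to 100 requests/day, so batch lookups should cache responses or use a free key.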
Related tools
scrapy/scrapy
Scrapy, a fast high-level web crawling & scraping framework for Python.
Altimis/Scweet
A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers,...
lexiforest/curl_cffi
Python binding for the curl-impersonate fork via cffi. An HTTP client that can impersonate browser...
plabayo/rama
Modular service framework to move and transform network packets.
scrapinghub/spidermon
Scrapy Extension for monitoring spiders execution.