tinaponting/ai-robots-scrapers
robots.txt and .htaccess rules to block AI scrapers and crawlers
This project helps website owners and content creators prevent AI crawlers and scrapers from accessing and reusing their online content without permission. It provides configuration files (such as .htaccess and robots.txt) and plugins that you can upload to your website to block unwanted AI bots, protecting your articles, blog posts, and data from being scraped and potentially used to train AI models.
Use this if you publish content online and want to prevent AI bots and scrapers from accessing, indexing, or utilizing your text and data without your consent.
Not ideal if you are looking for a fully automated, hands-off solution, as it requires some manual setup and ongoing updates.
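The repository's approach relies on standard exclusion mechanisms. As a minimal sketch of what such files typically contain (the user-agent list below is illustrative, not the repo's actual list; consult each vendor's documentation for current bot names), a robots.txt might read:

```
# Ask known AI training crawlers not to index the site
# (advisory only; well-behaved bots honor this, others may not)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Because robots.txt is purely advisory, such projects usually pair it with server-side enforcement. A hedged Apache .htaccess sketch (again, an assumed pattern, not the repo's actual file):

```apache
# Reject requests whose User-Agent matches known AI crawlers with 403 Forbidden
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (GPTBot|CCBot|Google-Extended) [NC]
RewriteRule .* - [F,L]
```

The robots.txt file covers cooperative bots cheaply, while the rewrite rule blocks matching user agents outright; neither stops a scraper that spoofs its User-Agent string.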
Stars
11
Forks
1
Language
—
License
—
Category
Last pushed
Mar 07, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/tinaponting/ai-robots-scrapers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vakra-dev/reader
Open-source, production-grade web scraping engine built for LLMs. Scrape and crawl the entire...
joaobenedetmachado/scrapit
A (really) easy way to web scrape
firecrawl/open-scouts
🔥 AI-powered web monitoring platform. Create automated scouts that search the web and send email...
BrowserCash/teracrawl
High-performance web crawler API optimized for LLMs. Turn any search or website into clean...
memvid/maw
Crawl any website into a single searchable file. Query it forever, offline.