pollinations/chucknorris

⚡ C̷h̷u̷c̷k̷N̷o̷r̷r̷i̷s̷ MCP server: Helping LLMs break limits. Provides enhancement prompts inspired by elder-plinius' L1B3RT4S

Score: 43 / 100 (Emerging)

This is a specialized tool for developers and security researchers working with AI. It helps you test the security and robustness of Large Language Models (LLMs) by supplying 'jailbreak' prompts that attempt to bypass their safety mechanisms. You point it at an LLM, and the prompts try to elicit responses the model was designed to avoid, helping you identify vulnerabilities. It is aimed at AI developers, security engineers, and researchers exploring LLM limitations.

No commits in the last 6 months. Available on npm.

Use this if you are developing or securing LLMs and need to evaluate their resistance to 'jailbreak' attempts.

Not ideal if you are an end-user looking for an AI assistant or a tool to simply 'improve' your LLM's general output.

LLM security AI safety vulnerability research prompt engineering AI development
No License · Stale (6 months)
Maintenance 0 / 25
Adoption 8 / 25
Maturity 17 / 25
Community 18 / 25

How are scores calculated?
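If the overall score is an unweighted sum of the four category subscores above (an assumption; the site does not state its formula here), the arithmetic checks out:

```python
# Assumption: overall score = plain sum of the four category subscores
# shown above, each out of 25 (max total 100).
subscores = {
    "Maintenance": 0,
    "Adoption": 8,
    "Maturity": 17,
    "Community": 18,
}

total = sum(subscores.values())
print(f"{total} / 100")  # prints "43 / 100", matching the displayed score
```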

Stars: 57
Forks: 13
Language: JavaScript
License: none
Last pushed: Apr 11, 2025
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/pollinations/chucknorris"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
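The same endpoint can be queried programmatically. A minimal Python sketch that builds the request URL for any owner/repo slug, following the path format of the curl example above; how an API key would be attached is not documented here, so this uses the keyless public tier:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen  # only needed for the live call below

BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub-style owner/repo slug."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("pollinations", "chucknorris")
print(url)
# -> https://pt-edge.onrender.com/api/v1/quality/mcp/pollinations/chucknorris

# Live call (keyless tier, 100 requests/day); response fields are not
# documented on this page, so inspect the JSON before relying on keys:
# data = json.loads(urlopen(url).read())
```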