RubensZimbres/CyberBotLLM

4 chatbots with memory made with Langchain, VertexAI and Gemini, as a cyber challenge to capture and expose RAG content.

26
/ 100
Experimental

This project helps cybersecurity professionals and educators understand and demonstrate prompt injection vulnerabilities in AI chatbots. It takes a custom 'Retrieval Augmented Generation' (RAG) document, which can be poisoned with sensitive information, and outputs conversation flows with memory, revealing how different chatbot configurations (regular, expert, hardened expert, cloud expert) respond to direct and indirect prompt injection attempts. Security analysts, penetration testers, and cybersecurity trainers would use this.

No commits in the last 6 months.

Use this if you need a hands-on environment to test and demonstrate prompt injection attacks and sensitive information disclosure in AI chatbots.

Not ideal if you are looking for a pre-built, production-ready secure chatbot, or if you don't have a Google Cloud environment and familiarity with its setup.

cybersecurity-training penetration-testing AI-security prompt-engineering vulnerability-demonstration
No License Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 13 / 25

How are scores calculated?

Stars

9

Forks

2

Language

Python

License

Last pushed

Jan 17, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/RubensZimbres/CyberBotLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.