user1342/Oversight

A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework.

29
/ 100
Experimental

This framework helps security researchers and AI safety engineers evaluate Large Language Models (LLMs) for vulnerabilities. You can load an LLM (currently from HuggingFace) and then run various tests like prompt fuzzing or jailbreaking bypasses. The output is a detailed report showing the model's behavior and potential weaknesses, helping you understand and mitigate risks.

No commits in the last 6 months.

Use this if you need to systematically test LLMs for security vulnerabilities, unwanted behaviors, or to understand their internal workings for safety and robustness.

Not ideal if you are looking for a tool to develop or fine-tune LLMs, or if you don't have access to Nvidia CUDA for local execution.

AI Safety LLM Security Testing Red Teaming Vulnerability Research Adversarial AI
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 5 / 25

How are scores calculated?

Stars

54

Forks

2

Language

Python

License

GPL-3.0

Last pushed

Nov 09, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/user1342/Oversight"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.