LazaUK/DeepLearningAI-Giskard-RedTeaming

Practical Jupyter notebooks from the "Red Teaming LLM Applications" course by Andrew Ng and the Giskard team on DeepLearning.AI.

Score: 38 / 100 (Emerging)

This project provides practical, adaptable code examples to help you test large language models (LLMs) for weaknesses and harmful outputs. It takes common LLM applications and shows you how to identify potential problems like bias, data leakage, and hallucinations. AI safety engineers, product managers, and anyone responsible for deploying secure and ethical LLM applications will find this useful.
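
The course notebooks lean on Giskard's automated scan for this. Below is a minimal sketch of that workflow, assuming Giskard 2.x; ask_bot is a hypothetical stand-in for whatever LLM application you are testing (it is not a function from this repository), and the scan's LLM-assisted detectors need a provider configured, e.g. OpenAI or Azure OpenAI credentials.

import pandas as pd
import giskard

def ask_bot(question: str) -> str:
    # Hypothetical stand-in for your LLM application; replace with a
    # real call (e.g. to an Azure OpenAI chat deployment).
    return "placeholder answer"

def predict(df: pd.DataFrame) -> list[str]:
    # Giskard passes a DataFrame of inputs and expects one
    # generated answer per row.
    return [ask_bot(q) for q in df["question"]]

model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="demo_bot",
    description="Answers user questions; used here only to illustrate a scan.",
    feature_names=["question"],
)

# The automated scan probes for issues such as harmful content,
# prompt injection, and hallucination, and produces a report.
report = giskard.scan(model)
report.to_html("scan_report.html")

The description field matters in practice: Giskard uses it to generate domain-relevant adversarial probes, so a specific description yields a more meaningful scan.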

No commits in the last 6 months.

Use this if you need to thoroughly test your LLM applications for vulnerabilities and ensure they behave as expected, especially when using Azure OpenAI.

Not ideal if you want a fully automated, out-of-the-box security solution and would rather not engage with code examples.
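
Since the notebooks target Azure OpenAI, the client wiring typically looks like the sketch below. It assumes the openai Python SDK v1; the endpoint, key, and deployment name are placeholders, not values taken from this repository.

import os
from openai import AzureOpenAI

# Endpoint and key are read from the environment; take the real
# values from your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # your *deployment* name, not the base model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)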

Tags: LLM security · AI safety · prompt engineering · model testing · responsible AI
Badges: Stale (6m) · No Package · No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 23
Forks: 6
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/LazaUK/DeepLearningAI-Giskard-RedTeaming"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
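
If you would rather call the endpoint from Python than curl, a minimal sketch with requests follows; the response schema is not documented here, so the code simply prints the parsed JSON as-is.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/LazaUK/DeepLearningAI-Giskard-RedTeaming")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors, including rate limiting
print(resp.json())       # schema not documented here, so print it raw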