GURPREETKAURJETHRA/LLM-SECURITY

Securing LLMs Against the Top 10 OWASP Large Language Model Vulnerabilities 2024

Score: 36 / 100 (Emerging)

This project helps security professionals and developers understand and mitigate risks associated with Large Language Models (LLMs). It compiles articles, official guidance, and research papers focused on the OWASP Top 10 LLM Vulnerabilities, providing insights into prompt injection, data poisoning, and other security threats. Anyone building, deploying, or securing applications that use LLMs would find this a valuable resource for staying informed on AI security.

No commits in the last 6 months.

Use this if you need a curated collection of resources to understand and defend against security vulnerabilities in Large Language Model applications.

Not ideal if you are looking for an automated tool or code library to directly implement LLM security measures.

AI security · LLM vulnerability management · prompt injection defense · AI governance · cybersecurity research

Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 22
Forks: 4
Language: —
License: MIT
Last pushed: May 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/GURPREETKAURJETHRA/LLM-SECURITY"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
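For programmatic use, the curl call above can be sketched in Python. A minimal example, assuming only the endpoint shown on this page; the structure and field names of the JSON response are not documented here, so inspect the raw output before relying on specific keys:

```python
# Sketch: fetch the quality report for a repo from the pt-edge API.
# Only the endpoint URL comes from this page; the response schema is
# an unknown, so we simply decode and pretty-print the JSON.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the report URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_report(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed for up to 100 requests/day, per the page.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    report = fetch_report("GURPREETKAURJETHRA", "LLM-SECURITY")
    print(json.dumps(report, indent=2))
```

With a free key (1,000 requests/day), you would presumably pass it as a header or query parameter; the page does not specify the mechanism, so check the API docs before adding authentication.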