TrustAI-laboratory/LMAP

LMAP (Large Language Model Mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer.

Score: 37 / 100 (Emerging)

This tool helps you evaluate the safety and security of large language models (LLMs) and the applications built on them. It takes your LLM applications or models as input and produces detailed reports on potential weaknesses, such as susceptibility to jailbreaks and other adversarial attacks. It is aimed at AI system owners, compliance teams, and developers who need to manage the risks of LLM deployments.

No commits in the last 6 months.

Use this if you need to rigorously test your LLM applications or models for security vulnerabilities and safety issues before or after deployment.

Not ideal if you are a general user looking for a simple LLM playground without specific security or compliance testing needs.

AI Safety · LLM Security · Compliance Testing · Adversarial AI · AI Risk Management
Stale (6m) · No Package · No Dependents
Score breakdown (the four sub-scores sum to the overall 37 / 100):

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 29
Forks: 5
Language:
License: Apache-2.0
Last pushed: Oct 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TrustAI-laboratory/LMAP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
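For scripted access, the same endpoint can be called from code. Below is a minimal Python sketch using the requests library; it assumes the endpoint returns a JSON body (the response schema is not documented on this page) and simply prints whatever comes back.

```python
import json

import requests

# Endpoint taken from the curl example above; subject to the same rate
# limits (100 requests/day without a key, 1,000/day with a free key).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TrustAI-laboratory/LMAP"


def fetch_quality_data(url: str = URL) -> dict:
    """Fetch the quality record for the repository and return the parsed JSON.

    Assumes the API responds with JSON; the exact fields are not documented
    here, so callers should inspect the result before relying on any key.
    """
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    data = fetch_quality_data()
    print(json.dumps(data, indent=2))
```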