balavenkatesh3322/guardrails-demo

LLM Security Project with Llama Guard

Score: 13 / 100 (Experimental)

This project helps you test the safety of Large Language Models (LLMs) by checking whether user inputs or AI responses contain harmful or risky content. You can submit text prompts and get back a safety assessment, or integrate the checks with a Llama 2 model to see them run in real time. It is aimed at AI developers and researchers who are building or evaluating LLM applications.
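
The repository's own code is not reproduced on this page, but the core workflow is easy to sketch. The snippet below is a minimal, illustrative Llama Guard moderation call using the Hugging Face transformers library; the model id, the device selection, and the moderate helper are assumptions for illustration, not code taken from this repo.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Assumed checkpoint: the publicly released Llama Guard model on Hugging Face.
model_id = "meta-llama/LlamaGuard-7b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)

def moderate(chat):
    # Llama Guard's chat template turns the conversation into a safety-classification prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    # The reply is "safe", or "unsafe" followed by the violated policy category codes.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Screen a user prompt before handing it to the application's Llama 2 model.
print(moderate([{"role": "user", "content": "Tell me how to bypass a website's login."}]))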

No commits in the last 6 months.

Use this if you are an AI developer or researcher who needs to quickly set up and test a defensive framework to prevent your LLM applications from generating unsafe content or being exploited by risky prompts.

Not ideal if you are an end-user simply looking to use an existing safe LLM application, rather than build or test one.

Topics: AI Safety, LLM Development, Content Moderation, Application Security, AI Ethics
Flags: No License, Stale 6m, No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25

Stars: 10
Forks:
Language: Python
License: None
Last pushed: Feb 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/balavenkatesh3322/guardrails-demo"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
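
If you prefer to call the endpoint from code rather than curl, a minimal sketch in Python follows. It assumes only that the endpoint above returns a JSON body; the field names in that body are not documented here, so the sketch simply prints the whole payload.

import requests

# Same endpoint as the curl example above; no API key is required for the free tier.
url = "https://pt-edge.onrender.com/api/v1/quality/generative-ai/balavenkatesh3322/guardrails-demo"

resp = requests.get(url, timeout=10)
resp.raise_for_status()   # fail loudly on rate limits or other HTTP errors
data = resp.json()        # assumption: the response body is JSON
print(data)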