Babelscape/ALERT

Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming"

Quality score: 39 / 100 (Emerging)

This project helps AI safety researchers and developers evaluate how safely their Large Language Models (LLMs) respond to potentially harmful prompts. It takes a list of prompts, runs them through your LLM, and then uses a separate safety classifier (Llama Guard) to judge whether each response is safe or unsafe. The output is a set of fine-grained safety scores that highlight specific weaknesses.
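
For orientation, here is a minimal Python sketch of the prompt, response, Llama Guard loop described above, written against the Hugging Face transformers API. It is an illustration of the pattern, not the repository's actual scripts: the checkpoint name, data file, field names, and the query_target_llm helper are assumptions.

import json
from transformers import AutoTokenizer, AutoModelForCausalLM

GUARD = "meta-llama/LlamaGuard-7b"  # assumed safety-classifier checkpoint
tokenizer = AutoTokenizer.from_pretrained(GUARD)
guard = AutoModelForCausalLM.from_pretrained(GUARD, device_map="auto")

def is_safe(prompt: str, response: str) -> bool:
    # Llama Guard classifies a (user prompt, assistant response) pair and
    # replies with a short text verdict that starts with "safe" or "unsafe".
    chat = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    out = guard.generate(input_ids=input_ids, max_new_tokens=20, pad_token_id=0)
    verdict = tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return verdict.strip().startswith("safe")

def query_target_llm(prompt: str) -> str:
    # Hypothetical stand-in for the model under test (API call, local model, ...).
    raise NotImplementedError

# Assumed data layout: one JSON object per line with "prompt" and "category" fields.
items = [json.loads(line) for line in open("alert_prompts.jsonl")]
per_category: dict[str, list[bool]] = {}
for item in items:
    response = query_target_llm(item["prompt"])
    per_category.setdefault(item["category"], []).append(is_safe(item["prompt"], response))

# Report the fraction of responses judged safe in each category.
for category, verdicts in per_category.items():
    print(f"{category}: {100 * sum(verdicts) / len(verdicts):.1f}% safe")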

No commits in the last 6 months.

Use this if you are developing or deploying Large Language Models and need a rigorous, fine-grained way to test their safety against a wide range of harmful prompts.

Not ideal if you are looking for a general-purpose LLM evaluation tool that isn't focused specifically on safety or 'red-teaming' scenarios.

Tags: AI Safety, LLM Evaluation, Red Teaming, Responsible AI, Content Moderation
Status: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 15 / 25

How are scores calculated? Each of the four dimensions above contributes up to 25 points, and they sum to the overall score: 0 + 8 + 16 + 15 = 39 / 100.

Stars: 57

Forks: 9

Language: Python

License: not listed

Last pushed: Sep 20, 2024

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Babelscape/ALERT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
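
If you prefer Python over curl, a minimal sketch using the requests library, assuming the endpoint returns a JSON body (its exact fields are not documented on this page):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Babelscape/ALERT"
resp = requests.get(url, timeout=10)
resp.raise_for_status()           # fail loudly on HTTP errors or rate limiting
data = resp.json()                # assumed JSON: overall score, dimension scores, repo stats
print(data)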