leondz/lm_risk_cards

Risks and targets for assessing LLMs & LLM vulnerabilities

Score: 32 / 100 (Emerging)

These Language Model Risk Cards help you systematically identify potential problems and vulnerabilities in how you plan to use a large language model. You'll choose a specific use case, model, and interface, then select relevant cards to guide your testing. The outcome is a detailed assessment report based on your efforts to provoke specific risky behaviors from the LLM. This is for anyone responsible for safely and effectively deploying large language models, such as product managers, AI ethics officers, or compliance specialists.
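
To make this workflow concrete, here is a minimal Python sketch of how a tester might log attempts against individual risk cards and roll them up into a report. The class and field names are illustrative assumptions only; the repo itself defines risk cards as documents, not code.

from dataclasses import dataclass, field

@dataclass
class RiskCard:
    # One risky behavior to probe, e.g. "generates toxic content".
    name: str
    description: str
    sample_prompts: list[str] = field(default_factory=list)

@dataclass
class AssessmentEntry:
    # The result of one attempt to provoke the behavior.
    card: RiskCard
    prompt: str
    model_output: str
    risk_observed: bool
    notes: str = ""

def build_report(entries: list[AssessmentEntry]) -> str:
    # Summarize which risks were and were not provoked.
    lines = []
    for e in entries:
        status = "OBSERVED" if e.risk_observed else "not observed"
        lines.append(f"[{e.card.name}] {status}: {e.prompt!r}")
    return "\n".join(lines)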

No commits in the last 6 months.

Use this if you need a structured way to uncover potential failures or risks before deploying a large language model into a real-world application.

Not ideal if you are looking for an automated tool to run tests or a technical library for developers to integrate into their code.

Tags: LLM deployment, AI risk management, model evaluation, AI safety, product management
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 17 / 25

Stars: 34
Forks: 10
Language: Python
License: none
Last pushed: May 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/leondz/lm_risk_cards"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
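
If you prefer Python to curl, a minimal sketch of the same request is below. It assumes only the endpoint shown above; the JSON response schema is not documented here, so the code simply prints the payload.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/leondz/lm_risk_cards"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(resp.json())       # inspect the payload to see the actual schema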