Giskard-AI/awesome-ai-safety

📚 A curated list of papers & technical articles on AI Quality & Safety

Score: 45 / 100 (Emerging)

This is a curated collection of research papers and technical articles focused on ensuring the quality and safety of AI models. It helps AI practitioners and researchers understand and address critical issues like ethical biases, errors, privacy leaks, and robustness problems in their AI systems. The resource organizes information by common machine learning tasks, providing insights into various AI risk types.

205 stars. No commits in the last 6 months.

Use this if you are an AI practitioner, researcher, or ethicist looking for comprehensive resources to improve the safety, fairness, and reliability of your AI models across different applications.

Not ideal if you are looking for ready-to-use code, tools, or software to directly implement AI safety measures.

Tags: AI ethics · machine learning quality · model robustness · algorithmic bias · AI research

Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 205
Forks: 27
Language:
License: Apache-2.0
Last pushed: Apr 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Giskard-AI/awesome-ai-safety"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
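The same endpoint can be queried from Python with the standard library. This is a minimal sketch: the URL path comes from the curl example above, but the response is assumed to be JSON and its field names are not documented here, so inspect the payload before relying on specific keys.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch a repository's quality record and parse it as JSON.

    Assumes the endpoint returns a JSON object; the schema is not
    specified here, so check the returned keys before using them.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Example: `fetch_quality("Giskard-AI", "awesome-ai-safety")` hits the same URL as the curl command above, subject to the same 100 requests/day unauthenticated limit.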