Giskard-AI/awesome-ai-safety
📚 A curated list of papers & technical articles on AI Quality & Safety
This is a curated collection of research papers and technical articles on ensuring the quality and safety of AI models. It helps AI practitioners and researchers understand and address critical issues such as ethical bias, errors, privacy leaks, and robustness problems in their AI systems. Resources are organized by common machine learning task, with coverage of the AI risk types relevant to each.
205 stars. No commits in the last 6 months.
Use this if you are an AI practitioner, researcher, or ethicist looking for comprehensive resources to improve the safety, fairness, and reliability of your AI models across different applications.
Not ideal if you are looking for ready-to-use code, tools, or software to directly implement AI safety measures.
Stars: 205
Forks: 27
Language: —
License: Apache-2.0
Category:
Last pushed: Apr 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Giskard-AI/awesome-ai-safety"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
CryptoAILab/Awesome-LM-SSP
A reading list for large models safety, security, and privacy (including Awesome LLM Security,...
liu673/Awesome-LLM4Security
This project aims to consolidate and share high-quality resources and tools across the...
ElNiak/awesome-ai-cybersecurity
Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to...
anmolksachan/AI-ML-Free-Resources-for-Security-and-Prompt-Injection
AI/ML Pentesting Roadmap for Beginners
Ashfaaq98/awesome-genai-cyberhub
A curated list of LLM-driven cybersecurity resources