romanlutz/ResponsibleAI
A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.
This is a curated collection of real-world case studies, news articles, and papers highlighting instances where artificial intelligence and algorithmic systems have produced problematic or discriminatory outcomes. It serves as a resource for professionals designing or implementing technology solutions, offering insights into potential risks and ethical considerations drawn from past failures.
No commits in the last 6 months.
Use this if you are a decision-maker, engineer, or data scientist involved in building technology solutions and want to understand common pitfalls and biases to design more responsible systems.
Not ideal if you are looking for technical solutions or code to directly mitigate bias in AI systems.
Stars: 70
Forks: 10
Language: —
License: —
Category: —
Last pushed: Feb 08, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/romanlutz/ResponsibleAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
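For scripted use, the same endpoint can be called from Python. This is a minimal sketch: `build_url` and `fetch_quality` are illustrative helper names, and since the JSON response schema is not documented here, the fetcher simply returns the parsed body.

```python
import json
import urllib.request

# Base path taken from the curl example above; the path segments after it
# are assumed to be <category>/<owner>/<repo>.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (requires network access).

    Within the free tier this works without an API key, up to the
    stated 100 requests/day.
    """
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example.
    print(build_url("ml-frameworks", "romanlutz", "ResponsibleAI"))
```

The URL-building step is kept separate from the fetch so it can be reused (for example, with an HTTP client that supports retries or an `Authorization` header once you have a key).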
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
An open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...