romanlutz/ResponsibleAI

A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.

Score: 31/100 (Emerging)

This is a curated collection of real-world case studies, news articles, and papers highlighting instances where artificial intelligence and algorithmic systems have produced problematic or discriminatory outcomes. It serves as a resource for professionals designing or implementing technology solutions: the inputs are documents (articles, books, papers), and the outputs are insights into potential risks and ethical considerations that can inform future work.

No commits in the last 6 months.

Use this if you are a decision-maker, engineer, or data scientist involved in building technology solutions and want to understand common pitfalls and biases to design more responsible systems.

Not ideal if you are looking for technical solutions or code to directly mitigate bias in AI systems.

Topics: Ethical AI · Algorithmic Bias · Fairness in Technology · Responsible Innovation · Risk Management
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0/25
Adoption: 9/25
Maturity: 8/25
Community: 14/25


Stars: 70
Forks: 10
Language:
License:
Last pushed: Feb 08, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/romanlutz/ResponsibleAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
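The same endpoint can be called programmatically. A minimal sketch using only the Python standard library; the JSON response shape and the category path segment (`ml-frameworks`) are taken from the curl example above, but the fields returned are not documented here, so inspect the response before relying on any of them:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL for a given category and repository.
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is limited to 100 requests/day; a free key raises
    # that to 1,000/day. How the key is passed is not specified here,
    # so this sketch sends the request unauthenticated.
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ml-frameworks", "romanlutz", "ResponsibleAI"))
```

Swap in any other repository's owner and name to retrieve its scores the same way.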