ResponsiblyAI/responsibly
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
This toolkit helps data scientists, machine learning practitioners, and researchers evaluate and address unfairness in their AI systems. It takes your trained models and data as input, surfaces potential biases, and provides mitigation algorithms to make the models fairer. It is especially useful for those building classification models or working with natural language processing.
100 stars. No commits in the last 6 months.
Use this if you need to systematically check your machine learning models for bias and fairness issues and apply algorithmic solutions to improve them.
Not ideal if you are looking for a no-code solution or tools for bias detection outside of binary classification and word embeddings.
Stars
100
Forks
22
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Nov 17, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ResponsiblyAI/responsibly"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
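The same endpoint can also be queried from Python using only the standard library. A minimal sketch, assuming the URL pattern from the curl example above; the response schema is not documented on this page, so the helper simply returns the parsed JSON:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def api_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data (unauthenticated: 100 requests/day)."""
    with urllib.request.urlopen(api_url(owner, repo)) as resp:
        return json.load(resp)


print(api_url("ResponsiblyAI", "responsibly"))
# fetch_quality("ResponsiblyAI", "responsibly")  # performs a live request
```

For authenticated access, the docs above mention a free key; how it is passed (header vs. query parameter) is not specified here, so check the API documentation before adding it to the request.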
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
datamllab/awesome-fairness-in-ai
A curated list of awesome Fairness in AI resources