DetoxAI/detoxai
Python toolkit for debiasing neural networks in image classification tasks
When developing or deploying AI models that work with images, you may find that your model makes unfair decisions based on sensitive attributes such as gender or race, even if you never intended it to. DetoxAI helps you fix these biased image classification models: you provide your existing model and data, and it returns a corrected version that behaves more fairly. It is aimed at AI practitioners, researchers, and developers who work with computer vision models and want to ensure their models are equitable.
No commits in the last 6 months.
Use this if you have a neural network that classifies images, and you're concerned it might be making biased decisions based on protected attributes present in your image data.
Not ideal if your problem isn't related to image classification, or if you're not working with neural networks.
Stars: 81
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Oct 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DetoxAI/detoxai"
Open to everyone: 100 requests per day with no key required. A free key raises the limit to 1,000 per day.
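The endpoint path follows a predictable owner/repo pattern, so URLs for other repositories can be built programmatically. A minimal sketch, assuming the base URL and the `ml-frameworks` category segment shown in the curl example above generalize to other projects:

```python
# Build a pt-edge quality-API URL for a given repository.
# Base URL is taken from the curl example; the category segment
# ("ml-frameworks" here) is assumed to vary per project.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("ml-frameworks", "DetoxAI", "detoxai"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DetoxAI/detoxai
```

The same URL can then be fetched with curl as above, or with any HTTP client.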
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...