debiai/DebiAI
Bias detection and contextual evaluation tool for your AI projects
This web application helps data scientists and machine learning engineers develop better AI models by identifying hidden biases and errors in their project data and model results. It takes in raw input data, contextual information, ground truth labels, and model predictions, then visualizes these to help you spot issues. The output is a clearer understanding of data quality and model performance, enabling more reliable AI systems.
Use this if you need to thoroughly analyze your machine learning data and model performance, especially to detect biases or errors that could impact your AI's fairness and accuracy.
Not ideal if you need a simple model deployment tool, or if your project doesn't involve complex data analysis or bias detection.
Stars: 30
Forks: 5
Language: Vue
License: Apache-2.0
Category:
Last pushed: Oct 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/debiai/DebiAI"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
An open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...