holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
This tool helps data scientists and AI ethics professionals evaluate and improve the reliability of their AI models. You input your model's predictions and performance data, and it outputs detailed reports and visualizations on aspects like bias, explainability, and robustness. It's designed for anyone building or deploying AI systems who needs to ensure they are fair, transparent, and secure.
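To make the kind of bias check concrete, here is an illustrative, standard-library-only sketch of the statistical parity difference (the gap in positive-prediction rates between two groups), one of the classic fairness metrics such toolkits report. This is hand-rolled for illustration, not holisticai's own API:

```python
def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0.

    y_pred: iterable of 0/1 model predictions
    group:  iterable of 0/1 group-membership flags (same length)
    A value near 0 suggests parity; a large absolute value suggests bias.
    """
    pos = {0: 0, 1: 0}   # positive predictions per group
    tot = {0: 0, 1: 0}   # members per group
    for p, g in zip(y_pred, group):
        tot[g] += 1
        pos[g] += p
    rate = {g: pos[g] / tot[g] for g in (0, 1)}
    return rate[1] - rate[0]

# Group 1 receives a positive prediction 75% of the time, group 0 only 25%:
gap = statistical_parity_difference([1, 1, 1, 0, 1, 0, 0, 0],
                                    [1, 1, 1, 1, 0, 0, 0, 0])
print(gap)  # 0.5
```

A toolkit like this one computes many such metrics at once and pairs them with mitigation techniques, but each individual metric is typically this simple at its core.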
104 stars. Available on PyPI.
Use this if you need to comprehensively assess your AI model's trustworthiness, understand its decision-making, and mitigate issues like bias or privacy risks.
Not ideal if you are only concerned with basic model accuracy and do not require in-depth analysis of ethical considerations or internal model workings.
Stars: 104
Forks: 31
Language: Jupyter Notebook
License: Apache-2.0
Category: ml-frameworks
Last pushed: Jan 26, 2026
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/holistic-ai/holisticai"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
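The same request can be made from Python. This is a minimal standard-library sketch; the path layout (category, then owner/repo) is taken from the curl example above, and the network call itself is left commented so the snippet runs offline:

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint shown in the curl example."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "holistic-ai", "holisticai")
print(url)

# To actually fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```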
Related frameworks
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness