DataResponsibly/Virny
An in-depth performance profiling library for machine learning models
This tool helps machine learning engineers and data scientists evaluate model performance in depth, with particular attention to fairness and stability across user subgroups. You supply trained models and datasets; it produces detailed performance metrics, visualizations, and 'nutritional labels' to guide responsible model selection. It is aimed at anyone building and deploying AI who needs to ensure their models are robust and unbiased.
No commits in the last 6 months. Available on PyPI.
Use this if you need to deeply understand your machine learning model's performance, identify biases across various user subgroups, and compare multiple models to select the most responsible one.
Not ideal if you are looking for a tool to automatically fix model biases without needing to analyze performance dimensions yourself.
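To make the kind of analysis described above concrete, here is a minimal, hand-rolled sketch of per-subgroup evaluation. This is not Virny's actual API; the function names and data are illustrative only, showing the accuracy-disparity idea that subgroup profiling tools report.

```python
# Minimal sketch of per-subgroup evaluation -- NOT Virny's API.
# Illustrates the accuracy-disparity idea behind subgroup profiling.

def subgroup_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} with predictions partitioned by group."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / totals[g] for g in totals}

def accuracy_disparity(per_group):
    """Gap between the best- and worst-served subgroup."""
    return max(per_group.values()) - min(per_group.values())

# Toy data: group B is served noticeably worse than group A.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = subgroup_accuracy(y_true, y_pred, groups)
print(per_group)                      # {'A': 0.75, 'B': 0.5}
print(accuracy_disparity(per_group))  # 0.25
```

A real profiling library computes many such metrics (error rates, stability, uncertainty) per subgroup; the disparity gap above is the simplest instance of what it surfaces.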
Stars
17
Forks
3
Language
Python
License
BSD-3-Clause
Category
ML Frameworks
Last pushed
Apr 07, 2025
Commits (30d)
0
Dependencies
14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DataResponsibly/Virny"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
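The same endpoint can be called from code. A minimal Python sketch, assuming only the URL pattern shown in the curl command above (the `quality_api_url` and `fetch_quality` helper names are hypothetical, and the shape of the JSON payload is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (pattern from the docs)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_api_url("ml-frameworks", "DataResponsibly", "Virny")

def fetch_quality(url: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# data = fetch_quality(url)  # uncomment to hit the live endpoint
```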
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...