DataResponsibly/Virny

An in-depth performance profiling library for machine learning models

Score: 44 / 100 (Emerging)

This tool helps machine learning engineers and data scientists thoroughly evaluate how their AI models perform, especially concerning fairness and stability across different user groups. You provide your trained models and datasets, and it outputs detailed performance metrics, visualizations, and 'nutritional labels' to guide responsible model selection. This is for anyone building and deploying AI who needs to ensure their models are robust and unbiased.

No commits in the last 6 months. Available on PyPI.

Use this if you need to deeply understand your machine learning model's performance, identify biases across various user subgroups, and compare multiple models to select the most responsible one.

Not ideal if you are looking for a tool to automatically fix model biases without needing to analyze performance dimensions yourself.

Tags: AI ethics, model evaluation, responsible AI, machine learning fairness, data science workflow
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 13 / 25


Stars: 17
Forks: 3
Language: Python
License: BSD-3-Clause
Last pushed: Apr 07, 2025
Commits (30d): 0
Dependencies: 14

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DataResponsibly/Virny"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
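The same request can be made from Python. This is a minimal sketch using only the standard library; the URL pattern and the assumption that the endpoint returns JSON are taken from the curl example above, and `quality_url` / `fetch_quality` are hypothetical helper names, not part of any published client.

```python
# Hypothetical helpers for the quality API shown above.
# Assumes the endpoint pattern /api/v1/quality/<category>/<owner>/<repo>
# and a JSON response body; neither is documented here beyond the curl example.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example:
print(quality_url("ml-frameworks", "DataResponsibly", "Virny"))
```

Without an API key this counts against the shared 100-requests/day limit, so cache responses rather than polling.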