cylynx/verifyml
Open-source toolkit to help companies implement responsible AI workflows.
This toolkit helps companies ensure their AI models are fair and transparent. It ingests your model development data and results, then automatically generates detailed 'model cards' and runs fairness tests. It is aimed at data scientists, compliance officers, and product managers who need to document, validate, and communicate the ethical considerations and performance of their machine learning models.
No commits in the last 6 months. Available on PyPI.
Use this if you need to systematically document the development and performance of your AI models, especially regarding fairness and explainability, and generate easy-to-understand reports for diverse stakeholders.
Not ideal if you are looking for a general-purpose machine learning library for model building or deployment, as its primary focus is on responsible AI documentation and testing.
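To make the "fairness tests" concrete, here is a minimal sketch of a demographic-parity check, the kind of group-fairness test toolkits in this space run. This is plain Python for illustration only, not verifyml's actual API; all function names are hypothetical.

```python
# Illustrative demographic-parity check (hypothetical helper names,
# not verifyml's API): compare positive-prediction rates across groups.

def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups.
    0.0 means all groups are selected at the same rate."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Toy data: the model approves 3/4 of group "a" but only 1/4 of group "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A report generated from tests like this is what a model card summarizes for non-technical stakeholders.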
Stars: 23
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 07, 2022
Commits (30d): 0
Dependencies: 13
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cylynx/verifyml"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
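The same endpoint can be called from Python's standard library. A minimal sketch, assuming only the URL pattern shown in the curl command above; the JSON field names of the response are not documented here, so the fetch helper simply returns the parsed payload as a dict.

```python
# Sketch of fetching a catalog entry via the documented endpoint.
# Only the URL pattern is taken from the listing; response fields
# are undocumented here, so we return the parsed JSON as-is.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository endpoint URL shown in the listing."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """GET the entry; anonymous access is limited to 100 requests/day."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "cylynx", "verifyml"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cylynx/verifyml
```

With a free API key the limit rises to 1,000 requests/day, as noted above.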
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...