fairlearn and fairmind

Fairlearn is a Python package providing tools to assess and improve the fairness of machine learning models. Fairmind is an open-source ethical AI governance platform that could integrate tools like Fairlearn for its bias detection and fairness testing functionality. The two are complements: Fairmind could leverage Fairlearn's lower-level capabilities.

fairlearn — score 78 (Verified)
Maintenance 13/25 · Adoption 15/25 · Maturity 25/25 · Community 25/25
Stars: 2,213 · Forks: 484 · Downloads: — · Commits (30d): 2
Language: Python · License: MIT
Risk flags: none

fairmind — score 38 (Emerging)
Maintenance 10/25 · Adoption 4/25 · Maturity 7/25 · Community 17/25
Stars: 7 · Forks: 8 · Downloads: — · Commits (30d): 0
Language: Python · License: —
Risk flags: No License, No Package, No Dependents

About fairlearn

fairlearn/fairlearn

A Python package to assess and improve fairness of machine learning models.

This tool helps AI system developers and data scientists evaluate and improve the fairness of their machine learning models. You provide an existing AI model and information about the groups you want to assess for fairness, and it outputs metrics quantifying potential biases and offers algorithms to mitigate unfairness. It's designed for anyone building AI systems for sensitive applications like hiring or lending.

AI-ethics responsible-AI bias-detection machine-learning-fairness data-science

About fairmind

adhit-r/fairmind

Ethical AI Governance Platform | Bias Detection | Compliance | Fairness Testing for ML, LLM & Multimodal AI | Open Source

This platform helps organizations ensure their AI systems are fair and compliant with regulations. It takes your machine learning models (classic ML, large language models, or multimodal AI) and automatically detects biases, generates code to fix them, and produces reports for regulations like the EU AI Act or GDPR. AI governance specialists, risk managers, and compliance officers would use this to manage the ethical deployment of AI.

AI-governance ethical-AI regulatory-compliance bias-detection model-risk-management

Scores updated daily from GitHub, PyPI, and npm data. How scores work