olaflaitinen/medical_imaging_fairness

Ethnic bias analysis in medical imaging AI: Demonstrating that explainable-by-design models achieve 80% bias reduction across 5 ethnic groups (50k images)

Score: 23 / 100 (Experimental)

This project helps medical professionals and AI developers evaluate how fair and understandable their medical imaging AI models are. It takes chest X-ray images and clinical diagnostic models as input, and provides detailed reports on how the models perform across different ethnic groups and why they make certain predictions. This is for medical researchers, AI ethicists, and regulatory bodies focused on ensuring equitable and transparent AI in healthcare.

Use this if you need to rigorously assess and improve the fairness and explainability of AI models used for diagnosing conditions from medical images, especially concerning ethnic bias.

Not ideal if you are looking for a pre-built diagnostic AI model for immediate clinical use, as this project focuses on evaluating model fairness and explainability rather than providing a ready-to-deploy tool.

medical-imaging AI-ethics diagnostic-fairness healthcare-AI algorithmic-bias
No package published · No dependents
Maintenance 6 / 25
Adoption 4 / 25
Maturity 13 / 25
Community 0 / 25


Stars: 7
Forks:
Language: Python
License: MIT
Last pushed: Nov 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/olaflaitinen/medical_imaging_fairness"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.