biaslyze-dev/biaslyze
The NLP Bias Identification Toolkit
This toolkit helps you analyze and identify subtle biases in your Natural Language Processing (NLP) models. You provide a text classification model and the text data it processes, and the toolkit produces a report and an interactive dashboard highlighting how specific words and concepts may be unfairly influencing the model's predictions. It is aimed at AI ethics researchers, machine learning engineers, and data scientists building or deploying NLP systems who need to ensure fairness.
No commits in the last 6 months. Available on PyPI.
Use this if you are developing or managing NLP models and need a straightforward way to detect unintended biases related to protected attributes in your text classification systems.
Not ideal if you need automatic bias mitigation, or if your model is not a text classifier with probability outputs.
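To illustrate the "probability outputs" requirement: bias-detection toolkits like this one typically consume a callable that maps raw texts to per-class probabilities. A minimal sketch of that interface, using a toy keyword model in place of a real classifier (the function name and scoring logic are illustrative assumptions, not biaslyze's actual API):

```python
def predict_proba(texts):
    """Toy text classifier: returns [p_negative, p_positive] for each text.

    A real setup would wrap an sklearn pipeline or a transformer model;
    the keyword score below just stands in for actual model output.
    """
    results = []
    for text in texts:
        # Naive keyword heuristic in place of a trained model.
        score = 0.9 if "great" in text.lower() else 0.1
        results.append([1.0 - score, score])
    return results

probs = predict_proba(["This product is great", "Terrible service"])
```

Any model exposing this shape of output (probabilities summing to 1 per text) fits the stated requirement; models that only emit hard labels do not.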
Stars: 39
Forks: 2
Language: Jupyter Notebook
License: BSD-3-Clause
Category:
Last pushed: Sep 08, 2023
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/biaslyze-dev/biaslyze"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
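The same request can be issued from Python. A small sketch that builds the endpoint URL shown in the curl example above (the `quality_url` helper is illustrative; only the base path and repo slug come from the example, and the response's JSON field names are not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repo slug like 'owner/name'."""
    return f"{BASE}/{category}/{repo}"

url = quality_url("ml-frameworks", "biaslyze-dev/biaslyze")
print(url)

# To actually fetch the data (requires network access; no key needed
# up to 100 requests/day per the note above):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```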
Higher-rated alternatives
- fairlearn/fairlearn: A Python package to assess and improve fairness of machine learning models.
- Trusted-AI/AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
- microsoft/responsible-ai-toolbox: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
- holistic-ai/holisticai: An open-source tool to assess and improve the trustworthiness of AI systems.
- EFS-OpenSource/Thetis: Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...