biaslyze-dev/biaslyze

The NLP Bias Identification Toolkit

38 / 100 (Emerging)

This toolkit helps you analyze and identify subtle biases within your Natural Language Processing (NLP) models. You provide your text classification model and the text data it processes, and the toolkit outputs a report and interactive dashboard highlighting how specific words and concepts might be unfairly influencing your model's predictions. This is for AI ethics researchers, machine learning engineers, and data scientists building or deploying NLP solutions who need to ensure fairness.
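
As a rough illustration of the kind of counterfactual probe described above, the sketch below swaps a single attribute term in a sentence and measures how much a classifier's predicted probability moves. It is not biaslyze's own API; the classifier, the toy training data, and the term pair are hypothetical stand-ins for your own model and text.

# Illustrative sketch only, not biaslyze's API. Assumes a scikit-learn style
# text classifier that exposes predict_proba; the toy training data is made up.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["he is a great engineer", "she is a great engineer",
         "he was late again", "she was late again"]
labels = [1, 1, 0, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

def counterfactual_shift(model, text, term_a, term_b):
    """Swap one attribute term for another and return the change in the
    positive-class probability; a large shift suggests the term drives the prediction."""
    swapped = re.sub(rf"\b{re.escape(term_a)}\b", term_b, text)
    p_orig = model.predict_proba([text])[0, 1]
    p_swap = model.predict_proba([swapped])[0, 1]
    return p_swap - p_orig

for sentence in ["he is a great engineer", "he was late again"]:
    shift = counterfactual_shift(clf, sentence, "he", "she")
    print(f"{sentence!r}: shift when swapping 'he' -> 'she': {shift:+.3f}")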

No commits in the last 6 months. Available on PyPI.

Use this if you are developing or managing NLP models and need a straightforward way to detect unintended biases related to protected attributes in your text classification systems.

Not ideal if you need automatic bias mitigation, or if your models are not text classifiers that expose probability outputs.

AI ethics · NLP model testing · fairness in AI · algorithmic bias detection · machine learning auditing
Stale (6 months)
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 6 / 25


Stars: 39
Forks: 2
Language: Jupyter Notebook
License: BSD-3-Clause
Last pushed: Sep 08, 2023
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/biaslyze-dev/biaslyze"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
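
The same endpoint can also be queried from a script. A minimal Python sketch is shown below; it assumes the endpoint returns a JSON payload, which the listing does not spell out.

# Minimal fetch of the quality data; the JSON response shape is an assumption.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/biaslyze-dev/biaslyze"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())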