cvs-health/langfair

LangFair is a Python library for conducting use-case level LLM bias and fairness assessments

Score: 60 / 100 (Established)

When building applications with large language models, it is crucial to ensure they are fair and unbiased for your specific use case. This project helps you assess potential biases in LLM outputs, such as toxicity or stereotypes, by letting you feed in your own real-world prompts and then computing fairness metrics over the model's actual responses. This tool is for AI product managers, responsible-AI teams, and data scientists developing and deploying LLM-powered applications.

255 stars. Available on PyPI.

Use this if you need to evaluate the bias and fairness of an LLM's responses for your specific application, especially for text generation and summarization tasks.

Not ideal if you are looking for a general-purpose LLM benchmark tool that doesn't focus on use-case specific prompts or output-based fairness metrics.

responsible-AI LLM-evaluation AI-governance NLP-applications fairness-assessment
Maintenance 6 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 19 / 25
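The four category subscores above appear to sum to the 60/100 headline score. A minimal sketch of that arithmetic (equal weighting is an assumption based on the numbers shown, not a documented formula):

```python
# Hypothetical reading of the score card: each category is scored out of 25,
# and the headline score is their sum out of 100.
subscores = {
    "Maintenance": 6,
    "Adoption": 10,
    "Maturity": 25,
    "Community": 19,
}

total = sum(subscores.values())
print(total)  # matches the headline "60 / 100"
```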


Stars: 255
Forks: 41
Language: Python
License: (none listed)
Last pushed: Jan 09, 2026
Commits (30d): 0
Dependencies: 17

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cvs-health/langfair"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
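The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the URL pattern `/{category}/{owner}/{repo}` generalizes from the curl example above (the response schema is not documented here, so the fetch itself is left commented out):

```python
# Sketch of a client for the quality-score API shown above.
# Only the one URL from the curl example is known; treating its path
# segments as category/owner/repo parameters is an assumption.
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a given tool category and repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


url = quality_url("llm-tools", "cvs-health", "langfair")

# Uncomment to actually fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     body = resp.read().decode("utf-8")
#     print(body)
```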