cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
When building applications with large language models, it's important to verify that they behave fairly for your specific use case. LangFair assesses potential biases in LLM outputs, such as toxicity or stereotypes, by letting you feed in your own real-world prompts, and it computes metrics that quantify how fairly the model responds to them. The tool is aimed at AI product managers, responsible AI teams, and data scientists who develop and deploy LLM-powered applications.
255 stars. Available on PyPI.
Use this if you need to evaluate the bias and fairness of an LLM's responses for your specific application, especially for text generation and summarization tasks.
Not ideal if you want a general-purpose LLM benchmark; LangFair deliberately focuses on use-case-specific prompts and output-based fairness metrics.
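To see the workflow end to end, here is a minimal sketch of a toxicity assessment in Python. It follows the usage pattern shown in the LangFair README, but the prompt and response lists are placeholders, and class or argument names may differ between library versions:

# pip install langfair
from langfair.metrics.toxicity import ToxicityMetrics

# Placeholder data: in practice, supply real prompts from your use case
# and responses generated by the LLM under evaluation.
prompts = ["Summarize the customer's complaint.", "Summarize the customer's complaint."]
responses = ["The customer reports a billing error.", "The account was charged twice."]

# Score each response with a toxicity classifier and aggregate the scores
# into use-case-level metrics (e.g., toxic fraction, expected maximum toxicity).
tm = ToxicityMetrics()
results = tm.evaluate(prompts=prompts, responses=responses)
print(results["metrics"])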
Stars: 255
Forks: 41
Language: Python
License: —
Category: —
Last pushed: Jan 09, 2026
Commits (30d): 0
Dependencies: 17
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cvs-health/langfair"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
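The same data can be fetched from Python; a hypothetical equivalent of the curl call above, assuming only that the endpoint returns JSON:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cvs-health/langfair"
resp = requests.get(url)  # no API key needed at the 100 requests/day tier
resp.raise_for_status()
print(resp.json())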
Related tools
BetterForAll/HonestyMeter
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content,...
bws82/biasclear
Structural bias detection and correction engine built on Persistent Influence Theory (PIT)
KID-22/LLM-IR-Bias-Fairness-Survey
This is the repo for the survey of Bias and Fairness in IR with LLMs.
Hanpx20/SafeSwitch
Official code repository for the paper "Internal Activation as the Polar Star for Steering...
faiyazabdullah/TranslationTangles
Uncovering Performance Gaps and Bias Patterns in LLM-Based Translations Across Language Families...