BetterForAll/HonestyMeter
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content, detecting manipulative techniques, and providing actionable feedback.
HonestyMeter helps individuals and organizations analyze media content to identify bias and manipulative techniques. You input text from an article, and it outputs an objectivity score, feedback on specific manipulative techniques, and suggestions for improvement. This tool is designed for anyone concerned about media integrity, such as journalists, researchers, or general consumers of news.
Use this if you need to quickly assess the objectivity and potential bias in written media content.
Not ideal if you need to analyze images, audio, or video, as the current version only supports text analysis.
Stars: 26
Forks: 1
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Feb 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BetterForAll/HonestyMeter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
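If you prefer to consume the endpoint programmatically, a minimal TypeScript sketch is shown below. Note that the response field names (`stars`, `forks`, etc.) are assumptions for illustration; only the URL comes from the curl example above, so check the actual payload before relying on any field.

```typescript
// Assumed response shape -- the real API may name fields differently.
interface ToolQualityRecord {
  stars?: number;
  forks?: number;
  language?: string;
  license?: string;
  lastPushed?: string;
}

// Build the endpoint URL for a given owner/repo pair
// (base URL taken from the curl example above).
function qualityEndpoint(owner: string, repo: string): string {
  return `https://pt-edge.onrender.com/api/v1/quality/llm-tools/${owner}/${repo}`;
}

// Parse a raw JSON payload into the assumed record shape.
function parseRecord(json: string): ToolQualityRecord {
  return JSON.parse(json) as ToolQualityRecord;
}

// Example usage with fetch (Node 18+ or browsers):
// const res = await fetch(qualityEndpoint("BetterForAll", "HonestyMeter"));
// const record = parseRecord(await res.text());
```

The URL builder keeps the owner/repo path segments explicit, so the same helper works for any listing on the site, not just this one.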
Higher-rated alternatives
cvs-health/langfair
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
bws82/biasclear
Structural bias detection and correction engine built on Persistent Influence Theory (PIT)
KID-22/LLM-IR-Bias-Fairness-Survey
This is the repo for the survey of Bias and Fairness in IR with LLMs.
Hanpx20/SafeSwitch
Official code repository for the paper "Internal Activation as the Polar Star for Steering...
faiyazabdullah/TranslationTangles
Uncovering Performance Gaps and Bias Patterns in LLM-Based Translations Across Language Families...