HowieHwong/TrustLLM

[ICML 2024] TrustLLM: Trustworthiness in Large Language Models

Quality score: 55 / 100 (Established)

This toolkit helps you understand how trustworthy a Large Language Model (LLM) is across key areas like truthfulness, safety, and fairness. You input the LLM you want to evaluate and a dataset of prompts, and the toolkit outputs scores and analysis detailing its performance in these trustworthiness dimensions. It's ideal for AI researchers, practitioners, or developers who need to rigorously assess LLMs before deployment or integration.

619 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are developing or using Large Language Models and need a standardized, comprehensive way to measure their trustworthiness across multiple ethical and performance dimensions.

Not ideal if you want a single, simple "trust score" without digging into the underlying evaluations, or if you are not working directly on LLM development or assessment.

Tags: AI ethics, LLM evaluation, model safety, AI fairness, truthfulness assessment

Status: Stale (no commits in 6 months)
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 18 / 25


Stars: 619
Forks: 66
Language: Python
License: MIT
Last pushed: Jun 24, 2025
Commits (30d): 0
Dependencies: 20

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HowieHwong/TrustLLM"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
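If you want to call the same endpoint from Python, a small helper can build the URL and fetch the JSON. This is a minimal sketch: the URL pattern (owner and repo as the final path segments) is inferred from the single curl example above, the `transformers` path segment is copied verbatim and may be registry-specific, and the response schema is not documented here.

```python
import json
import urllib.request

# Assumption: base path copied from the curl example above; the
# "transformers" segment may differ for other package registries.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON response (schema undocumented here)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `quality_url("HowieHwong", "TrustLLM")` reproduces the URL used in the curl command above.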