HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
This toolkit helps you understand how trustworthy a Large Language Model (LLM) is across key areas like truthfulness, safety, and fairness. You input the LLM you want to evaluate and a dataset of prompts, and the toolkit outputs scores and analysis detailing its performance in these trustworthiness dimensions. It's ideal for AI researchers, practitioners, or developers who need to rigorously assess LLMs before deployment or integration.
619 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or using Large Language Models and need a standardized, comprehensive way to measure their trustworthiness across multiple ethical and performance dimensions.
Not ideal if you are a casual user who wants a simple 'trust score' without engaging with the underlying evaluations, or if you are not working directly with LLM development or assessment.
Stars
619
Forks
66
Language
Python
License
MIT
Category
Last pushed
Jun 24, 2025
Commits (30d)
0
Dependencies
20
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HowieHwong/TrustLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
Intelligent-CAT-Lab/PLTranslationEmpirical
Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large...
rishub-tamirisa/tamper-resistance
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
tsinghua-fib-lab/ANeurIPS2024_SPV-MIA
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via...
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
codessian/epistemic-confidence-layer
Model-agnostic trust protocol for calibrated, auditable AI