TrustGen/TrustEval-toolkit
[ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative foundation models.
This toolkit helps AI researchers and developers evaluate the reliability and safety of large generative AI models (like ChatGPT or Stable Diffusion). You provide a model (or its API key) and evaluation criteria, and the system generates custom test datasets, runs the model against them, and produces detailed reports on its trustworthiness across dimensions like fairness, robustness, and privacy. It's designed for professionals building or deploying generative AI.
128 stars. No commits in the last 6 months.
Use this if you need to thoroughly assess how trustworthy a generative AI model is before integrating it into a product or research project.
Not ideal if you are an end-user simply looking to interact with or fine-tune an existing generative AI model without needing a deep technical evaluation.
Stars: 128
Forks: 10
Language: Python
License: —
Category:
Last pushed: Aug 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TrustGen/TrustEval-toolkit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
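For scripted access, here is a minimal Python sketch (standard library only) that calls the same endpoint as the curl command above and pretty-prints whatever JSON it returns; the response schema is not documented here, so the sketch deliberately does not assume any particular field names.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TrustGen/TrustEval-toolkit"

def fetch_quality(url=URL):
    # Call the public endpoint (no key needed on the free tier)
    # and parse the JSON response body.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    # The schema isn't documented above, so just pretty-print
    # whatever fields the API returns.
    print(json.dumps(data, indent=2))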
Higher-rated alternatives
PacificAI/langtest
Deliver safe & effective language models
microsoft/OpenRCA
[ICLR'25] OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures?
Babelscape/ALERT
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language...
ChenWu98/agent-attack
[ICLR 2025] Dissecting adversarial robustness of multimodal language model agents
Trust4AI/ASTRAL
Automated Safety Testing of Large Language Models