yyy01/LLMRiskEval_RCC
An LLM evaluation tool for robustness, consistency, and credibility
This tool helps AI practitioners and researchers systematically evaluate the reliability of Large Language Models (LLMs) like ChatGPT. It takes a list of questions, feeds them to an LLM, and then analyzes the model's responses to provide scores on its robustness, consistency, and the credibility of its training data. This helps identify potential risks and limitations before deploying LLMs in real-world applications.
No commits in the last 6 months.
Use this if you need to objectively measure how well an LLM handles variations in input, produces similar answers for similar questions, and relies on trustworthy training data.
Not ideal if you are looking for a tool to fine-tune or train LLMs, or to evaluate their general performance on tasks like translation or summarization.
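To illustrate the kind of check this tool performs, here is a minimal, hypothetical sketch (not the repo's actual code or API) of a consistency measurement: send paraphrases of the same question to a model and score how similar the answers are. The `consistency_score` helper and the fixed example answers are assumptions for illustration only; in practice each response would come from the LLM under test.

```python
# Hypothetical sketch: score answer consistency by comparing an LLM's
# responses to paraphrases of the same question. Not the repo's actual API.
from difflib import SequenceMatcher
from itertools import combinations
from statistics import mean

def consistency_score(responses):
    """Mean pairwise string similarity across responses (1.0 = identical)."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

# Fixed strings stand in for model outputs here.
answers = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital city is Paris.",
]
print(round(consistency_score(answers), 2))
```

A real evaluation would use a semantic similarity measure (e.g. embedding cosine similarity) rather than raw string matching, but the structure of the check is the same.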
Stars: 9
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Aug 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yyy01/LLMRiskEval_RCC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
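The same request can be made from Python with the standard library. This is a sketch assuming only the public endpoint shown in the curl command above; the response schema is not documented here, so the code just parses whatever JSON comes back.

```python
# Fetch this repo's quality data from the public API endpoint shown above.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"
repo = "yyy01/LLMRiskEval_RCC"
url = f"{API_BASE}/{repo}"

def fetch_quality(url):
    """Perform a GET request and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# data = fetch_quality(url)  # uncomment to perform the live request
```

The live call is left commented out so the snippet runs without network access; with an API key, higher rate limits apply as noted above.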
Higher-rated alternatives
HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Intelligent-CAT-Lab/PLTranslationEmpirical
Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large...
rishub-tamirisa/tamper-resistance
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
tsinghua-fib-lab/ANeurIPS2024_SPV-MIA
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via...
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)