JosephTLucas/llm_test
A suite of tests to verify bias, safety, trust, and security concerns for LLMs.
No commits in the last 6 months.
Stars: 7
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Nov 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JosephTLucas/llm_test"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
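The same endpoint can be called from Python. A minimal sketch using only the standard library; the response schema isn't documented here, so the payload is returned as an unparsed dict rather than mapped to specific fields:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record.

    Anonymous access is rate-limited to 100 requests/day; with a free
    key the limit rises to 1,000/day (how the key is passed is not
    documented here, so this sketch uses anonymous access only).
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("JosephTLucas", "llm_test"))
```

`fetch_quality` performs a live HTTP request, so it is kept separate from the pure URL builder, which can be exercised offline.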
Higher-rated alternatives
UBC-MDS/fixml
LLM Tool for effective test evaluation of ML projects with curated Checklists and LLM prompts
AstraBert/DebateLLM-Championship
5 LLMs, 1vs1 matches to produce the most convincing argumentation in favor or against a random...
brains-on-code/IterativeRefactoringLLM
Replication package, supplementary materials, and analysis pipeline for our paper on iterative...
ash-jyc/db84llm
College policy debate as a verbal reasoning benchmark for LLMs
RodillasJavier/debate-fallacy-detector
Logical Fallacy Detection in Presidential Debates using a Random Forest classifier and LLM-based...