fiddler-labs/fiddler-auditor
Fiddler Auditor is a tool to evaluate language models.
This tool helps machine learning and software application teams thoroughly test language models before they reach production. You provide a language model and a set of test prompts, and the Auditor generates reports that highlight potential weaknesses such as hallucinations or privacy risks, so ML engineers and application developers can identify and fix issues before shipping.
189 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or deploying AI applications that rely on large language models and need to rigorously evaluate their safety and performance.
Not ideal if you are an end-user simply consuming an AI application and not involved in its development or deployment.
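The evaluation loop described above (model in, test prompts in, weakness report out) can be sketched in a few lines. This is a generic illustration of the idea, not Fiddler Auditor's actual API: the `toy_model` callable, the paraphrase list, and the 0.7 similarity threshold are all assumptions for the sake of the example.

```python
from difflib import SequenceMatcher

def evaluate_robustness(model, prompt, paraphrases, threshold=0.7):
    """Check whether a model answers consistently across prompt paraphrases.

    `model` is any callable mapping a prompt string to a response string.
    Returns a report listing paraphrases whose responses drift too far
    from the response to the original prompt.
    """
    baseline = model(prompt)
    report = {"prompt": prompt, "baseline": baseline, "failures": []}
    for alt in paraphrases:
        response = model(alt)
        similarity = SequenceMatcher(None, baseline, response).ratio()
        if similarity < threshold:
            report["failures"].append(
                {"paraphrase": alt, "response": response, "similarity": similarity}
            )
    return report

# Toy model: answers correctly only when the word "capital" appears.
def toy_model(prompt):
    if "capital" in prompt.lower():
        return "The capital of France is Paris."
    return "I am not sure."

report = evaluate_robustness(
    toy_model,
    "What is the capital of France?",
    ["Name the capital city of France.", "France's chief city is?"],
)
print(len(report["failures"]))  # the second paraphrase drifts, so 1 failure
```

A real evaluator would use semantic similarity (embeddings) rather than character-level matching, but the shape of the report is the same: a baseline answer plus the perturbations that broke it.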
Stars: 189
Forks: 23
Language: Python
License: —
Category: —
Last pushed: Mar 11, 2024
Commits (30d): 0
Dependencies: 10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/fiddler-labs/fiddler-auditor"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
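The same endpoint can be called from Python instead of curl. A minimal sketch using the standard library; the `X-API-Key` header name for keyed access is an assumption, so check the API docs before relying on it.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_request(owner, repo, api_key=None):
    """Construct the request for one repository's quality data.

    Anonymous requests (up to the daily limit) need no headers; the
    `X-API-Key` header name for keyed access is an assumption.
    """
    url = f"{API_BASE}/{owner}/{repo}"
    headers = {"X-API-Key": api_key} if api_key else {}
    return urllib.request.Request(url, headers=headers)

def fetch_tool_data(owner, repo, api_key=None):
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(build_request(owner, repo, api_key)) as resp:
        return json.load(resp)

# Example (performs a live network request):
# data = fetch_tool_data("fiddler-labs", "fiddler-auditor")
```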
Related tools
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit of large multi-modality models (LMMs), support 220+ LMMs, 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents