uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, and embedding use cases), perform root cause analysis on failure cases, and give insights on how to resolve them.
This platform helps AI product managers, data scientists, and machine learning engineers evaluate and refine their Generative AI applications. It takes your AI model's outputs and a set of evaluation criteria, then provides grades and identifies common failure patterns. The result is actionable insights to improve your AI's performance before it reaches end-users.
2,339 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or managing Large Language Model (LLM) or Generative AI applications and need a systematic way to check their quality and identify areas for improvement.
Not ideal if you are looking for a general-purpose machine learning model evaluation tool beyond Generative AI, or if your primary need is basic model monitoring without deep root cause analysis.
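To make the "systematic quality check" concrete, here is a minimal sketch of the kind of input records UpTrain-style evaluations consume. The question/context/response field names follow UpTrain's documented evaluation schema; the actual evaluate call is shown only as a comment because it requires `pip install uptrain` and an LLM API key, and the exact call signature should be verified against the project's docs.

```python
# Sketch of input records for an UpTrain-style evaluation run.
# Each record pairs a user question with the retrieved context and the
# model's response, so preconfigured checks can grade the response.
data = [
    {
        "question": "What is UpTrain?",
        "context": "UpTrain is an open-source platform to evaluate LLM applications.",
        "response": "UpTrain is an open-source tool for evaluating Generative AI apps.",
    },
]

# Hypothetical usage (not run here -- needs the uptrain package and an API key):
# from uptrain import EvalLLM, Evals
# results = EvalLLM(openai_api_key="sk-...").evaluate(
#     data=data, checks=[Evals.CONTEXT_RELEVANCE])

print(len(data))
```

The output of a run like this is one grade per check per record, which is what the platform aggregates into failure patterns.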
Stars
2,339
Forks
202
Language
Python
License
Apache-2.0
Category
Last pushed
Aug 18, 2024
Commits (30d)
0
Dependencies
19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/uptrain-ai/uptrain"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
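For programmatic access, the curl call above can be wrapped in a small helper that builds the endpoint URL for any owner/repo pair. The base URL is taken from the snippet above; the sample response fields below (`stars`, `forks`, `license`) are hypothetical placeholders matching the stats on this page, since the API's actual payload schema is not documented here.

```python
import json
from urllib.parse import quote

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Return the API endpoint URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Hypothetical JSON payload -- real field names may differ.
sample = json.loads('{"stars": 2339, "forks": 202, "license": "Apache-2.0"}')

print(quality_url("uptrain-ai", "uptrain"))
print(sample["stars"])
```

Fetching the URL with any HTTP client (e.g. `urllib.request.urlopen`) returns the JSON; remember the 100 requests/day limit without a key.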
Related tools
microsoft/promptbench
A unified evaluation framework for large language models
levitation-opensource/Manipulative-Expression-Recognition
MER is software that identifies and highlights manipulative communication in text from human...
microsoftarchive/promptbench
A unified evaluation framework for large language models
gabe-mousa/Apolien
AI Safety Evaluation Library
GSA/FedRAMP-OllaLab-Lean
The OllaLab-Lean project is designed to help both novice and experienced developers rapidly set...