wgryc/phasellm
Large language model evaluation and workflow framework from Phase AI.
This framework helps product and brand managers evaluate and compare how different large language models (LLMs) and prompts perform for specific user needs. You supply prompts and LLMs from providers such as OpenAI, Anthropic, and Cohere, and it reports which model-and-prompt combination delivers the best user experience against predefined objectives. It's designed for teams building products, content, or experiences powered by LLMs.
460 stars. No commits in the last 6 months.
Use this if you are developing products or experiences using multiple large language models and need a structured way to test and compare their performance against user needs.
Not ideal if you are looking for a simple API wrapper without the need for extensive comparative evaluation or workflow management.
Stars: 460
Forks: 27
Language: Python
License: MIT
Category: LLM tools
Last pushed: Jan 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wgryc/phasellm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
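The same data can be fetched programmatically. Below is a minimal Python sketch using the requests library against the URL shown above; it assumes the endpoint returns JSON, which is not stated here, so adjust the parsing if the response format differs.

import requests

# Public endpoint from the curl example above; no API key is needed
# for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wgryc/phasellm"

response = requests.get(url, timeout=10)
response.raise_for_status()

# Assumption: the response body is JSON.
data = response.json()
print(data)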
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents