Supahands/llm-comparison
This is an open-source project that lets you compare two LLMs head to head on a given prompt. It supports a wide range of models, from open-source Ollama models to hosted providers like OpenAI and Claude.
This tool helps you evaluate and choose the best large language model (LLM) for your specific tasks. You input a prompt, and it presents responses from two different LLMs in a blind test, allowing you to compare their performance without bias. It's well suited to AI researchers, product managers, content creators, and anyone else who needs to select the most effective LLM for their applications.
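To make the blind-testing workflow concrete, here is a minimal TypeScript sketch of how such a comparison could run. The queryModel helper, model names, and shuffling logic are illustrative assumptions, not the project's actual implementation.

// Minimal sketch of a blind head-to-head comparison between two LLMs.
type ModelResponse = { model: string; text: string };

// Hypothetical helper: replace the body with a real API call
// (Ollama, OpenAI, Anthropic, etc.).
async function queryModel(model: string, prompt: string): Promise<ModelResponse> {
  return { model, text: `response from ${model}` };
}

async function blindCompare(prompt: string, modelA: string, modelB: string) {
  const responses = await Promise.all([
    queryModel(modelA, prompt),
    queryModel(modelB, prompt),
  ]);

  // Shuffle so the rater cannot tell which model produced which answer.
  const shuffled = Math.random() < 0.5 ? responses : [responses[1], responses[0]];
  shuffled.forEach((r, i) => console.log(`Response ${i + 1}:\n${r.text}\n`));

  // Once the rater picks a winner, reveal which model wrote each response.
  return shuffled.map((r, i) => ({ label: `Response ${i + 1}`, model: r.model }));
}

Revealing the label-to-model mapping only after the rater has voted is what keeps the comparison unbiased.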
No commits in the last 6 months.
Use this if you need to rigorously compare the output quality of various LLMs for specific prompts or use cases and want to do so in an unbiased, blind testing environment.
Not ideal if you need a tool to fine-tune LLMs, integrate them into an application, or monitor their performance in production.
Stars: 25
Forks: 2
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Supahands/llm-comparison"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
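If you'd rather call the endpoint from TypeScript than curl, a minimal sketch looks like the following; the response schema is not documented here, so it is treated as opaque JSON.

// Fetch quality data for this repo from the public API (no key needed
// at up to 100 requests/day, per the note above).
async function fetchRepoQuality(): Promise<unknown> {
  const url =
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Supahands/llm-comparison";
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  return res.json();
}

fetchRepoQuality().then((data) => console.log(data));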
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...