Supahands/llm-comparison

This is an open-source project that lets you compare two LLMs head to head on a given prompt. It supports a wide range of models, from open-source Ollama models to the likes of OpenAI and Claude.

Overall score: 30 / 100 (Emerging)

This tool helps you evaluate and choose the best large language model (LLM) for your specific tasks. You input a prompt, and it presents responses from two different LLMs in a blind test so you can compare their performance without bias. It's ideal for AI researchers, product managers, content creators, and anyone else who needs to identify the most effective LLM for their applications.

No commits in the last 6 months.

Use this if you need to rigorously compare the output quality of various LLMs for specific prompts or use cases and want to do so in an unbiased, blind testing environment.

Not ideal if you need a tool to fine-tune LLMs, integrate them into an application, or monitor their performance in production.

Tags: LLM evaluation, AI model selection, prompt engineering, generative AI research, content generation
Status: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 7 / 25

These four pillar scores sum to the overall score: 0 + 7 + 16 + 7 = 30 / 100.


Stars: 25
Forks: 2
Language: TypeScript
License: Apache-2.0
Last pushed: Mar 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Supahands/llm-comparison"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
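If you prefer calling the endpoint from code rather than curl, here is a minimal TypeScript sketch, assuming Node 18+ (which provides a global fetch). The helper name fetchQualityData is hypothetical, and the response's JSON structure is not documented here, so the example prints the raw payload instead of assuming field names.

// Minimal sketch: fetch the same quality data in TypeScript.
// Assumes Node 18+ (global fetch). The JSON structure is printed
// as-is; no specific fields are assumed.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Supahands/llm-comparison";

async function fetchQualityData(): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) {
    // The keyless tier allows 100 requests/day, so a failing
    // status here may simply mean the rate limit was hit.
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchQualityData()
  .then((data) => console.log(JSON.stringify(data, null, 2)))
  .catch((err) => console.error(err));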