ThomasJButler/ModelViz
ModelViz is a tool for comparing AI models side by side. It lets users test models from OpenAI (GPT), Anthropic (Claude), Google (Gemini), and Perplexity (Sonar), with real-time performance metrics and clean visualisations.
This tool helps AI practitioners and product managers evaluate large language models: you enter a prompt or task, and it shows how each model performs, including performance metrics, usage costs, and visual comparisons. It's designed for anyone who needs to pick the best model for a specific application.
Use this if you need to compare multiple leading AI models side-by-side to understand their performance, cost, and suitability for your specific prompts or tasks.
Not ideal if you're looking for a tool to fine-tune models or to compare custom-trained models, as it focuses on evaluating pre-existing commercial APIs.
Stars
8
Forks
—
Language
TypeScript
License
MIT
Category
Last pushed
Dec 15, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ThomasJButler/ModelViz"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
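The same endpoint can be called from code instead of curl. A minimal Python sketch, assuming only the URL shown above; the response schema and any key-passing mechanism are not documented here, so the snippet just returns the parsed JSON as-is:

```python
# Fetch a repository's quality record from the pt-edge API.
# Only the endpoint path is taken from the page above; error handling
# and timeouts are ordinary urllib usage.
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def endpoint(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (raises on HTTP errors)."""
    with urllib.request.urlopen(endpoint(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `endpoint("ThomasJButler", "ModelViz")` produces exactly the URL used in the curl command above. Anonymous usage is limited to 100 requests/day, so cache responses where possible.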
Higher-rated alternatives
little51/llm-dev
Companion resources for 《大模型项目实战:多领域智能应用开发》 (Practical Large Model Projects: Multi-Domain Intelligent Application Development)
Ahmet-Dedeler/ai-llm-comparison
A website where you can compare every AI Model ✨
Michaelgathara/llm-timeline
Visualize LLM progress over time
nicucalcea/sheets-llm
Use Large Language Models (LLM) in Google Sheets
cohere-ai/sandbox-grounded-qa
A sandbox repo for grounded question answering with Cohere and Google Search