ThomasJButler/ModelViz

ModelViz is a project demonstrating AI model comparison capabilities. It lets users test and compare various AI models, including OpenAI GPT, Anthropic Claude, Google Gemini, and Perplexity Sonar models, with real-time performance metrics and beautiful visualisations.

Score: 25 / 100 (Experimental)

This tool helps AI practitioners and product managers evaluate large language models from OpenAI, Anthropic, Google, and Perplexity. You enter a prompt or task, and it shows how each model performs, with real-time performance metrics, usage costs, and visual comparisons. It's designed for anyone who needs to choose the best AI model for a specific application.

Use this if you need to compare multiple leading AI models side-by-side to understand their performance, cost, and suitability for your specific prompts or tasks.

Not ideal if you're looking for a tool to fine-tune models or to compare custom-trained models, as it focuses on evaluating pre-existing commercial APIs.

AI model evaluation, prompt engineering, large language models, AI solution architect, LLM cost analysis
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: TypeScript
License: MIT
Last pushed: Dec 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ThomasJButler/ModelViz"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
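For programmatic access, the same endpoint shown in the curl example can be called from TypeScript. A minimal sketch; the `buildQualityUrl` helper is illustrative, and the JSON response shape is not documented here, so the fetch result is left untyped:

```typescript
// Base URL taken from the curl example above.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the quality-API URL for a given GitHub owner/repo pair.
function buildQualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality data (no API key needed for up to 100 requests/day).
// The response shape is an assumption; inspect it before relying on fields.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(buildQualityUrl(owner, repo));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

console.log(buildQualityUrl("ThomasJButler", "ModelViz"));
// https://pt-edge.onrender.com/api/v1/quality/llm-tools/ThomasJButler/ModelViz
```

The URL is built with `encodeURIComponent` so that unusual owner or repo names cannot break the path.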