pvlbzn/latai
LatAI – A latency benchmarking tool for evaluating multiple generative AI providers and models 🌎.
When you're comparing AI models and providers such as OpenAI, Groq, or AWS Bedrock, LatAI helps you understand their real-world speed. You feed it either the default test prompts or your own, and it measures how quickly each model responds, reporting average, minimum, and maximum latency. The tool is aimed at AI developers, researchers, and product managers who need to choose the fastest model for their applications.
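To make that concrete, here is a minimal Go sketch of the kind of measurement LatAI performs. It is not the tool's actual code: the endpoint URL is a stand-in and the run count is arbitrary, but the timing loop and the average/minimum/maximum summary mirror what the tool reports per model.

// Time several round-trips to an endpoint and summarize the latency.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// timeRequest issues one round-trip and returns how long it took.
func timeRequest(url string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return time.Since(start), nil
}

func main() {
	const runs = 5
	url := "https://example.com" // stand-in for a real model endpoint

	var total, minLat, maxLat time.Duration
	for i := 0; i < runs; i++ {
		d, err := timeRequest(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		total += d
		if i == 0 || d < minLat {
			minLat = d
		}
		if d > maxLat {
			maxLat = d
		}
	}
	fmt.Printf("avg %v  min %v  max %v\n", total/runs, minLat, maxLat)
}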
No commits in the last 6 months.
Use this if you need to objectively compare the response times of various generative AI models and providers to select the most performant option for your application.
Not ideal if you're looking to evaluate the quality of AI responses or the cost efficiency of different models.
Stars: 9
Forks: 1
Language: Go
License: —
Category: —
Last pushed: Feb 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/pvlbzn/latai"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
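If you prefer to call the endpoint from code rather than curl, a minimal Go sketch follows. It prints the raw response body, since the JSON schema isn't documented here, so no decoding struct is assumed.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same endpoint as the curl command above; no key needed within the free tier.
	resp, err := http.Get("https://pt-edge.onrender.com/api/v1/quality/llm-tools/pvlbzn/latai")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Print the raw body rather than decoding into a struct.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(string(body))
}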
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems