nexmoe/lm-speed
Help developers optimize AI application performance through comprehensive speed testing and analysis
This tool helps developers evaluate and optimize the performance of Large Language Model (LLM) APIs. You supply an API base URL, API key, and model ID, and it produces detailed performance metrics, visualizations, and test reports, making it easier to compare LLM models and service providers and pick the most efficient one for an AI application.
No commits in the last 6 months.
Use this if you are developing an AI application and need to objectively compare the speed and reliability of different LLM APIs and models to make data-driven decisions.
Not ideal if you are a non-technical user simply looking to evaluate the output quality or capabilities of different LLMs without needing deep performance metrics.
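A speed test of this kind boils down to timing a streamed completion. Below is a minimal sketch in TypeScript, assuming an OpenAI-compatible streaming endpoint; the endpoint path, payload shape, and metric names are illustrative assumptions, not lm-speed's actual implementation:

```typescript
// Sketch of the kind of measurement such a tool performs: time-to-first-token
// (TTFT) and rough throughput against an OpenAI-compatible streaming endpoint.

interface SpeedResult {
  ttftMs: number;         // time until the first streamed chunk arrived
  totalMs: number;        // total wall-clock time for the response
  tokensPerSecond: number;
}

// Pure helper: throughput from a token count and elapsed milliseconds.
function tokensPerSecond(tokens: number, elapsedMs: number): number {
  return elapsedMs > 0 ? (tokens / elapsedMs) * 1000 : 0;
}

async function measureOnce(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
): Promise<SpeedResult> {
  const start = performance.now();
  let firstChunkAt: number | null = null;
  let chunks = 0;

  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);

  const reader = res.body.getReader();
  for (;;) {
    const { done } = await reader.read();
    if (done) break;
    if (firstChunkAt === null) firstChunkAt = performance.now();
    chunks++;
  }

  const end = performance.now();
  return {
    ttftMs: (firstChunkAt ?? end) - start,
    totalMs: end - start,
    // Chunk count stands in for token count here; a real tester would parse
    // the SSE stream and count tokens from each delta.
    tokensPerSecond: tokensPerSecond(chunks, end - start),
  };
}
```

Running `measureOnce` several times and averaging would give the kind of comparison numbers described above.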
Stars: 79
Forks: 8
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nexmoe/lm-speed"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
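The same data can be fetched from code. A minimal TypeScript sketch, mirroring the curl command above; the response schema is not documented here, so the result is treated as untyped JSON:

```typescript
// Fetch a tool's quality data from the API shown above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Pure helper: build the endpoint URL for an owner/repo slug.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Usage: fetchQuality("nexmoe", "lm-speed").then(console.log);
```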
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...