yinxulai/ait

Batch-tests the performance of AI models that implement the OpenAI or Anthropic API protocols. Measures key metrics such as TTFT (time to first token), TPS (throughput), and network latency, and supports multi-model comparison runs with detailed performance report generation.

Quality score: 34 / 100 (Emerging)

This tool helps AI engineers and MLOps professionals rigorously evaluate the performance of large language models (LLMs) from providers like OpenAI and Anthropic, or even local models. You provide API keys and model names, and it outputs detailed reports on key metrics like Time To First Token (TTFT), Tokens Per Second (TPS), and network latency. It's designed for those who need to benchmark and compare different AI models.

Use this if you need to systematically compare the speed and efficiency of various AI models under different load conditions to make informed deployment decisions.

Not ideal if you are looking for qualitative evaluations of AI model outputs or fine-tuning models rather than quantitative performance metrics.

Tags: AI-model-benchmarking, LLM-performance, MLOps, API-performance-testing, AI-infrastructure
No package published, no dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 15 / 25
Community: 5 / 25


Stars: 50
Forks: 2
Language: Go
License: MIT
Last pushed: Dec 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yinxulai/ait"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
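The same data can be fetched programmatically. Below is a minimal Go sketch that builds the endpoint URL for a repository slug and prints the raw response; the `qualityURL` helper is hypothetical, and the response schema is not documented here, so the body is printed as-is rather than decoded into a struct.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// qualityURL builds the (assumed) pt-edge quality endpoint for a repo slug.
func qualityURL(slug string) string {
	return "https://pt-edge.onrender.com/api/v1/quality/llm-tools/" + slug
}

func main() {
	resp, err := http.Get(qualityURL("yinxulai/ait"))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(string(body)) // raw JSON; schema not documented here
}
```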