awslabs/fmbench-orchestrator
Run FMBench across multiple Amazon EC2 instances in parallel to benchmark a foundation model (FM) on several serving stacks at once
This tool helps machine learning engineers and researchers automatically compare the performance and cost of different large language models (LLMs) across AWS hosting options such as Amazon EC2, Amazon SageMaker, and Amazon Bedrock. You provide details about the models, datasets, and serving stacks you want to test, and it produces detailed reports, graphs, and cost comparisons to help you choose the best setup. It's designed for anyone deploying or optimizing LLMs on AWS.
No commits in the last 6 months.
Use this if you need to systematically evaluate how different LLMs perform and how much they cost when served on various AWS infrastructure, to make informed deployment decisions.
Not ideal if you are looking to benchmark models on local hardware or outside of the AWS ecosystem.
Stars: 15
Forks: 6
Language: Python
License: —
Category:
Last pushed: Apr 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/awslabs/fmbench-orchestrator"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
kserve/kserve
Standardized Distributed Generative and Predictive AI Inference Platform for Scalable,...
omegaml/omegaml
MLOps simplified. One-stop AI delivery platform, all the features you need.
awslabs/aiops-modules
AIOps modules is a collection of reusable Infrastructure as Code (IaC) modules for Machine...
GoogleCloudDataproc/dataproc-ml-python
Library to simplify running distributed ML workloads with Apache Spark
jina-ai/serve
☁️ Build multimodal AI applications with cloud-native stack