awslabs/fmbench-orchestrator

Run FMBench across multiple Amazon EC2 instances in parallel to benchmark a foundation model (FM) on several serving stacks at once

Quality score: 38 / 100 (Emerging)

This tool helps machine learning engineers and researchers automatically compare the performance and cost of different large language models (LLMs) across various AWS hosting options like EC2, SageMaker, or Bedrock. You provide details about the models, datasets, and serving stacks you want to test, and it outputs detailed reports, graphs, and cost comparisons to help you choose the best setup. It's designed for anyone deploying or optimizing LLMs on AWS.

No commits in the last 6 months.

Use this if you need to systematically evaluate how different LLMs perform and how much they cost when served on various AWS infrastructure, to make informed deployment decisions.

Not ideal if you are looking to benchmark models on local hardware or outside of the AWS ecosystem.

Tags: LLM deployment, Model benchmarking, Cloud cost optimization, ML infrastructure evaluation, AWS machine learning
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 15
Forks: 6
Language: Python
License:
Last pushed: Apr 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/awslabs/fmbench-orchestrator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.