KevinLee1110/dynamic-batching

The official repo for the paper "Optimizing LLM Inference Throughput via Memory-aware and SLA-constrained Dynamic Batching"

31 / 100 (Emerging)

This project optimizes the throughput of large language model (LLM) inference systems by dynamically adjusting how many requests are processed simultaneously. Plugged into your existing LLM serving infrastructure, it continuously monitors GPU memory and response-time targets and adjusts the batch size accordingly. The result is a significant boost in requests served per second and in overall capacity, without changes to your current setup. It is aimed at infrastructure engineers and MLOps teams managing LLM deployments.
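
To make the mechanism concrete, here is a minimal Python sketch of one way a controller could pick a batch size from free GPU memory and a latency target. It is an illustration of the general idea under stated assumptions, not the paper's algorithm, and every name in it (choose_batch_size, mem_per_seq, and so on) is hypothetical.

    import torch

    def choose_batch_size(
        pending: int,        # requests currently waiting in the queue
        current_batch: int,  # batch size used on the last decoding step
        max_batch: int,      # hard upper bound on batch size
        mem_per_seq: int,    # estimated GPU bytes per in-flight sequence (assumption)
        sla_step_s: float,   # per-step latency budget implied by the SLA
        last_step_s: float,  # measured latency of the last decoding step
    ) -> int:
        """Pick the largest batch that fits free GPU memory and the latency budget."""
        free_bytes, _total = torch.cuda.mem_get_info()
        # Memory-aware cap: admit only as many sequences as free memory allows,
        # keeping 10% headroom for activation spikes.
        mem_cap = int(0.9 * free_bytes) // mem_per_seq
        # SLA-aware cap: assuming step latency grows roughly linearly with batch
        # size (a simplifying assumption), scale the batch toward the budget.
        sla_cap = max(1, int(current_batch * sla_step_s / max(last_step_s, 1e-6)))
        return max(1, min(pending, max_batch, mem_cap, sla_cap))

A real scheduler would re-run a check like this on every scheduling step, which is what lets it track fluctuating memory pressure and queue depth rather than committing to one static batch size.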

No commits in the last 6 months.

Use this if you are deploying large language models and need to maximize their throughput and capacity while consistently meeting service-level agreements for latency.

Not ideal if you are working with smaller models that are not memory-constrained or if your inference workloads are entirely static and predictable.

LLM deployment · MLOps · GPU optimization · AI inference serving · system performance
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 17
Forks: 2
Language: n/a
License: Apache-2.0
Last pushed: Mar 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/KevinLee1110/dynamic-batching"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
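
If you prefer to fetch the score from Python, here is a minimal standard-library sketch. It hits the same URL as the curl example above and, since no key is needed at the default rate limit, sends no credentials; the response schema is not documented here, so the sketch just prints the raw JSON.

    import json
    import urllib.request

    # Same endpoint as the curl example above.
    URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/KevinLee1110/dynamic-batching")

    # No API key is required for the default 100 requests/day tier.
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))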