HPMLL/BurstGPT

A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems

Score: 47 / 100 (Emerging)

This dataset provides real-world historical data about how large language models (LLMs) like ChatGPT and GPT-4 are used over time, focusing on server-side workloads. It includes detailed logs of user requests and model responses, such as timestamps, session IDs, response times, model types, and token counts. It is designed for researchers and academics studying how to make LLM serving systems more efficient and robust.
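As an illustration of the kind of analysis such a trace supports, the sketch below buckets request arrivals by second to expose burstiness. The CSV column names (`timestamp`, `model`, `request_tokens`, `response_tokens`) and the sample rows are hypothetical, not BurstGPT's actual schema:

```python
import csv
import io
from collections import Counter

# Synthetic trace in the spirit of an LLM serving workload log.
# Column names and values are illustrative, not the dataset's real schema.
SAMPLE_TRACE = """timestamp,model,request_tokens,response_tokens
0.1,gpt-3.5,120,340
0.4,gpt-4,80,210
1.2,gpt-3.5,95,400
1.7,gpt-3.5,60,150
1.9,gpt-4,200,512
"""

def requests_per_second(trace_csv: str) -> Counter:
    """Count request arrivals per whole-second bucket."""
    reader = csv.DictReader(io.StringIO(trace_csv))
    return Counter(int(float(row["timestamp"])) for row in reader)

if __name__ == "__main__":
    print(dict(requests_per_second(SAMPLE_TRACE)))  # → {0: 2, 1: 3}
```

Per-second (or finer) arrival counts are the starting point for burstiness metrics and for replaying the trace against a serving simulator.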


Use this if you are a researcher or academic working on optimizing LLM serving systems and need realistic, large-scale workload data for simulations or analyses.

Not ideal if you are looking for a tool to deploy or manage LLMs directly, or if you need to analyze the content of LLM interactions rather than the operational workload.

Tags: llm-serving-research, systems-optimization, workload-modeling, performance-analysis, academic-research
No package published · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 241
Forks: 14
Language: Python
License: CC-BY-4.0
Last pushed: Feb 01, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HPMLL/BurstGPT"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
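The same endpoint can be queried from Python with only the standard library. The `fetch_quality` helper below makes the live network call; the `score` field in the sample payload is an assumption about the response shape, mirroring the figures shown on this page, not the API's documented schema:

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HPMLL/BurstGPT"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality record as a dict (live network call)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def overall_score(record: dict) -> int:
    """Extract the overall score.
    The 'score' key is an assumed response field, not a documented one."""
    return int(record["score"])

if __name__ == "__main__":
    # Illustrative payload mirroring the numbers shown on this page.
    sample = {"score": 47, "stars": 241, "forks": 14}
    print(overall_score(sample))  # → 47
```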