yigitkonur/cli-batch-requester

10K+ req/s batch API client for LLM endpoints — Rust, async, load-balanced

37 / 100 · Emerging

This tool helps you send a large number of requests to AI language models efficiently and reliably. You provide a list of inputs or full request bodies, and it handles sending them, managing multiple API keys, retrying failed requests, and writing all the results and errors to separate files as they come in. It's designed for data scientists, ML engineers, or researchers who need to process massive datasets using LLMs without manual oversight.

Use this if you need to send hundreds of thousands of requests to LLM APIs and want a robust system that handles rate limits, retries, and load balancing across multiple API keys automatically.

Not ideal if you're only making a few dozen requests or if you need to make interactive, real-time calls to an LLM where immediate individual responses are critical.
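The core mechanics described above (rotating requests across several API keys and retrying failures) can be sketched roughly as follows. This is a minimal illustration, not the tool's actual internals: the names `KeyPool` and `send_with_retry` are hypothetical, and the "request" is simulated rather than a real HTTP call.

```rust
// Hypothetical sketch: hand out API keys round-robin so load is spread
// evenly, and retry a failed request with the next key in rotation.

struct KeyPool {
    keys: Vec<String>,
    next: usize,
}

impl KeyPool {
    fn new(keys: Vec<String>) -> Self {
        KeyPool { keys, next: 0 }
    }

    // Return keys in round-robin order, wrapping around at the end.
    fn next_key(&mut self) -> &str {
        let key = &self.keys[self.next % self.keys.len()];
        self.next += 1;
        key
    }
}

// Simulated request: "succeeds" only once `attempt` reaches
// `succeeds_on_attempt`, standing in for a flaky network call.
fn send_with_retry(
    pool: &mut KeyPool,
    max_retries: usize,
    succeeds_on_attempt: usize,
) -> Result<String, String> {
    for attempt in 0..=max_retries {
        let key = pool.next_key().to_string();
        if attempt >= succeeds_on_attempt {
            return Ok(format!("ok via {key}"));
        }
        // A real client would back off (e.g. exponentially) here
        // before retrying, and would respect rate-limit headers.
    }
    Err("all retries exhausted".to_string())
}

fn main() {
    let mut pool = KeyPool::new(vec!["key-a".into(), "key-b".into(), "key-c".into()]);
    // Attempts 0 and 1 fail and consume key-a and key-b; attempt 2
    // succeeds using key-c, the next key in rotation.
    let result = send_with_retry(&mut pool, 3, 2);
    println!("{}", result.unwrap()); // prints "ok via key-c"
}
```

A real implementation would additionally persist each result or error to its output file as it arrives, so a crash mid-run loses no completed work.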

LLM-inference large-scale-data-processing API-integration ML-operations AI-research
No Package · No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 19
Forks: 1
Language: Rust
License: MIT
Last pushed: Feb 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yigitkonur/cli-batch-requester"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.