kimmmmyy223/llm-batch
🚀 Process JSON data in batches with `llm-batch`, leveraging sequential or parallel modes for efficient interaction with LLMs.
Score: 21 / 100 (Experimental)
No package published · No dependents
Maintenance: 10 / 25
Adoption: 0 / 25
Maturity: 11 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Go
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kimmmmyy223/llm-batch"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
robert-mcdermott/ollama-batch-cluster
Large Scale Batch Processing with Ollama
32
anmolg1997/Multi-LoRA-Serve
Multi-adapter inference gateway — one base model, many LoRA adapters per-request,...
22
Rohit2sali/vllm-multi-tenant-llm-gateway
A multi-tenant large language model gateway built on vLLM. The system is designed to serve lots of...
13