parasail-ai/openai-batch
Make OpenAI batch easy to use.
This project simplifies processing large volumes of data with AI models such as ChatGPT or Llama. You provide a list of inputs (text or images) in a file, and it submits them to the model in bulk. Once the batch completes, it gathers all the responses into an output file, helping data scientists, researchers, and analysts efficiently generate insights or content at scale.
Available on PyPI.
Use this if you need to process thousands or millions of prompts, documents, or images with large language models and want an easy, automated way to manage the entire workflow from submission to result retrieval.
Not ideal if you only need to process a few inputs at a time or require real-time, interactive responses from the AI models.
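As a rough illustration of the bulk-submission workflow described above, here is a minimal sketch that builds a batch input file in the JSONL format the OpenAI Batch API expects (one JSON request object per line). The prompts, model name, and output filename are illustrative assumptions, not part of this project's own API; consult the repository's documentation for its actual interface.

```python
import json

# Hypothetical prompts to process in bulk (illustrative data only).
prompts = [
    "Summarize quantum computing in one sentence.",
    "Translate 'hello' to French.",
]

def build_batch_lines(prompts, model="gpt-4o-mini"):
    """Build one JSONL line per prompt in the OpenAI Batch API input format.

    Each line is a JSON object with a unique custom_id (used to match
    responses back to requests), an HTTP method, a target endpoint, and
    the request body that would be sent to that endpoint.
    """
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return lines

# Write the JSONL file that a batch tool would upload for processing.
lines = build_batch_lines(prompts)
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines) + "\n")
```

After the batch runs, results arrive as a similar JSONL file whose rows carry the same `custom_id` values, which is how a tool like this can pair each response with its original input.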
Stars: 9
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 23, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/parasail-ai/openai-batch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jundot/omlx: LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the...
josStorer/RWKV-Runner: A RWKV management and startup tool, full automation, only 8MB. And provides an interface...
waybarrios/vllm-mlx: OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models...
jordanhubbard/nanolang: A tiny experimental language designed to be targeted by coding LLMs
akivasolutions/tightwad: Pool your CUDA + ROCm GPUs into one OpenAI-compatible API. Speculative decoding proxy gives you...