amazon-science/concurry
Easy scaling for AI research and production workloads
Concurry lets Python developers speed up AI model calls, data processing, and web-scraping tasks by executing them in parallel. You hand it your existing Python code, and it runs many instances simultaneously across threads, processes, or even a computing cluster, significantly reducing execution time for repetitive, independent operations. It targets Python developers and AI researchers who need faster workloads without extensive code rewrites.
Use this if you are a Python developer or AI researcher struggling with slow code that could be sped up by running many operations concurrently, especially when calling external services like large language models or APIs.
Not ideal if your Python code is already highly optimized for single-threaded performance or if your tasks are inherently sequential and cannot be run in parallel.
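Concurry's own API is not shown on this page, so the sketch below uses only Python's standard-library `concurrent.futures` to illustrate the underlying idea: when each task is independent and spends most of its time waiting (as with LLM or API calls), running the tasks concurrently collapses the total wall-clock time toward the latency of a single call.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Stand-in for a slow, independent operation (e.g. an LLM or API call).
    time.sleep(0.1)
    return prompt.upper()

prompts = [f"prompt {i}" for i in range(8)]

# Sequential baseline: 8 calls x ~0.1 s each.
start = time.perf_counter()
sequential = [call_model(p) for p in prompts]
seq_time = time.perf_counter() - start

# Concurrent: threads overlap the waits, so total time approaches one call's latency.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(call_model, prompts))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

The results are identical in both cases; only the elapsed time differs. Libraries like concurry layer scheduling, retries, and cluster backends on top of this same pattern.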
Stars: 14
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/amazon-science/concurry"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
openvinotoolkit/model_server
A scalable inference server for models optimized with OpenVINO™
madroidmaq/mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
generative-computing/mellea
Mellea is a library for writing generative programs.
rhesis-ai/rhesis
Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...