amazon-science/concurry

Easy scaling for AI research and production workloads

Score: 37 / 100 (Emerging)

This project helps Python developers speed up AI model inference, data processing, and web scraping tasks by running them in parallel. You hand it your existing Python code, and it executes many instances of it simultaneously across threads, processes, or even a computing cluster, significantly cutting execution time for repetitive, independent operations. It targets Python developers and AI researchers who need faster workloads without extensive code rewrites.

Use this if you are a Python developer or AI researcher struggling with slow code that could be sped up by running many operations concurrently, especially when calling external services like large language models or APIs.

Not ideal if your Python code is already highly optimized for single-threaded performance or if your tasks are inherently sequential and cannot be run in parallel.
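concurry's own API is not shown on this page, so as a rough sketch of the pattern it automates, here is what parallelizing independent I/O-bound calls (such as LLM or API requests) looks like with Python's standard-library concurrent.futures. The function call_model is a hypothetical stand-in for your own workload, not part of concurry.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a slow external call (e.g. an LLM API)."""
    time.sleep(0.1)  # simulate network latency
    return f"response to {prompt!r}"

prompts = [f"prompt {i}" for i in range(20)]

start = time.perf_counter()
# Run all 20 independent calls concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(call_model, prompts))
elapsed = time.perf_counter() - start

print(f"{len(results)} calls in {elapsed:.2f}s")  # roughly 0.1s, versus ~2s sequentially
```

Libraries like concurry aim to give you this kind of speedup while also handling process- and cluster-level execution behind a single interface.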

AI-research ML-engineering data-processing API-integration backend-development
No package · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 14
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/amazon-science/concurry"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.