tpietruszka/rate_limited

Efficient parallel utilization of slow, rate-limited APIs - like those of Large Language Models

Score: 37 / 100 (Emerging)

When working with APIs that have usage limits, like Large Language Models, this tool helps you send many requests as quickly as possible without hitting those limits. You provide your API client and the rate limits, then schedule your data for processing. The output will be the results of your API calls, handled efficiently in parallel, complete with retries and response validation. This is for data scientists, analysts, or anyone who needs to process large batches of data through external, rate-limited services.
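The core technique the library automates can be sketched with the standard library alone. The snippet below is not the `rate_limited` package's actual API; it is a minimal, generic illustration of the same idea, assuming a sliding-window limit of 5 calls per second and a placeholder function standing in for the real API call:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class RateLimiter:
    """Thread-safe sliding-window limiter: at most `max_calls` per `period` seconds."""
    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.lock = threading.Lock()
        self.calls: list[float] = []  # timestamps of recent calls

    def acquire(self) -> None:
        while True:
            with self.lock:
                now = time.monotonic()
                # drop timestamps that have aged out of the window
                self.calls = [t for t in self.calls if now - t < self.period]
                if len(self.calls) < self.max_calls:
                    self.calls.append(now)
                    return
                wait = self.period - (now - self.calls[0])
            time.sleep(max(wait, 0.01))

limiter = RateLimiter(max_calls=5, period=1.0)

def call_api(item: int) -> int:
    limiter.acquire()   # block until a slot is free within the rate limit
    return item * 2     # stand-in for the real (slow, rate-limited) API call

# workers run in parallel, but the limiter caps overall throughput;
# pool.map preserves input order in its results
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(call_api, range(10)))

print(results)  # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The library adds what this sketch omits: retries, response validation, and limits expressed in API-specific units (e.g. tokens per minute for LLM endpoints).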

No commits in the last 6 months. Available on PyPI.

Use this if you need to make many calls to a slow, rate-limited API and want to optimize processing time while respecting usage quotas and ensuring data quality.

Not ideal if your API calls depend on one another or if you aren't calling external APIs that impose rate limits.

Tags: API-integration, large-language-models, data-processing, external-service-orchestration, workflow-automation
Stale (6 months) · No dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Oct 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tpietruszka/rate_limited"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.