Mattbusel/tokio-prompt-orchestrator

Multi-core, Tokio-native orchestration for LLM pipelines.

Quality score: 42 / 100 (Emerging)

This is for developers building applications on Large Language Models (LLMs) who need to handle many user requests efficiently and reliably. It takes raw text prompts or conversational turns, routes them through various LLM providers, and returns the generated text responses, helping keep LLM-powered applications fast, cost-effective, and secure in production.

Use this if you are a developer creating an application that heavily relies on LLMs and needs to manage high volumes of requests, optimize costs, and enhance security.

Not ideal if you are simply experimenting with LLMs or building a small, low-traffic application where advanced orchestration and performance features are not critical.

Tags: LLM application development, API orchestration, prompt engineering, backend services, AI infrastructure
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 15 / 25
Community: 9 / 25


Stars: 50
Forks: 4
Language: Rust
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/Mattbusel/tokio-prompt-orchestrator"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
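The same endpoint can be called from code. A minimal Python sketch, assuming the endpoint follows the `/quality/mcp/{owner}/{repo}` path shown in the curl example above; the JSON field names (`score`, `tier`, `breakdown`) are an assumption inferred from the card, not a documented schema.

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

# Hypothetical response shape, mirroring the numbers on the card above;
# the real API may use different field names.
sample = json.loads("""{
  "score": 42,
  "tier": "Emerging",
  "breakdown": {"maintenance": 10, "adoption": 8, "maturity": 15, "community": 9}
}""")

url = quality_url("Mattbusel", "tokio-prompt-orchestrator")
print(url)
print(sample["score"], sample["tier"])
```

To fetch live data, pass `url` to any HTTP client (e.g. `urllib.request.urlopen`); remember the unauthenticated rate limit of 100 requests/day.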