anyscale/llm-router

Tutorial for building an LLM router

Score: 33 / 100 (Emerging)

This project helps application developers manage the trade-off between response quality and cost when using Large Language Models (LLMs). It takes user queries as input and intelligently routes each one to either a powerful, expensive LLM or a more economical open-source LLM, delivering high-quality responses while significantly reducing operational costs for the developer building the application.
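The routing idea described above can be sketched as a function that scores a query's predicted difficulty and dispatches it accordingly. This is a minimal illustrative sketch, not the project's actual implementation (llm-router trains a learned router); the model names, scorer, and threshold below are all placeholder assumptions.

```python
from typing import Callable

# Placeholder model identifiers (assumptions, not from the project).
STRONG_MODEL = "strong-llm"      # powerful, expensive model
CHEAP_MODEL = "cheap-open-llm"   # economical open-source model


def route(query: str, score_fn: Callable[[str], float], threshold: float = 0.5) -> str:
    """Route a query: scores at or above the threshold go to the strong model."""
    return STRONG_MODEL if score_fn(query) >= threshold else CHEAP_MODEL


# Toy scorer: treats longer queries as harder (purely illustrative; a real
# router would use a trained classifier over query features).
def toy_score(query: str) -> float:
    return min(len(query) / 200, 1.0)


print(route("What is 2 + 2?", toy_score))  # short query -> cheap model
```

In practice the scoring function is where all the value lies: the cheaper the router can confidently send easy queries to the economical model, the larger the cost savings without a quality drop.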

246 stars. No commits in the last 6 months.

Use this if you are building LLM-powered applications and need to deliver consistent, high-quality responses while keeping your inference costs under control.

Not ideal if your application exclusively uses a single LLM or if cost optimization is not a primary concern for your LLM deployments.

Tags: LLM-application-development, cost-optimization, AI-inference-management, developer-tools, large-language-models
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 246
Forks: 23
Language: Python
License: None
Last pushed: Jul 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/anyscale/llm-router"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
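The same endpoint from the curl example can be called from Python with the standard library. This sketch only assumes the URL shown above; the shape of the JSON response is not documented here, so the fetch helper returns the decoded payload as-is.

```python
import json
import urllib.request

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for an owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("anyscale", "llm-router"))
```

The anonymous tier (100 requests/day) needs no credentials, so a plain GET like the one above is enough to start.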