maximhq/bifrost

Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.

Quality score: 68 / 100 (Established)

Building AI-powered applications that rely on multiple language models can be complex. This tool acts as a single entry point for all your AI model requests, simplifying how your applications connect to providers like OpenAI, Anthropic, or Google Vertex. It takes your application's prompts and forwards them to the best-performing or most cost-effective AI model, returning the generated response. AI application developers and operations engineers building and maintaining large-scale AI systems would use this to ensure reliability and manage costs.

2,853 stars. Actively maintained with 398 commits in the last 30 days.

Use this if you are building an AI application that needs to reliably access various AI models, manage traffic efficiently, and maintain high availability with features like automatic failover and load balancing.

Not ideal if you are only experimenting with a single AI model and do not require advanced features like unified API access, load balancing, or enterprise-grade governance.

Tags: AI-application-development · MLOps · API-management · cloud-infrastructure · developer-operations
No package · No dependents

Maintenance: 22 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 2,853
Forks: 303
Language: Go
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 398

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/maximhq/bifrost"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
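The scores above are also available as JSON from the endpoint shown in the curl command. A minimal Python sketch for pulling out the fields; the field names ("score", "maintenance", and so on) are assumptions modeled on the numbers on this page, not a documented schema:

```python
import json

# Endpoint from this page; fetch it with curl or urllib and pass the body below.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/maximhq/bifrost"

def parse_quality(payload: str) -> dict:
    """Pick the overall score and sub-scores out of a quality API response.

    The field names used here are assumptions; inspect the live response
    for the real schema before relying on them.
    """
    data = json.loads(payload)
    keys = ("score", "maintenance", "adoption", "maturity", "community")
    return {k: data.get(k) for k in keys}

# Hypothetical payload mirroring the numbers shown on this page:
sample = '{"score": 68, "maintenance": 22, "adoption": 10, "maturity": 16, "community": 20}'
print(parse_quality(sample))
```

Using `.get()` means missing fields come back as `None` instead of raising, which keeps the sketch tolerant of schema differences in the real response.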