teilomillet/hapax
The reliability layer between your code and LLM providers.
Hapax helps engineering and operations teams ensure their AI applications run without interruption, even if a primary AI provider (like OpenAI or Anthropic) experiences an outage. It takes your existing AI API requests, routes them to the best available provider, and delivers the AI's response, all while keeping a close eye on system health. This is for operations engineers, SREs, and platform teams managing critical AI services.
No commits in the last 6 months.
Use this if you need to build robust AI applications that require high availability and resilience against AI provider outages, and you want to reduce the operational burden of managing multiple providers.
Not ideal if your AI workloads are non-critical or can tolerate occasional downtime, or if you use a single AI provider and do not anticipate needing failover.
Stars: 23
Forks: 2
Language: Go
License: Apache-2.0
Category:
Last pushed: Jan 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/teilomillet/hapax"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...