teilomillet/hapax

The reliability layer between your code and LLM providers.

Score: 29 / 100 (Experimental)

Hapax helps engineering and operations teams keep their AI applications running without interruption, even when a primary AI provider (such as OpenAI or Anthropic) experiences an outage. It takes your existing AI API requests, routes them to the best available provider, and returns the response, while continuously monitoring system health. It is aimed at operations engineers, SREs, and platform teams managing critical AI services.

No commits in the last 6 months.

Use this if you need to build robust AI applications that require high availability and resilience against AI provider outages, and you want to reduce the operational burden of managing multiple providers.

Not ideal if your AI workloads are non-critical and can tolerate occasional downtime, or if you use a single AI provider and do not anticipate needing failover.

Tags: AI-operations, site-reliability, cloud-infrastructure, API-management, LLM-deployment
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 7 / 25

Stars: 23
Forks: 2
Language: Go
License: Apache-2.0
Category: llm-api-gateways
Last pushed: Jan 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/teilomillet/hapax"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.