gitcommitshow/resilient-llm
Resilient multi-LLM orchestration with built-in failure handling, rate limiting, retries, and a circuit breaker.
This tool helps developers build AI applications or agents that interact reliably with multiple Large Language Model (LLM) providers such as OpenAI, Anthropic, or Google. It takes your conversation history and desired LLM configuration, then retrieves responses from the chosen LLM, automatically handling network issues, rate limits, and provider failures. It is aimed at software developers and AI engineers integrating LLMs into applications that need robust, production-ready behavior.
Available on npm.
Use this if you are building an application that needs to reliably communicate with one or more LLM providers and you want to avoid common issues like API rate limits, network errors, or provider outages.
Not ideal if you are an end-user looking for a pre-built chat application or a no-code solution for interacting with LLMs.
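The failure-handling features named above (retries with backoff plus a circuit breaker) can be sketched in plain JavaScript. This is a conceptual illustration, not resilient-llm's actual API; every name here (`CircuitBreaker`, `callWithResilience`, the option names) is hypothetical.

```javascript
// Conceptual sketch of retry-with-backoff wrapped in a circuit breaker.
// NOT the resilient-llm API -- all names here are illustrative.

class CircuitBreaker {
  constructor({ failureThreshold = 3, resetMs = 10_000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.resetMs = resetMs;
    this.failures = 0;
    this.openedAt = null; // timestamp when the circuit opened
  }

  get isOpen() {
    if (this.openedAt === null) return false;
    // After resetMs, allow a trial request again (half-open behavior).
    return Date.now() - this.openedAt < this.resetMs;
  }

  recordSuccess() {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) this.openedAt = Date.now();
  }
}

async function callWithResilience(fn, { retries = 3, baseDelayMs = 200, breaker }) {
  if (breaker.isOpen) throw new Error("circuit open: provider temporarily skipped");
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const result = await fn(); // e.g. one LLM API call
      breaker.recordSuccess();
      return result;
    } catch (err) {
      lastError = err;
      breaker.recordFailure();
      if (breaker.isOpen || attempt === retries) break;
      // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A multi-provider orchestrator would typically keep one breaker per provider and fall through to the next provider when a breaker is open; the library's real behavior may differ in details.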
Stars
30
Forks
3
Language
JavaScript
License
MIT
Category
Last pushed
Mar 09, 2026
Commits (30d)
0
Dependencies
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/gitcommitshow/resilient-llm"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
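The same endpoint shown in the curl example can be called programmatically. A minimal sketch for Node 18+ (which ships a global `fetch`); the JSON response shape is not documented here, so the result is simply returned as-is.

```javascript
// Build the quality-data endpoint URL for a given repo
// (same URL as the curl example above).
function qualityUrl(owner, repo) {
  return `https://pt-edge.onrender.com/api/v1/quality/llm-tools/${owner}/${repo}`;
}

// Fetch and parse the quality data; non-2xx responses (e.g. the daily
// request limit being hit) are surfaced as errors.
async function fetchQuality(owner, repo) {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}

// Example usage:
// fetchQuality("gitcommitshow", "resilient-llm").then(console.log);
```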
Related tools
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...