muxi-ai/onellm
Unified interface for hundreds of LLMs across multiple providers, with caching, fallback mechanisms, and enhanced reliability.
This is a tool for developers building applications on top of Large Language Models (LLMs). It manages the complexity of integrating different LLM providers by exposing a single, consistent way to send prompts and receive responses, regardless of which backend serves the request. Developers building AI-powered applications would use it to keep those applications reliable and performant while talking to multiple LLMs.
Available on PyPI.
Use this if you are building an application that needs to talk to multiple Large Language Models from different providers and you want to simplify your code and improve reliability.
Not ideal if you are a non-technical user looking for a ready-to-use application, or if you only ever plan to use a single LLM provider in a simple way.
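The fallback-and-caching idea the description refers to can be sketched generically. This is an illustrative pattern only, not onellm's actual API: the `FallbackClient` class and the provider callables below are hypothetical names invented for this sketch.

```python
import hashlib


class FallbackClient:
    """Illustrative provider-fallback pattern (NOT onellm's real API):
    try each provider in order, cache successful responses by prompt."""

    def __init__(self, providers):
        self.providers = providers  # list of callables: prompt -> str
        self.cache = {}

    def complete(self, prompt):
        # Key the cache on a hash of the prompt text.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:
            return self.cache[key]
        last_error = None
        for provider in self.providers:
            try:
                result = provider(prompt)
                self.cache[key] = result  # remember the first success
                return result
            except Exception as exc:  # fall through to the next provider
                last_error = exc
        raise RuntimeError("all providers failed") from last_error


# Hypothetical providers: one that always fails, one that answers.
def flaky(prompt):
    raise TimeoutError("provider unavailable")


def stable(prompt):
    return f"echo: {prompt}"


client = FallbackClient([flaky, stable])
print(client.complete("hello"))  # falls back from flaky to stable
```

A second call with the same prompt would be served from the cache without invoking any provider, which is the reliability/performance trade the description advertises.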
Stars
44
Forks
3
Language
Python
License
Apache-2.0
Last pushed
Mar 10, 2026
Commits (30d)
0
Dependencies
10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/muxi-ai/onellm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mgonzs13/llama_ros
llama.cpp (GGUF LLMs) and llava.cpp (GGUF VLMs) for ROS 2
Atome-FE/llama-node
Believe in AI democratization. llama for nodejs backed by llama-rs, llama.cpp and rwkv.cpp, work...
docusealco/rllama
Ruby FFI bindings for llama.cpp to run open-source LLMs such as GPT-OSS, Qwen 3, Gemma 3, and...
Rin313/StegLLM
Offline LLM text steganography program.
XrecentX/vllm-skills
🚀 Deploy and manage vLLM with ready-made skills for modular automation, adhering to the...