LLM-API-Key-Proxy and lm-proxy

These projects are competitors with overlapping core functionality: both provide OpenAI-compatible HTTP gateways for multi-provider LLM inference, though lm-proxy emphasizes lightweight library usage while LLM-API-Key-Proxy adds intelligent load-balancing features.

                 LLM-API-Key-Proxy    lm-proxy
Score            57 (Established)     55 (Established)
Maintenance      10/25                10/25
Adoption         10/25                9/25
Maturity         15/25                24/25
Community        22/25                12/25
Stars            418                  92
Forks            76                   10
Downloads        (none listed)        (none listed)
Commits (30d)    0                    0
Language         Python               Python
License          (none listed)        MIT

No package or dependents listed; no risk flags.

About LLM-API-Key-Proxy

Mirrowel/LLM-API-Key-Proxy

Universal LLM Gateway: One API, every LLM. OpenAI/Anthropic-compatible endpoints with multi-provider translation and intelligent load-balancing.

This tool helps individuals or small teams who work with various Large Language Models (LLMs) and need a simpler, more reliable way to manage them. You supply your different provider API keys and model choices, and it exposes a single access point that works with almost any existing LLM application. This is ideal for developers, researchers, or anyone building applications that use LLMs.
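To make the "single access point" concrete, here is a minimal sketch of sending a standard OpenAI-format chat-completion request to such a proxy, using only the Python standard library. The proxy address, path, and API key are hypothetical placeholders (actual host, port, and authentication depend on how the proxy is deployed); only the request shape follows the OpenAI chat-completions format.

```python
import json
import urllib.request

# Hypothetical local address for the proxy; the real host/port depend on
# your deployment. The path mirrors the standard OpenAI endpoint.
PROXY_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-format chat-completion request aimed at the proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # One key authenticates you to the proxy; the proxy itself
            # holds the per-provider keys you configured.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("gpt-4o-mini", "Hello!", "proxy-key")
# urllib.request.urlopen(req) would send it once the proxy is running.
```

Because the request is plain OpenAI-format JSON, any existing OpenAI client library can be pointed at the proxy by overriding its base URL instead of hand-building requests like this.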

LLM-management, API-integration, developer-tools, AI-application-development, system-administration

About lm-proxy

Nayjest/lm-proxy

OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.

This tool helps developers and system architects manage their use of Large Language Models (LLMs) from various providers like OpenAI, Anthropic, or Google, as well as local models. It acts as a single access point, allowing you to send requests using the familiar OpenAI API format, and the proxy intelligently routes them to the correct LLM. You input your LLM requests and configuration, and it outputs responses from the chosen models, simplifying multi-provider setups.
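The routing step described above can be sketched as a simple prefix match from model name to upstream provider. This illustrates the general idea only; the route table and function here are hypothetical and do not reflect lm-proxy's actual configuration schema or API.

```python
# Hypothetical route table: model-name prefix -> upstream base URL.
ROUTES = {
    "gpt-": "https://api.openai.com/v1",
    "claude-": "https://api.anthropic.com/v1",
    "gemini-": "https://generativelanguage.googleapis.com/v1beta",
}

def route(model: str) -> str:
    """Return the upstream base URL whose prefix matches the model name."""
    for prefix, base_url in ROUTES.items():
        if model.startswith(prefix):
            return base_url
    raise ValueError(f"no route for model {model!r}")
```

A client keeps sending plain OpenAI-format requests to the proxy; switching from, say, `gpt-4o-mini` to `claude-3-haiku` changes only the `model` field, and the proxy resolves the provider behind the scenes.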

LLM management, API integration, backend development, AI infrastructure, multi-model deployment

Scores updated daily from GitHub, PyPI, and npm data.