iaalm/llama-api-server
An OpenAI API-compatible REST server for Llama models.
This project lets developers host open-source large language models (such as Llama) on their own infrastructure and expose them through an API that mimics OpenAI's. It takes pre-trained Llama model files and serves them behind endpoints that can respond to natural-language prompts or generate text embeddings. It is aimed at software developers who build applications on top of large language models and want to control their own model deployment.
209 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a developer who wants to integrate open-source Llama models into your applications using an OpenAI-compatible API, without relying on external services.
Not ideal if you are an end-user without programming experience, as this tool requires technical setup and coding to use.
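To make "OpenAI-compatible" concrete, the sketch below builds a request in the shape of OpenAI's chat-completions API, which a server like this is expected to accept. The localhost URL, port, and model name are placeholder assumptions, not values documented here; the request is only constructed, not sent.

```python
import json
import urllib.request

# Assumed local endpoint; the actual host and port depend on how you
# launch the server. "llama" as a model name is also a placeholder.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt, model="llama"):
    """Build an OpenAI-style chat-completions request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello, Llama!")
print(req.full_url)  # → http://localhost:8000/v1/chat/completions
```

Because the request shape matches OpenAI's, existing OpenAI client libraries can usually be pointed at such a server just by overriding their base URL.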
Stars: 209
Forks: 10
Language: Python
License: MIT
Category:
Last pushed: Feb 24, 2025
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/iaalm/llama-api-server"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
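The same URL can be built programmatically for any repository. This is a minimal sketch: the `quality/transformers` path segment is copied from the curl command above, and since the response format is not documented here, the snippet only constructs the URL rather than parsing a reply.

```python
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Return the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("iaalm", "llama-api-server"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/iaalm/llama-api-server
```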
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.