RhinoDevel/mt_llm
A pure C wrapper library that makes using llama.cpp on Linux and Windows as simple as possible.
This C library simplifies integrating large language models (LLMs) into single-user applications on Linux and Windows. Developers feed text prompts to an LLM and receive generated text, embeddings, or ranking scores in return. It targets C/C++ developers who want to embed local LLM capabilities directly into their software without dealing with complex configuration.
Use this if you are a C or C++ developer building a desktop application and want simple, local LLM inference, such as text generation or embeddings, without extensive setup.
Not ideal if you need web-based applications or multi-user deployments, or if you prefer a higher-level language such as Python.
Stars: 14
Forks: 1
Language: C++
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RhinoDevel/mt_llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
beehive-lab/GPULlama3.java
GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
gitkaz/mlx_gguf_server
This is a FastAPI based LLM server. Load multiple LLM models (MLX or llama.cpp) simultaneously...
srgtuszy/llama-cpp-swift
Swift bindings for llama-cpp library
JackZeng0208/llama.cpp-android-tutorial
llama.cpp tutorial on Android phone
awinml/llama-cpp-python-bindings
Run fast LLM Inference using Llama.cpp in Python