jmont-dev/ollama-hpp
Modern, header-only C++ bindings for the Ollama API.
This is a header-only C++ library that lets developers integrate local large language models (LLMs) into their applications. It takes a model name and a user prompt as input and returns generated text or chat responses from the LLM. It is aimed at C++ developers who want to add AI capabilities backed by local models to their software.
Use this if you are a C++ developer building an application and want to call local Ollama-compatible large language models directly from your C++ code.
Not ideal if you are not a C++ developer, or if you prefer cloud-based LLM APIs over running models locally.
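A minimal sketch of typical usage, based on the interface shown in the project's README. The model name `llama3:8b` is an assumption (any model pulled into the local Ollama instance works), and a local Ollama server must be running on its default port:

```cpp
// Requires ollama.hpp from jmont-dev/ollama-hpp on the include path
// and a local Ollama server listening on the default port (11434).
#include <iostream>
#include "ollama.hpp"

int main() {
    // Assumed model name; pull it first with `ollama pull llama3:8b`.
    // ollama::generate sends the prompt to the local server and
    // returns the model's response, which streams to std::cout here.
    std::cout << ollama::generate("llama3:8b", "Why is the sky blue?")
              << std::endl;
    return 0;
}
```

Because the library is header-only, no separate build or link step is needed beyond adding the header to your include path.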
Stars: 213
Forks: 27
Language: C++
License: MIT
Category:
Last pushed: Oct 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jmont-dev/ollama-hpp"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.