ShelbyJenkins/llm_client
The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes
This project provides a robust, easy-to-use Rust interface for `llama.cpp`'s `llama-server`. It simplifies managing the `llama.cpp` toolchain and interacting with local large language models (LLMs) for tasks such as text generation, text infilling, and creating text embeddings. It is aimed at developers building applications that need to integrate local LLM capabilities.
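Under the hood, `llama-server` exposes an HTTP API, and a text-generation call ultimately maps onto a JSON request against it. The sketch below builds such a request body using only the Rust standard library; the `/completion` endpoint and the `prompt`/`n_predict`/`temperature` fields come from llama.cpp's server API, while the helper function itself is hypothetical and not part of this crate:

```rust
// Build a request body for llama-server's POST /completion endpoint.
// Field names (prompt, n_predict, temperature) follow llama.cpp's
// server API; the string escaping here is simplified for illustration.
fn completion_request(prompt: &str, n_predict: u32, temperature: f32) -> String {
    let escaped = prompt.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        "{{\"prompt\":\"{}\",\"n_predict\":{},\"temperature\":{}}}",
        escaped, n_predict, temperature
    )
}

fn main() {
    let body = completion_request("Write a haiku about Rust.", 64, 0.7);
    // The body would be sent with any HTTP client, e.g.:
    //   curl http://localhost:8080/completion -d "$BODY"
    println!("{}", body);
}
```

A library like this one wraps the server lifecycle and request plumbing behind typed builders, so application code never assembles raw JSON by hand.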
245 stars and 10 monthly downloads. No commits in the last 6 months.
Use this if you are a Rust developer looking for a straightforward, fully-typed way to integrate and manage local `llama.cpp` servers within your applications on Linux, macOS, or Windows.
Not ideal if you are looking for a high-level agent building framework or a tool for interacting with cloud-based LLMs rather than local ones.
Stars: 245
Forks: 25
Language: Rust
License: MIT
Category:
Last pushed: Aug 06, 2025
Monthly downloads: 10
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ShelbyJenkins/llm_client"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
trymirai/uzu: A high-performance inference engine for AI models
justrach/bhumi: ⚡ Bhumi – The fastest AI inference client for Python, built with Rust for unmatched speed,...
lipish/llm-connector: LLM Connector - A unified interface for connecting to various Large Language Model providers
keyvank/femtoGPT: Pure Rust implementation of a minimal Generative Pretrained Transformer
rustformers/llm: [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models