mdrokz/rust-llama.cpp
llama.cpp Rust bindings
This project helps Rust developers integrate large language models (LLMs) into their applications by leveraging the efficient llama.cpp library. It takes a pre-trained LLM in GGML or GGUF format and provides an interface to feed it text prompts and receive generated text as output. It is aimed at developers building Rust applications that need local, high-performance text generation.
416 stars. No commits in the last 6 months.
Use this if you are a Rust developer looking to embed large language model inference directly into your application without relying on external services, especially for local or edge deployments.
Not ideal if you are not a Rust developer, if you need a fully managed LLM service, or if your application primarily uses other programming languages.
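As a concrete illustration, a minimal inference loop with this crate might look like the sketch below. The module path `llama_cpp_rs`, the `ModelOptions`/`PredictOptions` types, and the `LLama::new`/`predict` calls follow the project's README-style API and are assumptions that may differ across versions; the model path is a placeholder.

```rust
// Sketch of local text generation with rust-llama.cpp (API names assumed
// from the project's README; verify against the version you depend on).
use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};

fn main() {
    // Load a local GGML/GGUF model file (placeholder path — supply your own).
    let model_options = ModelOptions::default();
    let llama = LLama::new("./model.gguf".into(), &model_options)
        .expect("failed to load model");

    // Default sampling settings; tune via PredictOptions fields as needed.
    let predict_options = PredictOptions::default();

    // Feed a prompt and print the generated continuation.
    let output = llama
        .predict("What is the capital of France?".into(), predict_options)
        .expect("inference failed");
    println!("{output}");
}
```

Because inference runs in-process against a local model file, no network access or external service is involved, which is the main draw for edge deployments.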
Stars
416
Forks
52
Language
Rust
License
MIT
Category
Last pushed
Jun 27, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mdrokz/rust-llama.cpp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.