mdrokz/rust-llama.cpp

llama.cpp Rust bindings

Score: 44 / 100 (Emerging)

This project lets Rust developers integrate large language models (LLMs) into their applications via the efficient llama.cpp library. It loads a pre-trained LLM in GGML or GGUF format and exposes an interface for feeding it text prompts and receiving generated text in return. It suits developers building Rust applications that need local, high-performance text generation.
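As a sketch of what that interface looks like, assuming the crate is pulled in under the name llama_cpp_rs (the name the repository publishes under) and using the ModelOptions/PredictOptions API shown in its README; the model path is a placeholder:

    use llama_cpp_rs::{
        options::{ModelOptions, PredictOptions},
        LLama,
    };

    fn main() {
        // Load a local GGML/GGUF model from disk; the path is a placeholder.
        let model_options = ModelOptions::default();
        let llama = LLama::new("./models/llama-2-7b.Q4_0.gguf".into(), &model_options)
            .expect("failed to load model");

        // Stream tokens as they are generated; returning false from the
        // callback stops generation early.
        let predict_options = PredictOptions {
            token_callback: Some(Box::new(|token| {
                print!("{}", token);
                true
            })),
            ..Default::default()
        };

        llama
            .predict("What are the national animals of India?".into(), predict_options)
            .expect("prediction failed");
    }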

416 stars. No commits in the last 6 months.

Use this if you are a Rust developer looking to embed large language model inference directly into your application without relying on external services, especially for local or edge deployments.

Not ideal if you need a fully managed LLM service or if your application is written primarily in a language other than Rust.

Tags: AI application development · local AI inference · natural language processing · Rust programming · edge AI
Flags: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 416
Forks: 52
Language: Rust
License: MIT
Last pushed: Jun 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mdrokz/rust-llama.cpp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
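
For programmatic use from Rust, the same endpoint can be fetched with any HTTP client. A minimal sketch using the reqwest crate (an assumption, not part of this project) that prints the raw response body, since the response schema is not documented here:

    // Cargo.toml: reqwest = { version = "0.12", features = ["blocking"] }
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let url = "https://pt-edge.onrender.com/api/v1/quality/transformers/mdrokz/rust-llama.cpp";
        // Fetch and print the raw body; parse as JSON once the schema is known.
        let body = reqwest::blocking::get(url)?.text()?;
        println!("{}", body);
        Ok(())
    }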