docusealco/rllama
Ruby FFI bindings for llama.cpp to run open-source LLMs such as GPT-OSS, Qwen 3, Gemma 3, and Llama 3 locally with Ruby.
This tool helps Ruby developers integrate powerful, open-source large language models (LLMs) directly into their applications. You can input text, system prompts, or conversation histories, and it outputs generated text, chat responses, or numerical vector embeddings. Developers who want to add AI capabilities like text generation, chatbots, or semantic search to their Ruby projects will find this useful.
No commits in the last 6 months.
Use this if you are a Ruby developer building applications and want to incorporate local, open-source large language models for text generation, conversational AI, or text embeddings.
Not ideal if you need a pre-built, high-level AI service without coding, or if your application isn't built with Ruby.
Stars
87
Forks
5
Language
Ruby
License
—
Category
—
Last pushed
Oct 07, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/docusealco/rllama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
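The same request can be issued from Ruby itself using only the standard library. A minimal sketch, assuming nothing beyond the endpoint URL shown in the curl example; the response schema is not documented on this page, so the body is parsed as generic JSON rather than mapped to named fields (the network call is left commented out):

```ruby
require "net/http"
require "uri"
require "json"

# Endpoint taken from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/docusealco/rllama"

uri = URI(API_URL)

# Perform the GET and parse the JSON body. Returns nil on a non-2xx response.
def fetch_quality(uri)
  res = Net::HTTP.get_response(uri)
  JSON.parse(res.body) if res.is_a?(Net::HTTPSuccess)
end

# data = fetch_quality(uri)  # requires network access; subject to the 100/day limit
puts uri.host
```

If you have an API key for the 1,000/day tier, it would presumably be passed with the request, but the header or parameter name is not documented here.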
Higher-rated alternatives
mgonzs13/llama_ros
llama.cpp (GGUF LLMs) and llava.cpp (GGUF VLMs) for ROS 2
muxi-ai/onellm
Unified interface for interacting with various LLMs: hundreds of models, caching, fallback...
Atome-FE/llama-node
Believe in AI democratization. llama for nodejs backed by llama-rs, llama.cpp and rwkv.cpp, work...
Rin313/StegLLM
Offline LLM text steganography program.
XrecentX/vllm-skills
🚀 Deploy and manage vLLM with ready-made skills for modular automation, adhering to the...