donderom/llm4s
Scala 3 bindings for llama.cpp 🦙
This project provides Scala 3 bindings for llama.cpp, letting Scala developers run large language models (LLMs) locally inside their applications. Given a pre-trained model (in a llama.cpp-compatible format such as GGUF) and a prompt, it produces generated text or numerical embeddings.
Use this if you are building a Scala application and want local LLM inference or text embedding generation directly in your codebase, without relying on external API calls.
Not ideal if you are not working in Scala, or if you need cloud-based LLM services rather than local models.
Stars: 65
Forks: 6
Language: Scala
License: Apache-2.0
Category:
Last pushed: Feb 13, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/donderom/llm4s"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
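The endpoint path in the example above appears to follow the pattern `/api/v1/quality/<segment>/<owner>/<repo>`. A small shell helper can build the URL for any repository; note that the `transformers` path segment is copied verbatim from the single example shown, and whether it varies by repository is an assumption:

```shell
# Build the quality-API URL for a given owner/repo pair.
# The "transformers" segment is taken from the example above;
# it is an assumption that it is fixed for all repositories.
quality_url() {
  owner="$1"
  repo="$2"
  echo "https://pt-edge.onrender.com/api/v1/quality/transformers/${owner}/${repo}"
}

quality_url donderom llm4s
# Uncomment to actually call the API:
# curl "$(quality_url donderom llm4s)"
```

Keeping the network call commented out lets you inspect the URL before spending one of the 100 daily unauthenticated requests.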
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.