hybridgroup/yzma

Go with your own intelligence - Go applications that directly integrate llama.cpp for local inference using hardware acceleration.

Score: 44 / 100 (Emerging)

Yzma helps Go developers build applications that use large language models (LLMs) and vision language models (VLMs) for tasks like interactive chat and image analysis. It takes GGUF-formatted models and text or image inputs, then outputs generated text, all running directly on the developer's hardware.


Use this if you are a Go developer who wants to embed local, hardware-accelerated AI inference directly into your applications without needing external servers or CGo.

Not ideal if you are not a Go developer or if you prefer using cloud-based AI services or other programming languages.

application-development local-ai natural-language-processing computer-vision edge-ai
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 9 / 25
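The four category scores above add up to the overall score shown at the top (10 + 10 + 15 + 9 = 44), which suggests the overall figure is a simple sum out of 100. That is an inference from the numbers on this card, not documented scoring behavior; a quick check in Go:

```go
package main

import "fmt"

func main() {
	// Category scores copied from the card above. The assumption
	// (not confirmed by the page) is that the overall 44 / 100
	// is simply their sum.
	scores := map[string]int{
		"Maintenance": 10,
		"Adoption":    10,
		"Maturity":    15,
		"Community":   9,
	}
	total := 0
	for _, v := range scores {
		total += v
	}
	fmt.Println(total) // prints 44, matching the overall score
}
```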


Stars: 350
Forks: 11
Language: Go
License:
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hybridgroup/yzma"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.