Talnz007/VulkanIlm

GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (Ilm) on fire (Vulkan).

Overall score: 24 / 100 (Experimental)

This project helps developers integrate powerful AI language models into their applications and workflows, even on older computers. It takes an AI model file (such as a Llama model) and uses your computer's graphics card to generate text much faster than the main processor alone could. Developers whose systems lack the newest NVIDIA GPUs, particularly those with AMD or Intel graphics, will find it especially useful.
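As a rough illustration of the workflow just described, loading a local model and offloading work to the GPU from Python might look like the sketch below. This is a minimal sketch under assumptions: the import path, the Llama class, and the model_path, gpu_layers, and max_tokens parameters are illustrative placeholders, not VulkanIlm's documented API.

# Hypothetical usage sketch: the names and parameters below are assumptions,
# not VulkanIlm's documented interface; consult the project README for the real API.
from vulkanilm import Llama  # assumed import path

# Point the wrapper at a local GGUF model file and offload layers to the
# Vulkan-capable GPU (AMD, Intel, or older NVIDIA) for faster generation.
llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf", gpu_layers=32)

# Generate text locally; no CUDA toolchain is required.
print(llm.generate("Explain Vulkan in one sentence.", max_tokens=64))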

No commits in the last 6 months.

Use this if you are a developer looking to run large language models (LLMs) locally with GPU acceleration on AMD, Intel, or older NVIDIA graphics cards, without needing specialized NVIDIA CUDA software.

Not ideal if you already have a modern NVIDIA GPU and are using CUDA, as there are existing tools optimized for that specific setup.

AI development · local LLM inference · GPU acceleration · legacy hardware optimization · Python development
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 0 / 25


Stars: 28
Forks:
Language: Python
License: MIT
Last pushed: Oct 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Talnz007/VulkanIlm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
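For programmatic access, the same endpoint shown in the curl example can be queried from Python. The snippet below assumes the response body is JSON; the exact schema is not documented in this section.

import requests

# Same endpoint as the curl example above; no key needed for up to
# 100 requests/day (a free key raises the limit to 1,000/day).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Talnz007/VulkanIlm"

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Assumed JSON response; print it as-is.
print(resp.json())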