Talnz007/VulkanIlm
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (Ilm) on fire (Vulkan).
This project helps developers integrate AI language models into their applications or workflows, even on older computers. It loads a model file (such as a LLaMA model) and uses your graphics card to generate text much faster than the CPU alone. Developers whose systems lack the newest NVIDIA GPUs, particularly those with AMD or Intel graphics, will find this especially useful.
No commits in the last 6 months.
Use this if you are a developer looking to run large language models (LLMs) locally with GPU acceleration on AMD, Intel, or older NVIDIA graphics cards, without needing specialized NVIDIA CUDA software.
Not ideal if you already have a modern NVIDIA GPU and are using CUDA, as there are existing tools optimized for that specific setup.
Stars: 28
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Talnz007/VulkanIlm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
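The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the URL is the one shown above, but the structure of the JSON response is not documented here, so the decoded payload is returned as-is rather than picked apart by field name:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL, e.g. category='transformers', repo='Talnz007/VulkanIlm'."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON payload. No API key is needed on the
    free tier (100 requests/day per the note above)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("transformers", "Talnz007/VulkanIlm"))
```

With a key for the 1,000/day tier, the request would presumably carry it as a header or query parameter; how the key is passed is not specified on this page.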
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
chatglm 6b finetuning and alpaca finetuning
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.