Scottcjn/llama-cpp-power8

AltiVec/VSX optimized llama.cpp for IBM POWER8

Score: 48 / 100 (Emerging)

This project helps you run large language models (LLMs) such as LLaMA and DeepSeek directly on IBM POWER8 server hardware. It uses the POWER8 AltiVec/VSX vector instructions to run inference considerably faster than a generic llama.cpp build, letting you use your existing servers for AI tasks. If you own or manage IBM POWER8 systems and want to perform LLM inference locally, this tool is for you.

Use this if you need to run large language models efficiently on your existing IBM POWER8 server infrastructure, rather than relying on cloud services or newer, more expensive hardware.

Not ideal if you don't have access to IBM POWER8 hardware or are looking for a solution compatible with x86, ARM, or NVIDIA GPU systems.

Tags: AI inference, On-premise AI, Data center optimization, Edge AI, High-performance computing
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 13 / 25
Community: 17 / 25


Stars: 47
Forks: 10
Language: C
License: (none listed)
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Scottcjn/llama-cpp-power8"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
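For programmatic access, the same endpoint shown in the curl command can be called from a script. A minimal sketch in Python, assuming only the base URL from the example above (the structure of the JSON response is not documented here, so no fields are assumed; `quality_url` and `fetch_quality` are hypothetical helper names):

```python
# Sketch: query the pt-edge quality API for any GitHub repo.
# Only the base endpoint comes from the page; helper names are illustrative.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (no key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("Scottcjn", "llama-cpp-power8"))
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header or query parameter) is not specified on this page, so consult the API provider before adding authentication.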