Scottcjn/llama-cpp-tigerleopard
WORLD FIRST: llama.cpp for Mac OS X Tiger & Leopard on PowerPC G4/G5
This project lets you run modern large language models (LLMs) such as TinyLlama or Phi-2 on vintage Macs running OS X Tiger or Leopard with PowerPC G4/G5 processors. It takes a pre-trained LLM and a text prompt as input and outputs generated text. It is aimed at hobbyists, vintage-computer enthusiasts, and researchers interested in running AI on older hardware.
Use this if you want to experiment with running modern AI language models on your classic Mac OS X PowerPC machine, connecting vintage computing with contemporary technology.
Not ideal if you need fast, high-performance LLM inference for demanding tasks, as the speed will be significantly slower than on modern hardware.
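The listing doesn't show an invocation, but upstream llama.cpp is driven from the command line; a minimal sketch of a text-generation run, assuming this port keeps the classic `./main` binary name and that you have a small quantized GGUF model on hand (the binary and model filenames here are assumptions):

```shell
# Hypothetical llama.cpp run on a PowerPC Mac. Flags (-m, -p, -n, -t) come
# from upstream llama.cpp; the binary and model names are assumptions.
MODEL="tinyllama-1.1b-chat-q4_0.gguf"  # a small 4-bit model suits G4/G5 RAM
PROMPT="Once upon a time"
# -n caps the number of generated tokens; -t sets CPU threads
# (G4/G5 machines have one or two cores, so keep this low).
set -- ./main -m "$MODEL" -p "$PROMPT" -n 64 -t 2
# Print the command this script would run; execute it once the port is built:
echo "$@"
```

Expect generation to be slow on this class of hardware, so small quantized models and short token limits are the practical choice.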
Stars: 25
Forks: 1
Language: C++
License: —
Category:
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Scottcjn/llama-cpp-tigerleopard"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)