AmpereComputingAI/llama.cpp
Ampere optimized llama.cpp
This project helps you run large language models (LLMs) more efficiently on Ampere CPUs. You provide an LLM in GGUF format, and it produces a model ready for faster inference, in particular via Ampere's custom quantization. It is designed for developers and AI practitioners who need to deploy and optimize LLMs on Ampere hardware.
Use this if you are a developer or AI practitioner working with Large Language Models and want to run them optimally on Ampere CPUs or Ampere-based cloud VMs.
Not ideal if you are not using Ampere hardware or if you do not have experience with Docker and command-line model conversion/quantization.
Stars: 33
Forks: 5
Language: Python
License: —
Category: —
Last pushed: Jan 30, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AmpereComputingAI/llama.cpp"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
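The same endpoint can be called from Python. A minimal sketch using only the standard library is below; the URL pattern is taken from the curl example above, but the JSON response schema is not documented here, so `fetch_quality` (a hypothetical helper name) simply returns the decoded body as a dict.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; the response fields are not specified here,
    so the decoded JSON is returned as-is."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("AmpereComputingAI", "llama.cpp"))
```

Note that unauthenticated callers are limited to 100 requests/day, so cache responses rather than polling.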
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...