VPanjeta/PyLLaMa-CPU

Fast LLaMa inference on CPU using llama.cpp for Python

Score: 29 / 100 (Experimental)

This tool lets developers run LLaMA language models directly on a computer's CPU, with no specialized graphics hardware required. You provide a converted LLaMA model file, and it quickly generates text from your prompts. It is aimed at developers building Python applications that integrate LLaMA-based text generation.

No commits in the last 6 months.

Use this if you are a Python developer and need to incorporate fast LLaMA text generation into your applications, running entirely on a CPU.

Not ideal if you are looking for a plug-and-play AI chat application or do not have experience with Python development and model conversion.
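
The project's own Python interface is not documented on this page, so the sketch below is only a minimal illustration of CPU-only LLaMA inference driven from Python: it shells out to a llama.cpp-style example binary. The binary path, model path, and flags (-m, -p, -n, -t, as in upstream llama.cpp's example program) are assumptions; adapt them to your own build and converted model.

import subprocess

# Assumed paths: point these at your compiled llama.cpp-style binary
# and your converted/quantized LLaMA model file.
LLAMA_BIN = "./main"
MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"

def generate(prompt, n_tokens=128, threads=4):
    # Run one CPU-only inference pass and return the raw text output.
    result = subprocess.run(
        [LLAMA_BIN,
         "-m", MODEL_PATH,     # converted model file
         "-p", prompt,         # prompt text
         "-n", str(n_tokens),  # tokens to generate
         "-t", str(threads)],  # CPU threads to use
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(generate("Explain what a CPU cache does in one sentence."))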

Tags: AI-development, natural-language-processing, machine-learning-inference, application-development
Status: Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 8 / 25

How are scores calculated? The four category scores sum to the overall total: 0 + 5 + 16 + 8 = 29 out of a possible 100 (25 per category).

Stars: 9
Forks: 1
Language: C
License: MIT
Last pushed: Mar 23, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VPanjeta/PyLLaMa-CPU"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
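
If you prefer to fetch the same data from Python, a minimal sketch using the requests library is below. It calls the endpoint from the curl example above and assumes the response is JSON; inspect the payload yourself to see the actual field names.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/VPanjeta/PyLLaMa-CPU")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response is assumed to be JSON; print it to inspect the schema.
print(resp.json())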