hpretila/llama.net
.NET wrapper for llama.cpp, for LLaMA language model inference on CPU. 🦙
A .NET library for integrating LLaMA language model inference directly into applications: you supply a LLaMA model file and a text prompt, and the library returns text generated from the model's predictions. It is aimed at .NET developers building applications that need local, CPU-based inference.
No commits in the last 6 months.
Use this if you are a .NET developer building applications that generate text with LLaMA models on CPU, specifically on Linux.
Not ideal if you work outside the .NET ecosystem, need to run models on Windows or macOS, or require GPU acceleration for inference.
Stars
58
Forks
9
Language
C#
License
MIT
Category
Last pushed
May 09, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hpretila/llama.net"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
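The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch: only the endpoint URL comes from this page, and the structure of the returned JSON is not documented here, so the fetch helper simply decodes whatever the service sends back.

```python
import json
import urllib.request

# Endpoint base taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Without an API key the service allows 100 requests/day;
    the JSON field names are whatever the API returns (not documented here).
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Endpoint for this repository:
print(quality_url("hpretila", "llama.net"))
```

Calling `fetch_quality("hpretila", "llama.net")` performs the same request as the curl example.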
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.