node-llama-cpp and LLamaSharp
These are ecosystem siblings: node-llama-cpp provides Node.js bindings for the underlying llama.cpp C++ inference engine, while LLamaSharp provides C#/.NET bindings for the same engine, allowing developers to run local LLMs in their preferred language/runtime.
About node-llama-cpp
withcatai/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp; enforce a JSON schema on model output at the generation level.
This project helps JavaScript and TypeScript developers integrate AI capabilities directly into their applications by running large language models (LLMs) on their own machines. Given a model file and prompts, it produces structured text, function calls, or embeddings, enabling features like chatbots, data summarization, or semantic search. It's designed for developers building AI-powered features without relying on external cloud services.
About LLamaSharp
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
This is a C#/.NET library for developers who want to integrate large language models (LLMs) like LLaMA and LLaVA directly into their applications. It loads pre-trained model files and runs them efficiently on a local machine's CPU or GPU. The library provides the tools to build applications that process text inputs and generate human-like text outputs, or, with multimodal models like LLaVA, interpret images as well.