AlenVelocity/langchain-llama
Run LLAMA LLMs in Node with Langchain
A library for Node.js developers who want to integrate LLaMA large language models directly into their applications. It lets you feed text prompts to a locally hosted LLaMA model and receive generated text back, making it suited to applications that need on-device AI text generation rather than calls to an external API.
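The integration pattern is small: load a local model, send it a prompt, await the generated text. The sketch below shows that flow; the `LocalLlama` interface and the stub behind it are assumptions for illustration, not the package's confirmed API, so consult the repository's README for the real import and class names.

```typescript
// Sketch of the local-generation flow (names here are illustrative assumptions,
// not the package's documented surface).

// Minimal shape such a wrapper typically exposes:
interface LocalLlama {
  call(prompt: string): Promise<string>;
}

// Stub standing in for the real binding so the flow below is runnable as-is.
// With the actual package, this object would wrap a local LLaMA weights file;
// either way, no external API service is involved.
const llama: LocalLlama = {
  async call(prompt: string) {
    return `(generated locally for: ${prompt})`;
  },
};

async function generate(prompt: string): Promise<string> {
  // Prompt in, generated text out — everything stays on-device.
  return llama.call(prompt);
}

generate("Explain event loops in one sentence.").then(console.log);
```

The same shape applies whatever the real class is called: construct the model once (loading weights is the expensive step), then reuse it across prompts.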
No commits in the last 6 months.
Use this if you are a Node.js developer looking to embed local LLaMA model capabilities into your application without relying on external API services.
Not ideal if you are not a Node.js developer or if you need a high-level, ready-to-use application rather than a programming library.
Stars: 39
Forks: 4
Language: TypeScript
License: MIT
Category:
Last pushed: Apr 11, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlenVelocity/langchain-llama"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
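The same endpoint can be queried from Node.js 18+ using the built-in `fetch`; a minimal sketch, assuming only the URL shown in the curl command above (the response shape is not documented here, so it is returned as-is):

```typescript
// Build the quality-API URL for a given GitHub owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `https://pt-edge.onrender.com/api/v1/quality/transformers/${owner}/${repo}`;
}

// Fetch and parse the JSON response (requires Node.js 18+ for global fetch).
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Example (makes a network request):
// fetchQuality("AlenVelocity", "langchain-llama").then(console.log);
```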
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.