jbilcke-hf/template-node-llama-express
A minimalist Docker project to get started with Node, llama-node and Express. Ready to be used in a Hugging Face Space.
This project helps developers quickly set up a local server for experimenting with large language models. You send a prompt via an HTTP request, and the server responds with text generated by the model. It is aimed at developers and technical users who want to integrate or test local language-model capabilities.
No commits in the last 6 months.
Use this if you are a developer looking for a basic, ready-to-use template to run a Llama-based language model locally via a web interface.
Not ideal if you are an end-user looking for a polished application to interact with large language models without any setup or coding.
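The prompt-in, text-out flow described above can be sketched as a tiny HTTP endpoint. This is an illustrative sketch, not the template's actual code: it uses Node's built-in `http` module instead of Express to stay dependency-free, and `generate` is a hypothetical placeholder for the llama-node model call.

```typescript
import http from "node:http";
import { URL } from "node:url";

// Hypothetical stand-in for the llama-node generation call;
// the real template loads a Llama model here instead.
async function generate(prompt: string): Promise<string> {
  return `echo: ${prompt}`; // placeholder model output
}

const server = http.createServer(async (req, res) => {
  // Parse the ?prompt= query parameter from the request URL.
  const url = new URL(req.url ?? "/", "http://localhost");
  const prompt = url.searchParams.get("prompt");
  if (!prompt) {
    res.writeHead(400).end("missing ?prompt= query parameter");
    return;
  }
  res
    .writeHead(200, { "content-type": "text/plain" })
    .end(await generate(prompt));
});

// Hugging Face Docker Spaces conventionally serve on port 7860:
// server.listen(7860);
export { server, generate };
```

A request such as `curl "http://localhost:7860/?prompt=hello"` would then return the generated text as plain text.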
Stars
8
Forks
1
Language
TypeScript
License
Apache-2.0
Last pushed
Jun 21, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jbilcke-hf/template-node-llama-express"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
e2b-dev/desktop
E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can...
geekjr/quickai
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art...
Azure-Samples/llama-index-javascript
This sample shows how to quickly get started with LlamaIndex.ai on Azure 🚀
AkagawaTsurunaki/zerolan-core
ZerolanCore integrates many open-source, locally deployable AI models, and aims to integrate a...