jbilcke-hf/template-node-llama-express

A minimalist Docker project to get started with Node, llama-node and Express. Ready to be used in a Hugging Face Space.

Score: 28 / 100 (Experimental)

This project helps developers quickly set up a local server for experimenting with large language models: you send a prompt via an HTTP request, and the server responds with text generated by the model. It is aimed at developers and technical users who want to integrate or test local language-model capabilities.
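The request flow above can be sketched in TypeScript. This is a minimal sketch using Node's built-in http module rather than Express to stay dependency-free; `generateText` is a hypothetical stand-in for the actual llama-node inference call, and port 7860 is assumed only because it is the conventional Hugging Face Spaces port:

```typescript
import * as http from "http";
import { URL } from "url";

// Hypothetical placeholder for the llama-node model call;
// the real template would return model-generated text here.
function generateText(prompt: string): string {
  return `Echo: ${prompt}`;
}

// Pull the prompt out of the query string, e.g. /?prompt=hello
function extractPrompt(requestUrl: string): string {
  const url = new URL(requestUrl, "http://localhost");
  return url.searchParams.get("prompt") ?? "";
}

const server = http.createServer((req, res) => {
  const prompt = extractPrompt(req.url ?? "/");
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(generateText(prompt));
});

// server.listen(7860); // Spaces conventionally expose port 7860
```

A client would then call something like `curl "http://localhost:7860/?prompt=hello"` and receive the generated text as the response body.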

No commits in the last 6 months.

Use this if you are a developer looking for a basic, ready-to-use template to run a Llama-based language model locally via a web interface.

Not ideal if you are an end-user looking for a polished application to interact with large language models without any setup or coding.

Tags: local LLM deployment, language model experimentation, server-side AI, developer tools, AI integration
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: TypeScript
License: Apache-2.0
Last pushed: Jun 21, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jbilcke-hf/template-node-llama-express"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.