rozek/node-red-flow-openai-api

Node-RED Flows for OpenAI-API-compatible endpoints calling llama.cpp

Score: 28 / 100 (Experimental)

This project helps developers and technical users run large language models (LLMs) such as LLaMA 2 on their own servers using Node-RED. It accepts standard OpenAI API requests and returns chat completions or embeddings, letting users leverage local hardware for AI tasks. It is ideal for those building applications with tools like LangChain or Flowise who want to keep control of their AI models and data.
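For illustration, a request might look like the following sketch. It assumes the flows expose the standard OpenAI routes (/v1/chat/completions and /v1/embeddings) on Node-RED's default port 1880; the host, port, and model names are placeholders to adjust for your own setup:

# Chat completion against the local flow (hypothetical host/port/model):
curl http://localhost:1880/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-2-7b-chat", "messages": [{"role": "user", "content": "Hello!"}]}'

# Embeddings from the same server:
curl http://localhost:1880/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-2-7b", "input": "text to embed"}'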

No commits in the last 6 months.

Use this if you want to run LLaMA 2 models on your own server to process natural language tasks and integrate with AI application development platforms like LangChain or Flowise, without relying on external cloud services.

Not ideal if you need a model other than LLaMA 2, prefer a fully managed cloud service for your AI tasks, or are not comfortable setting up a Node-RED server and compiling local executables.
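Because the endpoints mimic the OpenAI API, most OpenAI-compatible clients can be redirected at the local server by overriding their base URL. A minimal sketch, assuming the flows are served from http://localhost:1880/v1 (hypothetical; match it to your Node-RED instance):

# Point OpenAI-style clients (including LangChain's OpenAI wrappers) at the local flow:
export OPENAI_BASE_URL="http://localhost:1880/v1"   # read by newer OpenAI SDKs
export OPENAI_API_BASE="http://localhost:1880/v1"   # legacy variable used by older clients
export OPENAI_API_KEY="not-needed-locally"          # placeholder; a local server typically ignores it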

Tags: local-AI-inference, LLM-deployment, AI-application-development, natural-language-processing, data-privacy
Flags: Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 8
Forks: 1
Language: (none listed)
License: MIT
Last pushed: Aug 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rozek/node-red-flow-openai-api"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.