rozek/node-red-flow-openai-api
Node-RED Flows for OpenAI API compatible endpoints calling llama.cpp
This project helps developers and technical users run large language models (LLMs) such as LLaMA 2 on their own local servers using Node-RED. The flows accept standard OpenAI API requests and return chat completions or embeddings, letting users run AI tasks on local hardware. It is well suited for those building applications with tools like LangChain or Flowise who want to keep control of their AI models and data.
No commits in the last 6 months.
Use this if you want to run LLaMA 2 models on your own server to process natural language tasks and integrate with AI application development platforms like LangChain or Flowise, without relying on external cloud services.
Not ideal if you need to use a different LLM model than LLaMA 2, prefer a fully managed cloud service for your AI tasks, or are not comfortable setting up a Node-RED server and compiling local executables.
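Because the flows expose OpenAI-API-compatible endpoints, any OpenAI-style client can talk to them. A minimal sketch of a chat-completion request with curl, assuming the flow listens on Node-RED's default port 1880 and the usual OpenAI path; the host, port, path, and model name are all assumptions to adjust for your own setup:

```shell
# Hypothetical local endpoint: adjust host/port/path to match your Node-RED flow.
curl -s http://127.0.0.1:1880/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-2",
    "messages": [
      {"role": "user", "content": "Summarize what Node-RED does in one sentence."}
    ]
  }'
```

Clients built on the official OpenAI SDKs can usually be pointed at such a server by overriding the base URL, so no request code needs to change.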
Stars: 8
Forks: 1
Language: —
License: MIT
Category: —
Last pushed: Aug 08, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rozek/node-red-flow-openai-api"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
- mishushakov/llm-scraper: Turn any webpage into structured data using LLMs
- Mobile-Artificial-Intelligence/maid: Maid is a free and open source application for interfacing with llama.cpp models locally, and...
- run-llama/LlamaIndexTS: Data framework for your LLM applications. Focus on server side solution
- JHubi1/ollama-app: A modern and easy-to-use client for Ollama
- serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.