mabuonomo/ollama-rag-nodejs
A simple example of Retrieval-Augmented Generation (RAG) using Ollama embeddings with Node.js, TypeScript, Docker, and ChromaDB.
This project helps developers build custom AI applications that answer questions about their own documents. It takes text data and a question, then generates an answer using an AI model running locally on the developer's machine. It is aimed at software developers who want to integrate local large language models (LLMs) into their applications.
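The core of the flow described above is the retrieval step: embed the documents and the question, then pick the most similar documents as context for the model. In the actual project the embeddings come from Ollama and are stored in ChromaDB; the sketch below substitutes hand-made vectors and plain cosine similarity to illustrate the idea without a running server.

```typescript
// Minimal sketch of the retrieval step in a RAG pipeline.
// Embeddings are hand-made here; in the real project they would come from
// Ollama's embedding model and live in a ChromaDB collection.

interface Doc {
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the top-k documents most similar to the query embedding.
// The retrieved texts would then be prepended to the prompt sent to the LLM.
function retrieve(query: number[], docs: Doc[], k = 1): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

A vector database like ChromaDB performs exactly this nearest-neighbor search at scale, so the application code only supplies embeddings and a query.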
No commits in the last 6 months.
Use this if you are a developer who wants to understand and implement a basic Retrieval-Augmented Generation (RAG) system with local AI models using Node.js and Docker.
Not ideal if you are an end user looking for a ready-to-use AI application; this is foundational example code for developers.
Stars: 9
Forks: 2
Language: TypeScript
License: —
Category: —
Last pushed: Oct 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/mabuonomo/ollama-rag-nodejs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
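The same endpoint can be called from code. The sketch below wraps the URL shown in the curl command above in a small Node.js (18+) helper using the built-in `fetch`; the `fetchQuality` helper name and the assumption that the endpoint returns JSON are mine, not part of the API's documentation.

```typescript
// Hypothetical helper around the pt-edge quality API.
// Only the base URL is taken from the curl example above; the response
// shape is not documented here, so the result is typed as unknown.

const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag";

// Build the per-repository endpoint URL.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

// Fetch quality data for a repository. No API key is required
// for up to 100 requests/day, per the note above.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}

// Example: fetchQuality("mabuonomo", "ollama-rag-nodejs").then(console.log);
```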
Higher-rated alternatives
aws-samples/aws-genai-llm-chatbot
A modular and comprehensive solution to deploy a Multi-LLM and Multi-RAG powered chatbot (Amazon...
Aquiles-ai/Aquiles-RAG
A high-performance Retrieval-Augmented Generation (RAG) solution based on Redis, Qdrant or...
tavily-ai/crawl2rag
Crawl any website with Tavily, embed the content, and deploy the RAG on MongoDB Atlas vector search.
neondatabase/pgrag
Postgres extensions to support end-to-end Retrieval-Augmented Generation (RAG) pipelines
mithun50/groq-rag
Extended Groq SDK with RAG (Retrieval-Augmented Generation), web browsing, and AI agent...