RAG-Driven-Generative-AI and local-rag
About RAG-Driven-Generative-AI
Denis2054/RAG-Driven-Generative-AI
This repository provides programs for building Retrieval-Augmented Generation (RAG) pipelines for Generative AI with LlamaIndex, Deep Lake, and Pinecone, leveraging OpenAI and Hugging Face models for generation and evaluation.
This project provides practical guidance and code examples for building advanced Generative AI systems. It helps AI practitioners integrate their proprietary data into large language models to produce accurate, contextually relevant, and traceable outputs. You supply your documents, images, and other data, and get back responses grounded in your specific information.
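The core RAG pattern the repository implements at scale can be sketched in a few lines: retrieve the passages most relevant to a query, then build a prompt grounded in them. This is a minimal illustration only; it substitutes word-overlap scoring for a real vector store (Deep Lake or Pinecone) and stops short of the actual OpenAI or Hugging Face model call. All function names here are hypothetical, not from the repository.

```python
# Toy RAG sketch: word-overlap retrieval stands in for a vector store,
# and the grounded prompt is what would be sent to an LLM.

def score(query: str, passage: str) -> int:
    """Count query words that also appear in the passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Pinecone is a managed vector database for similarity search.",
    "Deep Lake stores tensors and embeddings for AI workloads.",
    "LlamaIndex connects LLMs to external data sources.",
]
top = retrieve("What is a vector database for similarity search?", docs)
prompt = build_prompt("What is a vector database?", top)
print(prompt)
```

In the repository's actual pipelines, the retrieval step runs over embeddings in Deep Lake or Pinecone rather than word overlap, but the retrieve-then-ground structure is the same.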
About local-rag
jonfairbanks/local-rag
Ingest files for retrieval-augmented generation (RAG) with open-source Large Language Models (LLMs), all without third parties or sensitive data leaving your network.
This tool helps you quickly get answers from your own documents using a conversational AI, without sending your private information to external services. You provide your local files, GitHub repositories, or website content, and it allows you to chat with an AI that draws knowledge only from those sources. It's ideal for anyone who needs to extract information from their own data securely and privately.
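The ingestion step such a local tool performs can be sketched as splitting each document into small, overlapping chunks so that every chunk is short enough to embed and retrieve on its own. This is an illustrative sketch, not local-rag's actual code; the chunk size and overlap values are arbitrary assumptions.

```python
# Hypothetical ingestion sketch: split text into overlapping word chunks.
# Overlap preserves context that would otherwise be cut at chunk boundaries.

def chunk_text(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into chunks of `size` words, each overlapping the
    previous chunk by `overlap` words."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + size]
        if piece:
            chunks.append(" ".join(piece))
        if start + size >= len(words):
            break
    return chunks

# A 50-word document yields three overlapping 15-word-stride chunks.
doc = " ".join(f"word{i}" for i in range(50))
chunks = chunk_text(doc)
print(len(chunks))
```

Each chunk would then be embedded and stored locally, so that chat queries retrieve only from your own files and never leave the network.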