J4NN0/llm-rag
Augments LLM prompts with retrieval-augmented generation (RAG) by integrating external custom data from a variety of sources, letting you chat with those documents.
This project helps you chat with your own documents and data sources as if they were a knowledgeable assistant. You feed in your own text files, PDFs, web pages, or other documents, and it allows you to ask questions and get answers directly from that information. This is for anyone who needs to quickly extract specific details or insights from a collection of their own proprietary documents without manually sifting through them.
No commits in the last 6 months.
Use this if you need to quickly find answers or extract information from a large set of your own documents, like reports, articles, or project files.
Not ideal if you're looking for a general-purpose chatbot that answers questions based on broad public knowledge, or if you only have a few simple documents to search.
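The retrieve-then-generate flow that this kind of project implements can be sketched in plain Python. The chunking, scoring, and prompt template below are illustrative assumptions, not the repository's actual code; a real setup would use embeddings and an LLM call where noted.

```python
# Minimal retrieval-augmented generation (RAG) sketch. All names here are
# hypothetical stand-ins, not the repo's API: a toy keyword-overlap retriever
# replaces vector search, and the final LLM call is left as a comment.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query: str, passage: str) -> int:
    """Toy relevance score: count query words appearing in the passage."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in passage.lower())

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved context."""
    joined = "\n---\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Example corpus standing in for your own files.
docs = [
    "The Q3 report shows revenue grew 12% year over year.",
    "Deployment requires Python 3.10 and an OpenAI API key.",
]
chunks = [c for d in docs for c in chunk(d)]
query = "What Python version is required?"
prompt = build_prompt(query, retrieve(query, chunks))
# `prompt` would then be sent to an LLM chat-completion endpoint.
```

The key design point is that the model only sees the top-k retrieved chunks, which is what lets it answer from proprietary documents it was never trained on.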
Stars: 20
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Jul 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/J4NN0/llm-rag"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
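The same endpoint can be called from Python with only the standard library. The URL path is taken from the curl command above; the shape of the JSON response is not documented here, so this sketch returns the parsed body as-is.

```python
# Hedged example of calling the quality API above. Only the endpoint path is
# known from the curl command; field names in the response are assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("J4NN0", "llm-rag"))
# https://pt-edge.onrender.com/api/v1/quality/rag/J4NN0/llm-rag
```

With a free API key, you would add it as a request header or query parameter per the provider's instructions (not shown above, since the key mechanism is not specified here).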
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)