Otman404/local-rag-llamaindex
Local LlamaIndex RAG that helps researchers quickly navigate research papers
This project helps researchers quickly get answers from a collection of academic papers without needing an internet connection. You provide a set of research papers, ask a question, and it returns a clear answer with citations to the original sources. This is ideal for academics, scientists, or students who need to synthesize information from many documents efficiently.
133 stars. No commits in the last 6 months.
Use this if you need to extract specific information or insights from a large collection of research papers offline.
Not ideal if you need to analyze real-time data or require a tool for general web search rather than specific document collections.
Stars: 133
Forks: 23
Language: Python
License: —
Category:
Last pushed: May 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Otman404/local-rag-llamaindex"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)