rag-from-scratch and ragmate-lagacy
These are complementary tools: the first provides educational foundations for understanding RAG architecture (embeddings, retrieval, generation), while the second applies those concepts as a practical code-indexing server that performs retrieval-augmented completions for editors.
About rag-from-scratch
pguso/rag-from-scratch
Demystify RAG by building it from scratch. Local LLMs, no black boxes - real understanding of embeddings, vector search, retrieval, and context-augmented generation.
This project helps software developers understand and implement Retrieval-Augmented Generation (RAG) systems. It breaks down the process of turning unstructured text documents into numerical representations, storing them efficiently, and then using a query to retrieve the most relevant information. Developers can use this to build applications that provide highly accurate, context-aware answers from custom knowledge bases using local language models, rather than relying on external APIs.
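The pipeline described above (embed documents, search vectors, augment the prompt) can be sketched in a few lines. This is an illustrative toy, not code from rag-from-scratch: the bag-of-words `embed` stands in for a real local embedding model, and the function names are invented for this example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts. A real pipeline would
    # call a local embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Vector search: rank all documents by similarity to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Context-augmented generation: prepend the retrieved chunks to the
    # question before sending it to the local LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Embeddings map text to vectors.",
    "Vector search finds nearest neighbors.",
    "Bananas are yellow fruit.",
]
print(build_prompt("How does vector search work?", docs))
```

In a real implementation the embeddings come from a model rather than word counts, and a vector store replaces the linear scan, but the retrieve-then-augment shape stays the same.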
About ragmate-lagacy
ragmate/ragmate-lagacy
Local RAG server for code editors. Scans your codebase, builds a local context index, and connects to any external LLM for context-aware completions and assistance.
Implements local semantic search over your codebase using embeddings and file change tracking, injecting relevant code snippets into JetBrains AI Assistant prompts via an HTTP bridge. Supports any LLM provider (OpenAI, Mistral, local models) with pluggable embedding models, and automatically reindexes your project while respecting Git branch context—all running in Docker without external cloud dependencies.
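The "file change tracking" part of that design can be illustrated with a small incremental index: a file is re-chunked and re-embedded only when its content hash changes. This is a hypothetical sketch of the idea, not ragmate's actual implementation; the `CodeIndex` class and its methods are invented for this example, and real chunking would follow syntax (functions, classes) rather than blank lines.

```python
import hashlib

class CodeIndex:
    """Toy incremental code index keyed by content hash (illustrative)."""

    def __init__(self) -> None:
        self.hashes: dict[str, str] = {}        # path -> content hash
        self.chunks: dict[str, list[str]] = {}  # path -> indexed chunks

    def reindex(self, path: str, content: str) -> bool:
        # Skip work when the file is unchanged since the last pass.
        digest = hashlib.sha256(content.encode()).hexdigest()
        if self.hashes.get(path) == digest:
            return False
        self.hashes[path] = digest
        # Naive chunking on blank lines; a real indexer would split on
        # syntactic units and embed each chunk for semantic search.
        self.chunks[path] = [c for c in content.split("\n\n") if c.strip()]
        return True

idx = CodeIndex()
src = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b"
print(idx.reindex("math_utils.py", src))  # first pass indexes the file
print(idx.reindex("math_utils.py", src))  # unchanged file is skipped
```

Periodically running such a pass over the project tree (per Git branch, as the blurb notes) keeps the index fresh without re-embedding unchanged files.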