vmlinuzx/llmc
One-stop shop: a local-first RAG stack with intelligent polyglot code/docs chunking, remote code execution, local Llama enrichment, progressive-disclosure tools, an MCP server, and sandboxed security.
This tool helps software developers and engineering teams dramatically reduce the cost of using Large Language Models (LLMs) to understand and interact with their codebase. It takes your code and technical documentation as input, intelligently finds the most relevant pieces, and sends only those snippets to an LLM, returning a cost-optimized answer or code suggestion. This is ideal for developers, tech leads, and anyone regularly using LLMs for code-related tasks like refactoring, debugging, or generating documentation.
Use this if you are a software developer frequently querying LLMs about your codebase and want to cut down on API token costs and improve data security.
Not ideal if you primarily use LLMs for non-code-related tasks or if your codebase is not frequently analyzed by LLMs.
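The retrieve-then-ask flow described above can be sketched in a few lines. This is a minimal illustration, not llmc's actual implementation: all function names are hypothetical, chunking is naive fixed-size line splitting rather than polyglot-aware parsing, and a bag-of-words cosine score stands in for real embedding similarity. Only the top-scoring chunks would be placed in the LLM prompt, which is where the token savings come from.

```python
import math
import re
from collections import Counter


def chunk(text: str, size: int = 4) -> list[str]:
    """Split a document into fixed-size line chunks.

    A stand-in for the language-aware chunking a real tool would do.
    """
    lines = text.splitlines()
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]


def score(query: str, passage: str) -> float:
    """Cosine similarity over bag-of-words counts.

    A cheap proxy for the embedding similarity a real RAG stack would use.
    """
    tokens = lambda s: Counter(re.findall(r"[a-z0-9_]+", s.lower()))
    q, p = tokens(query), tokens(passage)
    dot = sum(q[w] * p[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0


def top_snippets(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the k highest-scoring chunks across all docs.

    Only these snippets, not the whole codebase, would be sent to the LLM.
    """
    chunks = [c for text in docs.values() for c in chunk(text)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

For example, asking "how does add work" against a small codebase would surface the chunk containing `def add` while leaving unrelated files out of the prompt entirely.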
Stars: 31
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/vmlinuzx/llmc"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)