vmlinuzx/llmc

One-stop shop: a local-first RAG stack with intelligent polyglot code/docs indexing, remote code execution, local Llama enrichment, progressive-disclosure tools, an MCP server, and sandboxed security.

Quality score: 41 / 100 (Emerging)

This tool helps software developers and engineering teams dramatically reduce the cost of using Large Language Models (LLMs) to understand and interact with their codebase. It takes your code and technical documentation as input, intelligently finds the most relevant pieces, and sends only those snippets to an LLM, returning a cost-optimized answer or code suggestion. This is ideal for developers, tech leads, and anyone regularly using LLMs for code-related tasks like refactoring, debugging, or generating documentation.

Use this if you are a software developer frequently querying LLMs about your codebase and want to cut down on API token costs and improve data security.

Not ideal if you primarily use LLMs for non-code-related tasks or if your codebase is not frequently analyzed by LLMs.
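The description above boils down to a retrieve-then-ask pattern: score your code snippets against a query, keep only the most relevant ones, and send just those to the LLM. A minimal sketch of that idea, using a toy keyword-overlap score (llmc's actual pipeline, with embeddings and local Llama enrichment, is not shown here and these function names are illustrative):

```python
# Toy illustration of retrieve-then-ask: rank snippets by relevance,
# send only the top-k to the LLM to cut token costs.

def relevance(query: str, snippet: str) -> int:
    """Count query words that occur in the snippet (toy metric;
    real systems use embeddings, not substring matching)."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in snippet.lower())

def top_snippets(query: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant snippets; only these reach the LLM."""
    return sorted(snippets, key=lambda s: relevance(query, s), reverse=True)[:k]

snippets = [
    "def parse_config(path): ...",
    "def connect_db(url): ...",
    "def render_template(name, ctx): ...",
]
print(top_snippets("how do I parse the config file", snippets, k=1))
```

Sending one matched snippet instead of the whole file is where the token savings come from; the quality of the retrieval step determines the quality of the answer.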

software-development code-analysis LLM-ops developer-productivity cost-optimization
No package · No dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 13 / 25
Community: 11 / 25


Stars: 31
Forks: 4
Language: Python
License: MIT
Last pushed: Feb 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/vmlinuzx/llmc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
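The same endpoint can be called from Python using only the standard library. This is a sketch equivalent to the curl command above; the helper names are illustrative, and a JSON response body is an assumption (the API's actual response schema is not documented here):

```python
import json
import urllib.request

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given repository."""
    return f"https://pt-edge.onrender.com/api/v1/quality/rag/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("vmlinuzx", "llmc"))
```

A free key (for the higher 1,000/day limit) would typically be passed as a request header, but how this API expects it is not specified here.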