balisujohn/localpilot
codellama on CPU without Docker
This VSCode extension helps software developers write code faster by interacting with a local AI model. You provide existing code or a prompt within your editor, and the AI suggests completions or modifications directly. It's designed for developers who want AI assistance without relying on cloud services or powerful GPUs.
No commits in the last 6 months.
Use this if you are a developer looking for an in-editor AI coding assistant that runs locally on your CPU and integrates with CodeLlama or WizardCoder.
Not ideal if you want code generation without running a separate local server to host the language model.
Stars
25
Forks
7
Language
JavaScript
License
MIT
Category
AI Coding
Last pushed
Feb 08, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/balisujohn/localpilot"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
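The endpoint follows a fixed pattern: a base URL plus the repository's `owner/repo` path. A minimal Python sketch for building the request URL and fetching the data, assuming the API returns JSON (the response schema is not documented here):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ai-coding"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the payload. Assumes the API responds with JSON."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example:
#   quality_url("balisujohn", "localpilot")
#   → "https://pt-edge.onrender.com/api/v1/quality/ai-coding/balisujohn/localpilot"
```

Unauthenticated calls are limited to 100 requests/day, so batch lookups across many repositories will want the free key for the higher limit.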
Higher-rated alternatives
suhaibbinyounis/github-copilot-api-vscode
Unlock GitHub Copilot as a local API Gateway. Use Copilot with Cursor, LangChain, and any...
Inc44/CoFlu
CoFlu is a powerful text manipulation, generation, and comparison tool. It's designed for tasks...
quack-ai/companion
VSCode coding companion for software teams 🦆 Turn your team insights into a portable...
ErikBjare/are-copilots-local-yet
Are Copilots Local Yet? The frontier of local LLM Copilots for code completion, project...
mjrusso/wingman
Emacs package for LLM-assisted code/text completion