xencon/aixcl
Local-first development platform with LLM integration.
This project helps software developers integrate Large Language Models (LLMs) directly into their local coding environment, keeping data private and under their full control. It lets developers run a variety of LLMs on their own hardware, managing models and switching inference engines through a simple command-line interface and a web interface. The result is a locally hosted AI stack that powers development tools such as OpenCode for on-device chat and code assistance.
Use this if you are a software developer who needs LLMs for coding tasks but requires complete control over data privacy and model deployment, without relying on external APIs.
Not ideal if you lack the necessary hardware (8 GB VRAM, 32 GB RAM, 128 GB disk space) or want a cloud-based, managed LLM service.
Stars: 12
Forks: 10
Language: Shell
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xencon/aixcl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
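The endpoint above follows an `owner/repo` path pattern, so the same call works for any listed project. A minimal sketch of a reusable helper, assuming only the URL structure shown in the curl example above (the response format and any API-key mechanism are not specified here):

```shell
#!/bin/sh
# Hypothetical helper: build the quality-API URL for a given owner/repo pair.
# The base path is taken from the curl example above; everything else is assumed.
api_url() {
  echo "https://pt-edge.onrender.com/api/v1/quality/llm-tools/$1/$2"
}

# Print the URL for this repository.
api_url xencon aixcl
```

In practice you would pipe the result to curl, e.g. `curl -s "$(api_url xencon aixcl)" | jq .` to pretty-print the JSON response (assuming the API returns JSON).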
Higher-rated alternatives
mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
e2b-dev/desktop
E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can...
geekjr/quickai
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art...
Azure-Samples/llama-index-javascript
This sample shows how to quickly get started with LlamaIndex.ai on Azure 🚀
AkagawaTsurunaki/zerolan-core
ZerolanCore integrates many open-source, locally deployable AI models, and aims to integrate a...