xencon/aixcl

Local-first development platform with LLM integration.

Quality score: 48 / 100 (Emerging)

This project helps software developers integrate Large Language Models (LLMs) directly into their local coding environment, ensuring privacy and full control. It allows developers to run various LLMs on their own hardware, managing models and switching inference engines through a simple command-line interface and web interface. The output is a locally hosted AI stack that powers development tools like OpenCode for on-device chat and code assistance.

Use this if you are a software developer who needs to use LLMs for coding tasks but requires complete control over data privacy and model deployment, without relying on external APIs.

Not ideal if you don't have the necessary hardware (8 GB VRAM, 32 GB RAM, 128 GB disk space) or are looking for a cloud-based, managed LLM service.

Tags: local-AI-development, developer-tools, privacy-focused-AI, LLM-integration, coding-assistance
No package published · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 17 / 25


Stars: 12
Forks: 10
Language: Shell
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xencon/aixcl"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
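As a convenience, the call above can be wrapped in a small script that builds the endpoint URL from its parts. This is a sketch based only on the URL shown here: the `/api/v1/quality/<category>/<owner>/<repo>` pattern, the `llm-tools` category, and the JSON response shape are assumptions, not documented API behavior.

```shell
#!/usr/bin/env sh
# Build the quality-API URL for a repo (pattern inferred from the
# example call above; category names other than "llm-tools" are a guess).
category="llm-tools"
owner="xencon"
repo="aixcl"
url="https://pt-edge.onrender.com/api/v1/quality/${category}/${owner}/${repo}"

echo "$url"
# Fetch and pretty-print the response (requires curl and jq, and
# assumes the endpoint returns JSON):
#   curl -s "$url" | jq .
```

Swapping `owner` and `repo` lets the same script query other projects, subject to the daily request limits noted above.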