mit-han-lab/TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library

Score: 45 / 100 (Emerging)

This project lets you run advanced AI chatbots, like those that write code or describe images, directly on your laptop, car, or robot without needing an internet connection. It takes a large language model and compresses it so it can process your text or image inputs locally, giving you instant replies and better privacy. This is for developers or hobbyists who want to integrate powerful AI features into on-device applications.

944 stars. No commits in the last 6 months.

Use this if you are a developer looking to embed large language model (LLM) or vision language model (VLM) capabilities directly into an application or device for real-time, private, and offline AI interactions.

Not ideal if you are an end-user without programming experience, or if you primarily rely on cloud-based AI services and are less concerned with local execution and data privacy.

edge-AI on-device-inference AI-assistants robotics-AI embedded-systems
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 944
Forks: 95
Language: C++
License: MIT
Last pushed: Jul 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mit-han-lab/TinyChatEngine"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
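The endpoint in the curl command above follows a `/quality/llm-tools/{owner}/{repo}` pattern. A minimal Python sketch for calling it programmatically is shown below; note that the shape of the JSON response (field names such as `score` or `maintenance`) is an assumption, as the API's response schema is not documented here:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report as JSON.

    The response field names (e.g. 'score', 'maintenance') are
    assumed for illustration, not confirmed by the API docs.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    report = fetch_quality("mit-han-lab", "TinyChatEngine")
    print(json.dumps(report, indent=2))
```

Without an API key this stays within the 100-requests/day anonymous limit; with a free key (1,000/day), the key would typically be passed as a header or query parameter per the service's instructions.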