qwen.cpp and qwen2.cpp

These are ecosystem siblings serving different implementation needs: QwenLM/qwen.cpp is the official C++ inference engine for the original Qwen models, while yvonwin/qwen2.cpp is a community fork that extends the same approach to support both Qwen2 and Llama3 models.

| | qwen.cpp | qwen2.cpp |
|---|---|---|
| Overall score | 43 (Emerging) | 33 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 10/25 | 8/25 |
| Maturity | 16/25 | 16/25 |
| Community | 17/25 | 9/25 |
| Stars | 619 | 48 |
| Forks | 61 | 4 |
| Downloads | (none listed) | (none listed) |
| Commits (30d) | 0 | 0 |
| Language | C++ | C++ |
| License | (none listed) | (none listed) |
| Status | Archived; stale 6m; no package; no dependents | Stale 6m; no package; no dependents |

About qwen.cpp

QwenLM/qwen.cpp

C++ implementation of Qwen-LM

About qwen2.cpp

yvonwin/qwen2.cpp

qwen2 and llama3 cpp implementation

This project allows you to run powerful large language models (LLMs) like Qwen2 and Llama3 directly on your own computer, even without specialized cloud infrastructure. You provide a pre-trained model, and it gives you a local, interactive chatbot or an API server for integrating AI into your applications. It's designed for technical users who want to deploy and experiment with advanced language AI locally.
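As a rough sketch of what that local deployment workflow looks like, ggml-style C++ engines of this kind are typically built with CMake, then pointed at a converted, quantized model file. The repository URL matches yvonwin/qwen2.cpp, but the convert-script name, its flags, and the binary path below are illustrative assumptions; check the project's README for the exact commands:

```shell
# Clone and build with CMake (standard workflow for ggml-based C++ engines).
git clone --recursive https://github.com/yvonwin/qwen2.cpp
cd qwen2.cpp
cmake -B build
cmake --build build -j

# Convert a Hugging Face checkpoint to a quantized ggml file, then start an
# interactive chat session. Script name, flags, and binary path are assumptions.
python convert.py -i Qwen/Qwen2-7B-Instruct -t q4_0 -o qwen2_7b-ggml.bin
./build/bin/main -m qwen2_7b-ggml.bin -i
```

The quantization step (here `q4_0`, a common 4-bit scheme in the ggml ecosystem) is what makes it practical to run a 7B-parameter model on an ordinary desktop without specialized hardware.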

Topics: local-AI-deployment, large-language-models, AI-model-inference, private-AI-chatbots

Scores updated daily from GitHub, PyPI, and npm data.