Deep-Spark/DeepSparkInference

DeepSparkInference curates 216 inference models, spanning both small models and LLMs. The small models cover fields such as computer vision, natural language processing, and speech recognition; the LLMs are served through frameworks including vLLM, TGI, and LMDeploy. This repository is a mirror of the Gitee repository.

Score: 49 / 100 (Emerging)

This project provides a collection of pre-optimized AI models ready for deployment. It covers tasks such as text understanding, speech recognition, and image analysis, delivering high-performance inference. It is ideal for AI solution developers or system integrators working with specific Chinese-made GPU hardware who need to quickly add AI capabilities to their applications.

Use this if you are developing AI applications and need access to a curated library of pre-configured deep learning models optimized for specific Chinese GPU inference engines.

Not ideal if you are a general AI practitioner without access to or interest in integrating with Iluvatar GPUs and their specific inference engines (IGIE or ixRT).

Tags: AI deployment, GPU optimization, natural language processing, computer vision, speech recognition
No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 28
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Deep-Spark/DeepSparkInference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
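The curl call above can also be made from Python. The sketch below builds the endpoint URL from its path components and shows where a request would go; the response schema is not documented here, so the field access is commented out and the `score` key is an assumption.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a project (ecosystem/owner/repo path)."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "Deep-Spark", "DeepSparkInference")
print(url)

# Fetching the data (uncomment to run; 'score' is an assumed field name,
# since the response schema is not shown on this page):
# data = json.load(urlopen(url))
# print(data.get("score"))
```

Without a key this stays within the 100 requests/day limit described above.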