m0dulo/InferSpore

🌱 A fully independent Large Language Model (LLM) inference engine, built on NVIDIA's cuBLAS and CUB libraries. 🧩

Quality score: 36 / 100 (Emerging)

InferSpore is a tool for developers who want to run large language models on their own infrastructure without relying on external services or complex frameworks. It takes a trained LLM and efficiently performs inference, producing the model's outputs. This is ideal for developers building applications that integrate LLMs and need fine-grained control over the deployment environment.
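To make "performs inference, producing the model's outputs" concrete, here is a minimal sketch of the autoregressive decode loop at the heart of any LLM inference engine. InferSpore implements this on the GPU with cuBLAS/CUB kernels; the toy next-token function below is a hypothetical stand-in for a real forward pass, not InferSpore's API.

```python
def toy_model(tokens):
    """Hypothetical stand-in for a model forward pass: returns a next token id.

    A real engine would launch GPU kernels (matrix multiplies via cuBLAS,
    reductions via CUB) here; this dummy rule just keeps the sketch runnable.
    """
    return (sum(tokens) + len(tokens)) % 50

def generate(prompt_tokens, max_new_tokens, eos_token=0):
    """Greedy autoregressive decoding: feed the growing sequence back in,
    one token at a time, until the budget or an end-of-sequence token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = toy_model(tokens)
        tokens.append(next_token)
        if next_token == eos_token:  # stop early at end-of-sequence
            break
    return tokens

print(generate([3, 7, 11], max_new_tokens=4))
```

The loop is sequential by nature, which is why an inference engine's performance hinges on how fast each per-token forward pass runs on the GPU.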

No commits in the last 6 months.

Use this if you are a developer looking for a standalone, high-performance engine to run large language models directly on NVIDIA GPUs.

Not ideal if you are an end-user looking for a pre-built application or a simple API to interact with large language models.

LLM deployment · GPU inference · ML infrastructure · Custom AI applications · High-performance computing
Stale (6 months) · No package · No dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 32
Forks: 4
Language: Cuda
License: MIT
Last pushed: Jun 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/m0dulo/InferSpore"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
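The same endpoint can be called from code. This small helper builds the URL shown in the curl example above; the base path (including the "transformers" segment) is reproduced verbatim from that example, and the helper itself is an illustrative convenience, not part of any official client.

```python
from urllib.parse import quote

# Base path copied verbatim from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("m0dulo", "InferSpore"))
# -> https://pt-edge.onrender.com/api/v1/quality/transformers/m0dulo/InferSpore
```

Fetch the resulting URL with any HTTP client (the response format is whatever the API returns; it is not documented on this page).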