Notnaton/microllm

My own implementation to run inference on local LLM models

Score: 28 / 100 (Experimental)

Microllm helps developers run large language models (LLMs) directly on their personal computers rather than relying on cloud services. It takes a model file in GGUF format as input and lets you perform basic inference tasks. It is aimed at software developers and researchers who want to test or use LLMs privately or offline.
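This page does not document microllm's own entry points, so the snippet below is only a rough sketch of the kind of local GGUF workflow it targets, using the separate llama-cpp-python package; the model path and parameters are placeholders, not part of microllm.

from llama_cpp import Llama

# Load a local GGUF model file (the path is a placeholder).
llm = Llama(model_path="./models/example.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion fully offline.
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])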

No commits in the last 6 months.

Use this if you are a developer looking for a straightforward, local way to run LLM inference without external dependencies.

Not ideal if you need a full-featured LLM application with advanced capabilities like fine-tuning, complex prompt engineering, or robust token generation out of the box.

local-inference LLM-development model-deployment edge-AI private-AI
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25

Stars: 8
Forks: 1
Language: Python
License: AGPL-3.0
Last pushed: Sep 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Notnaton/microllm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
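If you prefer to consume the endpoint programmatically, a minimal Python sketch is below; the response field names are assumptions based on the scores shown on this page, so inspect the actual JSON payload before relying on specific keys.

import requests

# Fetch the quality report (no API key needed for up to 100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Notnaton/microllm"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
report = resp.json()

# Key names here are assumptions; check the real payload first.
print(report.get("score"))        # e.g. 28
print(report.get("maintenance"))  # e.g. 0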