rbitr/llm.f90

LLM inference in Fortran

Score: 37 / 100 (Emerging)

This project allows developers to run large language models (LLMs) on their own computers using Fortran. It takes a pre-trained LLM model file (like a GGUF file) and a text prompt as input, then generates text completions. The output is the generated text and performance metrics. This is for developers or researchers who want direct control over LLM inference on CPU without complex frameworks.

No commits in the last 6 months.

Use this if you are a developer who needs to run LLM inference on a CPU with minimal dependencies, wants high performance from a simple, hackable codebase, and wants to integrate or customize the language model at a low level.

Not ideal if you are a non-developer seeking an out-of-the-box application for general LLM use without programming, or if you require extensive multi-platform support or GPU acceleration directly from this tool.

LLM-inference-development CPU-optimization scientific-computing custom-language-models embedded-AI
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 64
Forks: 8
Language: Fortran
License: MIT
Last pushed: May 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rbitr/llm.f90"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
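For programmatic use, the same endpoint the curl command above hits can be queried from Python. This is a minimal sketch: the response is assumed to be JSON, and the sub-score field names (`maintenance`, `adoption`, `maturity`, `community`) are assumptions inferred from the score breakdown on this page, not a documented schema.

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/rbitr/llm.f90"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the raw quality report (no API key needed, 100 requests/day)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def total_score(report: dict) -> int:
    """Sum the four 0-25 sub-scores (assumed keys) into the 0-100 total."""
    return sum(report.get(k, 0) for k in ("maintenance", "adoption", "maturity", "community"))

if __name__ == "__main__":
    # Offline sample mirroring the breakdown shown on this page.
    sample = {"maintenance": 0, "adoption": 8, "maturity": 16, "community": 13}
    print(total_score(sample))  # 37, matching the score shown above
```

The sub-scores on this page (0 + 8 + 16 + 13) do sum to the displayed 37/100, so this aggregation matches the card even if the real API shapes its response differently.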