BobMcDear/llaf

LLMs in Futhark

Score: 28 / 100
Experimental

This project offers a way to perform large language model (LLM) inference using Futhark, a functional programming language. It takes pre-trained LLM parameters and an initial text context as input, then generates additional text tokens. This tool is designed for developers who are building high-performance deep learning applications and are interested in exploring alternative languages for GPU-accelerated array processing.
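To give a flavor of the data-parallel style this kind of project relies on, here is a minimal sketch of a numerically stable softmax in Futhark, a building block of attention in transformer inference. This is an illustrative example written for this description, not code taken from the llaf repository.

```futhark
-- Numerically stable softmax over a vector of logits.
-- Subtracting the maximum before exponentiating avoids overflow.
def softmax [n] (xs: [n]f32): [n]f32 =
  let m  = f32.maximum xs
  let es = map (\x -> f32.exp (x - m)) xs
  let s  = f32.sum es
  in map (/ s) es
```

The `map` and reduction operators compile to data-parallel GPU or multicore code, which is the appeal of Futhark for workloads like LLM inference.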

No commits in the last 6 months.

Use this if you are a developer with a functional programming background (such as Haskell or ML-family languages) who wants to implement and experiment with LLM inference in Futhark, targeting data-parallel performance on GPUs or multi-threaded CPUs.

Not ideal if you need state-of-the-art performance for LLM inference, as dedicated deep learning frameworks like PyTorch will be significantly faster.

deep-learning-engineering GPU-programming functional-programming LLM-inference high-performance-computing
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 6 / 25


Stars

12

Forks

1

Language

Futhark

License

MIT

Last pushed

Sep 01, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BobMcDear/llaf"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.