BeRo1985/pasllm

PasLLM - LLM inference engine in Object Pascal (synced from my private work repository)

Score: 50 / 100 (Established)

This project lets developers integrate supported Large Language Models (LLMs) into their Object Pascal applications. It loads pre-trained model weights (e.g. Llama, Qwen, Phi) and runs inference efficiently on a CPU, even on resource-constrained systems, with the generated text produced directly inside the application. It is aimed primarily at Pascal developers who need to embed local AI capabilities.

Use this if you are an Object Pascal developer building applications and need to integrate local, CPU-based inference for specific LLMs.

Not ideal if you require GPU acceleration, multi-modal capabilities, or support for the very latest LLM architectures like Mamba.

Tags: Object Pascal development · local AI integration · embedded LLMs · CPU inference · application development
No package · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 13 / 25
Community: 18 / 25


Stars: 76
Forks: 15
Language: Pascal
License: AGPL-3.0
Last pushed: Jan 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BeRo1985/pasllm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
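For programmatic use, the `curl` call above can be wrapped in a small client. A minimal Python sketch, using only the standard library; note that the response schema is not documented on this page, so the example returns the raw parsed JSON rather than assuming field names, and the `Authorization` header used for the optional API key is a hypothetical guess (the page does not say how a key is sent):

```python
import json
import urllib.request

# Endpoint base taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality report and return the parsed JSON payload.

    The JSON structure is undocumented here, so callers should inspect
    the returned dict themselves instead of relying on specific keys.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Hypothetical header name; adjust if the API docs specify otherwise.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage would then be `fetch_quality("BeRo1985", "pasllm")`, staying within the 100-requests/day anonymous limit noted above.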