eigencore/Tlama_124M

Tlama (124M) is a language model based on Llama 3 (127M), optimized by EigenCore. It is designed for computational efficiency and scalability, allowing it to run on resource-limited hardware without compromising performance.

27 / 100 (Experimental)

This project provides a compact and efficient language model designed for developers who need to integrate AI text generation into their applications without requiring high-end hardware. It takes text prompts as input and generates human-like text outputs, allowing developers to build features like content creation, chatbots, or summarization tools. It's ideal for those building AI-powered features on consumer-grade GPUs or systems with limited computational resources.
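As a sketch of that integration, the model could be loaded through the Hugging Face `transformers` pipeline API; this assumes the checkpoint is published under the `eigencore/Tlama_124M` identifier and that `transformers` (with a backend such as PyTorch) is installed:

```python
def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Return a text continuation of `prompt` generated by Tlama 124M."""
    # transformers is imported lazily so the heavy dependency (and the
    # model download) is only pulled in when generation is requested.
    from transformers import pipeline

    # Assumption: the checkpoint is hosted as "eigencore/Tlama_124M".
    generator = pipeline("text-generation", model="eigencore/Tlama_124M")
    result = generator(prompt, max_new_tokens=max_new_tokens)
    return result[0]["generated_text"]

# Example usage (downloads the model weights on first call):
# print(generate("Write a short greeting:"))
```

Because the model is only 124M parameters, this kind of pipeline can run on a single consumer GPU or even CPU, which is the use case the project targets.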

No commits in the last 6 months.

Use this if you are a developer looking for a language model that offers competitive performance while being exceptionally efficient and trainable on consumer hardware like an NVIDIA RTX 4060.

Not ideal if you need a language model with the absolute highest accuracy for complex, cutting-edge NLP tasks, as it prioritizes efficiency and smaller size.

AI-application-development edge-AI resource-constrained-systems text-generation-APIs machine-learning-engineering
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25

How are scores calculated?
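The per-category breakdown above can be sanity-checked with a quick sum; a minimal sketch, assuming the overall score is simply the sum of the four 25-point categories:

```python
# Sketch: check the overall quality score against its category breakdown.
# Assumption: the 100-point score is the plain sum of four 25-point parts.
scores = {"Maintenance": 0, "Adoption": 5, "Maturity": 16, "Community": 6}

total = sum(scores.values())
print(total)  # 27, matching the 27/100 overall score shown above
```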

Stars: 12
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Mar 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eigencore/Tlama_124M"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
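The curl call above can also be reproduced in Python; a minimal sketch using only the standard library (the shape of the JSON response is not documented here, so it is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(platform: str, owner: str, model: str) -> str:
    """Build the quality-API endpoint URL for a given project."""
    return f"{BASE}/{platform}/{owner}/{model}"

def fetch_quality(platform: str, owner: str, model: str) -> dict:
    """Fetch the quality report as a parsed JSON dict.

    Uses the keyless tier (100 requests/day per the note above).
    """
    with urllib.request.urlopen(quality_url(platform, owner, model)) as resp:
        return json.load(resp)

# Example usage (performs a live HTTP request):
# report = fetch_quality("transformers", "eigencore", "Tlama_124M")
# print(json.dumps(report, indent=2))
```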