akanyaani/miniLLAMA
A simplified LLAMA implementation for training and inference tasks.
This project helps machine learning engineers and researchers understand the core mechanics of large language models such as LLAMA and LLAMA2. It takes raw text as input, processes it, and lets you pre-train a simplified LLAMA model. The output is a working model that generates text from your prompts, offering a hands-on way to grasp these architectures.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who wants to learn the fundamental architecture and implementation details of LLAMA and LLAMA2 by building and experimenting with a simplified version.
Not ideal if you need to deploy a production-ready large language model, or require multi-GPU support and advanced features such as instruction tuning; the project prioritizes educational clarity over deployment robustness.
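The repository's own code is not shown on this page, but to give a flavor of what a simplified LLAMA implementation covers, here is a minimal, dependency-free sketch of RMSNorm — the normalization LLaMA uses in place of classic LayerNorm. The function name and vector shapes are illustrative, not taken from this repo:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm: scale a vector by the reciprocal of its root mean square.

    Unlike LayerNorm, there is no mean subtraction and no bias; each
    element is divided by the RMS of the vector, then scaled by a
    learned per-element weight. (Illustrative sketch, not the repo's code.)
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# With unit weights, the normalized output has an RMS of roughly 1.
x = [1.0, 2.0, 3.0, 4.0]
w = [1.0] * len(x)
y = rms_norm(x, w)
```

In a full LLaMA-style block, this normalization is applied before the attention and feed-forward sublayers rather than after, which is one of the small but characteristic differences a learner can trace through a codebase like this one.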
Stars
36
Forks
3
Language
Python
License
MIT
Category
Last pushed
Jul 09, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/akanyaani/miniLLAMA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
hkproj/pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
4AI/LS-LLaMA
A Simple but Powerful SOTA NER Model | Official Code For Label Supervised LLaMA Finetuning
luchangli03/export_llama_to_onnx
export llama to onnx
ayaka14732/llama-2-jax
JAX implementation of the Llama 2 model
harleyszhang/lite_llama
A light llama-like llm inference framework based on the triton kernel.