FareedKhan-dev/Building-llama3-from-scratch

LLaMA 3 is one of the most promising open-source models after Mistral; this project recreates its architecture in a simpler manner.

Score: 40/100 (Emerging)

This project helps machine learning engineers and researchers understand the inner workings of large language models by guiding them through the process of building a LLaMA 3-like model from scratch. It takes foundational knowledge of neural networks and Transformer architecture as input, and outputs a working, simplified implementation of LLaMA 3. This is ideal for those looking to demystify how these advanced models are constructed.

203 stars. No commits in the last 6 months.

Use this if you want to learn the practical steps of implementing a large language model architecture like LLaMA 3, focusing on its core components without needing a GPU.

Not ideal if you are looking for a pre-trained model to use directly, or if you do not have at least 17 GB of RAM available to load the model files.

large-language-models deep-learning-architecture natural-language-processing machine-learning-engineering
No license · Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 22 / 25


Stars: 203
Forks: 46
Language: Jupyter Notebook
License: none
Last pushed: Aug 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/FareedKhan-dev/Building-llama3-from-scratch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
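The curl command above can also be wrapped in a small script. A minimal Python sketch, assuming the endpoint follows the `quality/<category>/<owner>/<repo>` pattern shown above (the `quality_url` helper is hypothetical, and the JSON response schema is not documented here):

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    Hypothetical helper: assumes the path segments are
    category/owner/repo, as in the example request.
    """
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("llm-tools", "FareedKhan-dev", "Building-llama3-from-scratch")
print(url)

# To actually fetch the data (requires network access):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Since unauthenticated access is capped at 100 requests/day, caching responses locally is advisable when scoring many repositories.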