aju22/LLaMA2

This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT) variant. The implementation focuses on the model architecture and the inference process. The code is restructured and heavily commented to facilitate easy understanding of the key parts of the architecture.
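As an illustration of the kind of architectural component such a reimplementation walks through, here is a minimal sketch of RMS normalization, the layer-norm variant used in LLaMA-style models. This is a generic, illustrative version in plain Python, not code taken from the repository.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # Root-mean-square normalization, as used in LLaMA-style models:
    # divide each element by the vector's RMS, then apply a learned
    # per-dimension gain (modeled here as a plain list of floats).
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

vec = [1.0, 2.0, 3.0, 4.0]
out = rms_norm(vec, [1.0] * len(vec))  # unit gain for demonstration
```

After normalization the output vector has (approximately) unit mean square, regardless of the input's scale, which is what makes the layer useful for stabilizing activations.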

Score: 29 / 100 (Experimental)

This is an open-source implementation of the LLaMA 2 language model, suited to tasks such as generating text, summarizing documents, or holding chat conversations. It takes natural language text as input and produces human-like text output. It is useful for anyone who wants to understand and experiment with a powerful generative AI model for text-based applications.

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to understand, modify, and experiment with the internal workings of a LLaMA 2-like large language model.

Not ideal if you are looking for a pre-trained, ready-to-use LLaMA 2 model for immediate application without diving into the underlying code.

large-language-models generative-ai natural-language-processing ai-model-architecture text-generation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 12 / 25


Stars: 74
Forks: 8
Language: Python
License: none
Last pushed: Oct 01, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aju22/LLaMA2"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
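The same request can be made from Python. The sketch below builds the endpoint URL from the curl example above; the response schema is not documented on this page, so the fetch simply prints the raw JSON body rather than assuming any field names (the `quality_url` helper is illustrative, not part of the API).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"  # endpoint from the curl example

def quality_url(ecosystem, owner, repo):
    # Assemble the per-repository quality endpoint; the path layout
    # (ecosystem/owner/repo) is taken from the curl example above.
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "aju22", "LLaMA2")

# Uncomment to fetch; the field names in the JSON are not documented
# here, so the body is printed as-is instead of being parsed further.
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```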