ayaka14732/llama-2-jax

JAX implementation of the Llama 2 model

Score: 42 / 100 (Emerging)

This project offers a JAX implementation of the Llama 2 model, enabling researchers and machine learning engineers to train and run large language models efficiently. It converts existing Llama 2 weights from Hugging Face into a JAX-compatible model for high-performance computing, particularly on Google Cloud TPUs. The primary users are ML researchers and engineers who work with or fine-tune Llama 2.
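
This page does not show the repository's own conversion API, so as a minimal sketch of the general idea only: Hugging Face weights are loaded in PyTorch form and each tensor is mapped to a JAX array. The function name and model id below are illustrative assumptions, not the project's actual interface.

    # Minimal sketch of the general HF-to-JAX conversion idea; the function
    # name and model id are illustrative assumptions, not this repo's API.
    import jax.numpy as jnp
    from transformers import AutoModelForCausalLM

    def load_llama2_params_as_jax(model_id="meta-llama/Llama-2-7b-hf"):
        # Load the PyTorch weights from Hugging Face (access to the gated
        # Llama 2 repo is required), then convert each tensor to a JAX array.
        model = AutoModelForCausalLM.from_pretrained(model_id)
        return {
            name: jnp.asarray(t.detach().cpu().numpy())
            for name, t in model.state_dict().items()
        }

The real project additionally reshapes and renames tensors to fit its own parameter tree; this sketch only illustrates the weight-loading step.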

216 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to leverage Google Cloud TPUs for efficient training, fine-tuning, or inference of Llama 2 models using JAX.

Not ideal if you are an end-user simply looking to use Llama 2 for text generation or other NLP tasks without needing to work with the model's internal architecture or training process.

large-language-models model-training natural-language-processing high-performance-computing deep-learning-research
Status: Stale (6 months) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25

Stars: 216
Forks: 24
Language: Python
License: CC0-1.0
Last pushed: Feb 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ayaka14732/llama-2-jax"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
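
As a small usage sketch, the same endpoint can be queried from Python with the third-party requests library. The response schema is not documented on this page, so the JSON is printed as-is.

    # Hypothetical usage sketch: fetch the quality data from the endpoint
    # shown above. Assumes the `requests` package is installed.
    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/ayaka14732/llama-2-jax"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)
    print(resp.json())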