ayaka14732/llama-2-jax
JAX implementation of the Llama 2 model
This project offers a JAX implementation of the Llama 2 model, enabling researchers and machine learning engineers to train and run large language models efficiently. It takes existing Llama 2 model weights (from Hugging Face) as input and provides a JAX-compatible model for high-performance computing, especially on Google Cloud TPUs. The primary users are ML researchers and engineers who work with or fine-tune Llama 2.
216 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to leverage Google Cloud TPUs for efficient training, fine-tuning, or inference of Llama 2 models using JAX.
Not ideal if you are an end-user simply looking to use Llama 2 for text generation or other NLP tasks without needing to work with the model's internal architecture or training process.
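The conversion step the description mentions, taking existing Hugging Face Llama 2 weights as input, essentially amounts to mapping each checkpoint tensor to a device-backed JAX array. A minimal sketch of that idea, using illustrative parameter names and toy shapes rather than the repository's actual weight layout:

```python
# Hedged sketch: turning a Hugging Face-style state dict of NumPy weights
# into JAX arrays. Parameter names and shapes below are illustrative only
# (real Llama 2 uses e.g. a 32000 x 4096 embedding table), not the actual
# structure used by ayaka14732/llama-2-jax.
import numpy as np
import jax.numpy as jnp
from jax import tree_util

# Stand-in for weights loaded from a Hugging Face checkpoint.
hf_state_dict = {
    "model.embed_tokens.weight": np.zeros((32, 16), dtype=np.float32),
    "model.layers.0.self_attn.q_proj.weight": np.zeros((16, 16), dtype=np.float32),
}

# Convert every leaf of the pytree to a JAX array; on a TPU host these
# arrays land in device memory, ready for jit-compiled training steps.
jax_params = tree_util.tree_map(jnp.asarray, hf_state_dict)
```

Once the parameters live in a JAX pytree like this, they can be passed directly to `jax.jit`- or `jax.pmap`-compiled functions for TPU execution.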
Stars: 216
Forks: 24
Language: Python
License: CC0-1.0
Category:
Last pushed: Feb 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ayaka14732/llama-2-jax"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
hkproj/pytorch-llama: LLaMA 2 implemented from scratch in PyTorch
4AI/LS-LLaMA: A simple but powerful SOTA NER model; official code for Label Supervised LLaMA Finetuning
luchangli03/export_llama_to_onnx: Export LLaMA to ONNX
harleyszhang/lite_llama: A lightweight LLaMA-like LLM inference framework based on Triton kernels
liangyuwang/zo2: ZO2 (Zeroth-Order Offloading): full-parameter fine-tuning of 175B LLMs with 18 GB of GPU memory [COLM 2025]