MoFHeka/LLaMA-Megatron

A LLaMA1/LLaMA2 Megatron implementation.

Score: 30/100 (Emerging)

This project helps machine learning engineers and researchers run LLaMA large language models more efficiently. It takes LLaMA model checkpoints and tokenizer files as input and outputs a configured LLaMA model ready for inference or further pretraining with the Megatron-LM framework. It is aimed primarily at those working with large-scale natural language processing models.

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to run or pretrain LLaMA models with the performance and scalability benefits of Nvidia's Megatron-LM.

Not ideal if you are a non-technical user or do not have experience with large-scale deep learning frameworks and Python development environments.

large-language-models natural-language-processing deep-learning-infrastructure model-pretraining model-inference
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 28
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Dec 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MoFHeka/LLaMA-Megatron"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
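For scripted access, here is a minimal Python sketch of the same request using the requests library. It assumes the endpoint returns a JSON body; the exact response fields are not documented on this page.

# Fetch the quality data for MoFHeka/LLaMA-Megatron from the API.
# Assumption: the endpoint returns JSON (not confirmed on this page).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/MoFHeka/LLaMA-Megatron"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limits or server errors
data = resp.json()
print(data)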