litgpt and jam-gpt

Lightning-AI/litgpt, a high-performance, production-ready framework for LLMs, and loke-x/jam-gpt, an experimental reimplementation for research and development, are ecosystem siblings: they represent distinct phases of, and approaches to, the LLM implementation lifecycle.

litgpt — score 72 (Verified)
  Maintenance: 17/25 | Adoption: 10/25 | Maturity: 25/25 | Community: 20/25
  Stars: 13,225 | Forks: 1,409 | Downloads: | Commits (30d): 11
  Language: Python | License: Apache-2.0
  Risk flags: none

jam-gpt — score 33 (Emerging)
  Maintenance: 0/25 | Adoption: 6/25 | Maturity: 16/25 | Community: 11/25
  Stars: 21 | Forks: 3 | Downloads: | Commits (30d): 0
  Language: Jupyter Notebook | License: MIT
  Risk flags: Stale 6m, No Package, No Dependents

About litgpt

Lightning-AI/litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

This project helps machine learning engineers and researchers build custom large language models. You can select from over 20 pre-built LLMs, feed in your specific datasets for training or fine-tuning, and then deploy these models for various applications. It's designed for users who need fine-grained control and high performance for their custom AI language tasks.

large-language-models machine-learning-engineering natural-language-processing AI-model-training model-deployment
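As a sketch of the select-finetune-deploy workflow described above, the litgpt command line can be driven roughly as follows. The model name and dataset path are placeholders, and the exact flags may differ by version, so treat this as an illustration rather than a verbatim recipe and check the project's README for current usage.

```shell
# Hedged sketch of litgpt's documented CLI workflow; model name and
# file paths are illustrative placeholders, not fixed values.

# 1) Pick one of the 20+ supported models and fine-tune it on a custom dataset.
litgpt finetune microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --out_dir out/custom-model

# 2) Interact with or serve the resulting checkpoint.
litgpt chat out/custom-model/final
litgpt serve out/custom-model/final
```

Each subcommand wraps a full recipe (data preparation, training loop, checkpointing), which is what distinguishes a production framework like this from a from-scratch research reimplementation.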

About jam-gpt

loke-x/jam-gpt

An experimental reimplementation of LLMs for research and development.

This project helps AI researchers and developers explore the inner workings of Large Language Models (LLMs). You can feed in your own datasets to train and fine-tune experimental Generative Pretrained Transformer (GPT) models, gaining insight into their architecture and design. It's aimed at individuals building or experimenting with LLMs for research purposes.

LLM-research Generative-AI-development Deep-learning-experimentation Transformer-model-design Custom-model-training
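To make concrete the kind of from-scratch experiment a research reimplementation like this enables, here is a minimal, self-contained sketch of a character-level bigram language model, the simplest ancestor of a GPT. None of these names come from jam-gpt's actual API; the example only illustrates the train-on-your-own-text, then generate workflow the description refers to.

```python
# Illustrative sketch (not jam-gpt's API): a character-level bigram
# language model trained by counting successor frequencies.
from collections import Counter, defaultdict


def train_bigram(text):
    """Count, for each character, how often each successor follows it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts


def generate(counts, start, length):
    """Greedily emit the most frequent successor of the last character."""
    out = [start]
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break  # character never appeared with a successor in training
        out.append(successors.most_common(1)[0][0])
    return "".join(out)


model = train_bigram("abcabcabc")
print(generate(model, "a", 5))  # -> "abcabc"
```

A real GPT replaces the count table with a transformer that conditions on the whole context rather than one character, but the train/generate loop above is the same shape a research reimplementation starts from.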


Scores updated daily from GitHub, PyPI, and npm data.