gpt-neox and gpt-neo
These are ecosystem siblings that take different technological approaches to the same goal: GPT-Neo uses mesh-tensorflow for distributed training, while GPT-NeoX uses Megatron and DeepSpeed. NeoX is the more recent evolution, designed to scale to larger models.
About gpt-neox
EleutherAI/gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
This is a specialized toolkit for researchers and engineers who need to train very large language models from scratch, or fine-tune existing ones, using substantial computational resources. It takes raw text data and configuration settings as input, and outputs a custom-trained language model capable of generating human-like text. This is for users operating at the cutting edge of AI, often in academic, industry, or government labs.
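To make the "configuration settings as input" concrete, here is a minimal sketch of the kind of YAML model config GPT-NeoX consumes. The specific key names and values below are illustrative assumptions; exact keys vary across NeoX versions, so consult the repository's `configs/` directory for authoritative examples.

```yaml
# Hypothetical GPT-NeoX-style model config (key names are assumptions).
{
  # Model shape
  "num-layers": 12,
  "hidden-size": 768,
  "num-attention-heads": 12,
  "seq-length": 2048,
  "max-position-embeddings": 2048,

  # Parallelism across GPUs
  "pipe-parallel-size": 1,
  "model-parallel-size": 1,

  # Training schedule
  "train-iters": 320000,
  "lr": 0.0006,
}
```

Training is then launched with the repository's DeepSpeed-based launcher, pointing it at one or more such config files.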
About gpt-neo
EleutherAI/gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
Beyond standard GPT architectures, it supports diverse attention mechanisms, including local and linear attention variants, as well as mixture-of-experts layers and axial positional embeddings. Built on mesh-tensorflow, it distributes training across TPU and GPU clusters with both data and model parallelism, scaling efficiently to multi-billion-parameter models. Pre-trained checkpoints (1.3B and 2.7B parameters), trained on The Pile dataset, are compatible with HuggingFace Transformers for immediate inference.
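The HuggingFace compatibility mentioned above means the released checkpoints can be used for inference without touching the mesh-tensorflow training code. A minimal sketch using the `transformers` text-generation pipeline (the first run downloads several gigabytes of weights from the Hugging Face Hub):

```python
from transformers import pipeline

# Load the 1.3B-parameter GPT-Neo checkpoint trained on The Pile.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Generate a continuation; the pipeline returns the prompt plus new tokens.
result = generator(
    "EleutherAI's GPT-Neo is a language model that",
    max_new_tokens=30,
    do_sample=True,
)
print(result[0]["generated_text"])
```

The same pattern works for the 2.7B checkpoint by swapping the model identifier; only GPU memory requirements change.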