x-transformers and Fast-Transformer

These are ecosystem siblings: x-transformers provides a general-purpose transformer implementation framework, while Fast-Transformer offers a specialized alternative attention mechanism (additive attention) that could be integrated into, or benchmarked against, x-transformers' modular architecture.

                   x-transformers     Fast-Transformer
Overall score      79 (Verified)      51 (Established)
Maintenance        20/25              0/25
Adoption           15/25              10/25
Maturity           25/25              25/25
Community          19/25              16/25
Stars              5,808              148
Forks              507                22
Downloads          n/a                n/a
Commits (30d)      8                  0
Language           Python             Jupyter Notebook
License            MIT                Apache-2.0
Risk flags         None               Stale (6 months)

About x-transformers

lucidrains/x-transformers

A concise but complete full-attention transformer with a set of promising experimental features from various papers

This project provides pre-built, flexible transformer models for various AI tasks. You can input text, images, or a combination to generate new text, classify images, or create image captions. It's designed for AI researchers and practitioners who want to experiment with advanced transformer architectures without building them from scratch.

natural-language-processing computer-vision multimodal-ai generative-ai machine-learning-research

About Fast-Transformer

Rishit-dagli/Fast-Transformer

An implementation of Additive Attention

This is a developer tool that provides a TensorFlow implementation of the Fastformer model, which replaces full self-attention with additive attention so that cost scales linearly, rather than quadratically, with sequence length. It takes long text sequences as input and outputs contextualized representations, enabling faster text modeling. Machine learning engineers and researchers working on natural language processing tasks would use this.

natural-language-processing deep-learning large-language-models text-modeling machine-learning-engineering
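To make the additive-attention idea concrete, here is a minimal NumPy sketch of a single Fastformer-style head (framework-free, and omitting the final output projection and residual connection of the full model; the learned vectors w_q and w_k are assumed inputs):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(Q, K, V, w_q, w_k):
    """Fastformer-style additive attention: O(n) in sequence length n.

    Q, K, V: (n, d) query/key/value matrices for one head.
    w_q, w_k: (d,) learned scoring vectors.
    """
    d = Q.shape[-1]
    # Pool all queries into one global query via additive attention scores.
    alpha = softmax(Q @ w_q / np.sqrt(d))   # (n,) attention weights
    q_global = alpha @ Q                    # (d,) global query
    # Modulate each key by the global query, then pool into a global key.
    P = q_global * K                        # (n, d)
    beta = softmax(P @ w_k / np.sqrt(d))    # (n,)
    k_global = beta @ P                     # (d,) global key
    # Modulate values by the global key (full model adds projection + residual).
    return k_global * V                     # (n, d)

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = additive_attention(Q, K, V, rng.standard_normal(8), rng.standard_normal(8))
```

Because each step is a weighted sum over the sequence rather than an all-pairs comparison, the cost is linear in sequence length, which is what makes the approach attractive for long text.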

Scores are updated daily from GitHub, PyPI, and npm data.