kyegomez/VortexFusion
Transformers + Mambas + LSTMs, All in One Model
This project offers a novel deep learning model architecture that combines the strengths of Mamba, Transformer, and LSTM networks. It takes in sequential data, typically numerical representations of text, audio, or other time-series information, and passes it through the combined layers to produce a transformed sequence output. It is designed for machine learning researchers and practitioners experimenting with advanced model designs for sequence-based tasks.
Use this if you are a machine learning researcher or engineer exploring cutting-edge, hybrid model architectures for sequence processing and want to experiment with combining Mamba, Transformer, and LSTM layers.
Not ideal if you are a business user looking for a ready-to-deploy, pre-trained solution for a specific application without getting into model architecture design.
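To make the layer combination concrete, here is a minimal NumPy sketch of how a hybrid block might compose the three sequence mixers with residual connections. This is a hypothetical illustration, not the repository's actual implementation: the attention is single-head, the recurrences use fixed identity-like weights, and the "Mamba-like" scan is only a selective-decay stand-in for a real state-space layer.

```python
import numpy as np

# Hypothetical sketch of a hybrid sequence block in the spirit of
# VortexFusion; the real repo's layer composition may differ.
# Input x has shape (seq_len, d_model); output has the same shape.

def self_attention(x):
    # Single-head scaled dot-product self-attention over the sequence.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def lstm_like_scan(x):
    # Simplified gated recurrence (LSTM-flavoured, no learned weights).
    c = np.zeros(x.shape[-1])
    out = []
    for t in range(x.shape[0]):
        g = 1.0 / (1.0 + np.exp(-x[t]))   # input gate from the token itself
        c = 0.5 * c + g * np.tanh(x[t])   # cell-state update
        out.append(np.tanh(c))            # hidden state
    return np.stack(out)

def mamba_like_scan(x, decay=0.9):
    # Selective-state-space flavour: input-dependent gating of a decaying state.
    s = np.zeros(x.shape[-1])
    out = []
    for t in range(x.shape[0]):
        gate = 1.0 / (1.0 + np.exp(-x[t]))  # input-dependent selection
        s = decay * s + gate * x[t]
        out.append(s)
    return np.stack(out)

def hybrid_block(x):
    # Residual composition of the three sequence mixers.
    x = x + self_attention(x)
    x = x + lstm_like_scan(x)
    x = x + mamba_like_scan(x)
    return x

seq = np.random.default_rng(0).normal(size=(16, 8))
out = hybrid_block(seq)
print(out.shape)  # (16, 8)
```

In a trained model each of these mixers would carry learned projections; the point of the sketch is only the shape-preserving, residual stacking that lets the three mechanisms be combined in one block.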
Stars: 14
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/VortexFusion"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
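The curl command above can also be called from Python. Below is a small sketch using only the standard library; the `quality_url` helper and the ecosystem/owner/repo split are illustrative assumptions, and the response schema is not documented here, so inspect the returned JSON before relying on any keys.

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    # Hypothetical helper: build the endpoint URL for one repository.
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "kyegomez", "VortexFusion")
print(url)
# data = json.load(urlopen(url))  # uncomment to fetch (100 requests/day without a key)
```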
Higher-rated alternatives
dorarad/gansformer
Generative Adversarial Transformers
j-min/VL-T5
PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
invictus717/MetaTransformer
Meta-Transformer for Unified Multimodal Learning
rkansal47/MPGAN
The message passing GAN https://arxiv.org/abs/2106.11535 and generative adversarial particle...
Yachay-AI/byt5-geotagging
ByT5-based geotagging model with confidence estimation, predicting coordinates from text alone.