rwth-i6/returnn
The RWTH extensible training framework for universal recurrent neural networks
This framework helps machine learning researchers and engineers quickly set up, train, and debug advanced recurrent neural network models for tasks like speech recognition and machine translation. You provide your data and model specifications; the framework handles training efficiently, including on multi-GPU systems, and outputs trained models. It is designed for those who need to experiment with and deploy complex sequence-to-sequence models.
Available on PyPI.
Use this if you are a researcher or engineer developing and experimenting with cutting-edge recurrent neural networks for sequence-based tasks and need a fast, flexible, and robust training environment.
Not ideal if you are looking for a simple, out-of-the-box solution for basic machine learning problems or if you do not have experience with neural network architectures.
Stars: 373
Forks: 134
Language: Python
License: —
Category:
Last pushed: Mar 17, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rwth-i6/returnn"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)