MaximumEntropy/Seq2Seq-PyTorch
Sequence to Sequence Models with PyTorch
This project helps machine learning engineers and researchers build and experiment with sequence-to-sequence models for tasks like machine translation. It takes sequences of words or characters in one language as input and produces translated sequences in another. The implementations cover standard and attention-based models, providing a foundation for natural language processing applications.
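To make the architecture concrete, here is a minimal sketch of an encoder-decoder (seq2seq) model in PyTorch. This is not the repository's implementation; the GRU-based layout, dimensions, and names are illustrative assumptions about the standard (non-attention) variant:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Illustrative encoder-decoder: encode the source sequence into a
    hidden state, then decode the target sequence conditioned on it."""

    def __init__(self, src_vocab, trg_vocab, emb_dim=32, hid_dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.trg_emb = nn.Embedding(trg_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, trg_vocab)

    def forward(self, src, trg):
        # Encode the source tokens; keep only the final hidden state.
        _, hidden = self.encoder(self.src_emb(src))
        # Decode the target tokens (teacher forcing) from that state.
        dec_out, _ = self.decoder(self.trg_emb(trg), hidden)
        # Project to per-step vocabulary logits: (batch, trg_len, trg_vocab).
        return self.out(dec_out)

model = Seq2Seq(src_vocab=100, trg_vocab=120)
src = torch.randint(0, 100, (2, 7))   # batch of 2 source sequences, length 7
trg = torch.randint(0, 120, (2, 5))   # batch of 2 target sequences, length 5
logits = model(src, trg)
print(logits.shape)  # torch.Size([2, 5, 120])
```

Attention-based variants additionally let the decoder weight all encoder outputs at each step instead of relying on a single fixed-size state.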
742 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer developing or researching neural machine translation systems and need PyTorch implementations of common sequence-to-sequence architectures.
Not ideal if you are a developer looking for an off-the-shelf, easy-to-integrate translation API or a non-technical user needing a ready-to-use translation tool.
Stars: 742
Forks: 161
Language: Python
License: WTFPL
Category: ml-frameworks
Last pushed: Mar 27, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaximumEntropy/Seq2Seq-PyTorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
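The same request can be made from Python. A hypothetical sketch using only the standard library; the endpoint comes from the curl example above, but the shape of the JSON response is not documented here, so no fields are assumed:

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/MaximumEntropy/Seq2Seq-PyTorch")

def fetch_quality(url=URL):
    """Fetch the repo's quality data and parse it as JSON.
    Response structure is whatever the API returns; not assumed here."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Keys beyond the free 100 requests/day tier are obtained as described above.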
Related frameworks
facebookresearch/fairseq2
FAIR Sequence Modeling Toolkit 2
lhotse-speech/lhotse
Tools for handling multimodal data in machine learning projects.
google/sequence-layers
A neural network layer API and library for sequence modeling, designed for easy creation of...
awslabs/sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
OpenNMT/OpenNMT-tf
Neural machine translation and sequence learning using TensorFlow