teelinsan/parallel-decoding

Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding"

Score: 35 / 100 (Emerging)

This project provides methods for speeding up existing machine translation systems without modifying or retraining the underlying model. Given a pre-trained machine translation model and source text, it produces the translation significantly faster. It is aimed at researchers working on the inference efficiency of machine translation and other natural language processing models.
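
The paper's core idea is to reframe greedy autoregressive decoding as solving a system of fixed-point equations with Jacobi and Gauss-Seidel iterations, so several target positions can be refined in parallel per forward pass. As a rough illustration only, here is a minimal sketch of the Jacobi variant on top of a Hugging Face seq2seq model; the checkpoint name and the jacobi_decode helper are hypothetical, not the repository's actual API.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Any Hugging Face seq2seq MT checkpoint works here; this one is illustrative.
name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

@torch.no_grad()
def jacobi_decode(src_text, max_len=64, max_iters=64):
    enc = tokenizer(src_text, return_tensors="pt")
    encoder_out = model.get_encoder()(**enc)  # encode the source once
    # Start from an all-pad draft of the target; only the start position is fixed.
    ys = torch.full((1, max_len), model.config.pad_token_id, dtype=torch.long)
    ys[0, 0] = model.config.decoder_start_token_id
    for _ in range(max_iters):
        logits = model(
            attention_mask=enc["attention_mask"],
            encoder_outputs=encoder_out,
            decoder_input_ids=ys,
        ).logits
        # Jacobi step: re-predict every position in parallel from the previous draft.
        new_ys = ys.clone()
        new_ys[0, 1:] = logits.argmax(-1)[0, :-1]
        if torch.equal(new_ys, ys):  # fixed point: matches greedy decoding output
            break
        ys = new_ys
    out = ys[0, 1:]
    eos_hits = (out == model.config.eos_token_id).nonzero()
    if len(eos_hits) > 0:  # truncate at the first end-of-sequence token
        out = out[: eos_hits[0, 0]]
    return tokenizer.decode(out, skip_special_tokens=True)

print(jacobi_decode("Parallel decoding accelerates translation."))

Because the fixed point of this iteration coincides with the greedy autoregressive solution, the speedup comes without changing the translation, which is why no retraining is needed.
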

124 stars. No commits in the last 6 months.

Use this if you are a researcher focused on accelerating transformer-based machine translation inference for research purposes.

Not ideal if you need a production-ready solution for live translation services or if you are not comfortable with command-line operations and research-focused code.

Tags: Machine Translation, Natural Language Processing, Research, Transformer Models, Decoding Algorithms, Computational Linguistics
Stale (6 months) · No package · No dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 124
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Mar 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/teelinsan/parallel-decoding"

Open to everyone: 100 requests/day with no key needed; get a free key for 1,000/day.
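
If you prefer Python, a minimal equivalent of the curl call might look like the sketch below; the response schema is not documented here, so it simply pretty-prints whatever JSON the endpoint returns.

import json
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/teelinsan/parallel-decoding")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
print(json.dumps(resp.json(), indent=2))
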