xuanlinli17/autoregressive_inference
Code for "Discovering Non-monotonic Autoregressive Orderings with Variational Inference" (paper and code updated from ICLR 2021)
This project generates natural language sequences, such as image captions or machine translations, by learning the most effective order in which to produce words. It takes raw image data or source-language text, processes it, and outputs high-quality, relevant text in a data-driven way. It is aimed at researchers and practitioners working on advanced natural language generation and machine translation tasks.
No commits in the last 6 months.
Use this if you need to generate high-quality text sequences, such as image captions or translated sentences, and want the model to learn the most effective word generation order from your data.
Not ideal if you're looking for a simple, out-of-the-box text generation tool without deep customization, or if your primary goal is basic sequence-to-sequence tasks where ordering isn't a complex factor.
Stars: 12
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xuanlinli17/autoregressive_inference"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
Naresh1318/Adversarial_Autoencoder
A wizard's guide to Adversarial Autoencoders
mseitzer/pytorch-fid
Compute FID scores with PyTorch.
acids-ircam/RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
ratschlab/aestetik
AESTETIK: Convolutional autoencoder for learning spot representations from spatial...
jaanli/variational-autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)