FoundationVision/BitVAE
Official training and inference code for a bitwise image tokenizer.
This project provides tools to convert high-resolution images into a compact bitwise token representation and to reconstruct them. It is aimed at researchers and engineers who need to process, store, or transmit large image datasets efficiently: you supply images, and it outputs a tokenized version and can reconstruct images from those tokens.
No commits in the last 6 months.
Use this if you need to compress or represent high-resolution images in a compact, tokenized format while maintaining high visual quality.
Not ideal if you are looking for an off-the-shelf application to simply view or edit images; it requires technical setup and scripting.
Stars: 70
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: May 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/FoundationVision/BitVAE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
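The same record can be fetched from a script. The sketch below is a minimal, hypothetical example using only the Python standard library; the endpoint URL comes from the curl command above, but the shape of the JSON response is not documented here, so the example just prints whatever the API returns. The helper names (`build_url`, `fetch_record`) are illustrative, not part of any official client.

```python
# Hypothetical sketch: fetch one repo-quality record from the API
# shown in the curl example, using only the standard library.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a single repository record."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_record(category: str, owner: str, repo: str) -> dict:
    """GET the JSON record; raises urllib.error.URLError on failure."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Response fields are unknown, so dump the raw JSON for inspection.
    record = fetch_record("diffusion", "FoundationVision", "BitVAE")
    print(json.dumps(record, indent=2))
```

Without an API key this shares the 100-requests/day public quota, so cache responses rather than polling in a loop.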
Higher-rated alternatives
jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational...
chaitanya100100/VAE-for-Image-Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"