ChenRocks/UNITER
Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning"
UNITER serves researchers and developers working on joint computer-vision and natural-language tasks. It takes images and text as input and produces a unified representation usable in downstream applications such as image-text retrieval and visual question answering. It is intended for machine learning practitioners and researchers who need to train or fine-tune models for visual-linguistic understanding.
800 stars. No commits in the last 6 months.
Use this if you need to build or improve models that understand the relationship between images and text, such as systems that describe images, answer questions about visuals, or retrieve images based on text queries.
Not ideal if you are looking for a plug-and-play application for general users or if you don't have access to NVIDIA GPUs and a Docker environment.
Stars: 800
Forks: 113
Language: Python
License: —
Category: —
Last pushed: Jun 30, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ChenRocks/UNITER"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
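For scripted access, the curl command above can be mirrored in Python. This is a minimal sketch: only the endpoint URL comes from this page, while the `quality_url`/`fetch_quality` helper names and the assumption that the response body is JSON are mine, not part of the documented API.

```python
"""Sketch: fetching repo quality data from the pt-edge API.

The endpoint URL is taken from the curl example above; the response
format (assumed JSON here) is not documented on this page.
"""
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(repo: str, category: str = "transformers") -> str:
    """Build the quality-endpoint URL for an owner/name repo slug."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(repo: str) -> dict:
    """GET the endpoint and decode the body (requires network access)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ChenRocks/UNITER"))
```

At the free tier this endpoint allows 100 requests per day without a key, so any polling loop built on it should cache responses rather than refetch on every call.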
Higher-rated alternatives
NVlabs/MambaVision
[CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
sign-language-translator/sign-language-translator
Python library & framework to build custom translators for the hearing-impaired and translate...
kyegomez/Jamba
PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"
autonomousvision/transfuser
[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving;...
kyegomez/MultiModalMamba
A novel implementation of fusing ViT with Mamba into a fast, agile, and high performance...