uakarsh/latr
Implementation of "LaTr: Layout-Aware Transformer for Scene-Text VQA", a novel multimodal architecture for Scene Text Visual Question Answering (STVQA)
This project helps researchers and developers working on visual question answering (VQA) for images containing text. Given an image with embedded text and a question about that text, the model predicts an answer grounded in the text it reads from the scene. It's aimed at computer vision researchers, AI engineers, and data scientists building advanced VQA systems.
No commits in the last 6 months.
Use this if you are a computer vision researcher or AI engineer looking to implement and experiment with a state-of-the-art layout-aware transformer model for scene-text VQA.
Not ideal if you need a ready-to-use application with pre-trained weights for immediate deployment, as significant computational resources are required for training.
Stars: 56
Forks: 6
Language: Python
License: MIT
Category:
Last pushed: Oct 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/uakarsh/latr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
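For scripting against the endpoint shown above, a minimal Python sketch using only the standard library follows. The endpoint path is taken from the curl example; the shape of the JSON response is not documented here, so the code simply decodes and pretty-prints whatever comes back.

```python
# Sketch of calling the quality API from Python instead of curl.
# Only the endpoint URL is known from the listing; the response's
# field names are undocumented, so we just pretty-print the JSON.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repo endpoint, e.g. .../quality/transformers/uakarsh/latr."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(fetch_quality("transformers", "uakarsh", "latr"), indent=2))
```

Without an API key this shares the 100-requests/day quota, so cache responses locally if you poll more than a handful of repos.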
Higher-rated alternatives
pairlab/SlotFormer: Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models
ChristophReich1996/Swin-Transformer-V2: PyTorch reimplementation of the paper "Swin Transformer V2: Scaling Up Capacity and Resolution"...
prismformore/Multi-Task-Transformer: Code of ICLR2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene...
DirtyHarryLYL/Transformer-in-Vision: Recent Transformer-based CV and related works.
kyegomez/MegaVIT: The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"