uakarsh/latr

Implementation of LaTr: Layout-Aware Transformer for Scene-Text VQA, a novel multimodal architecture for Scene Text Visual Question Answering (STVQA).

Score: 35 / 100 · Emerging

This project helps researchers and developers working with visual question answering (VQA) on images that contain text. Given an image with embedded text and a question about that text, the model predicts an answer grounded in the scene text. It's aimed at computer vision researchers, AI engineers, and data scientists building advanced VQA systems.
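To make the input/output contract concrete, below is a minimal sketch of the text side of this flow. It uses a vanilla T5 baseline from Hugging Face transformers rather than this repository's own classes (LaTr builds on T5 but additionally embeds each OCR token's 2D bounding box and visual features, which this sketch omits). The OCR tokens and question are illustrative, not from the repo.

# Stripped-down sketch of the question + OCR-tokens -> answer interface.
# LaTr itself adds 2D layout embeddings per OCR token plus visual features;
# this text-only T5 baseline only illustrates the I/O contract.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# OCR tokens would normally come from an OCR system run on the image,
# each paired with a bounding box that LaTr turns into layout embeddings.
ocr_tokens = ["JOE'S", "COFFEE", "OPEN", "7AM"]
question = "What is the name of the store?"

# Serialize the question and scene text into a single seq2seq input.
prompt = f"question: {question} context: {' '.join(ocr_tokens)}"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))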

No commits in the last 6 months.

Use this if you are a computer vision researcher or AI engineer looking to implement and experiment with a state-of-the-art layout-aware transformer model for scene-text VQA.

Not ideal if you need a ready-to-use application with pre-trained weights for immediate deployment, as significant computational resources are required for training.

Tags: visual question answering, scene text recognition, multimodal AI, deep learning, research, computer vision
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 56
Forks: 6
Language: Python
License: MIT
Last pushed: Oct 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/uakarsh/latr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
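For programmatic access, the same endpoint can be called from Python. A minimal sketch using the standard requests library, assuming the endpoint returns a JSON payload (as the curl example suggests):

# Fetch the same quality data as the curl example above.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/uakarsh/latr"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting)
print(resp.json())       # parsed JSON payload from the API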