williamcfrancis/Visual-Question-Answering-using-Stacked-Attention-Networks
PyTorch implementation of VQA using Stacked Attention Networks: a multimodal architecture that takes an image and a question as input, encodes them with a CNN and an LSTM, and combines them through stacked attention layers for improved accuracy (54.82%). Includes visualization of the attention layers. Contributions welcome. Trained on the VQA v2.0 dataset.
This project helps researchers and developers explore how machines can understand images and answer questions about them. You provide an image and a natural language question, and the system tries to generate the correct answer. This is useful for anyone working on artificial intelligence that needs to interpret visual information.
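The stacked attention idea described above can be sketched as follows. This is a minimal NumPy illustration of two attention passes over image-region features, with random weights standing in for the repo's trained CNN/LSTM encoders; all dimensions, names, and the weight shapes are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(v_I, v_Q, W_I, W_Q, W_P):
    """One stacked-attention pass (names follow the SAN paper loosely).

    v_I: (m, d) image-region features; v_Q: (d,) question/query vector.
    """
    h_A = np.tanh(v_I @ W_I + v_Q @ W_Q)   # (m, k) joint hidden representation
    p_I = softmax(h_A @ W_P)               # (m,)  attention weights over regions
    v_tilde = p_I @ v_I                    # (d,)  attention-weighted image feature
    return v_tilde + v_Q, p_I              # refined query for the next layer

rng = np.random.default_rng(0)
m, d, k = 196, 32, 16                      # e.g. 14x14 regions, feature dim, hidden dim
v_I = rng.standard_normal((m, d))          # stand-in for CNN region features
u = rng.standard_normal(d)                 # stand-in for the LSTM question encoding
for _ in range(2):                         # two stacked attention layers
    u, p = attention_layer(v_I, u,
                           rng.standard_normal((d, k)),
                           rng.standard_normal((d, k)),
                           rng.standard_normal(k))
```

Each pass sharpens where the model "looks": the refined query `u` from layer 1 drives the attention of layer 2, which is what the repo's attention-layer visualizations display.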
No commits in the last 6 months.
Use this if you are an AI researcher or developer building systems that need to answer questions based on visual content.
Not ideal if you are looking for a ready-to-use, production-grade VQA application rather than a research implementation.
Stars: 8
Forks: 6
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Jan 18, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/williamcfrancis/Visual-Question-Answering-using-Stacked-Attention-Networks"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)