williamcfrancis/Visual-Question-Answering-using-Stacked-Attention-Networks

PyTorch implementation of VQA using Stacked Attention Networks: a multimodal architecture that takes an image and a question as input, encodes them with a CNN and an LSTM, and applies stacked attention layers to reach 54.82% accuracy. Includes visualization of the attention layers. Contributions welcome. Uses the VQA v2.0 dataset.

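For readers unfamiliar with the architecture, one attention layer of a Stacked Attention Network computes a softmax distribution over CNN image regions conditioned on the LSTM question encoding, then uses the attended image vector to refine the query for the next layer. The sketch below is not taken from this repository; it is a minimal PyTorch illustration of that idea, with assumed feature dimensions (1024-d region and question vectors, 512-d attention space).

import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedAttention(nn.Module):
    """One attention layer in the style of Stacked Attention Networks.

    Assumed shapes: image features (batch, regions, d), question vector (batch, d).
    """
    def __init__(self, d: int = 1024, k: int = 512):
        super().__init__()
        self.w_image = nn.Linear(d, k, bias=False)
        self.w_question = nn.Linear(d, k)
        self.w_attn = nn.Linear(k, 1)

    def forward(self, image_feats: torch.Tensor, question_vec: torch.Tensor):
        # h = tanh(W_I v_I + W_Q v_Q), broadcasting the question over image regions
        h = torch.tanh(self.w_image(image_feats) + self.w_question(question_vec).unsqueeze(1))
        # Attention distribution over image regions
        p = F.softmax(self.w_attn(h).squeeze(-1), dim=1)        # (batch, regions)
        # Weighted sum of region features, then refine the query for the next layer
        attended = (p.unsqueeze(-1) * image_feats).sum(dim=1)   # (batch, d)
        return attended + question_vec, p

if __name__ == "__main__":
    feats = torch.randn(4, 196, 1024)   # e.g. a 14x14 CNN feature map, flattened
    q = torch.randn(4, 1024)            # LSTM question encoding
    san1, san2 = StackedAttention(), StackedAttention()
    u1, _ = san1(feats, q)
    u2, attn_map = san2(feats, u1)      # attn_map is the kind of output the attention visualizations show

Stacking two such layers, as in the original Stacked Attention Networks paper, lets the model progressively focus on the image regions relevant to the question before the final answer classifier.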
Score: 36 / 100 (Emerging)

This project helps researchers and developers explore how machines can understand images and answer questions about them. You provide an image and a natural-language question, and the system tries to generate the correct answer. It is useful for anyone building AI systems that need to interpret visual information.

No commits in the last 6 months.

Use this if you are an AI researcher or developer building systems that need to answer questions based on visual content.

Not ideal if you are looking for a ready-to-use, production-grade VQA application rather than a research implementation.

artificial-intelligence computer-vision natural-language-processing multimodal-AI deep-learning-research
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 16 / 25

Stars: 8
Forks: 6
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jan 18, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/williamcfrancis/Visual-Question-Answering-using-Stacked-Attention-Networks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
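The same endpoint can also be queried from a script. Below is a minimal Python sketch assuming the endpoint returns JSON and that the requests library is installed; the response schema is not documented here, so the example simply prints whatever comes back.

import requests

# Same public endpoint as the curl example above; no API key is required
# for up to 100 requests per day (per the rate limits stated above).
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
    "williamcfrancis/Visual-Question-Answering-using-Stacked-Attention-Networks"
)

response = requests.get(URL, timeout=10)
response.raise_for_status()

# Assumption: the endpoint returns a JSON document; its exact fields are not
# documented on this page, so we just print the parsed result.
print(response.json())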