anujanegi/VQA
Visual Question Answering System
This system helps you get specific answers about the contents of an image by asking questions in natural language. You provide an image and a question like "How many people are there?" or "What color is the person wearing?", and it generates a direct answer. It's designed for anyone who needs to quickly extract factual details from pictures without manual inspection.
No commits in the last 6 months.
Use this if you need to programmatically query images to extract specific factual information about objects, people, or actions shown.
Not ideal if you need to understand complex emotional context or subjective interpretations of an image.
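This page does not document the repository's own inference entry point, so the sketch below only illustrates the image-plus-question workflow described above, using the off-the-shelf ViLT VQA model from Hugging Face transformers as a stand-in; the library, the model checkpoint, and the image URL are assumptions for illustration, not part of this project:

import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Any local file or URL works; this URL is a hypothetical placeholder.
image = Image.open(requests.get("https://example.com/street.jpg", stream=True).raw)
question = "How many people are there?"

# Pretrained VQA model, used purely to demonstrate the task.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image/question pair and pick the highest-scoring answer class.
encoding = processor(image, question, return_tensors="pt")
logits = model(**encoding).logits
print("Answer:", model.config.id2label[logits.argmax(-1).item()])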
Stars: 11
Forks: —
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Nov 13, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/anujanegi/VQA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
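The same data can be fetched programmatically; the response schema isn't documented on this page, so this minimal Python sketch simply prints whatever JSON the endpoint returns:

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/anujanegi/VQA"

# Anonymous access is limited to 100 requests/day, per the note above.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())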
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch