RachanaJayaram/Cross-Attention-VizWiz-VQA
A self-evident application of the VQA (Visual Question Answering) task is designing systems that help blind people with sight-reliant queries. The VizWiz VQA dataset originates from images and questions compiled by members of the visually impaired community and, as such, highlights some of the challenges presented by this particular use case.
This project builds a system that answers questions about images, designed specifically to assist people who are blind or visually impaired. You provide an image (even one that is blurry or poorly framed) and a spoken question, and the system aims to return a natural language answer (the workflow is sketched below). It is intended for developers building assistive technology applications for the visually impaired community.
No commits in the last 6 months.
Use this if you are building an application that needs to accurately interpret natural language questions about real-world images, especially when those images or questions might be imperfect due to the user's visual impairment.
Not ideal if your application requires perfectly clear, well-composed images and precisely worded questions, or if you need to perform general object detection without question answering.
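For illustration only, here is a minimal sketch of the image-plus-question workflow described above. It uses a publicly available off-the-shelf VQA model (ViLT, via the Hugging Face transformers library) purely as a stand-in; it is not this repository's cross-attention model, and the image path and question are placeholders.

from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Stand-in VQA model for illustration; NOT the cross-attention model from this repository.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("photo.jpg")            # placeholder: a photo taken by the user
question = "What color is this shirt?"     # placeholder: the user's transcribed spoken question

# Encode the image/question pair and pick the highest-scoring answer class.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Answer:", answer)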
Stars: 15
Forks: 6
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Dec 12, 2023
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/RachanaJayaram/Cross-Attention-VizWiz-VQA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
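If you would rather call the endpoint from Python than curl, here is a minimal sketch using the requests library; it targets the same endpoint shown above, and the only assumption made is that the response is JSON.

import requests

# Same endpoint as the curl command above; no API key is required at the free tier.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/RachanaJayaram/Cross-Attention-VizWiz-VQA")

response = requests.get(url, timeout=10)
response.raise_for_status()   # surfaces rate-limit (100 requests/day) and server errors
data = response.json()        # assumption: the endpoint returns JSON
print(data)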