uakarsh/med-vqa
An approach for solving the problem of medical visual question answering
This project answers natural-language questions about medical images: given an image and a question, it returns a text answer. It is aimed at doctors, radiologists, and medical students who need to extract information efficiently from visual medical data.
No commits in the last 6 months.
Use this if you need an automated way to query medical images for specific information, like 'What does this MRI show?' or 'Is there a fracture here?'
Not ideal if you require highly accurate, production-ready diagnostic support, as this is an experimental implementation.
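The image-plus-question-to-answer flow described above is the classic VQA recipe: encode the image, encode the question, fuse the two embeddings, and classify over an answer vocabulary. The sketch below illustrates that pipeline with toy NumPy encoders; the random projections, the 8x8 image size, and the answer vocabulary are illustrative assumptions, not this repo's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
ANSWERS = ["yes", "no", "fracture", "tumor", "normal"]  # toy answer vocab
D = 16  # joint embedding size

# Stand-ins for learned weights; a real system would use a trained
# CNN/ViT image encoder and a transformer/LSTM question encoder.
W_img = rng.standard_normal((64, D))
W_txt = rng.standard_normal((32, D))
W_out = rng.standard_normal((D, len(ANSWERS)))

def encode_image(image):
    """Project a flattened 8x8 grayscale image into the joint space."""
    return image.reshape(-1) @ W_img

def encode_question(question):
    """Hash tokens into a bag-of-words vector, then project it."""
    bow = np.zeros(32)
    for tok in question.lower().split():
        bow[hash(tok) % 32] += 1.0
    return bow @ W_txt

def answer(image, question):
    """Elementwise fusion of the two embeddings, then softmax over answers."""
    fused = encode_image(image) * encode_question(question)
    logits = fused @ W_out
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return ANSWERS[int(np.argmax(probs))], probs
```

With untrained random weights the prediction is meaningless; the point is the shape of the computation, which matches what a VQA notebook like this one typically implements with learned encoders.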
Stars
7
Forks
4
Language
Jupyter Notebook
License
—
Category
ml-frameworks
Last pushed
Oct 01, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/uakarsh/med-vqa"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
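For programmatic use beyond a one-off curl, the same endpoint can be called from Python. This is a minimal sketch using only the standard library; the `X-Api-Key` header name for the keyed tier and the JSON response shape are assumptions, so check the API's documentation before relying on them.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_request(category, owner, repo, api_key=None):
    """Build an HTTP request for a repo's quality data.

    The header name "X-Api-Key" is an assumed convention for the
    keyed (1,000 requests/day) tier, not documented in this listing.
    """
    url = f"{API_BASE}/{category}/{owner}/{repo}"
    headers = {"Accept": "application/json"}
    if api_key:
        headers["X-Api-Key"] = api_key  # assumed header name
    return urllib.request.Request(url, headers=headers)

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch and decode the JSON payload for one repository."""
    req = build_request(category, owner, repo, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage mirrors the curl example: `fetch_quality("ml-frameworks", "uakarsh", "med-vqa")`. Keyless calls are rate-limited to 100/day, so cache responses rather than polling.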
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)