pliang279/MultiViz
[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models
This project helps machine learning researchers and practitioners understand why multimodal AI models make specific predictions. Given a pre-trained multimodal model and a dataset, it generates visualizations and quantitative analyses explaining how the different input modalities (such as images and text) contribute to the model's output. It is aimed at researchers and advanced practitioners who build, evaluate, and deploy complex AI systems.
No commits in the last 6 months.
Use this if you need to debug, explain, or gain deeper insights into the decision-making process of your multimodal AI models, such as those that analyze both images and text.
Not ideal if you are looking for a simple, out-of-the-box solution for basic model evaluation or if you are not comfortable working with machine learning research code.
Stars: 99
Forks: 6
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Aug 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiViz"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
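If you would rather call the endpoint from Python than curl, here is a minimal sketch using only the standard library. The response schema is not documented on this page, so the example makes no assumptions about field names and simply pretty-prints whatever JSON the endpoint returns:

import json
import urllib.request

# Public endpoint shown above; no API key needed up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiViz"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # parse the JSON body of the response

# Field names aren't documented here, so just pretty-print the payload.
print(json.dumps(data, indent=2))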
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)