pliang279/MultiViz

[ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models

Score: 34 / 100 (Emerging)

MultiViz helps machine learning researchers and practitioners understand why multimodal AI models make specific predictions. Given a pre-trained multimodal model and a dataset, it generates visualizations and quantitative analyses explaining how each input modality (such as images and text) contributes to the model's output. It is aimed at researchers and advanced practitioners who build, evaluate, and deploy complex AI systems.

No commits in the last 6 months.

Use this if you need to debug, explain, or gain deeper insights into the decision-making process of your multimodal AI models, such as those that analyze both images and text.

Not ideal if you are looking for a simple, out-of-the-box solution for basic model evaluation or if you are not comfortable working with machine learning research code.

multimodal-ai model-interpretability explainable-ai machine-learning-research ai-debugging
Flags: Stale (6 months) · No Package · No Dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 9 / 25
- Maturity: 16 / 25
- Community: 9 / 25

How are scores calculated?
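
The page does not spell out the formula, but the four category scores above add up exactly to the overall score, so a plausible reading (our assumption, not confirmed by the page) is a plain sum of four 0-25 categories. A minimal Python check:

# Assumption, not documented on this page: the overall score is the
# simple sum of the four category scores, each out of 25.
breakdown = {"Maintenance": 0, "Adoption": 9, "Maturity": 16, "Community": 9}
total = sum(breakdown.values())
print(f"{total} / 100")  # -> 34 / 100, matching the score shown above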

Stars: 99
Forks: 6
Language: Python
License: MIT
Last pushed: Aug 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiViz"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
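
For programmatic use, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON, and the "X-API-Key" header name for authenticated requests is our assumption, not documented on this page:

# Minimal sketch: fetch the quality data for this repo via the API above.
# Assumptions (not confirmed here): the endpoint returns JSON, and an
# optional key is sent via an "X-API-Key" header.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiViz"

def fetch_quality(api_key=None):
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting)
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality()  # anonymous: up to 100 requests/day
    print(data)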