TAU-VAILab/isbertblind
This repository is for the paper "Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding" (CVPR 2023)
This tool helps researchers evaluate how well language models understand visual concepts like colors and shapes from text alone. You input a sentence with a missing word (such as a color or shape) and a list of candidate options, and it reports which candidate the model scores as the best fit. It's aimed at researchers and practitioners working on vision-and-language models.
No commits in the last 6 months.
Use this if you need to systematically test and compare the "visual understanding" capabilities of different large language models through masked language modeling or Stroop probing.
Not ideal if you're looking for a general-purpose natural language processing library for everyday tasks, as its focus is on specific model evaluation techniques.
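The repository's own probing interface isn't documented here, but the final scoring step of masked-language-model probing can be sketched generically: the model assigns a logit to each candidate word at the masked position, and the candidates are ranked by softmax probability. The function name, candidate words, and logit values below are all illustrative, not taken from the repo.

```python
import math

def rank_candidates(logits):
    """Rank candidate fill-in words by softmax probability.

    `logits` maps each candidate word to the raw logit a masked language
    model (e.g. BERT) would assign it at the [MASK] position.
    Returns (word, probability) pairs, highest probability first.
    """
    m = max(logits.values())  # subtract max for numerical stability
    exp = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exp.values())
    probs = {w: e / z for w, e in exp.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical logits for "The sky is [MASK]." with color candidates.
ranked = rank_candidates({"blue": 9.1, "red": 4.2, "green": 3.8})
print(ranked[0][0])  # the model's top pick among the candidates
```

In practice the logits would come from a pretrained model (e.g. via a fill-mask inference call); only the ranking logic is shown here.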
Stars: 21
Forks: —
Language: Python
License: —
Category: —
Last pushed: Nov 02, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TAU-VAILab/isbertblind"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch