shikhartuli/cnn_txf_bias
[CogSci'21] Study of human inductive biases in CNNs and Transformers.
This project helps cognitive scientists and AI researchers understand how well different computer vision models mimic human vision. It evaluates popular CNNs and Vision Transformers on augmented ImageNet data, comparing their error patterns against human visual recognition. It is aimed at researchers in artificial intelligence, cognitive science, and human perception.
No commits in the last 6 months.
Use this if you are researching how closely AI vision models replicate human visual biases and error patterns, beyond just accuracy scores.
Not ideal if you are looking for a tool to build or deploy new computer vision applications or to improve model performance on standard benchmarks.
Stars
43
Forks
3
Language
Jupyter Notebook
License
—
Category
—
Last pushed
May 18, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shikhartuli/cnn_txf_bias"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
zalkikar/mlm-bias
Measuring Biases in Masked Language Models for PyTorch Transformers. Support for multiple social...
ejurasek00/Hashing_LLM_Debiasing
Repository consisting of the files used in the experiments + brief description of the experiments.
koudounasalkis/CLUES
This repo contains the code for "A Contrastive Learning Approach to Mitigate Bias in Speech...
anoopkdcs/NLPBias
Towards Comprehensive Understanding of Bias in Pre-trained Neural Language Models: A Survey with...
gdorleon/mbib-framing-crf
Code (CRF + Transformers) to reproduce the experiments from the paper "Detecting Framing Bias...