shikhartuli/cnn_txf_bias

[CogSci'21] Study of human inductive biases in CNNs and Transformers.

Score: 23 / 100 (Experimental)

This project helps cognitive scientists and AI researchers understand how closely different computer vision models mimic human vision. It evaluates popular CNNs and Vision Transformers on augmented ImageNet data, comparing their error patterns against human visual recognition. Intended users are researchers in artificial intelligence, cognitive science, and human perception.

No commits in the last 6 months.

Use this if you are researching how closely AI vision models replicate human visual biases and error patterns, beyond just accuracy scores.

Not ideal if you are looking for a tool to build or deploy new computer vision applications or to improve model performance on standard benchmarks.

cognitive-science AI-research human-perception computer-vision-analysis model-comparison
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 43
Forks: 3
Language: Jupyter Notebook
License: none
Last pushed: May 18, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shikhartuli/cnn_txf_bias"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
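The endpoint returns JSON; a minimal sketch of consuming it in Python is below. The field names (`score`, `maintenance`, `adoption`, `maturity`, `community`) are assumptions inferred from the values shown on this page, not a documented schema, and a hardcoded sample response is used in place of a live request.

```python
import json

# Hypothetical response body, mirroring the scores shown on this page.
# The actual API schema may differ; treat these field names as guesses.
sample = json.loads("""
{
  "score": 23,
  "maintenance": 0,
  "adoption": 8,
  "maturity": 8,
  "community": 7
}
""")

# The four subscores (out of 25 each) should sum to the overall /100 score.
subscores = ["maintenance", "adoption", "maturity", "community"]
total = sum(sample[k] for k in subscores)
print(f"overall: {sample['score']}/100, subscore sum: {total}")
```

In a live call you would replace `sample` with the parsed body of the `curl` request above.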