MaitySubhajit/KArAt

Kolmogorov-Arnold Attention: Is Learnable Attention Better for Vision Transformers?

Score: 24 / 100 · Experimental

This project trains image classification models with Kolmogorov-Arnold Attention integrated into Vision Transformers. It takes image datasets such as CIFAR-10 or ImageNet and produces a trained model for recognizing objects in images. It is aimed at researchers and practitioners studying attention mechanisms for computer vision.

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to improve the accuracy and efficiency of your image classification models using advanced attention mechanisms.

Not ideal if you are looking for a pre-trained, ready-to-use model for general image classification without needing to delve into model architecture and training parameters.

image-classification computer-vision deep-learning AI-model-training neural-networks
Stale (6 months) · No package · No dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 15
Forks:
Language: Python
License:
Last pushed: Jul 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaitySubhajit/KArAt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
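If you prefer scripting the lookup over raw curl, the endpoint URL from the example above can be built programmatically. This is a minimal sketch: the path structure (`/api/v1/quality/<category>/<owner>/<repo>`) is taken from the curl example, but the response schema is not documented here, so any fields you parse from the JSON are assumptions.

```python
# Helper for the quality API shown in the curl example above.
# Only the URL structure is taken from this page; the response
# format is an assumption and should be checked against real output.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

# Reproduces the curl example's URL:
print(quality_api_url("ml-frameworks", "MaitySubhajit", "KArAt"))

# To actually fetch (network required, no key needed up to 100 req/day):
#   import json, urllib.request
#   data = json.loads(urllib.request.urlopen(url).read())
```

The `quote()` calls guard against owner or repo names containing characters that need percent-encoding; for plain ASCII names like the example, they are a no-op.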