MaitySubhajit/KArAt
Kolmogorov-Arnold Attention: Is Learnable Attention Better for Vision Transformers?
This project explores whether learnable attention improves Vision Transformers by integrating Kolmogorov-Arnold Attention (KArAt) into the architecture. It trains image classifiers on datasets such as CIFAR-10 or ImageNet and produces models for recognizing objects in images. It is aimed at researchers and practitioners working on improving the performance of AI models for computer vision tasks.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer looking to improve the accuracy and efficiency of your image classification models using advanced attention mechanisms.
Not ideal if you are looking for a pre-trained, ready-to-use model for general image classification without needing to delve into model architecture and training parameters.
Stars: 15
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jul 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaitySubhajit/KArAt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
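The same endpoint can also be queried from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the response schema is not documented here, so the code returns the decoded JSON as-is rather than assuming particular fields:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, following the pattern in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.

    The keys of the returned dict are whatever the API provides;
    they are not specified in this listing.
    """
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
# data = fetch_quality("ml-frameworks", "MaitySubhajit", "KArAt")
# print(json.dumps(data, indent=2))
```

Without an API key this is limited to the free tier of 100 requests per day, so cache responses rather than calling the endpoint repeatedly.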
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models