Rishit-dagli/Transformer-in-Transformer

An Implementation of Transformer in Transformer in TensorFlow for image classification, attention inside local patches

Overall score: 29 / 100 (Experimental)

This is a TensorFlow implementation of the Transformer in Transformer (TNT) model for image classification. TNT applies attention at two levels: an inner transformer attends across pixel embeddings within each local patch, while an outer transformer attends across the patch embeddings of the whole image. It is aimed at machine learning engineers and researchers working on computer vision tasks.
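The two-level attention described above can be sketched as a single TNT-style block in Keras. This is an illustrative sketch, not the repository's actual API: the class name `TNTBlock`, the layer sizes, and the pixel-to-patch folding via a `Dense` projection are all assumptions chosen to show the structure.

```python
import tensorflow as tf

class TNTBlock(tf.keras.layers.Layer):
    """Illustrative sketch of one Transformer-in-Transformer block.

    Pixel tokens get inner attention within each patch; the result is
    folded back into the patch tokens, which then get outer attention
    across the whole image. Names and sizes are assumptions, not the
    repository's API.
    """

    def __init__(self, patch_dim=64, pixel_dim=16, num_heads=4):
        super().__init__()
        # Inner transformer: attention across pixel embeddings inside one patch.
        self.inner_attn = tf.keras.layers.MultiHeadAttention(num_heads, pixel_dim)
        self.inner_norm = tf.keras.layers.LayerNormalization()
        # Projection that folds a patch's pixel tokens into its patch embedding.
        self.pixel_to_patch = tf.keras.layers.Dense(patch_dim)
        # Outer transformer: attention across patch embeddings of the image.
        self.outer_attn = tf.keras.layers.MultiHeadAttention(num_heads, patch_dim)
        self.outer_norm = tf.keras.layers.LayerNormalization()

    def call(self, pixel_tokens, patch_tokens):
        # pixel_tokens: (batch * num_patches, pixels_per_patch, pixel_dim)
        # patch_tokens: (batch, num_patches, patch_dim)
        x = self.inner_norm(pixel_tokens)
        pixel_tokens = pixel_tokens + self.inner_attn(x, x)

        # Flatten each patch's pixel tokens and add them to the patch token.
        b = tf.shape(patch_tokens)[0]
        n = tf.shape(patch_tokens)[1]
        folded = self.pixel_to_patch(tf.reshape(pixel_tokens, (b, n, -1)))
        patch_tokens = patch_tokens + folded

        y = self.outer_norm(patch_tokens)
        patch_tokens = patch_tokens + self.outer_attn(y, y)
        return pixel_tokens, patch_tokens
```

A full model would stack several such blocks and classify from the patch tokens; this sketch only shows how pixel-level and patch-level attention interact.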

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs a TensorFlow implementation of an advanced image classification model, particularly one that improves recognition accuracy through combined pixel-level and patch-level attention.

Not ideal if you want a plug-and-play solution with no coding, or if your primary framework is PyTorch.

Tags: image classification, computer vision, deep learning, research, model development, visual recognition
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 43
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 12, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Rishit-dagli/Transformer-in-Transformer"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
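For scripted use, the same endpoint can be built programmatically. The path pattern `/api/v1/quality/<list>/<owner>/<repo>` is inferred from the single example above and is an assumption, not documented behavior; the helper name `quality_url` is likewise illustrative.

```python
def quality_url(owner: str, repo: str,
                list_name: str = "ml-frameworks",
                base: str = "https://pt-edge.onrender.com") -> str:
    """Build the quality-API URL for a repository.

    The path pattern is inferred from the one documented example;
    treat it as an assumption rather than a stable contract.
    """
    return f"{base}/api/v1/quality/{list_name}/{owner}/{repo}"


# Reproduces the curl URL shown above:
url = quality_url("Rishit-dagli", "Transformer-in-Transformer")
```

The resulting string can be passed to `curl`, `requests.get`, or any HTTP client.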