DirtyHarryLYL/Transformer-in-Vision
Recent Transformer-based CV and related works.
This project compiles recent research papers, code releases, and surveys on Transformer models in computer vision and related fields. It gives AI practitioners and researchers a curated collection of resources for staying current on advances in image and video analysis, generative models, and multimodal learning, covering both state-of-the-art techniques and foundational works in this rapidly evolving area.
1,339 stars. No commits in the last 6 months.
Use this if you are an AI researcher or practitioner looking for a collection of the latest academic papers and associated code on Transformer models applied to computer vision tasks like image generation, video analysis, and robotics.
Not ideal if you are looking for a ready-to-use software library or a hands-on tutorial for implementing a specific computer vision application.
Stars
1,339
Forks
143
Language
—
License
—
Category
Last pushed
Aug 22, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DirtyHarryLYL/Transformer-in-Vision"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
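The same request can be made from Python. Below is a minimal sketch using only the standard library, based on the endpoint shown in the curl example above; the helper names and the assumption that the response is JSON are ours, not documented API details.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report for a repository.

    Assumes the endpoint returns a JSON object; the response schema
    is not documented here.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same URL as the curl example:
    print(quality_url("transformers", "DirtyHarryLYL", "Transformer-in-Vision"))
```

Without a key this stays within the 100 requests/day anonymous limit; how a key is attached to requests is not specified on this page.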
Higher-rated alternatives
pairlab/SlotFormer
Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models
ChristophReich1996/Swin-Transformer-V2
PyTorch reimplementation of the paper "Swin Transformer V2: Scaling Up Capacity and Resolution"...
prismformore/Multi-Task-Transformer
Code of ICLR2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene...
kyegomez/MegaVIT
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
uakarsh/latr
Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal...