pairlab/SlotFormer
Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models
SlotFormer simulates and predicts visual dynamics by decomposing complex scenes into individual objects. Given raw video footage, it predicts how objects will move and interact, and can answer visual questions about the scene. It targets computer vision research tasks such as video prediction and visual question answering.
120 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher focused on object-centric models for understanding and predicting how objects move and interact in videos without extensive supervision.
Not ideal if you need a plug-and-play solution for real-world video analysis or if you are not comfortable working with research-grade code and Slurm GPU clusters.
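To make the object-centric rollout idea concrete, here is a minimal PyTorch sketch; it is an illustration under assumptions, not the repo's actual API, and all class, method, and argument names below are hypothetical. Per-frame object slots from a pretrained object-centric encoder (SAVi in the paper) are flattened into a token sequence, and a Transformer autoregressively predicts the next frame's slots; positional encodings and the slot decoder are omitted for brevity.

import torch
import torch.nn as nn

class SlotRollout(nn.Module):
    """Autoregressively predict future object slots with a Transformer."""

    def __init__(self, num_slots=7, slot_dim=128, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=slot_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.num_slots = num_slots

    def forward(self, slots, rollout_steps):
        # slots: [B, T, N, D] object slots for T observed frames, produced
        # by a pretrained object-centric encoder (e.g. SAVi in the paper).
        B, T, N, D = slots.shape
        history, preds = slots, []
        for _ in range(rollout_steps):
            tokens = history.reshape(B, -1, D)  # flatten (time, slot) axes
            out = self.transformer(tokens)
            # Take the last N tokens as the next frame's slots.
            next_slots = out[:, -N:, :].reshape(B, 1, N, D)
            preds.append(next_slots)
            history = torch.cat([history, next_slots], dim=1)
        return torch.cat(preds, dim=1)  # [B, rollout_steps, N, D]

Predicted slots would then be decoded back to frames (for video prediction) or passed to a reasoning head (for visual question answering).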
Stars: 120
Forks: 22
Language: Python
License: MIT
Category: transformers
Last pushed: Sep 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/pairlab/SlotFormer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
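The same call from Python, as a small sketch using the requests library; the X-API-Key header name is an assumption (the page does not say how a key is passed), and the JSON response shape is not documented here.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/pairlab/SlotFormer"

def fetch_repo_quality(api_key=None):
    # The free tier needs no key; a key raises the daily limit.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumes the endpoint returns a JSON body

if __name__ == "__main__":
    print(fetch_repo_quality())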
Related models
ChristophReich1996/Swin-Transformer-V2
PyTorch reimplementation of the paper "Swin Transformer V2: Scaling Up Capacity and Resolution"...
prismformore/Multi-Task-Transformer
Code of ICLR2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene...
DirtyHarryLYL/Transformer-in-Vision
Recent Transformer-based CV and related works.
kyegomez/MegaVIT
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
uakarsh/latr
Implementation of LaTr: Layout-aware transformer for scene-text VQA,a novel multimodal...