pairlab/SlotFormer

Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models

Score: 44 / 100 (Emerging)

This project helps machine learning researchers simulate and predict visual dynamics by decomposing complex scenes into individual objects. Given raw video footage, it outputs predictions of how objects will move and interact, along with answers to visual questions about the scene. It is aimed at researchers working on computer vision tasks such as video prediction and visual question answering.

120 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher focused on object-centric models for understanding and predicting how objects move and interact in videos without extensive supervision.

Not ideal if you need a plug-and-play solution for real-world video analysis or if you are not comfortable working with research-grade code and Slurm GPU clusters.

machine-learning-research computer-vision video-prediction visual-question-answering unsupervised-learning
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 120
Forks: 22
Language: Python
License: MIT
Last pushed: Sep 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/pairlab/SlotFormer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
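As a sketch of how the curl call above could be scripted, here is a minimal Python example that builds the same endpoint URL and fetches the report. The JSON field names in the response are not documented here, so the parsing is left generic; only the URL pattern is taken from the curl command above.

```python
# Minimal sketch: query the quality API for a repository.
# The URL pattern is taken from the curl example above; the response
# schema is an assumption -- inspect the real JSON before relying on fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the quality report and parse it as JSON (network call)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Matches the curl example for this repository:
url = quality_url("transformers", "pairlab", "SlotFormer")
```

No API key is required at the free tier (100 requests/day), so the request needs no authentication header.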