Hon-Wong/Elysium
[ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM
This tool helps researchers and video analysts automatically identify and track specific objects in video footage. You supply a video plus either a natural-language description of the target (e.g., "the person in red") or the object's initial coordinates, and it outputs bounding-box coordinates for that object in every frame. It's designed for anyone who needs to automate object tracking or answer questions about objects across video frames.
No commits in the last 6 months.
Use this if you need to precisely track an object's location throughout a video, either by describing it in natural language or by providing its initial position.
Not ideal if you need to analyze static images or require detection of multiple, unspecified objects without prior identification.
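Since the model emits per-frame bounding boxes as text, downstream code has to parse them. Below is a minimal, hypothetical sketch (not part of the Elysium codebase): it assumes the model writes one bracketed [x1, y1, x2, y2] list per frame, which is a common MLLM output convention but an assumption here.

```python
import re

# Hypothetical helper, NOT Elysium's actual API: extract one
# [x1, y1, x2, y2] bounding box per frame from lines of model text.
# The bracketed-list output format is an assumption.
BOX_RE = re.compile(
    r"\[\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\]"
)

def parse_boxes(lines):
    """Return a (x1, y1, x2, y2) tuple per line, or None if no box found."""
    boxes = []
    for line in lines:
        m = BOX_RE.search(line)
        boxes.append(tuple(float(v) for v in m.groups()) if m else None)
    return boxes

print(parse_boxes(["frame 0: [0.10, 0.20, 0.45, 0.80]", "frame 1: no box"]))
```

Frames where the model declines to localize the object come back as None, so trackers can interpolate or drop them explicitly.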
Stars: 86
Forks: 4
Language: Python
License: —
Category: Computer vision
Last pushed: Oct 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Hon-Wong/Elysium"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
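For scripted access, the same endpoint can be called from Python. Only the /api/v1/quality/{category}/{owner}/{repo} path shown in the curl example above is taken from this page; the helper name and the response schema are assumptions.

```python
from urllib.parse import quote

# Base path taken from the curl example above; the helper itself is
# a hypothetical convenience, not an official client.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-endpoint URL for a repository."""
    return "/".join([BASE, quote(category), quote(owner), quote(repo)])

print(quality_url("computer-vision", "Hon-Wong", "Elysium"))

# To actually fetch (requires network; the response schema is not
# documented on this page):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(quality_url("computer-vision", "Hon-Wong", "Elysium")))
```

Without a key this is limited to 100 requests/day, so batch jobs should cache responses or register for a free key.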
Higher-rated alternatives
col14m/cadrille
[ICLR2026] cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning
filaPro/cad-recode
[ICCV2025] CAD-Recode: Reverse Engineering CAD Code from Point Clouds
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
worldbench/3EED
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
cambrian-mllm/cambrian-s
Cambrian-S: Towards Spatial Supersensing in Video