Hon-Wong/Elysium

[ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM

Score: 24 / 100 (Experimental)

This tool helps researchers and video analysts automatically identify and track specific objects in video footage. Given a video plus either a text description (e.g., "the person in red") or an object's initial bounding box, it outputs that object's bounding-box coordinates in every frame. It's designed for anyone needing to automate object tracking or answer questions about objects across video frames.

No commits in the last 6 months.

Use this if you need to precisely track an object's location throughout a video, either by describing it in natural language or by providing its initial position.

Not ideal if you need to analyze static images or require detection of multiple, unspecified objects without prior identification.

video-analysis object-tracking computer-vision-research surveillance-analysis content-moderation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 86
Forks: 4
Language: Python
License: None
Last pushed: Oct 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Hon-Wong/Elysium"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
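The same endpoint can be called from a script. A minimal sketch in Python using only the standard library, assuming the API returns JSON (the response schema is not documented here, so `fetch_quality` just returns the parsed body as-is):

```python
import json
import urllib.request

# Base endpoint for the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo; assumes a JSON response."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example for this repository.
print(quality_url("computer-vision", "Hon-Wong", "Elysium"))
```

Without an API key this shares the anonymous 100 requests/day quota, so cache results rather than polling in a loop.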