Pilhyeon/Learning-Action-Completeness-from-Points

Official PyTorch implementation of "Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization" (ICCV 2021 Oral)

Score: 44 / 100 (Emerging)

This project helps video analysis researchers precisely pinpoint the start and end times of specific actions within long video recordings. It takes in video features extracted from raw video files and a sparse set of labeled frames for each action instance. The output is a highly accurate temporal localization of actions, even rivaling methods that require more extensive, frame-by-frame annotation.

No commits in the last 6 months.

Use this if you need to identify the exact duration of actions in video with minimal manual labeling effort.

Not ideal if you lack the technical expertise to set up and run a deep learning model, or if your primary interest is in high-level video classification rather than precise temporal action localization.

Tags: video-analytics, activity-recognition, temporal-localization, sparse-labeling, computer-vision-research
Flags: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 88
Forks: 17
Language: Python
License: MIT
Last pushed: Sep 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Pilhyeon/Learning-Action-Completeness-from-Points"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
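For programmatic access, the same request can be built and its response parsed from Python. This is a minimal sketch: only the endpoint path comes from the curl command above; the response field names (`score`, `breakdown`) are assumptions, not a documented schema.

```python
import json
from urllib.parse import quote

# Base URL taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (path segments are URL-escaped)."""
    return f"{BASE_URL}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def parse_quality(payload: str) -> dict:
    """Pull the overall score and sub-scores out of a JSON payload.

    The "score" and "breakdown" keys are hypothetical; adjust to the
    fields the API actually returns.
    """
    data = json.loads(payload)
    return {"score": data.get("score"), "breakdown": data.get("breakdown", {})}

url = quality_url("ml-frameworks", "Pilhyeon", "Learning-Action-Completeness-from-Points")
print(url)
```

Fetching `url` with any HTTP client (e.g. `urllib.request.urlopen`) and passing the body to `parse_quality` would then yield the same numbers shown on this page, assuming the response exposes them under those keys.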