Sid2697/HOI-Ref
Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision".
This project lets researchers and developers automatically identify hands and the specific objects they interact with in first-person (egocentric) video. Given video frames captured from the wearer's perspective, it localizes which hand is interacting with which object. It is aimed at work in human-behavior analysis, robotics, and augmented reality.
No commits in the last 6 months.
Use this if you need to precisely track and label hand-object interactions within egocentric video streams.
Not ideal if you are looking for a simple plug-and-play solution without any technical setup or if your videos are not captured from a first-person perspective.
Stars: 29
Forks: 3
Language: Python
License: BSD-3-Clause
Category: computer-vision
Last pushed: Apr 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Sid2697/HOI-Ref"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
col14m/cadrille
[ICLR2026] cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning
filaPro/cad-recode
[ICCV2025] CAD-Recode: Reverse Engineering CAD Code from Point Clouds
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
worldbench/3EED
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
cambrian-mllm/cambrian-s
Cambrian-S: Towards Spatial Supersensing in Video