masashi-hatano/EgoH4
Official code release for "The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation"
This project helps researchers in computer vision and robotics analyze human motion by predicting future 3D hand poses from an egocentric view of the body. It takes video or sensor data depicting a person's body movements and outputs the future 3D coordinates of their hands. It is designed for academics and engineers developing applications such as human-computer interaction, virtual reality, or action recognition.
No commits in the last 6 months.
Use this if you need to anticipate hand movements in 3D space based on observed body posture, particularly from a first-person perspective.
Not ideal if you're looking for a tool to track hands in real-time from an external viewpoint or for general-purpose object detection.
Stars: 30
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Aug 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/masashi-hatano/EgoH4"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
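The curl command above can also be scripted. A minimal Python sketch, assuming only the URL shape shown in that command and that the endpoint returns JSON (the response schema is not documented here, so the payload is passed through untouched):

```python
import json
import urllib.parse
import urllib.request

# Base path taken from the curl example above; the trailing
# owner/repo segments are the only documented parameters.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given repository."""
    return f"{BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality data for a repo.

    JSON is an assumption: the page does not state the response
    format, so adjust the decoding if the API returns something else.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the curl example for this repository.
    print(quality_url("masashi-hatano", "EgoH4"))
```

Without an API key this is subject to the 100 requests/day limit mentioned above, so cache responses rather than polling.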
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators