mrezaei92/TriHorn-Net
Official PyTorch implementation of TriHorn-Net
This project helps researchers and engineers analyze how hands are positioned and oriented in 3D space. Given a depth image of a hand, it predicts precise 3D coordinates for each joint, which is crucial for applications like human-computer interaction and robotics. The end users are typically computer vision researchers or developers building advanced interactive systems.
No commits in the last 6 months.
Use this if you need to accurately determine the 3D position of hand joints from depth sensor data for research or specialized applications.
Not ideal if you are looking for a simple, off-the-shelf solution for general object detection or if your input data is standard 2D RGB images rather than depth images.
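Depth-based hand pose estimators of this kind typically predict each joint in image space (u, v) plus a depth value z; recovering metric 3D coordinates is then a standard pinhole back-projection. A minimal sketch in NumPy (the camera intrinsics below are illustrative assumptions, not values from this repository — substitute your depth sensor's calibration):

```python
import numpy as np

def backproject_uvz(uvz, fx, fy, cx, cy):
    """Convert per-joint (u, v, z) predictions into camera-space (x, y, z)
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    fx, fy are focal lengths in pixels; cx, cy is the principal point."""
    uvz = np.asarray(uvz, dtype=np.float64)
    x = (uvz[:, 0] - cx) * uvz[:, 2] / fx
    y = (uvz[:, 1] - cy) * uvz[:, 2] / fy
    return np.stack([x, y, uvz[:, 2]], axis=1)

# Assumed intrinsics roughly in the range of common depth cameras.
joints_uvz = np.array([[160.0, 120.0, 400.0]])  # one joint, 400 mm from the camera
xyz = backproject_uvz(joints_uvz, fx=475.0, fy=475.0, cx=160.0, cy=120.0)
```

A joint predicted at the principal point maps to x = y = 0 at its measured depth, which is a quick sanity check for the calibration values.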
Stars: 83
Forks: 16
Language: Python
License: MIT
Category:
Last pushed: Nov 09, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mrezaei92/TriHorn-Net"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
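The endpoint follows an owner/repo pattern, so it is easy to build URLs for other repositories programmatically. A small sketch (the `ml-frameworks` path segment is taken from the example above; other categories may use a different segment):

```python
def quality_api_url(
    owner: str,
    repo: str,
    base: str = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks",
) -> str:
    """Build a quality-API URL matching the curl example: base/owner/repo."""
    return f"{base}/{owner}/{repo}"

url = quality_api_url("mrezaei92", "TriHorn-Net")
# Fetch with any HTTP client, e.g. urllib.request.urlopen(url).read()
```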
Higher-rated alternatives
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning.
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"