LexaNagiBator228/Keypoints-Tracking-via-Transformer-Networks
Keypoints Tracking via Transformer Networks
This project tracks specific points or features across images, even when the images are taken from different viewpoints or under different lighting. Given two images as input, it identifies corresponding keypoints, either to match the images or to follow the movement of specific points between them. This is useful for computer vision researchers and engineers working on tasks such as object tracking, image stitching, or augmented reality.
No commits in the last 6 months.
Use this if you need to precisely track sparse keypoints between images, particularly in scenarios with significant changes in viewpoint or illumination.
Not ideal if your primary need is general object detection or semantic segmentation, rather than specific point tracking.
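To illustrate the kind of sparse keypoint matching this repository addresses, here is a minimal sketch of mutual nearest-neighbor matching between two sets of keypoint descriptors. This is a common classical baseline for establishing correspondences, not the repository's transformer-based method, and the function name is illustrative:

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Return index pairs (i, j) where descriptor i in image A and
    descriptor j in image B are each other's nearest neighbour."""
    # Pairwise squared Euclidean distances between the two descriptor sets.
    d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    nn_ab = d.argmin(axis=1)  # best match in B for each descriptor in A
    nn_ba = d.argmin(axis=0)  # best match in A for each descriptor in B
    # Keep only mutual (cycle-consistent) matches.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

Transformer-based matchers replace this fixed distance-based rule with learned attention between the two descriptor sets, which is what makes them more robust to viewpoint and illumination changes.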
Stars
15
Forks
1
Language
Python
License
—
Category
—
Last pushed
Mar 25, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/LexaNagiBator228/Keypoints-Tracking-via-Transformer-Networks"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
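The same endpoint can be called from Python. The sketch below only builds the URL from the path layout visible in the sample curl command; the fetch itself is left commented out, and the JSON response format is an assumption:

```python
import json
import urllib.request

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the API URL for a repository (path layout from the sample curl)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "LexaNagiBator228",
                  "Keypoints-Tracking-via-Transformer-Networks")

# Uncomment to fetch (assumes the endpoint returns JSON):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```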
Higher-rated alternatives
roboflow/rf-detr
[ICLR 2026] RF-DETR is a real-time object detection and segmentation model architecture...
stereolabs/zed-sdk
⚡️The spatial perception framework for rapidly building smart robots and spaces
mikel-brostrom/boxmot
BoxMOT: Pluggable SOTA multi-object tracking modules with support for axis-aligned and oriented...
RizwanMunawar/yolov7-object-tracking
YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking
google-deepmind/tapnet
Tracking Any Point (TAP)