LexaNagiBator228/Keypoints-Tracking-via-Transformer-Networks

Keypoints Tracking via Transformer Networks

Score: 19 / 100 (Experimental)

This project helps you accurately track specific points or features across different images, even when images are taken from varying perspectives or lighting conditions. It takes two images as input and identifies corresponding keypoints, either for matching the images or following the movement of specific points within them. This is useful for computer vision researchers or engineers working on tasks like object tracking, image stitching, or augmented reality.
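The core task is finding corresponding keypoints between two images. As a toy illustration of that matching contract (not this repository's transformer-based method, and with hypothetical descriptor data), a classical mutual-nearest-neighbour baseline looks like this:

```python
# Toy sketch of keypoint matching: keep only pairs of descriptors that are
# each other's nearest neighbour. The repo replaces this classical step with
# a transformer network, but the input/output shape of the problem is the same.
import math

def match_keypoints(desc_a, desc_b):
    """Return (i, j) index pairs where desc_a[i] and desc_b[j] are mutual nearest neighbours."""
    def nearest(query, pool):
        # Index of the descriptor in `pool` closest (Euclidean) to `query`.
        return min(range(len(pool)), key=lambda k: math.dist(query, pool[k]))

    matches = []
    for i, d in enumerate(desc_a):
        j = nearest(d, desc_b)
        if nearest(desc_b[j], desc_a) == i:  # mutual check rejects ambiguous pairs
            matches.append((i, j))
    return matches

# Two tiny hand-made descriptor sets: points 0<->0 and 1<->2 correspond,
# while a[2] has no reliable partner and is dropped by the mutual check.
a = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
b = [[0.1, 0.0], [9.0, 9.0], [1.1, 1.0]]
print(match_keypoints(a, b))  # → [(0, 0), (1, 2)]
```

In a real pipeline the descriptors would come from a feature extractor run on each image; the matcher's output indices are what downstream tracking or stitching consumes.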

No commits in the last 6 months.

Use this if you need to precisely track sparse keypoints between images, particularly in scenarios with significant changes in viewpoint or illumination.

Not ideal if your primary need is general object detection or semantic segmentation, rather than specific point tracking.

image-matching feature-tracking computer-vision robotics-perception visual-odometry
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 5 / 25


Stars: 15
Forks: 1
Language: Python
License: none
Last pushed: Mar 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/LexaNagiBator228/Keypoints-Tracking-via-Transformer-Networks"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
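The same endpoint can be called from Python instead of curl. A minimal sketch follows; the URL shape mirrors the curl example above, but the JSON field names in the response are not documented here, so the code just returns the parsed payload as-is:

```python
# Fetch repo quality data from the API shown above (URL pattern assumed from
# the curl example: /api/v1/quality/<collection>/<owner>/<repo>).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    """Build the API URL for one repository."""
    return f"{BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection, owner, repo, timeout=10):
    """GET the endpoint and decode the JSON body."""
    with urllib.request.urlopen(quality_url(collection, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("computer-vision", "LexaNagiBator228",
                  "Keypoints-Tracking-via-Transformer-Networks"))
# To actually fetch (network call, counts against the daily quota):
# data = fetch_quality("computer-vision", "LexaNagiBator228",
#                      "Keypoints-Tracking-via-Transformer-Networks")
```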