Easonyesheng/A2PM-MESA
[CVPR'24 & TPAMI'26] Area to Point Matching Framework
This framework matches two images by first identifying corresponding areas and then refining them into point correspondences, remaining robust under large changes in viewpoint, scale, or resolution. The output is a detailed mapping showing how parts of one image relate to the other, which is useful for scientists, engineers, and researchers who need to compare or align imagery.
Use this if you need to precisely match features between images that have significant differences in scale, resolution, or viewing angle, for tasks like image stitching, 3D reconstruction, or change detection.
Not ideal if you are working with images that are already perfectly aligned or if you only need a rough, qualitative comparison.
Stars
157
Forks
17
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Jan 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Easonyesheng/A2PM-MESA"
Open to everyone: 100 requests/day with no key needed, or 1,000/day with a free key.
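The curl call above can also be wrapped in a small Python helper. This is a minimal sketch, not an official client: the endpoint path is taken from the page, but the JSON field names in the response are undocumented here, so the code only builds the URL and decodes whatever JSON comes back.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key).

    The structure of the returned dict is an assumption; inspect it
    before relying on specific field names.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a network request):
#   data = fetch_quality("computer-vision", "Easonyesheng", "A2PM-MESA")
#   print(data)
```

Keeping the URL construction separate from the network call makes the helper easy to test offline.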
Related tools
drprojects/superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D...
yuxumin/PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
charlesq34/frustum-pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
drprojects/DeepViewAgg
[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in...
facebookresearch/votenet
Deep Hough Voting for 3D Object Detection in Point Clouds