nianticspatial/ace-g
[ICCV 2025] ACE-G is an architecture and pre-training scheme to improve generalization for scene coordinate regression-based visual relocalization.
This project helps computer vision practitioners determine the precise position and orientation (pose) of a camera within a known environment by identifying points in images and matching them to 3D coordinates in a digital map. You provide a set of images of a space; the system builds a map from them, which can then be used to estimate the camera pose of new images of that space. It is aimed at robotics engineers, augmented reality developers, and anyone who needs highly precise camera tracking.
Use this if you need to precisely localize a camera or device within a pre-mapped indoor or outdoor scene using visual input, even when the scene changes or the visual conditions are challenging.
Not ideal if you need to localize a camera in an entirely new, unmapped environment without any prior visual data.
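The pipeline above rests on one core idea of scene coordinate regression: once pixels in a query image are matched to 3D scene coordinates, the camera pose follows from a Perspective-n-Point (PnP) solve. Below is a minimal NumPy sketch of that last step on synthetic, noise-free correspondences, using a plain Direct Linear Transform; this is an illustrative stand-in, not ACE-G's actual solver (systems like ACE pair the regressor with a robust RANSAC-based PnP instead).

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P (up to scale) from >= 6
    2D-3D correspondences via the Direct Linear Transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (last right singular vector) is P, flattened.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points into pixel coordinates with P."""
    h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = (P @ h.T).T
    return proj[:, :2] / proj[:, 2:3]

# Synthetic "scene coordinates": random 3D points in front of the camera.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))

# Ground-truth camera: intrinsics K, small yaw rotation R, translation t.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([0.2, -0.1, 0.5])
P_true = K @ np.hstack([R, t[:, None]])

# Recover the pose from the 2D-3D matches and check reprojection error.
pts2d = project(P_true, pts3d)
P_est = dlt_pnp(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()
print(f"max reprojection error: {err:.2e} px")
```

With noise-free correspondences the DLT recovers the projection exactly (up to scale), so the reprojection error is near machine precision; real pipelines face outliers in the predicted correspondences, which is why a robust solver is used in practice.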
Stars: 87
Forks: 5
Language: Python
License: —
Category: —
Last pushed: Feb 20, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/nianticspatial/ace-g"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
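For scripted access, the same endpoint can be called from Python. A small sketch follows; the response schema is not documented here, so the JSON is returned as-is, and the `X-API-Key` header name for keyed requests is an assumption, not something stated above.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL shown in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch quality data for one repository.

    api_key is optional (the keyless tier allows 100 requests/day).
    The 'X-API-Key' header name is a guess; check the API's docs.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("computer-vision", "nianticspatial", "ace-g"))
```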
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicles/robots
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...