NVIDIA-ISAAC-ROS/isaac_ros_freespace_segmentation
NVIDIA-accelerated, deep-learned freespace segmentation
This package helps mobile robots understand their surroundings and avoid obstacles by identifying open areas for navigation. It takes real-time left and right stereo camera images, together with the robot's pose relative to the ground, and outputs an occupancy grid that tells the robot where it can safely move, enabling robust, vision-based obstacle avoidance. Its intended users are robotics engineers and autonomous-system developers.
Use this if you need a real-time, vision-based system to detect walkable areas and obstacles for ground-based mobile robots, especially as a complement to existing lidar systems.
Not ideal if your robot does not have stereo cameras or if you primarily rely on non-visual sensors for obstacle detection.
Stars
43
Forks
4
Language
C++
License
Apache-2.0
Category
Computer Vision
Last pushed
Dec 11, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/NVIDIA-ISAAC-ROS/isaac_ros_freespace_segmentation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
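For scripted use, the same endpoint can be called from Python. This is a minimal sketch assuming only the URL shape shown in the curl command above; the response schema is not documented here, so the example just prints the raw JSON payload.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

if __name__ == "__main__":
    url = quality_url("computer-vision", "NVIDIA-ISAAC-ROS",
                      "isaac_ros_freespace_segmentation")
    # Response fields are undocumented, so just pretty-print whatever comes back.
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))
```

Unauthenticated calls are limited to 100 requests/day, so cache responses if you poll many repositories.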
Higher-rated alternatives
roboflow/rf-detr
[ICLR 2026] RF-DETR is a real-time object detection and segmentation model architecture...
stereolabs/zed-sdk
⚡️The spatial perception framework for rapidly building smart robots and spaces
mikel-brostrom/boxmot
BoxMOT: Pluggable SOTA multi-object tracking modules with support for axis-aligned and oriented...
RizwanMunawar/yolov7-object-tracking
YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking
google-deepmind/tapnet
Tracking Any Point (TAP)