microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement learning. Accompanies our paper: https://arxiv.org/abs/2103.00806
This project targets autonomous systems, such as drones or mobile robots, that must navigate complex environments using event-based cameras. It learns compact representations from raw event-stream data and uses them to train reinforcement-learning policies for tasks such as obstacle avoidance. Robotics researchers and engineers developing next-generation autonomous vehicles are the intended users.
No commits in the last 6 months.
Use this if you are developing visuomotor policies for autonomous systems using event-based cameras and need to process raw event data for tasks like obstacle avoidance.
Not ideal if your autonomous system uses traditional frame-based cameras or if you are not working with reinforcement learning for control.
Stars: 59
Forks: 15
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Aug 14, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/microsoft/event-vae-rl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
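The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the endpoint returns a JSON object (the response field names are not documented here, so inspect the payload before relying on specific keys):

```python
import json
import urllib.request

# Base of the pt-edge quality API shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality record."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# No API key is needed for up to 100 requests/day.
url = quality_url("computer-vision", "microsoft", "event-vae-rl")
```

For higher volume (1,000 requests/day), a free key would be passed with the request; the header name is not specified on this page.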
Higher-rated alternatives
andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object...
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
Ewenwan/MVision
Robot vision, mobile robots, VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, OpenCV, PCL, machine learning, autonomous driving
leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and...
RizwanMunawar/trajectory-forcast
Forecast object trajectory based on history of tracks. Provides a stable and computationally...