microsoft/event-vae-rl

Visuomotor policies from event-based cameras through representation learning and reinforcement learning. Accompanies our paper: https://arxiv.org/abs/2103.00806

Quality score: 42 / 100 (Emerging)

This project supports the development of autonomous systems, such as drones or robots, that navigate complex environments using event-based cameras. It takes raw event-stream data from these specialized cameras and learns compact representations, which are then used to train a reinforcement-learning policy for tasks like obstacle avoidance. Typical users are robotics researchers and engineers building next-generation autonomous vehicles.

No commits in the last 6 months.

Use this if you are developing visuomotor policies for autonomous systems using event-based cameras and need to process raw event data for tasks like obstacle avoidance.

Not ideal if your autonomous system uses traditional frame-based cameras or if you are not working with reinforcement learning for control.

Tags: robotics · autonomous-navigation · event-cameras · drone-control · sensor-fusion
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 59
Forks: 15
Language: Python
License: MIT
Last pushed: Aug 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/microsoft/event-vae-rl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
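For scripted access, the endpoint above follows a `{category}/{owner}/{repo}` path pattern. A minimal Python sketch, assuming only that pattern and the base URL shown in the curl command (the shape of the JSON response is not documented here and is not assumed):

```python
# Sketch: building and fetching the quality-report URL for a repository.
# Only the base URL and path pattern come from the curl example above;
# nothing about the response schema is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API endpoint for a repository's quality report."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality report and decode it as JSON (free tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


url = quality_url("computer-vision", "microsoft", "event-vae-rl")
```

With a free API key (1,000 requests/day), you would typically pass it as a header or query parameter; check the service's documentation for the exact mechanism.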