sparisi/pvr_habitat
Pre-Trained Visual Representations for Control
This project trains robot control policies by imitation learning from demonstrated examples. It takes raw robot trajectory data, encodes the visual observations with pre-trained vision models, and outputs a trained control policy. It is aimed at researchers and engineers working on robot learning and imitation.
No commits in the last 6 months.
Use this if you are a robotics researcher working on behavioral cloning and want to leverage the power of pre-trained vision models to speed up robot learning from demonstrations.
Not ideal if you are looking for a general-purpose robotics simulation environment or a tool for designing robot hardware.
Stars: 21
Forks: 1
Language: Python
License: —
Category:
Last pushed: May 26, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/sparisi/pvr_habitat"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
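The same endpoint can be called programmatically. The sketch below builds the URL shown in the curl example and fetches it with the standard library; the helper names (`build_quality_url`, `fetch_quality`) are illustrative, and the response is assumed to be JSON (the API's actual schema is not documented here).

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, repo_slug: str) -> str:
    # Assemble the per-repo endpoint, e.g.
    # .../quality/computer-vision/sparisi/pvr_habitat
    return f"{API_BASE}/{category}/{repo_slug}"


def fetch_quality(category: str, repo_slug: str, timeout: float = 10.0) -> dict:
    # Fetch the repo's quality data; the JSON response shape is an assumption.
    with urlopen(build_quality_url(category, repo_slug), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_quality_url("computer-vision", "sparisi/pvr_habitat"))
```

No key is required at the 100 requests/day tier, so the request above carries no authentication header.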
Higher-rated alternatives
andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object...
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
Ewenwan/MVision
Robot vision, mobile robots, VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, opencv, PCL, machine learning, autonomous driving
leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and...
microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement...