UT-Austin-RPL/PRELUDE
Official codebase for PRELUDE (Perceptive Locomotion Under Dynamic Environments)
This project helps roboticists build robust navigation and walking behaviors for quadruped robots operating in cluttered, dynamic environments. You provide human-controlled demonstrations or existing datasets pairing a robot's visual input with movement commands; the system outputs trained navigation and gait controllers that let the robot perceive its surroundings, traverse complex terrain, and avoid obstacles autonomously. It is intended for robotics researchers and engineers working on autonomous quadruped locomotion.
No commits in the last 6 months.
Use this if you need to develop highly agile and perceptive quadruped robots capable of navigating unpredictable real-world environments with moving obstacles.
Not ideal if you are working with wheeled robots, static environments, or if your primary focus is on fine-tuned motor control rather than high-level perception and navigation.
Stars: 80
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Aug 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UT-Austin-RPL/PRELUDE"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LucasAlegre/sumo-rl
Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with...
hilo-mpc/hilo-mpc
HILO-MPC is a Python toolbox for easy, flexible and fast development of...
reiniscimurs/DRL-robot-navigation
Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin...
kyegomez/RoboCAT
Implementation of Deepmind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation"...
cbfinn/gps
Guided Policy Search