anassinator/dqn-obstacle-avoidance
Deep Reinforcement Learning for Fixed-Wing Flight Control with Deep Q-Network
This project trains a deep Q-network (DQN) to control a simulated fixed-wing aircraft. Given simulated flight conditions and an obstacle environment as input, the agent learns to navigate to a target waypoint while avoiding both stationary and moving obstacles; the output is a trained network that controls the aircraft's flight path. It is aimed at aerospace engineers and AI researchers developing and testing autonomous flight systems, such as smarter drone navigation or air traffic control.
No commits in the last 6 months.
Use this if you are exploring reinforcement-learning approaches for teaching a simulated fixed-wing aircraft to navigate autonomously and avoid obstacles.
Not ideal if you need real-world aircraft control, quadcopter support, or other types of autonomous vehicles.
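The core idea of the approach above can be sketched in a few lines: a Q-network maps a flight state to a value per discrete control action, and actions are chosen epsilon-greedily during training. This is a minimal illustrative sketch, not the repo's actual code; the state features, action set, and network sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 6   # hypothetical state features: position, heading, obstacle ranges, etc.
N_ACTIONS = 3   # hypothetical discrete controls: bank left, hold course, bank right

# Tiny two-layer Q-network with random weights (the real project trains a deeper net).
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 32))
W2 = rng.normal(scale=0.1, size=(32, N_ACTIONS))

def q_values(state):
    """Forward pass: map a state vector to one Q-value per action."""
    hidden = np.maximum(0.0, state @ W1)  # ReLU hidden layer
    return hidden @ W2

def epsilon_greedy(state, epsilon=0.1):
    """Explore with probability epsilon, otherwise take the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

# One-step TD target, the quantity DQN training regresses the network toward.
def td_target(reward, next_state, gamma=0.99):
    return reward + gamma * float(np.max(q_values(next_state)))

state = rng.normal(size=STATE_DIM)
action = epsilon_greedy(state)
```

In a full DQN the network weights are updated by gradient descent on the squared error between `q_values(state)[action]` and `td_target(...)`, typically with an experience-replay buffer and a slowly updated target network for stability.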
Stars
76
Forks
20
Language
Python
License
MIT
Last pushed
Dec 07, 2016
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/anassinator/dqn-obstacle-avoidance"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
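The same endpoint can be called from Python instead of curl. The sketch below only builds the request URL shown above; the response schema is not documented here, so fetching and parsing are left as a comment rather than assumed.

```python
from urllib.parse import quote

# Base endpoint for the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL, URL-encoding each path segment."""
    return f"{BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("anassinator", "dqn-obstacle-avoidance")
# To fetch: urllib.request.urlopen(url).read() -- JSON fields are not specified here.
```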
Higher-rated alternatives
LucasAlegre/sumo-rl
Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with...
hilo-mpc/hilo-mpc
HILO-MPC is a Python toolbox for easy, flexible and fast development of...
reiniscimurs/DRL-robot-navigation
Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin...
kyegomez/RoboCAT
Implementation of Deepmind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation"...
cbfinn/gps
Guided Policy Search