anassinator/dqn-obstacle-avoidance

Deep Reinforcement Learning for Fixed-Wing Flight Control with Deep Q-Network

Score: 44 / 100 (Emerging)

This project helps aerospace engineers and AI researchers develop and test autonomous flight systems for fixed-wing aircraft. Given simulated flight conditions and obstacle environments as input, it trains an agent to navigate the aircraft to a target waypoint while avoiding both stationary and moving obstacles. The output is a trained deep Q-network that controls the aircraft's flight path, making it a useful starting point for smarter drone navigation or air-traffic-control research.

No commits in the last 6 months.

Use this if you are exploring reinforcement learning approaches to teach simulated fixed-wing aircraft how to autonomously navigate and avoid obstacles.

Not ideal if you need a solution for real-world aircraft control, quadcopters, or other types of autonomous vehicles.
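For context on what "trains a deep Q-network" means here, the core training loop pairs epsilon-greedy exploration with a one-step Bellman backup. The sketch below is a minimal, hypothetical illustration (not code from this repository): it uses a linear Q-function as a stand-in for the neural network, and a made-up 2-D state (offset to the waypoint) with three steering actions.

```python
import random
import numpy as np

# Hypothetical toy setup: state = (dx, dy) offset to the waypoint,
# actions = turn left / fly straight / turn right.
N_ACTIONS = 3
STATE_DIM = 2
GAMMA = 0.95    # discount factor
ALPHA = 0.01    # learning rate
EPSILON = 0.1   # exploration probability

rng = np.random.default_rng(0)
# Linear Q-function as a stand-in for the deep network: Q(s, a) = W[a] . s
W = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))

def q_values(state):
    """Q-value estimates for every action in the given state."""
    return W @ state

def select_action(state):
    """Epsilon-greedy action selection, as in standard DQN training."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_values(state)))

def td_update(state, action, reward, next_state, done):
    """One-step Bellman backup: target = r + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    W[action] += ALPHA * td_error * state  # gradient step for the linear model
    return td_error
```

A full DQN, as in the actual project, would replace the linear model with a neural network, sample transitions from a replay buffer, and use a separate target network for the Bellman target; the update rule above is the piece they all share.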

aerospace-engineering autonomous-flight drone-navigation simulation-training flight-control
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 76
Forks: 20
Language: Python
License: MIT
Last pushed: Dec 07, 2016
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/anassinator/dqn-obstacle-avoidance"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
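The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so only the URL construction is taken from the curl example above):

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    # Build the endpoint URL shown in the curl example above.
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection, owner, repo, timeout=10):
    # Fetch and decode the quality report. JSON is an assumption;
    # adjust parsing to whatever the API actually returns.
    with urlopen(quality_url(collection, owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("ml-frameworks", "anassinator", "dqn-obstacle-avoidance")` hits the same URL as the curl command.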