learn-to-race/l2r
Open-source reinforcement learning environment for autonomous racing — featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the L2R core libraries.
This project helps automotive engineers and AI researchers develop and test self-driving car algorithms in a realistic virtual racing environment. You provide control algorithms and sensor configurations, and it simulates how an autonomous race car performs on various tracks, including unseen ones. The output is a performance evaluation of your agent, showing how well it learns to race and generalizes its driving skills. This is for teams and individuals focused on autonomous vehicle development and reinforcement learning research.
174 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or benchmarking AI agents for autonomous racing and need a high-fidelity, customizable simulation environment that supports multimodal sensor inputs.
Not ideal if you are looking for a simple, low-resource simulator or a tool for general-purpose robotic control outside of racing scenarios.
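The agent-evaluation loop described above follows the familiar reset/step pattern. Since the real l2r environment requires the Arrival racing simulator to run, the sketch below uses a stub environment; the class and method names are illustrative, not the actual l2r API.

```python
# Hypothetical sketch of a gym-style interaction loop like the one l2r
# exposes. StubRacingEnv is a stand-in: the real environment needs the
# Arrival simulator and returns multimodal sensor observations.
import random


class StubRacingEnv:
    """Minimal stand-in with a reset/step interface."""

    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        # Illustrative multimodal observation: pose plus a camera slot.
        return {"pose": [0.0, 0.0], "camera": None}

    def step(self, action):
        # action: (steering, acceleration), each in [-1, 1]
        self.t += 1
        obs = {"pose": [self.t * 0.1, 0.0], "camera": None}
        reward = 1.0 - abs(action[0])  # toy reward: penalize steering
        done = self.t >= self.max_steps
        return obs, reward, done, {}


def run_episode(env, policy):
    """Roll out one episode and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total


random_policy = lambda obs: (random.uniform(-1, 1), random.uniform(-1, 1))
print(run_episode(StubRacingEnv(), random_policy))
```

Swapping the stub for the real environment keeps `run_episode` unchanged, which is the point of the gym-style interface: the agent code is decoupled from the simulator backend.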
Stars: 174
Forks: 17
Language: Python
License: GPL-2.0
Category:
Last pushed: Dec 20, 2023
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/learn-to-race/l2r"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
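The same endpoint can be queried from Python instead of curl. The response schema is an assumption here (field names such as `stars` and `forks` are illustrative); inspect the actual payload before relying on any of them.

```python
# Fetch the repo-stats endpoint and read a few fields.
# NOTE: the JSON field names below ("stars", "forks") are assumed,
# not documented -- check the real response first.
import json
import urllib.request

API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/learn-to-race/l2r"
)


def fetch_stats(url=API_URL, timeout=10):
    """GET the endpoint and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


def summarize(stats):
    """Pull out a couple of headline numbers (assumed field names)."""
    return f"{stats.get('stars', '?')} stars, {stats.get('forks', '?')} forks"


# Usage (makes a network request):
#     print(summarize(fetch_stats()))
```

`summarize` works on any dict with those keys, e.g. `summarize({"stars": 174, "forks": 17})` returns `"174 stars, 17 forks"`.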
Higher-rated alternatives
microsoft/AirSim
Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI...
lgsvl/simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
microsoft/AirSim-NeurIPS2019-Drone-Racing
Drone Racing @ NeurIPS 2019, built on Microsoft AirSim
DeepTecher/AutonomousVehiclePaper
Digest of papers related to autonomous driving
salesforce/warp-drive
Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)