DivitMittal/CARLA-Autonomous-Driving

Using the high-fidelity CARLA vehicle simulator and deep semantic segmentation, data from RGB cameras and LiDAR sensors are fused to achieve comprehensive environmental awareness.

Score: 37 / 100 (Emerging)

This system helps autonomous vehicle researchers and developers design and test self-driving algorithms in a realistic virtual environment. It takes data from simulated cameras and LiDAR sensors and uses deep learning to understand the vehicle's surroundings. The output is a robust perception system that identifies objects, lane markings, and environmental features, ready for integration into autonomous driving systems.

Use this if you are developing or researching autonomous driving systems and need a high-fidelity simulator to test perception, planning, and control algorithms using multi-modal sensor data.

Not ideal if you are looking for a pre-trained, production-ready autonomous driving system for real-world deployment without further development.
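The camera/LiDAR fusion described above boils down to projecting 3D LiDAR points into the 2D camera image so the two modalities can be associated per pixel. Below is a minimal, hedged sketch of that projection step; the intrinsic matrix `K`, image size, and the LiDAR-to-camera extrinsic transform are illustrative values, not taken from this repository.

```python
# Sketch: project LiDAR points into an RGB camera image (pinhole model).
# All calibration values here are assumptions for illustration only.
import numpy as np

def project_lidar_to_image(points_lidar, K, T_cam_from_lidar):
    """Project (N, 3) LiDAR-frame points to pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    T_cam_from_lidar: (4, 4) homogeneous LiDAR-to-camera transform.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]          # points in camera frame
    cam = cam[cam[:, 2] > 0]                             # keep points in front (z > 0)
    pix = (K @ cam.T).T                                  # perspective projection
    return pix[:, :2] / pix[:, 2:3]                      # divide by depth

# Assumed 800x600 pinhole camera, identity extrinsics for simplicity.
K = np.array([[400.0,   0.0, 400.0],
              [  0.0, 400.0, 300.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> image centre (400, 300)
                [1.0, 0.0, 10.0]])   # 1 m to the side -> shifted right
print(project_lidar_to_image(pts, K, T))
```

In CARLA itself the extrinsics would come from the relative transforms of the spawned `sensor.camera.rgb` and `sensor.lidar.ray_cast` actors, but the projection math is the same.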

autonomous-vehicles self-driving-car-development robotics-simulation sensor-fusion perception-systems
No package · No dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 19
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/DivitMittal/CARLA-Autonomous-Driving"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
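The same endpoint shown in the curl command can be called from Python. A minimal sketch, assuming only the URL pattern visible above; the JSON response shape is not documented here, so the request itself is left commented out.

```python
# Sketch: build and (optionally) fetch the quality-score endpoint shown above.
# The URL pattern is taken from the curl example; the response fields are unknown.
import json
import urllib.request

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the pt-edge quality endpoint URL for a GitHub repository."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "DivitMittal", "CARLA-Autonomous-Driving")
print(url)

# Uncomment to perform the request (no API key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```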