autonomousvision/transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving

Score: 55/100 (Established)

This project helps self-driving car developers and researchers create and test models for autonomous vehicles. It takes in raw sensor data such as camera images, depth maps, LiDAR point clouds, and semantic segmentation maps, and processes them to output driving commands for a simulated car. The primary users are engineers and scientists working on perception and control systems for autonomous driving.


Use this if you are developing or evaluating end-to-end autonomous driving systems and need a robust framework for sensor fusion and imitation learning.

Not ideal if you are looking for a plug-and-play solution for physical self-driving cars, as this is a research framework for simulated environments.

autonomous-driving robotics-perception self-driving-research sensor-fusion imitation-learning
No package. No dependents.

Maintenance: 6/25
Adoption: 10/25
Maturity: 16/25
Community: 23/25


Stars: 1,516
Forks: 233
Language: Python
License: MIT
Last pushed: Oct 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/autonomousvision/transfuser"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
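The same endpoint can be queried from a script instead of curl. A minimal sketch in Python using only the standard library; the URL structure is taken from the curl example above, while the response schema (field names such as stars or scores) is an assumption and should be checked against a live response:

```python
import json
import urllib.request

# Base path copied from the curl example shown above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def build_quality_url(owner: str, repo: str) -> str:
    """Construct the quality endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (requires network access).

    The exact keys in the returned JSON are not documented here;
    inspect the response before relying on specific fields.
    """
    with urllib.request.urlopen(build_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for this repository; call fetch_quality() to hit the API.
    print(build_quality_url("autonomousvision", "transfuser"))
```

If you use the free key mentioned above, it would presumably be passed as a header or query parameter; consult the API docs for the exact mechanism before adding it to the request.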