le-liang/Multimodal-Wireless
Python scripts and assets for the Multimodal-Wireless dataset. The dataset itself can be found at
This toolkit helps researchers in autonomous systems and wireless communication generate or re-create complex, realistic simulation data. It takes configuration files that define driving scenarios and produces synchronized sensor data (such as camera feeds) together with wireless channel information, useful for training and testing AI models in diverse environments. It is aimed at engineers and scientists researching self-driving cars, drone communication, or other wireless-enabled autonomous technologies.
Use this if you need to create or replay highly realistic, multi-modal datasets that combine visual scene information with detailed wireless channel characteristics for autonomous vehicle or robotic simulations.
Not ideal if you are looking for a simple plug-and-play dataset without needing to set up complex simulation environments or if your research does not involve both visual and wireless data.
Stars: 18
Forks: —
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Jan 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/le-liang/Multimodal-Wireless"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
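The same endpoint can be called from Python with only the standard library. This is a minimal sketch: the URL structure is taken from the curl command above, but the response schema is not documented here, so decoding the payload as JSON (and the `fetch_quality` helper name) are assumptions.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL: /api/v1/quality/<category>/<owner>/<repo>
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Fetch the quality record; assumes the API returns a JSON object.
    # Anonymous access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (matches the curl command above):
url = quality_url("ml-frameworks", "le-liang", "Multimodal-Wireless")
```

Calling `fetch_quality("ml-frameworks", "le-liang", "Multimodal-Wireless")` performs the same request as the curl example; inspect the returned dict to see which fields the API actually provides.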
Higher-rated alternatives
open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet: Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis: Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce: Unsupervised video summarization with deep reinforcement learning (AAAI'18)