ML for Rapid Network Reconfiguration: Radar Detection using Open RAN and Multimodal Fusion for Vehicular mmWave Beamforming

This two-part talk explores recent advances in using machine learning (ML) for enhanced perception and actuation in reconfigurable networks. In the first part, we discuss how multimodal sensor data, such as LiDAR point clouds and camera images, can be fused to speed up mmWave beamforming in dynamic environments where exhaustive beam search becomes impractical. We describe approaches that generalize to unseen scenarios, including transfer and meta-learning on data generated from digital twins; these approaches accurately predict the optimal beam on datasets collected from a real-world, sensor-equipped autonomous vehicle.

In the second part of the talk, we consider the problem of cellular-radar coexistence in shared spectrum bands. Using datasets collected over the air, we show how object detection models such as You Only Look Once (YOLO) can be used to detect radar pulses within overlay LTE or 5G waveforms. Finally, we share our experience integrating this radar detection model into an Open RAN-compliant network stack, allowing the base station to double as a spectrum sensor with sub-second sensing performance. This may open up new avenues for spectrum sensing and sharing using existing deployed infrastructure.
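To make the first part concrete, the sketch below shows one common late-fusion pattern for beam prediction: a small CNN encoder per modality, concatenated features, and a classifier over a fixed beam codebook. The architecture, input shapes, and 64-beam codebook are illustrative assumptions, not the specific models presented in the talk.

    import torch
    import torch.nn as nn

    class BeamPredictor(nn.Module):
        """Late-fusion sketch: per-modality CNN encoders feed a shared
        classifier that scores every beam in a fixed codebook."""

        def __init__(self, num_beams: int = 64):
            super().__init__()
            # Camera branch: RGB frame, 3 x 224 x 224 assumed.
            self.cam = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # LiDAR branch: bird's-eye-view grid with occupancy and
            # max-height channels, 2 x 128 x 128 assumed.
            self.lidar = nn.Sequential(
                nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # Fusion head: concatenated 32 + 32 features -> beam logits.
            self.head = nn.Sequential(
                nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, num_beams))

        def forward(self, cam_img, lidar_bev):
            fused = torch.cat([self.cam(cam_img), self.lidar(lidar_bev)], dim=1)
            return self.head(fused)  # logits over the beam codebook

    model = BeamPredictor(num_beams=64)
    logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 2, 128, 128))
    best_beam = logits.argmax(dim=1)  # replaces an exhaustive beam sweep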
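The digital-twin-to-real transfer step can be approximated with a standard fine-tuning recipe: pretrain on synthetic twin data, then adapt only the fusion head on the much smaller real-vehicle dataset. The checkpoint name and frozen-branch choice below are assumptions for illustration, not the speakers' exact recipe; a meta-learning variant would instead optimize the pretraining loop for fast adaptation.

    # Pretrain BeamPredictor on digital-twin data (not shown), then adapt
    # to the real-world vehicle dataset by fine-tuning only the fusion head.
    model = BeamPredictor(num_beams=64)
    model.load_state_dict(torch.load("twin_pretrained.pt"))  # placeholder checkpoint
    for branch in (model.cam, model.lidar):
        for p in branch.parameters():
            p.requires_grad = False  # keep twin-learned features fixed
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)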
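For the second part, radar detection reduces to object detection on spectrogram images of the captured waveform. A minimal sketch using the Ultralytics YOLO API follows; the weight file, image path, and pixel-to-time/frequency scales are placeholders, since the talk's trained model and capture parameters are not reproduced here.

    from ultralytics import YOLO  # pip install ultralytics

    # Placeholder weights: a YOLO model fine-tuned on spectrograms of
    # LTE/5G captures annotated with radar-pulse bounding boxes.
    model = YOLO("radar_pulse_yolo.pt")
    results = model("iq_capture_spectrogram.png", conf=0.5)

    # Detections come back in pixel coordinates; map them to time and
    # frequency with the spectrogram resolution (assumed values below).
    SEC_PER_PX, HZ_PER_PX = 10e-6, 30e3
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"radar pulse: t = {x1 * SEC_PER_PX:.2e}-{x2 * SEC_PER_PX:.2e} s, "
              f"f offset = {y1 * HZ_PER_PX:.2e}-{y2 * HZ_PER_PX:.2e} Hz "
              f"(confidence {float(box.conf):.2f})")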
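Finally, a hedged sketch of how such a detector might run inside the RAN as a sensing loop: IQ samples from the base station are rendered as a spectrogram image and passed to the detector, with end-to-end latency logged against the sub-second target. The O-RAN plumbing (how an xApp or dApp actually receives IQ samples) is platform-specific and stubbed out here.

    import time
    import numpy as np
    from scipy.signal import stft

    def sensing_cycle(iq: np.ndarray, fs: float, detector):
        """One sensing cycle: IQ -> dB spectrogram image -> detector.
        `detector` is the YOLO model above; delivery of the IQ stream
        from the RAN is integration-specific and not shown."""
        _, _, z = stft(iq, fs=fs, nperseg=256, return_onesided=False)
        db = 10 * np.log10(np.abs(z) ** 2 + 1e-12)
        # Normalize to an 8-bit, 3-channel image as YOLO expects.
        img = np.uint8(255 * (db - db.min()) / (np.ptp(db) + 1e-12))
        frame = np.stack([img] * 3, axis=-1)
        start = time.perf_counter()
        results = detector(frame, verbose=False)
        print(f"sensing latency: {(time.perf_counter() - start) * 1e3:.1f} ms")
        return results[0].boxes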