Abstract
In autonomous driving under low-light conditions, traditional visual sensors often suffer severe performance degradation. Millimeter-wave radar offers a more robust alternative owing to its insensitivity to ambient light and its all-weather capability. However, its point clouds are inherently sparse and noisy, which makes accurate detection challenging. This paper proposes a lightweight detection framework for millimeter-wave radar point clouds, designed specifically for low-light scenarios in V2X-enabled intelligent transportation systems. The method extracts spatial, scale, and motion statistical features and fuses them through a compact neural architecture named LightNetwork. Experiments on real-world datasets show that the proposed method achieves accuracy competitive with, and in some respects superior to, YOLOv8, particularly in center localization and height estimation, while using only 0.65K parameters and reaching an inference latency of 0.24 ms. Ablation studies confirm the contribution of each feature component. Owing to its compact design and high efficiency, the model is well suited for deployment on edge nodes in distributed V2X infrastructures, supporting real-time cooperative perception in next-generation networked applications.