We present a novel fusion algorithm that enhances vehicular perception in Vehicle-to-Everything (V2X) networks. Traditional fusion methods face significant implementation challenges because these networks offer limited bandwidth for data transmission. Although intermediate-layer processing reduces overall message size, compatibility is often limited to vehicles operating under application- or platform-specific standardization. To address these transmission and implementation issues, we develop a system that uses raw-level positional telemetry data to generate global perception maps. We combine data pipelines and transformation matrices with Kalman filtering to produce a dynamic, unified representation of a multi-vehicle environment. Our algorithm increases detection precision by over 21 percent for the overall perception system and by over 59 percent on a median-per-grid basis across a range of detection scenes, network delays, and sample sizes. This mechanism permits the efficient transmission of essential information while preserving the integrity of perception and tracking processes. We improve upon existing methodologies in multi-vehicle environments and complex traffic scenarios by sustaining performance under challenging network conditions. Our work provides foundational support for cooperative perception, autonomous driving, and safety-critical maneuvering by enabling robust data fusion, which is essential for scaling vehicular autonomy and supporting network operations for ground-level V2X infrastructure.
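The core mechanism described above — transforming raw positional telemetry from multiple vehicles into a shared global frame and fusing it with a Kalman filter — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the constant-velocity motion model, the noise parameters, and the vehicle poses are all assumptions introduced for illustration.

```python
import numpy as np

def to_global(pos_local, R, t):
    """Transform a local (x, y) detection into the global frame (assumed poses)."""
    return R @ pos_local + t

class KalmanFilter2D:
    """Constant-velocity Kalman filter over state [x, y, vx, vy] (illustrative)."""
    def __init__(self, dt=0.1, q=0.05, r=0.5):
        self.x = np.zeros(4)                                  # state estimate
        self.P = np.eye(4)                                    # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                      # motion model
        self.H = np.eye(2, 4)                                 # observe position only
        self.Q = q * np.eye(4)                                # process noise
        self.R = r * np.eye(2)                                # measurement noise

    def step(self, z):
        # Predict forward one time step.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with a global-frame position measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Two vehicles observe the same object in their own coordinate frames; each
# detection is mapped into the global frame and fed to one shared per-object
# filter, yielding a unified estimate across the multi-vehicle environment.
kf = KalmanFilter2D()
R_a, t_a = np.eye(2), np.array([10.0, 0.0])                   # vehicle A pose (assumed)
R_b, t_b = np.eye(2), np.array([0.0, 5.0])                    # vehicle B pose (assumed)
est = kf.step(to_global(np.array([0.2, 0.1]), R_a, t_a))      # A's detection
est = kf.step(to_global(np.array([10.1, -4.8]), R_b, t_b))    # B's detection
```

Because only low-dimensional positional telemetry crosses the network, this style of fusion keeps message sizes small relative to sharing raw sensor data or intermediate feature maps, which is the bandwidth motivation stated above.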