Ubiquitous aerial sensing with unmanned aerial vehicles (UAVs) is becoming an essential component of AI-native perception systems, motivated by the trend toward edge deployment and potential integration with future sixth-generation (6G)-connected aerial networks. In this work, we focus on the perception side: improving the accuracy and computational efficiency of small object detection in UAV imagery. This task remains highly challenging in high-altitude imagery due to the extremely low pixel occupancy of targets and the severe multi-scale interference introduced by complex backgrounds. To address these limitations, we propose a Multi-scale Attention Fusion Network (MAF-Net), an AI-native framework for real-time small object detection in UAV imagery. The proposed approach enhances small-target representation and robustness through three key designs. First, a density-adaptive anchor optimization strategy is developed by combining K-means++ clustering with an IoU-based distance metric, enabling anchors to better match scale variation under diverse object densities. Second, a multi-scale feature reinforcement module is introduced to strengthen fine-grained detail preservation by integrating shallow feature maps via skip connections and hierarchical aggregation. Third, a dual-path attention mechanism is employed to jointly model channel importance and spatial localization, improving discriminative feature calibration in cluttered aerial scenes. Extensive experiments on three public benchmarks (AI-TOD, DOTA, and RSOD) demonstrate that MAF-Net consistently outperforms the baseline detector, achieving mAP@0.5 gains of 14.1%, 11.28%, and 22.09%, respectively. These results confirm that MAF-Net provides an effective and deployment-friendly solution for robust small object detection, supporting real-time UAV-based inspection and AI-native ubiquitous aerial sensing applications.
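The anchor optimization strategy described above can be illustrated with a minimal sketch. The abstract does not give implementation details, so the following is an assumption-laden reconstruction of the general technique it names: K-means++ seeding over ground-truth box dimensions, with distance d = 1 − IoU (computed on width/height pairs aligned at a common corner) replacing Euclidean distance, as popularized by anchor-based detectors. All function names and parameters here are hypothetical.

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of anchor (w, h) pairs,
    assuming all boxes share the same top-left corner."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) boxes into k anchors: K-means++ seeding followed by
    Lloyd iterations, both using d = 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    # K-means++ seeding: first center uniform, remaining centers sampled
    # with probability proportional to the squared distance to the
    # nearest center already chosen.
    centers = boxes[rng.integers(len(boxes))][None, :]
    for _ in range(1, k):
        d = np.array([(1.0 - iou_wh(b, centers)).min() for b in boxes])
        probs = d ** 2 / (d ** 2).sum()
        centers = np.vstack([centers, boxes[rng.choice(len(boxes), p=probs)]])
    # Lloyd refinement: assign each box to its highest-IoU center, then
    # update each center as the mean (w, h) of its assigned boxes.
    for _ in range(iters):
        assign = np.array([(1.0 - iou_wh(b, centers)).argmin() for b in boxes])
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    # Return anchors sorted by area (small to large), the usual convention
    # for matching anchors to detection heads of increasing stride.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]
```

Using 1 − IoU instead of Euclidean distance keeps the clustering scale-aware: a 4-pixel error matters far more for a tiny object than for a large one, which is exactly the regime the abstract targets.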