This paper presents a vision-based autonomous landing system for multirotor Unmanned Aerial Vehicles (UAVs), designed for landing on a moving naval platform in Global Navigation Satellite System (GNSS)-denied environments. A multirotor platform was selected due to its ability to hover, high maneuverability, and suitability for confined landing areas, making it particularly appropriate for maritime operations. Since landing remains one of the most accident-prone phases of UAV missions, automating this process can significantly enhance operational safety and robustness. The system employs a downward-facing onboard camera to capture Red, Green, and Blue (RGB) images, which are transmitted to a ground station for real-time processing. Relative pose estimation is performed and compared using two visual markers: a fiducial marker from the Augmented Reality University of Cordoba (ArUco) library, and the standard “H” marker commonly used at landing sites. These pose estimates enable the generation of landing trajectories using minimum-jerk and minimum-snap algorithms for performance comparison. Validation was conducted through simulation using a Software-In-The-Loop (SITL) framework that integrates the Robot Operating System (ROS) 2, the Gazebo simulator, and ArduPilot firmware. Complementary real-world flight tests under moderate environmental conditions are ongoing to further validate the system. Realistic simulation results demonstrate accurate and reliable performance, with landing errors consistently below 18 cm, validating the proposed approach and demonstrating its potential for autonomous missions such as surveillance, environmental monitoring, and search and rescue, particularly in coastal regions.
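As a rough illustration of the marker-based relative pose estimation described above (not the authors' implementation), OpenCV's ArUco module can detect a fiducial marker and recover its pose in the camera frame. The dictionary choice, marker size, and camera intrinsics below are placeholder assumptions; real values would come from the marker design and camera calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system uses calibrated values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.5  # marker side length in metres (assumed)

# Assumed dictionary; the paper does not specify which one is used.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def estimate_pad_pose(frame):
    """Return (rvec, tvec) of the first detected marker in the camera
    frame, or None if no marker is visible."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    # Marker corners in the marker frame, in the same order that
    # detectMarkers reports image corners (TL, TR, BR, BL).
    half = MARKER_SIZE / 2.0
    obj_pts = np.array([[-half,  half, 0.0],
                        [ half,  half, 0.0],
                        [ half, -half, 0.0],
                        [-half, -half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return (rvec, tvec) if ok else None
```

The translation vector gives the relative position of the landing pad with respect to the camera, which is the quantity the trajectory planner consumes.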
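The minimum-jerk trajectories mentioned in the abstract have a simple closed form in the rest-to-rest case (zero velocity and acceleration at both endpoints): a quintic polynomial whose normalized profile is $s(\tau) = 10\tau^3 - 15\tau^4 + 6\tau^5$ with $\tau = t/T$. The sketch below assumes this boundary case and is not the paper's planner; tracking a moving deck would additionally require re-planning as the pose estimate updates.

```python
import numpy as np

def min_jerk(p0, pf, T, t):
    """Rest-to-rest minimum-jerk position at time t over duration T.

    p0, pf: start and end positions (scalars or per-axis numpy arrays).
    Assumes zero boundary velocity and acceleration.
    """
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + (pf - p0) * s
```

Minimum-snap planning generalizes this idea: with snap (the fourth derivative) penalized and jerk constrained at the boundaries, the optimal segment is a seventh-order polynomial, and multi-segment trajectories are typically found by solving a quadratic program over the polynomial coefficients.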