Purpose
To address the risk of insufficient positioning accuracy and stability when autonomous mobile weeding robots rely on a single sensor in complex hilly and mountainous photovoltaic power station scenarios, this paper proposes a multi-sensor fusion localization method based on factor graph optimization.

Design/methodology/approach
Pose constraints are obtained from three types of sensor data: a visual odometry that combines point and line features, inertial measurement unit (IMU) pre-integration and satellite real-time kinematic (RTK) signals. The visual point-line fusion residual, the IMU residual and the satellite RTK residual are derived and incorporated into a multi-factor graph model. Taking visual keyframes as the reference, the fusion localization system synchronizes and tightly couples the multi-sensor information into a nonlinear least-squares estimation problem to obtain an optimal estimate of the robot's global pose.

Findings
The proposed multi-sensor fusion localization method has been validated on the public Rosario data set, demonstrating favorable accuracy and robustness even under degraded satellite signals. This provides a novel and reliable solution for enhancing the positioning accuracy and stability of weeding robots in complex operating scenarios.

Originality/value
This study incorporates the line features available in the complex environments of weeding robots into a point-feature-based visual odometry. Within a factor graph optimization framework, it integrates the visual point-line odometry, IMU pre-integration and satellite RTK data, designing a tight-coupling strategy based on visual keyframes together with an efficient and robust hard-threshold time synchronization scheme.
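As a hedged illustration of the kind of objective described in the methodology, a tightly coupled estimation over the keyframe states typically takes the form below; the specific residual definitions, index sets, covariances and robust kernel are assumptions for exposition, not the authors' exact formulation:

$$
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \Bigg\{ \sum_{(i,j)\in\mathcal{P}} \rho\!\left(\big\| \mathbf{r}_{\mathrm{pt}}(i,j,\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{pt}}}\right) + \sum_{(i,l)\in\mathcal{L}} \rho\!\left(\big\| \mathbf{r}_{\mathrm{ln}}(i,l,\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{ln}}}\right) + \sum_{k\in\mathcal{B}} \big\| \mathbf{r}_{\mathrm{IMU}}(k,k{+}1,\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{IMU}}} + \sum_{k\in\mathcal{G}} \big\| \mathbf{r}_{\mathrm{RTK}}(k,\mathcal{X}) \big\|^{2}_{\Sigma_{\mathrm{RTK}}} \Bigg\}
$$

Here $\mathcal{X}$ collects the keyframe poses (and, depending on the formulation, velocities and IMU biases); the four sums correspond to the point reprojection residuals, line residuals, IMU pre-integration residuals between consecutive keyframes and RTK position residuals, each weighted by its covariance, with $\rho$ an optional robust kernel.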
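The abstract also mentions a hard-threshold time synchronization keyed to visual keyframes. The sketch below shows what such a scheme could look like in principle; the function name, threshold value and association policy are illustrative assumptions, not the paper's actual implementation.

```python
from bisect import bisect_left

def associate_to_keyframes(keyframe_stamps, sensor_stamps, max_dt=0.02):
    """Match each sensor timestamp to the nearest visual keyframe.

    A measurement is kept only if the nearest keyframe lies within the
    hard threshold `max_dt` (seconds); otherwise it is discarded.
    Illustrative sketch; threshold and policy are assumptions.
    """
    matches = []
    for t in sensor_stamps:
        i = bisect_left(keyframe_stamps, t)
        # Candidate keyframes immediately before and after t.
        candidates = []
        if i > 0:
            candidates.append(i - 1)
        if i < len(keyframe_stamps):
            candidates.append(i)
        best = min(candidates, key=lambda j: abs(keyframe_stamps[j] - t))
        if abs(keyframe_stamps[best] - t) <= max_dt:
            matches.append((t, best))  # (sensor time, keyframe index)
    return matches

# Example: RTK fixes associated with keyframes arriving at ~10 Hz.
kf = [0.0, 0.11, 0.20, 0.31, 0.42]
rtk = [0.01, 0.21, 0.41, 0.65]
print(associate_to_keyframes(kf, rtk))  # 0.65 is dropped: no keyframe within 20 ms
```

Measurements that fail the hard threshold are simply rejected rather than interpolated, which trades a small loss of data for robustness against badly timed or delayed samples.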