Collecting broiler mortality by hand is tedious, time-consuming, and unpleasant. The goals of this study were to: (1) develop a broiler mortality removal robot from off-the-shelf components to automatically collect dead birds; (2) evaluate deep learning models and image processing techniques for detecting and locating dead birds; and (3) assess the robot's detection and mortality pickup performance under various light intensities. The robot consisted of a robot arm, a two-finger gripper, a camera mounted on the arm, and a computer controller. The robot arm was mounted on a table, and 64 Ross 708 broilers aged 7 to 14 days were used for system development and evaluation. The broiler shank was the target anatomical part for detection and pickup. Deep learning models and image processing techniques were integrated into the vision system to provide the location and orientation of the shank of interest, so that the gripper could approach it and position itself for precise pickup. Light intensities of 10, 20, 30, 40, 50, 60, 70, and 1000 lux were examined. Results showed that the deep learning model "You Only Look Once" (YOLO) V4 outperformed YOLO V3 in detecting and locating shanks. Higher light intensities improved deep learning detection, orientation identification in image processing, and final pickup performance. The final success rate for picking up dead birds was 90.0% at 1000 lux. In conclusion, the developed system is useful for automating the removal of broiler mortality from commercial housing, and it contributes to an integrated set of autonomous solutions for improving production and resource use efficiency, as well as worker well-being, in commercial broiler production.
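The abstract does not specify how the vision system derives the shank's orientation from a detection, so the sketch below is purely illustrative: one common image-processing approach is to compute the orientation of the segmented shank region from its second-order central moments. The function name `shank_orientation` and the pixel-list input are assumptions for this example, not the authors' implementation.

```python
import math

def shank_orientation(pixels):
    """Estimate the centroid and orientation (radians) of a detected
    region from its pixel coordinates using second-order central
    moments. Illustrative sketch of a moment-based orientation step;
    not the code from the study.

    pixels: list of (x, y) coordinates belonging to the region,
    e.g. pixels inside a YOLO bounding box classified as shank.
    """
    n = len(pixels)
    # Centroid (first-order moments).
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    # Second-order central moments.
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Principal-axis angle of the region.
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

# A region elongated along the diagonal yields an angle near 45 degrees.
(_, angle) = shank_orientation([(i, i) for i in range(10)])
print(math.degrees(angle))  # → 45.0
```

A centroid-plus-angle result of this form is the kind of pose information a two-finger gripper needs to align with an elongated target before closing on it.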