Abstract
Safety is a primary concern in offshore oil and gas operations, which often require workers to be stationed on the platform to control ongoing processes. Transportation to and from the platform is expensive and the working environment is hazardous, so it is natural to automate certain operations with autonomous robots. One such operation is maneuvering valves, a common and important task well suited to robotics. To achieve this goal, the robot must be able to perceive the location and orientation of the target valve. In this paper, we propose a computer vision method based on RGB-D image inputs for providing the real-world spatial information of valves to the robot. Specifically, we first employ YOLACT as an instance segmentation method to precisely extract the valves from the background, then pass the segmented valve pixels to the 6D pose estimation algorithm. The resulting 2D pixels and 3D points are used by the DenseFusion algorithm to predict the valve's 6D pose, composed of position and orientation, based on a fusion of RGB and depth features. To evaluate the validity of the proposed method, these algorithms were implemented in ROS2 and tested on an edge device embedded in a real offshore robot system. The results show that our proposed method is promising and can be effectively utilized by offshore autonomous robots to operate valves.