Second, we extend the tracking task to two surgical instruments. We assign equal weights to the two instruments during tracking, so the tracking point lies at the midpoint between the keypoints of the two instruments. Figure 3(c) shows the relative pixel positions of the tracking point in the image plane, and Figure 3(d) shows the tracking error while tracking two surgical instruments. The average distance is about 28.47 pixels.
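As a concrete illustration of the equal-weight scheme, the sketch below computes the tracking point as a weighted average of the detected instrument keypoints in the image plane. The function name and data layout are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def tracking_point(keypoints, weights=None):
    """Weighted tracking point from instrument keypoints (pixel coordinates).

    keypoints: array of shape (N, 2), one (u, v) keypoint per instrument.
    weights:   N non-negative weights; equal weights by default, which for
               two instruments places the target at the midpoint of the keypoints.
    """
    keypoints = np.asarray(keypoints, dtype=float)
    if weights is None:
        weights = np.ones(len(keypoints))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ keypoints  # (u, v) target for the FOV controller

# Example: two instruments with equal weights -> midpoint of their keypoints.
target = tracking_point([[310.0, 220.0], [410.0, 260.0]])  # -> [360., 240.]
```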

Discussion

Researchers have reported the clinical requirements for adjusting the laparoscopic FOV to follow the movement of surgical instruments.[7] By analyzing videos collected from different clinical surgeries, they found that the average distance from the tip of the surgical instruments to the FOV center of the laparoscope was about 423.84 pixels, even though the surgical assistant constantly adjusted the laparoscopic field of view during the procedure. Relative to the resolution of the collected surgical videos, this error corresponds to approximately 22.08% of the horizontal resolution and 39.24% of the vertical resolution.
We have demonstrated the feasibility of the proposed autonomous laparoscopic control approach for adjusting the FOV regardless of whether the surgical instruments are moving, although fast instrument motion degrades the accuracy of keypoint detection and, in turn, the accuracy of surgical instrument tracking. In the automatic instrument tracking experiments, the distances between the tracking point and the center of the FOV while tracking a single moving instrument and two moving instruments are about 45.77 pixels and 28.47 pixels, respectively. These errors correspond to approximately 11.44% and 7.12% of the FOV of our continuum laparoscope, which is much smaller than the FOV error observed in clinical surgery. Considering the dynamic uncertainty of the continuum laparoscope system, the internal error of the system is about 19 pixels. These results indicate that our autonomous FOV adjustment method with a continuum laparoscope can satisfy the clinical requirements. In addition, since laparoscopic FOV adjustment is part of the surgical procedure, our approach can promote the automation of the robotic surgery workflow.
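For reference, the reported percentages can be reproduced by normalizing the pixel errors by the FOV size. The check below is our own sketch; the ~400-pixel FOV dimension is inferred from the quoted 11.44% and 7.12%, not stated explicitly in the text.

```python
# Rough consistency check of the reported error percentages.
fov_pixels = 400.0      # assumed FOV dimension used for normalization (inferred)
internal_error = 19.0   # reported dynamic uncertainty of the system [pixels]

for label, err in [("single instrument", 45.77), ("double instruments", 28.47)]:
    print(f"{label}: {err:.2f} px = {100 * err / fov_pixels:.2f}% of FOV, "
          f"{err / internal_error:.1f}x the internal error")
```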

Conclusion

This article presents a data-driven control method for a continuum laparoscope system with learning-based visual feedback to adjust the FOV automatically in RMIS. We first develop a nonlinear system identification method using the Koopman operator with Chebyshev polynomial basis functions. We then build an LQR controller based on the trained Koopman operator with visual feedback. To provide precise visual feedback for the control system, a learning-based keypoint detection method is designed that requires no manual markers. This method offers more options for selecting keypoints on surgical instruments across different surgical procedures while maintaining detection accuracy. Simulations and experiments are performed to evaluate the proposed methods. Compared with other keypoint detection methods, the proposed pose estimation method achieves higher accuracy. Tracking experiments demonstrate the feasibility of the proposed data-driven control method for automatically adjusting the FOV of a continuum laparoscope, and the experimental results show that the proposed method satisfies the clinical requirements. In the future, the degree of freedom of the continuum laparoscope along the Z-axis will be exploited to enlarge the view and provide surgeons with a better experience, and constrained workspaces and inputs will be studied to ensure safety during robotic surgery.
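To make the identification-plus-control pipeline concrete, the sketch below shows one way to lift states with Chebyshev polynomials, fit a linear Koopman model by least squares (EDMD-style), and compute a discrete-time LQR gain on the lifted system. It is a minimal illustrative reconstruction under our own simplifying assumptions (per-dimension Chebyshev basis, jointly fitted A and B, placeholder Q and R weights), not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev
from scipy.linalg import solve_discrete_are

def lift(x, deg=3):
    """Lift a state x (shape (n,)) with Chebyshev polynomials T_0..T_deg per dimension."""
    x = np.atleast_2d(x)                                   # (batch, n)
    feats = [chebyshev.chebvander(x[:, i], deg) for i in range(x.shape[1])]
    return np.hstack(feats).squeeze()                      # (batch, n * (deg + 1))

def fit_koopman(X, U, Xn, deg=3):
    """EDMD-style fit of z_{k+1} ~= A z_k + B u_k from state/input/next-state data."""
    Z, Zn = lift(X, deg), lift(Xn, deg)
    ZU = np.hstack([Z, U])                                 # stacked [z_k, u_k]
    AB, *_ = np.linalg.lstsq(ZU, Zn, rcond=None)           # least-squares fit of [A B]
    A = AB[:Z.shape[1]].T
    B = AB[Z.shape[1]:].T
    return A, B

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR gain for the lifted linear system."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Usage sketch: identify from logged (state, input, next-state) triples, then
# drive the lifted error between the current and desired FOV targets to zero.
# X, U, Xn = ...  # identification data (not shown here)
# A, B = fit_koopman(X, U, Xn)
# K = lqr_gain(A, B, Q=np.eye(A.shape[0]), R=np.eye(B.shape[1]))
# u = -K @ (lift(x_current) - lift(x_desired))
```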

Acknowledgements

Xiaowen Kong and Hangjie Mo contributed equally to this work. This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Ref. no. T42-409/18R and no. 11211421).

Conflict of interest

The authors declare no conflict of interest.