The autonomous navigation of greenhouse robots depends on precise
mapping, accurate localization information and a robust path planning
strategy. However, the complex agricultural environment introduces
significant challenges to robot perception and path planning. In this
study, a hardware system designed specifically for greenhouse agricultural environments is presented, employing multi-sensor fusion to mitigate interference from complex environmental conditions. Furthermore, a robust autonomous navigation framework based on improved LeGO-LOAM and OpenPlanner is proposed. In the perception phase, a relocalization module is integrated into the LeGO-LOAM framework; comprising two key steps, map matching and filtering optimization, it ensures more precise pose relocalization. During the
path planning process, ground structure and plant density are considered
in our Enhanced OpenPlanner. Additionally, a hysteresis strategy is
introduced to enhance the stability of system state transitions. The performance of the proposed navigation system was evaluated in
several complex greenhouse environments. The integration of the
relocalization module significantly decreases the Absolute Pose Error
(APE) in the perception process, resulting in more accurate pose
estimation and relocalization information. Moreover, in the experiments our Enhanced OpenPlanner plans safer trajectories and achieves more stable state transitions. The results
underscore the effectiveness and robustness of our proposed approach,
highlighting its promising application prospects in autonomous
navigation for agricultural robots.
Keywords: Autonomous navigation; Path planning of mobile robots; Agricultural robotics
Introduction
Advancements in technology have led to the increasing prominence of
agricultural robots in farming practices (Bagagiolo et al., 2022; Xiong
et al., 2020; Erfani et al., 2019). This has brought about significant
improvements in agricultural productivity and labor efficiency
(Calicioglu et al., 2019; Gong et al., 2022). However, challenges such
as irregular terrain, complex crop structures, and obstructed views
impede the autonomous navigation of agricultural robots, particularly in
indoor greenhouses where they face narrow passages, fluctuating light
conditions, and high temperature and humidity. Environmental mapping,
real-time localization, and path planning are essential components for
robots to accomplish autonomous navigation. Agricultural environments
impose stringent demands on the associated algorithms (Balaso et al.,
2013). Consequently, devising an efficient and robust approach for
environmental mapping and autonomous path planning is of paramount
importance for the ongoing advancement of agricultural robots navigating
in such settings.
Environmental perception and mapping are essential prerequisites for
achieving autonomous navigation (Gao et al., 2018). It is crucial to
explore non-satellite-based positioning methods for greenhouse
navigation technologies due to the attenuation of GPS signals in indoor
environments (Shalal et al., 2015). The field of vision-based
environmental perception has seen a surge of research in recent years
(Ball et al., 2016; Blochliger et al., 2018; Lu et al., 2018). Monocular
cameras, due to their cost-effectiveness and straightforward
implementation, have been extensively employed in environmental
perception tasks. Krul et al. (2021) utilized a monocular camera mounted
on a compact Unmanned Aerial Vehicle (UAV) to evaluate two visual
Simultaneous Localization and Mapping (VSLAM) algorithms in greenhouse
environments: LSD-SLAM and ORB-SLAM. Through the analysis of absolute
trajectory and relative pose errors, they determined that ORB-SLAM is
more appropriate for greenhouse applications. However, monocular cameras
exhibit limitations in capturing depth information, resulting in
inaccuracies in mapping and positioning (Kalogeiton et al., 2019;
Plessen, 2017; Ataer-Cansizoglu et al., 2016). Stereo vision cameras,
which compute depth information through disparity estimation,
effectively mitigate the limitations of monocular vision cameras. Bai et
al. (2023) utilized stereo vision cameras to acquire greenhouse
environmental data, such as crop position, dimensions, and color.
Subsequently, they extracted navigation paths using Kalman filtering,
threshold-based segmentation, Hough transform, and stereo vision
algorithms, facilitating robotic autonomous navigation. Distinct from
stereo vision cameras, RGB-D cameras leverage infrared sensors to
simultaneously capture depth maps and visible light images. Matsuzaki et
al. (2018) integrated RGB-D-based visual SLAM with deep neural
network-driven semantic segmentation to generate a 3D map encompassing
object type information. This approach effectively addresses the
challenge posed by traditional mapping methods that erroneously identify
vegetation-covered traversable areas as obstructed regions.
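To give a concrete flavor of this style of vision-based path extraction, the following is a minimal sketch of segmenting vegetation by thresholding and detecting candidate row lines with the probabilistic Hough transform in OpenCV. It is an illustrative simplification rather than the pipeline of Bai et al. (2023); the excess-green segmentation, threshold values, and image file name are assumptions made for the example.

import cv2
import numpy as np

def extract_row_lines(bgr_image, green_thresh=60):
    """Extract candidate crop-row line segments from a colour image.

    A simplified illustration of threshold-based segmentation followed by
    the probabilistic Hough transform; a full pipeline would add stereo
    depth, Kalman filtering of the fitted line, and camera calibration.
    """
    # Segment vegetation with a crude excess-green (2G - R - B) threshold.
    b, g, r = cv2.split(bgr_image.astype(np.int16))
    exg = np.clip(2 * g - r - b, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(exg, green_thresh, 255, cv2.THRESH_BINARY)

    # Detect straight line segments in the vegetation mask.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4)

if __name__ == "__main__":
    img = cv2.imread("greenhouse_row.png")  # hypothetical example image
    if img is not None:
        for x1, y1, x2, y2 in extract_row_lines(img):
            cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
        cv2.imwrite("greenhouse_row_lines.png", img)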
While visual cameras can provide cost-effective environmental
perception, their scanning range is limited and they are sensitive to
lighting conditions. In contrast, 3D light detection and ranging (LiDAR)
can provide lighting-invariant and precise perception of the environment, with a long detection range and superior robustness (Zhou et
al., 2021; Khan et al., 2021; Ren et al., 2019). Thus, LiDAR systems are
extensively employed in autonomous navigation applications. Zhang and Singh (2014) proposed LOAM, a low-drift, real-time LiDAR odometry and mapping method. After feature extraction, LOAM obtains an initial pose estimate from high-frequency scan-to-scan odometry; this estimate is then refined by matching the current frame against a submap, yielding more accurate odometry and enabling map updates. The final pose estimate is produced by integrating the results of the LiDAR odometry and LiDAR mapping processes. Building on LOAM, a lightweight and ground-optimized
LiDAR odometry and mapping technique (LeGO-LOAM) was proposed by Shan
and Englot (2019). LeGO-LOAM offers two main improvements: first, it is tailored to ground vehicles, incorporating ground constraints in its segmentation and optimization stages; second, it is computationally lightweight, achieving comparable or superior precision while requiring fewer resources.
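To make the shared feature-extraction step concrete, the following is a minimal sketch of the smoothness-based feature selection used in LOAM-style pipelines: for each point, a curvature value is computed from its neighbours along the same scan ring, and points with high curvature are retained as edge features while points with low curvature are retained as planar features. The neighbourhood size and thresholds below are illustrative assumptions, not the parameters of the original implementations.

import numpy as np

def extract_features(ring, k=5, edge_thresh=0.5, plane_thresh=0.05):
    """Smoothness-based feature selection for one LiDAR scan ring.

    ring: (N, 3) array of points from a single scan line, ordered by azimuth.
    Returns (edge_points, planar_points).
    """
    n = len(ring)
    curvature = np.full(n, np.nan)           # border points are left unlabeled
    for i in range(k, n - k):
        # Neighbours on both sides of point i along the scan ring.
        neighbours = np.vstack([ring[i - k:i], ring[i + 1:i + k + 1]])
        # Curvature ~ ||2k * X_i - sum of neighbours|| / (2k * ||X_i||).
        diff = 2 * k * ring[i] - neighbours.sum(axis=0)
        curvature[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(ring[i]))
    edges = ring[curvature > edge_thresh]     # sharp points (e.g. stems, poles)
    planes = ring[curvature < plane_thresh]   # flat points (e.g. ground)
    return edges, planes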
The back-end implementation of autonomous navigation relies on path
planning algorithms (Campbell et al., 2020; Buniyamin et al., 2011). In
recent years, numerous researchers have applied path planning algorithms
to the agricultural domain (Yang et al., 2023; Zhang et al., 2022;
Santos et al., 2020). Among them, guideline-based path planning is the
simplest approach to implement. Sammons et al. (2005) designed a
greenhouse robot that achieved autonomous navigation by identifying
ground reflection markers, while Akai et al. (2014) developed a magnetic
strip-based navigation system, and Mosalanejad et al. (2020) controlled
greenhouse robot movement by constructing a rail system. However, these
methods have apparent drawbacks, such as limited adaptability and
inability to cope with environmental changes. To achieve navigation in
dynamic environments, Shi et al. (2020) designed a localization and
navigation system based on multi-sensor fusion and the A* algorithm. By
integrating odometry information with LiDAR data, the robot employed the
Gmapping algorithm to construct a two-dimensional environmental map. The
Adaptive Monte Carlo Localization (AMCL) algorithm was utilized for pose
estimation, and the A* algorithm was applied for path planning. Gabriel
et al. (2023) proposed an adaptive path planning method that fused the
Rapidly-exploring Random Tree (RRT) and Deep Reinforcement Learning
(DRL) algorithms. The RRT algorithm was employed to find feasible
solutions in complex environments, while the DRL algorithm learned
environmental features and dynamic obstacles to perform obstacle
avoidance and generate smooth local path trajectories. However, Gabriel et al. only tested the algorithm’s effectiveness in simulated environments and did not conduct experiments in real-world settings. OpenPlanner is a
robot path planning algorithm within the Autoware framework (ROS-based),
comprising a global path planner, a behavior state generator, and a
local planner (Darweesh et al., 2017). It is suitable for navigation and
obstacle avoidance in dynamic environments. However, the latest
OpenPlanner framework is only applicable to urban settings,
necessitating modifications for adaptation to greenhouse agricultural
environments.
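As background for the local-planner enhancements described later, OpenPlanner samples a set of candidate roll-out trajectories around the global path, scores them with weighted cost terms, and follows the cheapest collision-free candidate. The sketch below illustrates this style of roll-out evaluation; the specific cost terms, weights, and safety radius are illustrative assumptions rather than the actual Autoware implementation.

import numpy as np

def select_rollout(rollouts, obstacles, center_index,
                   w_obstacle=1.0, w_transition=0.2, safety_radius=0.5):
    """Pick the lowest-cost, collision-free roll-out.

    rollouts: list of (N, 2) candidate paths sampled around the global path.
    obstacles: (M, 2) array of obstacle points in the same frame.
    center_index: index of the roll-out that follows the global path.
    """
    best_index, best_cost = None, np.inf
    for i, path in enumerate(rollouts):
        # Distance from each path point to its nearest obstacle.
        if len(obstacles):
            dists = np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=2)
            nearest = dists.min(axis=1)
        else:
            nearest = np.full(len(path), np.inf)
        if nearest.min() < safety_radius:       # roll-out blocked by an obstacle
            continue
        obstacle_cost = float(np.mean(1.0 / nearest))   # prefer larger clearance
        transition_cost = abs(i - center_index)         # penalise lateral jumps
        cost = w_obstacle * obstacle_cost + w_transition * transition_cost
        if cost < best_cost:
            best_index, best_cost = i, cost
    return best_index  # None if every roll-out is blocked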
Nevertheless, in contrast to traditional agricultural settings,
greenhouse agricultural environments are characterized by low GPS
accuracy, fluctuating illumination conditions, and significant occlusion
among plants (Lv et al., 2022; Yan et al., 2022; Zangina et al., 2021).
These factors impede robotic perception. Additionally, the narrow
pathways and dense vegetation present substantial challenges for robot
path planning (Harik et al., 2018). Therefore, a precise and robust
navigation system that integrates localization, mapping, and path
planning is critical for autonomous robot navigation in greenhouse
agricultural environments. In this study, we thoroughly consider the
unique attributes of greenhouse agricultural environments and present an
advanced multi-sensor fusion-based environmental perception and
autonomous path planning approach for accurate and highly robust
navigation. The main contributions of this paper are as follows:
1. Developing a multi-sensor fusion mobile robot specifically tailored
for greenhouse agricultural environments.
2. Improving LeGO-LOAM by incorporating a relocalization module for more accurate pose relocalization.
3. Enhancing the local planner of OpenPlanner for improved safety and stability in greenhouse environments.
4. Proposing a robust overall process for autonomous robot navigation in
dynamic greenhouse environments.
Materials and methods