Global Localization of Autonomous Robots
in Forest Environments
Marwan Hussein, Matthew Renner, and Karl Iagnemma
Abstract
A method for the autonomous geolocation of ground vehicles in forest environments is presented. The method provides an estimate of the global horizontal position of a vehicle based strictly on finding a geometric match between a map of observed tree stems, scanned in 3D by Light Detection and Ranging (lidar) sensors onboard the vehicle, and a stem map generated from the structure of tree crowns analyzed in high-resolution aerial orthoimagery of the forest canopy. The method has been tested with real-world data and has estimated vehicle geoposition with an average error of less than 2 m. The method has two significant properties: (a) the algorithm does not require a priori knowledge of the area surrounding the robot, and (b) given an estimated vehicle state, it uses the geometry of detected tree stems as the only input to determine horizontal geoposition.
Introduction
The Global Positioning System (GPS) has been utilized extensively to localize ground and airborne vehicles for more than three decades (Kee et al., 1991). Vehicles with GPS receivers integrated into their navigation systems can localize instantly with sub-meter accuracy (Liu et al., 2005). Centimeter accuracy has been achieved using augmentation systems and other real-time correction methods such as Differential GPS (D-GPS) and Real-Time Kinematic (RTK) positioning (Parkinson and Spilker, 1996). Mobile robots have utilized GPS for outdoor path planning, where sequential traverse waypoints with respect to a global reference system are programmed into the robot and then used during motion for course comparison and correction.
Despite advances in GPS that have drastically improved localization accuracy, GPS can be rendered ineffective in several scenarios, mainly: (a) attenuation of signals by dense tree canopies in forest environments, to the point where signals can hardly be received; (b) multi-path propagation of GPS signals in cluttered environments, such as urban city centers, that results in significant positioning errors; and (c) intentional GPS signal spoofing, dithering, or denial of access, which can render the service inaccessible and unreliable for continuous real-time localization purposes.
Non-GPS techniques have been developed over the past few years to address the limitations discussed above. Of note is Simultaneous Localization and Mapping (SLAM), which has been successfully utilized to localize rovers in a variety of settings and scenarios (Leonard and Durrant-Whyte, 1991). SLAM focuses on building a local map of landmarks as observed by a rover to estimate local position in an iterative way (Guivant and Nebot, 2002). Landmarks are observed using external sensors based on optical (i.e., visible camera or lidar), radar, or sonar means. Positions of the landmarks and the rover are estimated a priori and a posteriori for each pose and then iteratively refined. Position errors start high but tend to converge as more landmarks are observed and as errors are filtered.
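The iterative refinement described above can be illustrated with a minimal sketch: a scalar Kalman filter that repeatedly corrects a rover's position estimate from noisy range measurements to a single landmark of known position. The one-dimensional setup and all numbers are illustrative assumptions, not drawn from any of the cited systems.

```python
import numpy as np

def kalman_update(x, P, z, landmark, R):
    """One scalar Kalman correction: z is a noisy range
    measurement from the rover at position x to a landmark
    of known position."""
    h = landmark - x            # predicted range
    H = -1.0                    # Jacobian d(range)/dx
    S = H * P * H + R           # innovation covariance
    K = P * H / S               # Kalman gain
    x = x + K * (z - h)         # corrected position
    P = (1.0 - K * H) * P       # corrected variance
    return x, P

rng = np.random.default_rng(0)
true_x, landmark, R = 3.0, 10.0, 0.25   # R: measurement noise variance
x, P = 0.0, 4.0                         # poor initial guess, large variance
for _ in range(20):
    z = (landmark - true_x) + rng.normal(0.0, R ** 0.5)
    x, P = kalman_update(x, P, z, landmark, R)
# With each observation x moves toward true_x and P shrinks,
# mirroring the error convergence described above.
```

The same predict/correct structure, generalized to a joint state over the rover pose and all landmark positions, is what full SLAM filters iterate.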
We build upon the concept of map-based navigation in the context of dense forest environments by developing a vision-based algorithm that provides a non-iterative estimate of the global position of a rover by comparing landmarks, in this case tree stems observed on the ground, to tree stems detected in overhead georeferenced imagery of the forest canopy. Only vision-based data products are used to estimate the global position of the rover, without reliance on GPS. The presented method is envisaged to be complementary to traditional navigation frameworks, to be invoked on-demand by the rover as the need arises.
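As a concrete sketch of the underlying geometry, the following aligns a locally observed stem map to a georeferenced aerial stem map with a least-squares 2D rigid transform (the Kabsch algorithm). It assumes stem correspondences are already known, which an actual matcher must establish itself, and all coordinates are invented for illustration.

```python
import numpy as np

def align_stem_maps(ground, aerial):
    """Least-squares 2D rigid transform (R, t) mapping
    ground-observed stem centers onto their corresponding
    georeferenced aerial stem centers (Kabsch algorithm)."""
    gc, ac = ground.mean(axis=0), aerial.mean(axis=0)
    H = (ground - gc).T @ (aerial - ac)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ac - R @ gc
    return R, t

# Invented example: four stem centers as seen in the rover's local
# frame, and the same stems in global (aerial map) coordinates.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([382.4, 517.9])              # rover's true global position
ground = np.array([[2.0, 1.0], [5.0, 4.0], [7.0, 2.0], [4.0, 6.0]])
aerial = ground @ R_true.T + t_true            # stems in the aerial map
R, t = align_stem_maps(ground, aerial)
rover_global = R @ np.zeros(2) + t             # rover sits at local origin
```

Because the rover is the origin of its own stem map, the recovered translation directly yields its global horizontal position, which is the non-iterative character of the estimate.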
This paper comprises four main sections. The next section discusses the top-level architecture, while the following sections discuss the components of the localization algorithm, its properties, and its constraints. Test results based on real-world data obtained from an outdoor test campaign are then presented and discussed, and concluding remarks are given in the final section.
Algorithm Architecture
The localization framework is envisaged to be complementary to traditional GPS and map-based navigation frameworks, to be invoked on-demand by the mobile robot as the need arises. In essence, we anticipate that the algorithm runs in the background at all times as a backup for whenever the primary GPS localization service is down or becomes unreliable.
A typical operational scenario involves a single mobile robot tasked to explore a particular area of interest. The mobile robot is initially assumed to be driving autonomously, or via teleoperation (human-in-the-loop), in a forest area that includes heavy tree canopy and underbrush of various densities and proportions. The robot is anticipated to use GPS for localization prior to entering the dense forest area. Once the rover enters the heavy forest, GPS signals may become inaccessible or unreliable due to attenuation by the forest canopy. The localization method is assumed to take over for the duration of the GPS blackout.
The localization method requires two types of input data. The first input is an overhead high-resolution image of the exploration site. The overhead image can be acquired by either orbital or aerial means and needs to be orthorectified and georeferenced. Processing of the aerial imagery involves delineation of individual tree crowns followed by tree stem center location estimation, as discussed in the next section. The second input is 3D scans of the environment surrounding
Marwan Hussein and Karl Iagnemma are with the Robotic Mobility Group, Massachusetts Institute of Technology (MIT), 77 Massachusetts Ave., Cambridge, MA 02139.
Matthew Renner is with the U.S. Army Engineer Research and Development Center, 7701 Telegraph Rd., Alexandria, VA 22315.
Photogrammetric Engineering & Remote Sensing
Vol. 81, No. 11, November 2015, pp. 839–846.
0099-1112/15/839–846
© 2015 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.81.11.839