
An Individual Tree-Based Automated Registration
of Aerial Images to Lidar Data in a Forested Area
Jun-Hak Lee, Gregory S. Biging, and Joshua B. Fisher
Abstract
In this paper, we demonstrate an approach to align aerial images to airborne lidar data by using common object features (tree tops) from both data sets, under conditions in which conventional correlation-based approaches are challenging because the spatial pattern of pixel gray-scale values in aerial images hardly exists in lidar data. We extracted tree tops from both the aerial images and the lidar data by using an image processing technique called the extended-maxima transformation. Our approach was tested at the Angelo Coast Range Reserve on the South Fork Eel River forests in Mendocino County, California. Although the aerial images were acquired simultaneously with the lidar data, the images had only approximate exposure point locations and average flight elevation information, which mimicked the condition of limited information availability about the aerial images. Our results showed that this approach enabled us to align aerial images to airborne lidar data at the single-tree level with reasonable accuracy. With a local transformation model (a piecewise linear model), the RMSE and the median absolute deviation (MAD) of the registration were 9.2 pixels (2.3 meters) and 6.8 pixels (1.41 meters), respectively. We expect our approach to be applicable to fine-scale change detection for forest ecosystems and to the extraction of detailed forest biophysical parameters.
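To make the tree-top extraction step concrete, the following minimal sketch (our illustration, not the authors' implementation) applies the extended-maxima transformation to a lidar-derived canopy height model with scikit-image. The height threshold h and the synthetic input are assumptions made for the example.

```python
import numpy as np
from skimage.morphology import h_maxima
from skimage.measure import label, regionprops


def extract_tree_tops(chm, h=2.0):
    """Extract candidate tree-top locations from a canopy height model
    via the extended-maxima transformation, i.e., the regional maxima
    that remain after suppressing maxima shallower than ``h``.

    chm : 2D float array (e.g., lidar-derived canopy height model)
    h   : height threshold for the h-maxima transform (assumed value)
    """
    # Binary mask of extended maxima: maxima with relative height >= h.
    maxima = h_maxima(chm, h)
    # Each connected blob of maxima is treated as one crown apex; its
    # centroid gives the (row, col) tree-top coordinate.
    labeled = label(maxima)
    return np.array([region.centroid for region in regionprops(labeled)])


# Example with a synthetic two-peak "canopy":
yy, xx = np.mgrid[0:100, 0:100]
chm = (20 * np.exp(-((xx - 30) ** 2 + (yy - 40) ** 2) / 200.0)
       + 15 * np.exp(-((xx - 70) ** 2 + (yy - 60) ** 2) / 150.0))
print(extract_tree_tops(chm, h=2.0))  # approximately [[40, 30], [60, 70]]
```

The same operation can in principle be run on a smoothed aerial image band, yielding the two tree-top sets to be matched across datasets.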
Introduction
In forestry, there have been numerous attempts to combine lidar and aerial images to obtain further information and create information-rich products at high spatial resolution, e.g., species identification (Holmgren et al., 2008; Ke et al., 2010; Puttonen et al., 2009), fuel mapping (Mutlu et al., 2008; García et al., 2011), habitat modeling (Hartfield et al., 2011; Estes et al., 2010), and land cover classification (Antonarakis et al., 2008; Tooke et al., 2009). However, the major problem that continues to arise is image rectification, particularly when automated (Leckie et al., 2003; McCombs et al., 2003; Suárez et al., 2005). The rectification of photographic and lidar data is an essential prerequisite to making better use of both systems (Habib et al., 2005; Schenk and Csathó, 2002).
The registration of different datasets requires common features, which are used to geometrically align multiple images. Traditional procedures employ a manual selection of common control points from the two datasets (Habib et al., 2004). However, these manual approaches can be subjective and labor intensive. They may also identify a limited number of usable points with poor spatial distribution, so that the overall registration accuracy is not guaranteed (Liu et al., 2006). Automated procedures for common feature extraction can cope with some of the limitations of manual procedures (Kennedy and Cohen, 2003), and automatic image registration techniques have been studied extensively (Zitova and Flusser, 2003). In brief, techniques for identifying control points fall into two categories: area-based methods and feature-based methods (Kennedy and Cohen, 2003). Area-based methods (using pixel intensity) have the advantages of easy implementation and no preprocessing requirement for feature extraction. In addition, the resulting control points are evenly distributed over the image, so that more robust registration can be performed (Liu et al., 2006; Zitova and Flusser, 2003). However, area-based methods may not be applicable to rectifying aerial images and lidar data, because the spatial patterns of pixel gray-scale values (optical reflectance) in aerial images hardly exist in the raw lidar point data. Conversely, feature-based methods use salient features rather than direct intensity values, which makes them more suitable for multi-sensor analysis (i.e., lidar and aerial images in this study) (Zitova and Flusser, 2003). For feature-based methods, distinctive and detectable common objects (features) must be extracted from both datasets. The common features selected greatly influence the subsequent registration procedures; hence, it is crucial to choose appropriate common features for the registration between the datasets (Habib et al., 2004). Since there is nothing in common at the level of the raw data (discrete points in lidar, pixels in an aerial image), additional processing is required to extract the common features (Schenk and Csathó, 2002). Habib et al. (2005) used linear features for matching. Mitishita et al. (2008) extracted the centroids of rectangular building roofs as common features. De Lara et al. (2009) proposed corner and edge detection methods for the automatic integration of digital aerial images and lidar data. Abayowa et al. (2015) registered optical aerial imagery to a lidar point cloud. However, these methods cannot be readily applied to non-urban forested areas, which lack built-up objects (e.g., buildings, roads, and roofs) to use for the registration (Li et al., 2013).
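Once common features have been matched across the two datasets, feature-based registration reduces to estimating a geometric transform from the tie points and reporting residual statistics. The sketch below is our illustration rather than code from the paper: the tie-point coordinates and noise are invented, a global affine model is fitted to produce nonzero residuals for the error measures, and scikit-image's PiecewiseAffineTransform stands in for the local (piecewise linear) model named in the Abstract.

```python
import numpy as np
from skimage.transform import AffineTransform, PiecewiseAffineTransform

# Matched tree-top coordinates in (x, y) pixels -- hypothetical values
# standing in for apexes detected in the aerial image (src) and in the
# lidar canopy height model (dst).
src = np.array([[10.0, 12.0], [85.0, 15.0], [50.0, 48.0],
                [15.0, 80.0], [88.0, 83.0], [47.0, 95.0]])
rng = np.random.default_rng(0)
dst = (src @ np.array([[0.98, 0.02], [-0.02, 0.98]]) + [3.0, -2.0]
       + rng.normal(scale=0.5, size=src.shape))

# Global model: least-squares affine fit to all tie points.
affine = AffineTransform()
affine.estimate(src, dst)
resid = np.linalg.norm(affine(src) - dst, axis=1)
rmse = np.sqrt(np.mean(resid ** 2))
# One common definition of the median absolute deviation (MAD).
mad = np.median(np.abs(resid - np.median(resid)))
print(f"affine RMSE = {rmse:.2f} px, MAD = {mad:.2f} px")

# Local model: a piecewise linear (piecewise affine) transform fits one
# affine per Delaunay triangle of tie points; the estimated transform
# can be passed to skimage.transform.warp to resample the aerial image.
piecewise = PiecewiseAffineTransform()
piecewise.estimate(src, dst)
```

Because a piecewise affine transform interpolates the tie points exactly, its accuracy is typically assessed on independent check points rather than on the control points themselves.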
Common features for the registration ideally should be distinct, evenly distributed over the entire image, and easily detectable. In forested areas, trees are usually detectable objects and are spread across areas of interest, so there is great potential in utilizing dominant trees as control points. Several studies have verified the feasibility of automated tree apex detection in both aerial images and lidar data. For aerial images, different approaches have been used to detect tree tops: local maxima (Dralle and Rudemo, 1997; Pouliot et al., 2002; Wulder et al., 2000), image binarization (Dralle and Rudemo, 1996; Walsworth and King, 1998), scale analysis (Brandtberg and Walter, 1998; Pouliot and King, 2005),
Jun-Hak Lee is with the Department of Landscape Architecture, University of Oregon, Eugene, OR 97403 (junhaklee@uoregon.edu).
Gregory S. Biging is with the Department of Environmental Science, Policy and Management, University of California, Berkeley, CA 94720.
Joshua B. Fisher is with the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109.
Photogrammetric Engineering & Remote Sensing
Vol. 82, No. 9, September 2016, pp. 699–710.
0099-1112/16/699–710
© 2016 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.82.9.699