Toward Optimum Fusion of Thermal Hyperspectral
and Visible Images in Classification of Urban Area
Farhad Samadzadegan, Hadiseh Hasani, and Peter Reinartz
Abstract
Recently, classification of urban areas based on multi-sensor fusion has been widely investigated. In this paper, the potential of fusing visible (VIS) and thermal infrared (TIR) hyperspectral images for classification of urban areas is evaluated. For this purpose, a comprehensive spatial-spectral feature space is generated which includes vegetation indices, differential morphological profiles (DMP), attribute profiles (AP), texture, geostatistical features, the structural feature set (SFS), and local statistical descriptors from both datasets, in addition to the original datasets. Although the Support Vector Machine (SVM) is an appropriate tool for classifying high-dimensional feature spaces, its performance is significantly affected by its parameters and the feature space. A cuckoo search (CS) optimization algorithm with mixed binary-continuous coding is proposed for simultaneous feature selection and SVM parameter determination. Moreover, the significance of each selected feature category in the classification of a specific object is verified. Accuracy assessment on two subsets shows that stacking VIS and TIR bands improves classification performance to 87 percent and 82 percent for the two subsets, compared to the VIS image alone (72 percent and 80 percent) and the TIR image alone (50 percent and 56 percent). However, the optimum results are obtained with the proposed method, which reaches 94 percent and 92 percent. Furthermore, the results show that using the TIR image beside the VIS image improves the classification accuracy of roads and buildings in urban areas.
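As a rough illustration of the mixed binary-continuous coding (a sketch, not the authors' implementation), the code below encodes each cuckoo-search nest as a binary feature mask concatenated with log-scaled SVM parameters (C, gamma), and scores nests by cross-validated SVM accuracy. The dataset, parameter bounds, step size, and sigmoid transfer function are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine  # stand-in for the real feature space

X, y = load_wine(return_X_y=True)
n_feat = X.shape[1]
rng = np.random.default_rng(0)

def fitness(nest):
    """Cross-validated SVM accuracy for one nest (feature mask + C, gamma)."""
    mask = nest[:n_feat] > 0.5                 # binary part: feature mask
    if not mask.any():
        return 0.0
    C, gamma = 10 ** nest[n_feat], 10 ** nest[n_feat + 1]  # continuous part
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

def levy_step(size, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    from math import gamma as G, sin, pi
    sigma = (G(1 + beta) * sin(pi * beta / 2) /
             (G((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

n_nests, dim, pa = 10, n_feat + 2, 0.25        # pa: fraction of nests abandoned
nests = rng.uniform(0, 1, (n_nests, dim))
nests[:, n_feat:] = rng.uniform(-3, 3, (n_nests, 2))  # log10(C), log10(gamma)
scores = np.array([fitness(n) for n in nests])

for it in range(20):
    best = nests[scores.argmax()]
    for i in range(n_nests):
        new = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
        new[n_feat:] = np.clip(new[n_feat:], -3, 3)
        # binary part: sigmoid transfer function, then Bernoulli sampling
        new[:n_feat] = (rng.uniform(size=n_feat) <
                        1 / (1 + np.exp(-new[:n_feat]))).astype(float)
        s = fitness(new)
        if s > scores[i]:
            nests[i], scores[i] = new, s
    # abandon a fraction pa of the worst nests and re-seed them randomly
    worst = scores.argsort()[: int(pa * n_nests)]
    nests[worst, :n_feat] = rng.integers(0, 2, (len(worst), n_feat))
    nests[worst, n_feat:] = rng.uniform(-3, 3, (len(worst), 2))
    scores[worst] = [fitness(n) for n in nests[worst]]

best = nests[scores.argmax()]
print("selected features:", int(best[:n_feat].sum()),
      "best CV accuracy:", round(scores.max(), 3))
```

The sigmoid transfer on the binary part is one common way to adapt Lévy-flight updates to 0/1 variables; the paper may use a different binarization scheme.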
Introduction
Land cover mapping in urban areas is fundamental information for urban planning, ecological research, change detection, disaster management, etc. However, material variation, the diverse sizes of objects, and their adjacency make the urban area a complex scene for classification. Over the past decade, different types of remotely sensed data have been applied to land cover classification, such as Light Detection and Ranging (lidar), airborne imagery, hyperspectral imagery, and Synthetic Aperture Radar (SAR). Niemeyer et al. (2014) proposed contextual classification for building detection in lidar point clouds. Samadzadegan et al. (2012) applied a meta-heuristic optimization algorithm to improve the classification performance of hyperspectral imagery. High spatial resolution images are also used in classification of urban areas. For this purpose, Huang and Zhang (2013) implemented an ensemble method to combine spectral, structural, and semantic features of high-resolution imagery. Nevertheless, data from a single sensor contain incomplete information about objects in the urban area and pose some challenges in the classification process (Huang et al., 2011; Kumar et al., 2015; Liao et al., 2015).
Recent developments in remote sensing technologies provide several datasets with different temporal, spatial, and spectral resolutions of the same area. In order to extract robust and accurate information from multiple datasets, multi-sensor data fusion methods have been investigated. Multi-sensor data fusion seeks to integrate data from different sources to obtain more information than can be derived from a single sensor (Kumar et al., 2015; Zhang, 2010). The capability of several combinations of datasets has been studied in the literature. Liao et al. (2015) fused hyperspectral and lidar data for classification of urban areas. Zhu et al. (2012) used a Landsat image and SAR data for urban and peri-urban environment classification. Huang et al. (2011) evaluated three feature-level fusion methods for high-resolution aerial imagery and elevation information from lidar data. Jung and Park (2014) fused a high spatial resolution Landsat-8 panchromatic image with a coarser-resolution thermal image using an optimal scaling factor in order to control the trade-off between spatial detail and thermal information.
Aerial visible images with high spatial resolution play an important role in urban land use/cover classification. A visible camera observes radiation reflected from the Earth's surface over the visible wavelength range, whereas a thermal infrared camera observes radiation emitted from the Earth's surface over the infrared wavelength range. However, thermal imagery has coarse spatial resolution, which has restricted its application in urban areas. The brightness values of visible and thermal images correspond to the Earth's surface reflectance and temperature, respectively (Liao et al., 2015; Liu et al., 2016). Fusion of aerial visible and thermal images can enhance the spatial detail of the thermal image and also add temperature information to the visible image. Consequently, fusing visible and thermal imaging sensors provides additional information and yields better classification performance than would be possible when the sensors are used individually.
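For context, the simplest form of feature-level fusion compared in this paper (band stacking) can be sketched as follows. The array shapes, band counts, and bilinear resampling below are assumptions for illustration, not the paper's actual preprocessing.

```python
import numpy as np
from scipy.ndimage import zoom

vis = np.random.rand(200, 200, 3)      # high-resolution visible bands (toy data)
tir = np.random.rand(50, 50, 84)       # coarser thermal hyperspectral cube (toy data)

# Upsample TIR to the VIS grid (bilinear), then stack along the band axis.
scale = (vis.shape[0] / tir.shape[0], vis.shape[1] / tir.shape[1], 1)
tir_up = zoom(tir, scale, order=1)
stacked = np.concatenate([vis, tir_up], axis=2)   # (200, 200, 87)

# Per-pixel feature matrix for a pixel-wise classifier such as SVM.
features = stacked.reshape(-1, stacked.shape[2])
print(features.shape)                  # (40000, 87)
```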
Fusion/integration of visible and thermal images for classification is a new topic in the remote sensing community, and limited research has been carried out in this field. The literature can be categorized into two groups: hierarchical classification processes and simultaneous classification processes. Liao et al. (2015) presented the outcomes of the 2014 IEEE GRSS data fusion contest, in which thermal hyperspectral and visible images were provided. The winner of the classification contest adopted a hierarchical classification strategy to fuse the thermal hyperspectral and visible images: first, vegetation and morphological building indices are extracted, and then each land cover class is identified successively using a binary SVM classifier.
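A minimal sketch of such a hierarchical strategy, under the assumption of one-vs-rest binary SVMs applied in a fixed class order (the contest winner's exact pipeline and features are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC

def hierarchical_classify(X_train, y_train, X_test, class_order):
    """Peel off classes one at a time with binary SVMs; only pixels left
    unlabeled pass to the next stage."""
    labels = np.full(len(X_test), -1)              # -1 = not yet assigned
    remaining = np.ones(len(X_test), dtype=bool)
    for cls in class_order:
        svm = SVC(kernel="rbf")
        svm.fit(X_train, (y_train == cls).astype(int))  # one-vs-rest binary SVM
        hit = svm.predict(X_test[remaining]) == 1
        idx = np.flatnonzero(remaining)[hit]
        labels[idx] = cls
        remaining[idx] = False                     # assigned pixels leave the pool
        if not remaining.any():
            break
    return labels

# e.g., order classes from most separable to least separable:
# labels = hierarchical_classify(X_tr, y_tr, X_te, class_order=[2, 0, 1])
```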
Farhad Samadzadegan and Hadiseh Hasani are with the School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, North Amirabad, Tehran, Iran.
Peter Reinartz is with the Department of Photogrammetry
and Image Analysis, Remote Sensing Technology Institute,
German Aerospace Center (DLR), Weßling, Germany.
Photogrammetric Engineering & Remote Sensing
Vol. 83, No. 4, April 2017, pp. 269–280.
0099-1112/17/269–280
© 2017 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.83.4.269