Robust Multisource Remote Sensing Image Registration Method Based on Scene Shape Similarity
Ming Hao, Jian Jin, Mengchao Zhou, Yi Tian, and Wenzhong Shi
Abstract
Image registration is an indispensable component of remote
sensing applications, such as disaster monitoring, change
detection, and classification. Grayscale differences and
geometric distortions often occur among multisource images
due to their different imaging mechanisms, thus making it
difficult to acquire feature points and match correspond-
ing points. This article proposes a scene shape similarity
feature (SSSF) descriptor based on scene shape features and
shape context algorithms. A new similarity measure called SSSF_ncc is then defined by computing the normalized correlation coefficient of the SSSF descriptors between multisource
remote sensing images. Furthermore, the tie points between
the reference and the sensed image are extracted via a tem-
plate matching strategy. A global consistency check method
is then used to remove the mismatched tie points. Finally,
a piecewise linear transform model is selected to rectify the
remote sensing image. The proposed SSSF_ncc aims to extract the scene shape similarity between multisource images. The accuracy of the proposed SSSF_ncc is evaluated using five pairs
of experimental images from optical, synthetic aperture
radar, and map data. Registration results demonstrate that
the SSSF_ncc
similarity measure is robust enough for complex
nonlinear grayscale differences among multisource remote
sensing images. The proposed method achieves more reliable
registration outcomes compared with other popular methods.
Introduction
Remote sensing applications require the combined use of multisource remote sensing images due to the variety of remote sensing image acquisition methods. Multisource
remote sensing images usually reflect different characteristics
of ground features and complement surface monitoring. These
images must be registered in geospatial settings to integrate
the required information. Image registration, which is a basic
preprocessing step in remote sensing data processing, is the
process of superimposing images acquired at different times, by different sensors, or under different shooting conditions (Zitová and Flusser 2003). Its accuracy has an essential
influence on subsequent image processing, such as change detection, object identification, and image fusion (Hao et al. 2014). Although automatic image registration techniques
have greatly progressed, selecting tie points manually is often
necessary when these techniques are applied to multisource
image registration due to temporal and geometric differences and local distortion between multisource images (Goncalves et al. 2011). Grayscale differences among multisource images make detecting tie points more challenging than between single-source images, whose grayscale relationships are approximately linear. This study addresses this issue by defining an accurate matching strategy that is robust to the nonlinear grayscale differences between multisource images.
In accordance with the image registration process, remote
sensing image registration methods can be divided into fea-
ture-based and area-based methods (Zitová and Flusser 2003).
Feature-based methods must first detect and then describe
the features from a reference image. Afterward, the tie points
are extracted using the similarity of these features. Common
features consist of point (Yu et al. 2008), line (Sui et al. 2015), and region features (Goncalves et al. 2011). Recently, local
feature descriptors have also been applied to image registra-
tion. The scale-invariant feature transform (SIFT) (Lowe 2004)
is the most representative local feature descriptor. This de-
scriptor has been successfully used for remote sensing image
registration due to its invariance to geometric differences and
image scales (Ma et al. 2010; Sedaghat et al. 2015). Accordingly, many improved SIFT algorithms, such as the C-SIFT (Abdel-Hakim et al. 2006), have been proposed. However, these
algorithms are not effective for optical and synthetic aperture
radar (SAR) images with significant nonlinear grayscale differences (Kelman et al. 2007; Ye et al. 2014). These feature-based
methods can adapt well to geometric differences between
images. When large nonlinear grayscale differences exist between multisource images, however, extracting common features from them robustly is difficult. The repetition rate of feature detection is greatly reduced, thus resulting in inaccurate registration (Suri and Reinartz 2010). Therefore, these methods are not reliable for multisource remote sensing image registration.
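For orientation, the following is a minimal sketch of such a feature-based pipeline using OpenCV's SIFT detector, Lowe's ratio test, and a RANSAC-fitted affine model; it is not the method proposed in this article, and the ratio threshold and transformation model are illustrative assumptions.

```python
# Minimal sketch of a generic feature-based registration pipeline (SIFT + ratio test).
# This is not the SSSF_ncc method of this article; it illustrates the kind of baseline
# that degrades when nonlinear grayscale differences exist between multisource images.
import cv2
import numpy as np

def sift_register(reference, sensed, ratio=0.75):
    """Estimate an affine transform mapping the sensed image to the reference image."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    kp_sen, des_sen = sift.detectAndCompute(sensed, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_sen, des_ref, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 3:
        return None, 0  # too few tie points to fit an affine model

    src = np.float32([kp_sen[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched tie points before fitting the transformation model.
    model, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return model, 0 if inliers is None else int(inliers.sum())
```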
Area-based methods, which are sometimes called “template matching”, are another class of image registration methods. These methods first define a template window of a certain size on the sensed image and then use a similarity measure to find the corresponding template area on the reference image. The center point of the matched template is then selected as a tie point. Finally, the optimal geometric transformation between the images is determined in accordance with the tie points, as sketched below.
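As a concrete illustration, the sketch below performs one NCC-based template matching step in plain NumPy; the window half-size and search radius are illustrative assumptions, not parameters prescribed by this article.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(reference, sensed, center, half_win=15, search=20):
    """Find the location in `reference` whose window best matches the template
    centered at `center` (row, col) in `sensed`, by exhaustive NCC search."""
    r, c = center
    template = sensed[r - half_win:r + half_win + 1,
                      c - half_win:c + half_win + 1].astype(float)
    best_score, best_pos = -1.0, None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            window = reference[rr - half_win:rr + half_win + 1,
                               cc - half_win:cc + half_win + 1].astype(float)
            if window.shape != template.shape:
                continue  # search window falls outside the reference image
            score = ncc(template, window)
            if score > best_score:
                best_score, best_pos = score, (rr, cc)
    return best_pos, best_score  # candidate tie point in the reference image and its NCC
```

In the proposed method, the raw grayscale NCC used in such a scheme is replaced by SSSF_ncc, the normalized correlation coefficient of the SSSF descriptors.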
As mentioned above, similarity measures have a vital influence on area-based methods. Commonly used similarity measures mainly consist of the sum of squared differences (SSD), normalized correlation coefficient (NCC), and mutual information (MI). Among these similarity measures,
the SSD is probably the simplest similarity measure in image
Ming Hao, Jian Jin, Mengchao Zhou, and Yi Tian are with the NASG Key Laboratory of Land Environment and Disaster Monitoring, China University of Mining and Technology, Xuzhou, China.
Wenzhong Shi is with the Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
Photogrammetric Engineering & Remote Sensing, Vol. 85, No. 10, October 2019, pp. 725–736.
0099-1112/19/725–736
© 2019 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.85.10.725